
Tech Cities 2016 – The Agile Architecture Game

Coming up in February 2016 I’ll be facilitating the Agile Architecture Game with my former colleague and game inventor Kevin Matheny.  We’ve used the game within BestBuy.com to help project managers, business analysts, product managers, engineers, and others learn about the tradeoffs involved in long-term software architecture choices.  It’s a fast way to experience the hard choices that architects make every day.

One of the comments from a Best Buy participant who played the game was “It felt like work.”  That person was an architect, so we felt we got the game right.

Tech Cities 2016 is a conference sponsored by the Carlson School of Management to foster the vision of Minneapolis as the tech center of the North.

It should be a fun conference!

Internal Open Source Projects

What does it mean to start an open source project internal to an organization?  Does that make any sense?

Many large organizations have very large systems within them, systems that are mission critical to delivering their business model.  These systems often become development bottlenecks: in some fashion they cannot be avoided, and some work in them is needed to complete almost any enterprise-scale capability.  They limit how much change is available to the organization.

What if there were a way to unlock the capacity limit on these systems?  There is: open source the project.

If you open source a project internal to a company, you are opening up the codebase for anyone in the company to work on.  Instead of supplying a dedicated development team, you now need a dedicated team of system stewards: people who ensure the stability of the system and that the code being added meets the criteria of the project’s sponsors.

You can now do this fairly easily with Git-based source control, where anyone in the company can write a module or patch and submit a pull request.  The stewards review the pull request, judge whether the code moves the project in the direction of their roadmap, and, if it does, accept the request into the main repo.
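As a minimal sketch of that contributor workflow (the repository URL, branch name, and module here are hypothetical, not anything from a real Best Buy system), the Git side might look something like this:

```
# clone the internal repository and start a feature branch
git clone https://git.example.internal/platform/pricing-service.git
cd pricing-service
git checkout -b feature/store-pickup-eligibility

# ...make the change, then commit and push the branch
git commit -am "Add store pickup eligibility module"
git push origin feature/store-pickup-eligibility

# finally, open a pull request against the main branch; the system
# stewards review it and merge it only if it fits the project roadmap
```

The interesting part isn’t the commands, it’s the review gate: the stewards, not the contributing team, decide what lands in the main repo.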

If done correctly, you’ve opened up the system to the teams with the greatest need while still maintaining control over the system and its direction.  If done incorrectly, you’ll probably have the biggest mess of your life.  To push an entire enterprise forward at higher velocity, the risk may be worth it.


The Art of Large Systems

I was discussing art with my daughter, someone who is extremely talented at what most people consider art.  That is, drawing and painting things that look amazing, things you know you could never do yourself, not in a million years.

In any case, she made the comment that all artists hate their work.  This is an intriguing statement, because if it were true then there would be no incentive to actually make art.  If you know you’ll hate it, what’s the point?  With further questioning we clarified that the statement really means an artist is never happy with the outcome.  This is far more reasonable: nothing ever turns out like the perfect image you have in your head of how something should be.  Try as you might, you know that if you could just figure out how to get there, the piece would be infinitely better.

Now this makes sense to those of us who work on large, dynamic, constantly changing systems: systems under varying stressors such as high volumes of traffic arriving in ways that were not anticipated, frequent code releases that can cause unintended consequences, and connections to numerous other systems that often go awry.

All of this is done in code, and we all know that if we could just puzzle it out, there’s a better way to construct the system, something eminently simple.  Maybe we’re using Tinker Toys when we should be using Lincoln Logs.  There’s a seismic shift that could happen if we could just force our brains to make the jump; we can often feel it out there waiting to be discovered.  But the reality of having a job and a deadline kicks in, and we have to deliver something that works.  Perfection not achieved, again.

When writing code you get used to this feeling, because you are delivering every day, often over a long timeline, and things just have to get done, sometimes badly.  We’re not happy about it, but the world moves on.

Looking back at the large systems you’ve built, you can always point out the things you wish you could change.  Sometimes you get the opportunity to refactor them, possibly finding out that the idealized new architecture was actually worse than the original system.  Sometimes your amazing ideas fall down when confronted with the complexity inherent in large systems.  However, sometimes the new system is fantastic; there are just some things that could still be better…

It is hard to love the outcome; it’s the child that rebelled and ran away from home after stealing all your money and taking your car.

Creating large systems is essentially a form of art.  There are no defined methods to ensure a positive outcome.  For works like www.bestbuy.com, the work is in public view and constantly being judged by individuals and the media.  Some people love it, some hate it, but everyone has an opinion.  And, finally, you can sell it (well, technically the company could sell it).


Layered Cloud versus Hybrid Cloud Architecture

We had a great week at the OpenStack Summit in Portland.  See the article in Wired magazine for a short summary, or watch the Best Buy OpenStack keynote.

One thing I learned from three days at the OpenStack Summit is that I have always misconstrued the definition of Hybrid cloud architecture.  When we started making plans for our cloud architecture, I always thought of it as a Hybrid cloud.  At the Summit there were numerous presentations on Hybrid cloud, and all of them revolved around using the cloud to provide additional scaling for an application that runs in the datacenter.  In every case, the datacenter architecture stack was simply recreated in the cloud and used for peak load: the database master stays in the datacenter and a slave runs in the cloud.  Hybrid cloud architecture, in other words, simply means using a cloud to elastically and horizontally scale an existing application.

When I originally thought about Hybrid cloud, I pictured an application that has one or more layers in the cloud and the remaining layers in the datacenter.  I now call this a Layered Cloud architecture.  In our case we built our new product browse capability in the cloud and kept the remaining application in the datacenter.  All the data in the cloud was non-sensitive, basically public data, so there were little to no security issues.  We are keeping the commerce pipeline in the datacenter simply because it is easier to keep the commerce data and transactions in our secure datacenter.

This is a good example of assumptions clouding my view of reality.  I’ve read plenty of articles and information about Hybrid cloud, but until I was sitting in a presentation having someone tell me about Hybrid cloud, I never noticed my definition was incorrect.  Then, after recognizing this, I watched every presentation to determine which definition was used more frequently.  Unfortunately for me, all the definitions were the same, and they did not support my original view.