A new conference, Open Source North, has sprung up in town and it shows great promise.
Hey, maybe you should go!
If you do, check out my talk. It’ll either be great or a massive waste of time; let me know what you think after the conference.
What does it mean to start an open source project internal to an organization? Does that make any sense?
Many large organizations have very large systems within them, systems that are mission critical to the delivery of their business model. These systems are often development bottlenecks: they cannot be avoided, and some work on them is needed to complete any enterprise-scale capability. They limit the change available to an organization.
What if there were a way to unlock the capacity limit on these systems? There is: open source the project.
If you open source a project internal to a company, you are opening up the codebase for anyone in the company to work on. Instead of supplying a dedicated development team, you now need a dedicated team of system stewards, people who ensure the stability of the system and that the code being added meets the criteria of the project’s sponsors.
You can now do this fairly easily with Git-based source control: anyone in the company can write a module or patch and submit a pull request. The stewards review the pull request, judge whether the code takes the project in the direction of their roadmap, and, if it does, accept the request into the main repo.
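To make the flow concrete, here is a minimal, runnable sketch of that steward/contributor split, driving plain Git from Python. Everything is local (a temp directory stands in for the company’s internal Git server), and the repo, branch, and module names are hypothetical placeholders, not any real company’s systems.

```python
# Sketch of an internal open-source contribution flow: a contributor pushes
# a topic branch (the "pull request"), and a steward merges it into main.
import os
import subprocess
import tempfile
from pathlib import Path

def git(*args, cwd=None):
    """Run a git command with a throwaway identity, raising on failure."""
    subprocess.run(
        ["git", "-c", "user.name=Demo", "-c", "user.email=demo@example.com", *args],
        cwd=cwd, check=True, capture_output=True,
    )

root = tempfile.mkdtemp()
central = os.path.join(root, "central.git")   # the stewards' main repo
seed = os.path.join(root, "seed")
contrib = os.path.join(root, "contributor")
steward = os.path.join(root, "steward")

# Bare repo, like a server-side repo; seed it so a main branch exists.
git("-c", "init.defaultBranch=main", "init", "--bare", central)
git("clone", central, seed)
Path(seed, "core.py").write_text("print('core system')\n")
git("add", "core.py", cwd=seed)
git("commit", "-m", "Initial code", cwd=seed)
git("branch", "-M", "main", cwd=seed)
git("push", "origin", "main", cwd=seed)

# Contributor: clone, topic branch, commit, push -- i.e. open a pull request.
git("clone", central, contrib)
git("checkout", "-b", "add-gift-card-module", cwd=contrib)
Path(contrib, "gift_card.py").write_text("print('gift card module')\n")
git("add", "gift_card.py", cwd=contrib)
git("commit", "-m", "Add gift card module", cwd=contrib)
git("push", "origin", "add-gift-card-module", cwd=contrib)

# Steward: review the branch and, if it fits the roadmap, merge it to main.
git("clone", central, steward)
git("checkout", "main", cwd=steward)
git("merge", "--no-ff", "-m", "Merge reviewed module",
    "origin/add-gift-card-module", cwd=steward)
git("push", "origin", "main", cwd=steward)
```

In practice the push and merge would go through your Git host’s pull request UI, which is where the stewards attach their review; the commands above are just the plumbing underneath it.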
If done correctly you’ve opened up the system to the teams with the greatest need, while still maintaining control over the system and its direction. If done incorrectly you’ll probably have the biggest mess of your life. To push an entire enterprise forward at higher velocity the risk may be worth it.
We’ll be at MinneBar on April 12, 2014, which is again at the Best Buy campus this year. It’s always nice to spend Saturdays at work! My colleague Kannan Swaminathan and I will be giving the Cassandra and Riak at BestBuy.com presentation we previously gave at CodeFreeze. Hopefully the Twin Cities conference attendees will not notice.
My article on the BestBuy.com Cloud Architecture appears in the March/April 2014 edition of IEEE Software Magazine. I’ve been busy writing that article instead of writing here.
I will be presenting with my colleague Kannan Swaminathan at this year’s Code Freeze at the University of Minnesota January 16, 2014. We will be doing the breakout sessions so you’ll have two chances to attend. It should be an informative talk on how Best Buy is using Cassandra and Riak. Hope to see you there!
BestBuy.com will be presenting at the University of Minnesota Computer Science department Tech Talk series on October 9th at noon. We will be presenting on the Architecture & Technology of BestBuy.com.
You probably have to be a student of the UMN to attend; CS students of Minnesota, I hope to see you there.
So why am I still talking about this?
In the early 2000s, the industry somehow got convinced that software was just another form of manufacturing: if you defined a process and applied it rigorously, little chunks of perfectly coded software would come spewing out the end of your assembly line. Since it was a manufacturing process, labor could be sourced from anywhere, and we could all get our software faster, better, and at lower cost.
In 2001 my job got offshored, just like those of many of us who worked through that period. However, my particular offshoring is remarkable in that I truly got offshored. The firm hired a company which had purchased an old cruise ship and parked it somewhere off the coast of San Diego in international waters. Some poor sods from various countries were relegated to a permanent offshore vacation where they coded 24 hours a day. Yes! The CTO explained how, at the end of one person’s 12-hour shift, they would simply step aside and the new person would hop in the chair and just pick up where he/she left off. All that work we were trying to tell people would take 3 more months would be done in a few weeks!
It’s now 10+ years later and I haven’t heard about a massive flotilla of cruise ships blocking the entire western coastline of the USA, so I’m assuming this model didn’t catch on. Actually, I know it didn’t catch on, as the CTO absolutely failed to deliver any software at all after six months of trying. Since it was a startup, it then promptly disappeared.
In any case, the renaissance of the local software engineer took hold a few years ago and shows no sign of stopping. Yet I still find myself in conversations regarding the commodity nature of developers. Does this happen to anyone else?
Half life (not the game) is the term used to describe the decay of radioactive isotopes: the longer the half life, the slower the decay. If you have a gram of radioactive material, half of it decays in each half-life period, until eventually essentially none of the original material remains.
I like to think about the code we write as having a half life. Well written code in a slowly changing area of an application has a long half life. It doesn’t mean the code never changes, it just means only small changes occur over long periods of time. The half life of the code may be in years (Caesium 134, half life of about 2 years).
However, brand new code in a rapidly changing area, say the new UI of your brand new site, has a half life of days (Manganese 52, half life of 5 or 6 days). This would mean you’d expect half the code to be replaced in one work week. The next week, half of the remaining code (a quarter of the original) would be replaced, and so on, until virtually no code from the original work is left.
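That week-by-week dwindling is just the standard exponential decay formula, remaining = (1/2)^(t / half-life). A quick sketch in Python, using the 5-day half-life and 5-day work week from the Manganese 52 comparison (the numbers are illustrative, not measurements of any real codebase):

```python
def fraction_remaining(elapsed_days: float, half_life_days: float) -> float:
    """Fraction of the original code still untouched after `elapsed_days`,
    assuming replacement follows exponential decay."""
    return 0.5 ** (elapsed_days / half_life_days)

# A fast-moving UI with a ~5-day half-life: one work week per half-life.
for week in range(1, 5):
    left = fraction_remaining(week * 5, half_life_days=5)
    print(f"after week {week}: {left:.1%} of the original code remains")
```

After four such weeks only a few percent of the original code survives, which is why polishing it up front buys you very little.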
Thinking about half life is useful because it tells you how much effort you should be devoting to testing and ensuring the code is rock solid. Long half life code should be well tested, documented and vetted for scalability. Short half life code should be thrown out with little testing and few thoughts about scalability or maintainability. Why? Because the code will be gone by next week.
Unlike isotopes, the half life of code changes once the code is complete and in production. Production marks a point where half life increases dramatically. In fact, you should be actively cranking up the half life by making the code clean and scalable.
Still, there’s a limit depending on the velocity of change in the various parts of the application. These days UIs evolve rapidly for consumer driven applications. The half life is short and the amount of effort put into this code is low. It should still work, but may not be something you’re proud to say you wrote. Then again, you should be pleased, as you put forth the appropriate amount of effort.
One thing I learned from three days at the OpenStack Summit is that I have always misconstrued the definition of Hybrid cloud architecture. When we started making plans for our cloud architecture, I always thought of it as a Hybrid cloud. At the Summit, there were numerous presentations on Hybrid cloud, and all of them revolved around using the cloud to provide additional scaling for an application that runs in the datacenter. In every case, the datacenter architecture stack was simply recreated in the cloud and used for peak load: the database master is in the datacenter and a slave exists in the cloud. Hybrid cloud architecture simply means using a cloud to elastically, horizontally scale an existing application.
When I originally thought about Hybrid cloud I thought of an application that has one or more layers in the cloud, and the remaining layers in the datacenter. I now call this a Layered Cloud architecture. In our case we built our new product browse capability in the cloud and kept the remaining application in the datacenter. All the data in the cloud was non-secure, basically public data, so there were little to no security concerns. We are keeping the commerce pipeline in the datacenter simply because it is easier to keep the commerce data and transactions in our secure datacenter.
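To make the contrast concrete, here is the distinction sketched as data. The component names are hypothetical placeholders, not our actual systems; the point is structural:

```python
# Hybrid cloud: the same stack exists in both places, and the cloud copy
# simply absorbs peak load (elastic horizontal scaling).
hybrid_cloud = {
    "datacenter": ["web", "app", "db-master"],
    "cloud":      ["web", "app", "db-slave"],
}

# Layered cloud: different layers of one application live in different
# places, split along data-sensitivity lines.
layered_cloud = {
    "cloud":      ["product-browse"],            # public, non-sensitive data
    "datacenter": ["commerce-pipeline", "db"],   # secure transactional data
}

# The telltale difference: a hybrid deployment duplicates layers across
# locations, while a layered deployment partitions them.
duplicated = set(hybrid_cloud["cloud"]) & set(hybrid_cloud["datacenter"])
partitioned = set(layered_cloud["cloud"]) & set(layered_cloud["datacenter"])
print(duplicated)    # layers present in both locations
print(partitioned)   # empty: no layer exists in both places
```

If the intersection of the two locations is non-empty, you are looking at a Hybrid cloud; if it is empty, the application is layered across them.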
This is a good example of assumptions clouding my view of reality. I’ve read plenty of articles and information about Hybrid cloud, but until I was sitting in a presentation having someone tell me about Hybrid cloud, I never noticed my definition was incorrect. Then, after recognizing this, I watched every presentation to determine which definition was used more frequently. Unfortunately for me, all the definitions were the same, and they did not support my original view.
As we remake BestBuy.com into a new platform, we are building a culture of architecture at the same time. Prior to 2010, BestBuy.com had no holistic architecture team guiding its development. Instead, a long series of projects simply bolted on more and more functionality until the resulting system was impossible to deterministically change. With little testing and low regression coverage, any change to the system often resulted in unintended consequences.
In 2010 an architecture team was formed and claimed ownership of BestBuy.com. We began to involve ourselves in projects that affected the site. We built a path and vision to remake the site into a next generation eCommerce platform. But overall, we established that architecture mattered, and that agile architecture would be our culture. Our group of architects shares similar architecture values: high involvement in development, decoupled flexible systems, TDD, small focused teams, high-quality engineers, and letting architects lead projects rather than delivery managers.
The path of architecture has worked: teams with projects now come find us, and we are involved with all aspects of the site. We are slowly working our way towards an infinitely scaling cloud/datacenter SOA. It is the architects who intervene when necessary, set engineering direction, and mediate between all parties. To make it work, the culture of architecture must be in place first.