Part XVII of a multipart series. To start at the beginning, goto Part I.
In the ongoing development of TWLER.com (The World's Largest Electronics Retailer), the actual process of building the system had been severely neglected over the years. The first thing we established as a team back in 2010 was a new set of build infrastructure (Git, Jenkins, Artifactory, Chef) so that engineers had actual continuous integration tools to work with.
Aside: Continuous Delivery was just appearing at this time. We had Jez Humble, author of the seminal Continuous Delivery book, in to talk to us about his ideas in early 2011. This was a small gathering of about 15 engineers and architects who were interested in the concept that we eventually implemented at TWLER.com.
The biggest problem with writing software at TWLER was the lower environments. As we were breaking up the monolithic ATG application into distributed components, the lower environments were still geared towards single ATG application delivery. We had DEV, TEST, INT, STAGE, BREAKFIX, and PROD environments. STAGE and BREAKFIX were marginally close replicas of PROD, but it was really not possible to create true replica environments. The environments were not well segregated either; on numerous occasions we found PROD applications accessing services and data from non-PROD environments. In the overwhelming realm of things to fix, this wasn't even a top priority.
We decided we had to replace the DEV and TEST environments. DEV was basically non-functional: it was one large server that all 1000+ engineers were trying to use to integrate their code. Builds constantly failed, and even when they succeeded the results were often unusable. You could spend an entire day just trying to get one change built and hoping the result was something you could work with. Most people simply skipped DEV and went straight to TEST, which was an actual managed environment.
We decided that a lower environment PaaS was the direction we wanted to go. We wanted every engineer at TWLER to be able to spin up every application at TWLER in their own private sandbox. That way engineers would have the ability to test their systems in isolation, while still running against the latest versions of everyone else's components.
To achieve that we decided to build an OpenStack environment and a homegrown PaaS. But how to get the capital to create the OpenStack environment? At the time, mid-2011, I had excess dollars assigned as resource capital, which was only meant to be used on hiring engineers. My projects were still proving out the chosen architecture and were ending the year with excess resource capital. TWLER was extremely controlling of capital: you either had resource capital or hardware investment capital, but not both.
So I went on a reconnaissance mission to find someone with investment capital who needed resources, to see if we could trade. After a couple weeks of hunting, and getting TWLER's financial managers involved, we finally found a team willing to trade their investment capital for resources. We swapped $150k and used that money to buy three OpenCompute racks to build out our OpenStack environment. We estimated that with high-density OpenCompute hardware we could house about 5000 VMs on OpenStack; if we stuck with the company-standard HP hardware, those same dollars would only buy an environment that housed 1000 VMs.
We knew that if we were successful, we would want to expand the OpenStack environment quickly. We decided on full automation from bare metal to operating OpenStack, and proceeded down that path using Crowbar and Chef to take the roll-in racks from bare metal to a running OpenStack cluster. At the same time we started building out the PaaS framework, Omnitank, and getting teams on board to fully automate their deployments so their components could be included in the PaaS. This adventure was documented at the OpenStack Summit as one of the keynotes, in front of 2000+ attendees.
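To make "spin up your own sandbox" a little more concrete, here is a minimal sketch of booting a single component into a private sandbox through the OpenStack API. It is illustrative only: it uses the modern openstacksdk Python library rather than the 2011-era clients or Omnitank itself, and the cloud, image, flavor, network, and server names are hypothetical placeholders.

```python
import openstack

# Connect using credentials defined in clouds.yaml
# (the "twler-sandbox" cloud name is a hypothetical placeholder).
conn = openstack.connect(cloud="twler-sandbox")

# Look up a base image, flavor, and network by name (all hypothetical).
image = conn.compute.find_image("ubuntu-base")
flavor = conn.compute.find_flavor("m1.small")
network = conn.network.find_network("eng-sandbox-net")

# Boot one engineer's private copy of one component.
server = conn.compute.create_server(
    name="jdoe-cart-service",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": network.id}],
)

# Wait for the instance to become ACTIVE, then show where it landed.
server = conn.compute.wait_for_server(server)
print(server.name, server.addresses)
```

The real PaaS did far more than this, of course; the point was to repeat this kind of provisioning for every component, with each team's own deployment automation layered on top, so an engineer ended up with a complete, isolated copy of TWLER.com.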
This PaaS was so successful that we were able to shut down our DEV and TEST environments. We also allowed engineers from around the company to use the OpenStack environment. We actually had hundreds of forward-thinking engineers from the enterprise side of TWLER using our OpenStack PaaS, as it was the fastest way to get a test environment created.
It was so heavily used that when we had our first major outage and lost all environments, numerous enterprise teams were down for weeks rebuilding their environments. We used this as a lesson to learn which teams actually automated their infrastructure and which teams were polishing virtual chrome. But we did have to explain to IT VPs that this whole environment was still in beta, and use was at your own risk.
Goto Part XVIII