A Digital Ecommerce Transformation – Modernizing ATG? – Part XVI

Part XVI of a multipart series; to start at the beginning, goto Part I.

As much as I love disparaging ATG, it was actually built as an all-inclusive platform, which, at the time, was a modern way to build systems. ATG had a built-in Inversion of Control framework prior to Spring. It had a set of tag libraries and did its best to help teams build ecommerce sites quickly in a J2EE application server manner. It just wasn’t ever meant to scale to the level we needed in 2012; the architecture was wrong. Everything you did triggered calls to the single database instance, and there was no caching, so a promotion calculation would run every time a customer wanted to see a promotional price. With two million concurrent users, all interested in the latest promotions, this was a disaster. Any problem in any part of the system brought down the whole application. We needed to evolve from monolith to distributed.
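
To make that concrete, here is a minimal sketch of the kind of read-through, time-bounded cache the platform lacked; the class name, the SKU lookup, and the five-minute TTL are hypothetical, just to show how a cache turns millions of identical promotion calculations into a handful of database hits:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative only: without something like this, every page view recomputes the
// promotional price against the one shared database; with it, repeat views within
// the TTL are served from memory.
public class PromoPriceCache {

    private static final long TTL_MILLIS = 5 * 60 * 1000; // tolerate five minutes of staleness

    private static final class Entry {
        final double price;
        final long loadedAt;
        Entry(double price, long loadedAt) { this.price = price; this.loadedAt = loadedAt; }
    }

    private final Map<String, Entry> cache = new ConcurrentHashMap<>();

    public double promotionalPrice(String sku) {
        Entry entry = cache.get(sku);
        if (entry == null || System.currentTimeMillis() - entry.loadedAt > TTL_MILLIS) {
            double fresh = computeFromDatabase(sku); // the expensive path (racy, but fine for a sketch)
            entry = new Entry(fresh, System.currentTimeMillis());
            cache.put(sku, entry);
        }
        return entry.price;
    }

    private double computeFromDatabase(String sku) {
        // Placeholder for the promotion engine's database-backed calculation.
        return 9.99;
    }
}
```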

Similarly, by 2012, deploying massive EAR files had been a dead architecture choice for at least five years. Tying everything together into one massive deployment ran in the opposite direction of the Agile/DevOps movement springing up alongside the Continuous Deployment mania.

But this was the legacy we faced at TWLER (The World’s Largest Electronics Retailer). We were ten years into unchecked ATG development with thirteen applications deployed on the platform, all rolled up into that 2GB EAR. You can only imagine the quarterly deployment nightmare, literally a waking nightmare as all deployments were overnight affairs, but that’s fodder for another post.

We dedicated a team to figuring out how to deconstruct our ATG implementation and code. The codebase was a series of interconnected, circularly coupled packages. There were five or ten main packages that were used by all the applications, so a change to one package would inevitably break two or three applications. Thus, an eight-week QA cycle was necessary to ensure all the integrated code from thousands of engineers would result in thirteen working applications.

But if you can’t throw it out, and you can’t break functionality, what do you do?

The first course of action was to modernize the build. It wasn’t actually possible to build the ATG application on a developer machine. The only way to build the application was by kicking off a build in the developer integration environment. Thus, if you were an engineer working on ATG, you could make a bunch of changes to your code, check them in to the CVS source control system, and kick off a build. Imagine 1000 engineers all trying to do this at the same time.

The developer integration environment was overwhelmed by the number of build requests, so instead of building on demand it would schedule 2-3 builds per day. So with 1000 engineers trying to get their code in for 30 different projects that all built together, can you guess the result of each build? If you guessed broken, then good job! You’ve worked in an enterprise environment in the mid-2000s!

Solving all these problems wasn’t easy, but the playbook was straightforward:

  • We had to automate the build so anyone could run it.
  • We had to separate the applications so you could just build the one you were working on.
  • We had to come up with a solution for the developer laptop, so you could build and run the ATG server locally.
  • We had to decide what portions of the ATG code to invest in and what to throw out.
  • We had to break the dependencies between the ATG packages so developers could work on things in isolation.
  • We had to automate the changes in the database so a local DB could be built and modified on the developer’s machine (a sketch of what that could look like follows this list).
  • We had to fix the integration environment so it wasn’t constantly failing.
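
As an illustration of the database bullet above, here is a minimal sketch of automating local schema changes with a migration tool like Flyway; the JDBC URL, credentials, and script location are placeholders rather than anything from our actual setup:

```java
import org.flywaydb.core.Flyway;

// Illustrative only: versioned SQL scripts (V1__create_catalog.sql, V2__add_promotions.sql, ...)
// live in source control and are applied in order, so any developer can stand up
// and evolve a local database with a single command.
public class LocalDbBootstrap {
    public static void main(String[] args) {
        Flyway flyway = Flyway.configure()
                .dataSource("jdbc:oracle:thin:@localhost:1521:XE", "dev", "dev") // placeholder connection
                .locations("classpath:db/migration")
                .load();
        flyway.migrate(); // brings the local schema up to the latest version
    }
}
```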

Many might think this was a throwaway investment and not worth the time or effort. We faced those challenges internally at TWLER as well. But we were able to convince leadership that in order to move quickly in the future, we needed to move faster now in our legacy architecture. We would be using portions of ATG for at least the next three years, and we wouldn’t be able to complete our new work without augmenting the existing systems.

Looking back on it, I’m happy to report that the decision to invest in ATG as we dismantled it was a good one.

Goto Part XVII

A Digital Ecommerce Transformation – Cloud Home Page – Part XV

Part XV in a multipart story. To start at the beginning, goto Part I.

Our goals for 2012 were to deliver two things: a new browse architecture for TWLER.com and a Holiday without issues. Not a small task for a team saddled with a giant monolithic application and a mandate to keep delivering features without affecting the business during the rewrite.

We started in on the new browse architecture. It was our feeling that if we could simply scale up the browse capabilities of TWLER.com, we had a chance of making our aging ATG monolithic application survive another Holiday.

About 98% of traffic on an ecommerce system is people browsing the site; the other 2% is people actually trying to buy stuff. If you are planning for a 10X increase in traffic for one week out of the year, attacking the 98% seems like a good place to start. Also, since we were dealing with a monolith, any traffic we removed from the 150-node ATG cluster was more capacity for the checkout process. In fact, as we did the math, if we took 70% of the traffic off in the first year at Holiday, we’d actually have 3X the capacity we needed for the checkout process and the remaining components left on the ATG servers (a cluster sized for the full load but serving only 30% of it has roughly 3X headroom).

We started with a project we called Cloud Home Page. The home page is the most served page on the site, and at Holiday we had to make it static and cache it at the CDN. The business teams didn’t like this because, without dynamic content, there wasn’t any way to adjust what people were seeing as they landed on TWLER.com and lead them toward the new sales events. The Cloud Home Page plan was for a dynamic, cloud-based home page with minimal personalization but modifiable within a 15-minute window.
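
The mechanics of that 15-minute window aren’t spelled out here, but as one minimal sketch, assuming the CDN honors standard Cache-Control headers, capping the edge cache at 900 seconds guarantees that a business change reaches customers within 15 minutes while still keeping most of the CDN offload. The filter below is hypothetical, not necessarily what we built:

```java
import java.io.IOException;
import javax.servlet.*;
import javax.servlet.http.HttpServletResponse;

// Hypothetical filter: let the edge cache the home page payload for at most 15 minutes,
// so business-driven content changes become visible within that window.
public class HomePageCacheFilter implements Filter {

    @Override
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        HttpServletResponse response = (HttpServletResponse) res;
        response.setHeader("Cache-Control", "public, max-age=900"); // 900 seconds = 15 minutes
        chain.doFilter(req, res);
    }

    @Override
    public void init(FilterConfig config) {
    }

    @Override
    public void destroy() {
    }
}
```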

Since we were coming from a J2EE-style, JSP-on-top-of-Servlets architecture, we first wanted to upgrade our front end. The new architecture specified a thin and dynamic UI layer with zero coupling to the back end. That meant only HTML, CSS, and JavaScript were allowed. All data communication was done with JSON, and we tried to get the number of calls to the backend down to one.

That is, during a request for TWLER.com, the controller would make only a single request to the backend for all the data it needed to build the page. That request was handled by a service aggregator that would then manage the 20-50 service requests for data and build a JSON response within the specified SLA of less than one second (for the home page). By specifying a single request, we could let the service aggregator determine which calls to cut off and what data to serve from cache to meet the page SLA. We also had a single point where we could add or remove functionality depending on the load. If the load was light, we could add a few more personalization services such as recommendations. If we were at peak load, we could turn off all personalization and even some of the dynamic page elements to lessen the load on our servers. We created a highly dynamic Home Page that was tunable given the system load.
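
Here is a minimal sketch of that cut-off-or-serve-from-cache behavior, written with plain java.util.concurrent rather than whatever we actually built; the class and method names are hypothetical:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.concurrent.*;

// Illustrative sketch only: fan out to the backing services, enforce the page-level
// SLA, and fall back to the last cached value for any call that cannot finish in time.
public class HomePageAggregator {

    private final ExecutorService pool = Executors.newFixedThreadPool(50);
    private final Map<String, String> cache = new ConcurrentHashMap<>(); // last known good JSON per service

    public Map<String, String> assemble(List<String> serviceNames, long slaMillis) {
        Map<String, Future<String>> inFlight = new HashMap<>();
        for (String name : serviceNames) {
            inFlight.put(name, pool.submit(() -> callService(name)));
        }

        long deadline = System.currentTimeMillis() + slaMillis;
        Map<String, String> page = new HashMap<>();
        for (Map.Entry<String, Future<String>> entry : inFlight.entrySet()) {
            long remaining = deadline - System.currentTimeMillis();
            try {
                // Wait only as long as the page SLA allows for this call.
                String json = entry.getValue().get(Math.max(remaining, 0), TimeUnit.MILLISECONDS);
                cache.put(entry.getKey(), json);
                page.put(entry.getKey(), json);
            } catch (TimeoutException | ExecutionException | InterruptedException ex) {
                if (ex instanceof InterruptedException) {
                    Thread.currentThread().interrupt();
                }
                entry.getValue().cancel(true); // cut the slow call off
                // Serve stale cached data (or an empty object) rather than failing the whole page.
                page.put(entry.getKey(), cache.getOrDefault(entry.getKey(), "{}"));
            }
        }
        return page;
    }

    private String callService(String name) {
        // Placeholder for the real HTTP/JSON call to one of the 20-50 backing services.
        return "{}";
    }
}
```

Hystrix later packaged this same pattern of timeouts, fallbacks, and load shedding into a library, which is part of why its late arrival (mentioned below) stung.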

However, to build just the home page we needed many teams to work together and deliver simultaneously. We had a new front-end team designing the Backbone-based page structure and components. We had a team building a front-end controller framework to drive the UX. Another team was building the service aggregation and distributed service management layer (sadly, this was right before Hystrix became available). A team was building up a new distributed caching layer to gather together all the data needed to drive the home page. We continued the team that was extracting the item catalog from ATG and distributing it across our datacenters and AWS. And we imagined a team building a new customer management system we called Customer Graph, although we only had an architect working on that project. To ease the gridlock in our lower environments, we kicked off an OpenStack-based lower-environment PaaS, just because we didn’t have enough other things going on. Topping it all off, our digital operations team was learning how to manage systems in the AWS cloud.

We did have one final team; their job was to Mavenize the ATG build. This might sound trivial, but it was the most harrowing undertaking of them all. We felt we needed to modernize the ATG deployment process to allow teams to move faster, as we estimated it would take at least three years to exit the ATG platform. To make that palatable, we had to automate the entire ATG deployment process. Given that the current build process was a 10,000-line recursive ANT script, we put some of our most masochistic personalities on it. Besides automating ATG, they had to separate the thirteen intertwined applications that were deployed together as an EAR file. A 2GB EAR file.
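
For a sense of where that Mavenization was headed, here is a hypothetical module layout (the names are invented for illustration): a parent POM with one module per application plus the shared core, so an engineer could build a single application with Maven's -pl/-am flags instead of rebuilding the entire EAR.

```
atg-parent/
  pom.xml          <- packaging "pom", with one <module> entry per directory below
  shared-core/     <- the handful of common packages everything depends on
  browse/
  checkout/
  ... one module per remaining application ...
```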

Goto Part XVI