Category Archives: Platform

A Digital Ecommerce Transformation – Fun With Clouds – Part VI

Part VI – Start at the beginning with Part I

While the business was busy stuffing their sorrys in a sack, our team was having some fun.

From July of 2010 to April of 2011 my role on the team was Architect, and it was probably one of the most productive stints I can remember. During that time I learned Infrastructure as Code by building servers for Artifactory, Confluence, Jira, Crowd, and a number of other products using Chef. There we were, reading the insanely poor documentation on the Chef site, 4-5 of us all learning Chef at the same time and trying to get something working. Chef is similar to Grails in my head: too many magic mushrooms growing everywhere. If something doesn’t work, it might be your code, but it might be some unknown configuration that you missed. However, once you know where the magic starts and ends, they can both be quite useful.
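To give a flavor of what those early cookbooks looked like, here is a minimal sketch of a Chef recipe in the style we were writing. The package name, paths and attributes are hypothetical stand-ins, not our actual Artifactory cookbook:

```ruby
# recipes/default.rb -- hypothetical sketch of an Artifactory cookbook
# (package names, paths and attributes are illustrative, not the real thing)

package 'openjdk-7-jre-headless'      # Artifactory needs a JVM

user 'artifactory' do
  system true
  home '/opt/artifactory'
end

directory '/opt/artifactory' do
  owner 'artifactory'
  mode '0755'
  recursive true
end

# Pull down the distribution; the URL comes from a node attribute
remote_file '/opt/artifactory/artifactory.zip' do
  source node['artifactory']['download_url']
  action :create_if_missing
end

template '/opt/artifactory/etc/default' do
  source 'default.erb'
  variables(port: node['artifactory']['port'])
  notifies :restart, 'service[artifactory]'
end

service 'artifactory' do
  action [:enable, :start]
end
```

The payoff was that the same recipe could stand a server back up in AWS in minutes, which is what made all the building and tearing down described below possible.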

I spent days on end building and stripping down infrastructure in the AWS cloud. We learned about Availability Zones and Regions and how to operate in multiple locations at once. We watched AWS go down two or three times in that period but managed to weather all those outages with a little luck and forethought. We talked to vendors and startups building tools for clouds. We talked to other consumer enterprises building high scale websites for customers. We spent a lot of time reading High Scalability and the first edition of The Art of Scalability. We wrote up comparisons between Riak, Cassandra, MongoDB and HBase. We tried to decide what might work best for a new distributed item catalog. We guessed, and hoped we didn’t end up like the guy that picked Cassandra for Digg in 2010.

For better or worse, the Digg disaster and the good relationship we struck up with Basho led us to choose Riak for our first NoSQL system in late 2010. We had great collaboration with the Basho engineers; we were helping them find the bugs in their system, but the underlying technology was rock solid. In six years Riak never failed us; the only times we had problems were completely self-inflicted.

In the end we had numerous systems operating in AWS. The first was the failover site mentioned in Part I: if TWLER.com went down, we would switch over to the browse-only site in a few minutes. We got to exercise this capability more than once. The second was the build infrastructure; our Atlassian suite, Artifactory and Jenkins were all cloud deployed. What we learned running production systems in the AWS cloud gave us the confidence to push towards a whole new architecture for TWLER.com.

GOTO Part VII

Fall Speaking Schedule

I have two talks coming up in the Fall; they will both be at excellent conferences, so I hope to see you there:

October 16, 2017: O’Reilly Software Architecture Conference in London. I’ll be giving a talk on Platform architecture in the retail space. Having worked in retail for the past seven years, I can tell you the number of systems required and the breadth of those systems is staggering. Applying a common architecture pattern to all systems is essential to allow reuse and rapid development in all aspects of retail.

Next I’ll be staying local for the Midwest Architecture Conference in Minneapolis on November 9, 2017. I’ll be giving one of the keynotes, about how to restructure Enterprise Architecture so that it might actually be useful. Let’s all support our local architecture conferences; there aren’t many left!

A Digital Ecommerce Transformation – First Architecture Forays – Part IV

Part IV – Start at the beginning with Part I

As the first team of architects to work for TWLER.com, we started mapping out the current situation and planning for the future. ATG was the base eCommerce engine and Oracle the RDBMS. Some attempt to provide enterprise services had been undertaken in the past: tax, inventory and payment had all been removed from the ATG codebase and were now called as enterprise services. There were a number of other integrations, but these were the main services that caused issues in the digital world due to the disconnect in service levels present in the TWLER (The Worlds Largest Electronics Retailer) environment.

The basic architecture of where we started is below; I take no responsibility for the IT side of the architecture:

[Figure: diagram of the starting TWLER.com architecture]

It’s worth mapping out the organizational structure to start to understand the additional frictions present in TWLER’s attempts to run a digital eCommerce system.

The Digital teams were separated out from the IT teams many years ago and had remained in that state. Digital was viewed as unimportant since stores drove the revenue; Dotcom was a sideline play, even in 2010. Digital was run like a business, and software development was matrixed out to the IT team. The Digital team had taken over its own operations at some point because the IT teams were unable to support 24×7 operations in their model. There were dotted lines between the Digital VP Operations and the IT VP Digital Portfolio, as well as between the Digital Chief Architect and IT Senior Director Digital.

The Business VPs in the Digital team drove the business projects, with input from the wider enterprise business teams in marketing and merchandising. Between the Digital Business and IT was a Business Relationship Manager, who was supposed to translate the business asks into IT requirements. The IT requirements were then shipped off to an enterprise integrator to spin up a new team and deliver the projects. Your success varies.

In addition, internal to the Digital team was an innovation team that had spun up a completely separate Mobile site on ATG and managed the Mobile Apps. This team was quite clear in their goal to replace all of Dotcom with their mobile site at some point in the future. So now I knew that there were two teams with the goal of replacing the main www.TWLER.com eCommerce site.

Finally, there was the Senior Director of Digital in the IT team, who was mainly responsible for hiring Accenture to manage projects and WiPro/TCS to execute projects. This was a very important role as someone had to manage our vendor partners and ensure quality delivery (since I can’t get sarcasm across in writing, read this last line with as much sarcasm as you can imagine). This Senior Director also had plans to rewrite the Dotcom site, but he had not gained support from the Digital VPs because they were extraordinarily unhappy with the quantity and quality delivered thus far.

To recap, in the first six months at TWLER I learned that there were at least three teams who felt they were responsible for rewriting TWLER.com. Additionally, the software development on the ATG platform was spread across multiple divisions, multiple vendors and multiple countries. All features were added by project teams who appeared when the money started and vanished the second the money stopped; all support fell to the Digital VP of Operations. This VP was forced to contend with the mess of code and integrations that thousands of developers were contributing to every day. I finally understood why she had moved forward with hiring her own team to rewrite TWLER.com. We owned the codebase but lacked capital; in year one we had $7M to get the team started. By the time 2011 rolled around, that amount was cut back to $3.5M.

GOTO Part V

A Digital Ecommerce Transformation – A Little More Background Before We Get Started – Part III

Part III – Start at the beginning with Part I

The TWLER.com (The Worlds Largest Electronics Retailer) architecture team started with the chief architect and five additional people: myself for high scale Java applications and NoSQL; a TWLER consultant/architect who had lived through the entire life of TWLER.com and recently converted to an employee; a second TWLER employee architect specializing in APIs and product catalogs; a cloud infrastructure architect; and another systems-thinking, high scale Java architect. With a six person team, we tasked ourselves with converting TWLER.com from its massive monolithic state to a new, not-yet-known state. What we decided early, or what was decided for us, was that we were going to evolve out of the current state to a future state. It was decided for us by two constraints: not enough capital to build a new system separately while running the current one, and the stipulation that the business must continue unaffected by our efforts. At this time revenue was around $1.5 billion.

There were multiple projects that we started that first year: automation of a modern build infrastructure deployed in the cloud, a cloud-based outage site, a distributed product catalog, and QA automation. There was one more exploratory task assigned to me: look into the ATG Ant build and see if there was any way of automating it with a dependency management framework.

The selection of these projects was built around the strategy of establishing a modern engineering infrastructure that we could then leverage in all our future work. Having a robust Continuous Integration environment, universally available Git and SVN version control systems, an artifact repository, a wiki, task management and a user authentication system would allow us to move quickly in the future. The distributed product catalog was built off the theory that any plan to exit ATG required a new distributed product catalog outside of the ATG system. QA automation was the final card needed to move past six-week manual release testing windows so that we could run thousands of regression tests in hours and speed up the release process.

My main project was QA automation, which was a successful disaster. Successful in that we were able to automate many of the manual test cases using Selenium and JBehave; a disaster in that the UI was rendered so inconsistently that all fields had to be accessed directly with XPath, and even then the pages sometimes rendered differently, causing the tests to fail. Additionally, we were unable to set up test data in a consistent manner, causing tests to fail randomly when underlying data was changed or deleted. In reality it was still better than the manual tests, but we struggled to maintain even 80% test coverage for more than a few days.
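To show the kind of brittle locator work this forced on us, here is a minimal sketch of a Selenium WebDriver check in the style we were automating. The URL, XPath expressions and class names are invented for illustration, not the real TWLER.com locators:

```java
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

// Minimal sketch of an XPath-driven page check; the URL and XPath
// expressions are hypothetical, not the real TWLER.com locators.
public class AddToCartCheck {
    public static void main(String[] args) {
        WebDriver driver = new FirefoxDriver();
        try {
            driver.get("https://www.example-retailer.com/pdp/1234567");

            // No stable ids or names on the page, so everything is located
            // with brittle XPath that breaks whenever the markup shifts.
            driver.findElement(By.xpath(
                "//div[@class='pdp-right']//button[contains(text(),'Add to Cart')]"))
                .click();

            String cartCount = driver.findElement(
                By.xpath("//span[@class='cart-count']")).getText();

            if (!"1".equals(cartCount)) {
                throw new AssertionError("Expected 1 item in cart, got " + cartCount);
            }
        } finally {
            driver.quit();
        }
    }
}
```

Every locator like this was a time bomb: the moment a category team restyled its page, the XPath stopped matching and the test went red for reasons that had nothing to do with a real defect.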

The most interesting work for me was delving into the ANT build file for ATG. I had plenty of experience with ANT in its heyday in the late 90s and early 2000s. I thought I knew ANT pretty well, but when faced with the 20+ ANT files that made up the build of 14 separate ATG applications, I had met my match. In my spare time I started tracing through the ANT files, determining how variables were set up and finding the main path through the files. I had to diagram out the actual workings of the files, as there were multiple instances of recursion occurring within the build process. My goal was to figure out how to convert the build over to Maven, which I’m sure many of you hate, but we felt that bringing in dependency management and forcing a standard file system layout were the most important considerations.
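For those who have never had the pleasure, here is a tiny, contrived sketch of the recursive pattern we kept running into. The target, property and module names are made up, but the shape of a target re-invoking build files that call back up again was everywhere:

```xml
<!-- Contrived sketch of the recursive pattern; names are hypothetical. -->
<project name="atg-build" default="build-all">

  <!-- In the real files, even the module list was assembled by other targets. -->
  <property name="modules" value="catalog,cart,checkout"/>

  <target name="build-all">
    <!-- Re-enters this same build file with different properties... -->
    <antcall target="build-module">
      <param name="module.name" value="catalog"/>
    </antcall>
  </target>

  <target name="build-module">
    <!-- ...and the module's own build.xml could call back up to build-all again. -->
    <ant antfile="${module.name}/build.xml" target="build-all"/>
  </target>

</project>
```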

After a few weeks, it was my estimate that it would probably take a couple people 2-3 months to convert the ATG build from ANT to Maven, and that we could tackle it in smaller pieces by starting with the 13 builds that were not the main TWLER.com site. We shelved this idea for the time being, but when we did get back to it a year later, my estimate proved exceedingly optimistic.

GOTO Part IV

A Digital Ecommerce Transformation – Front End Madness – Part II

Part II – Start at the beginning with Part I

In 2010, the cloud was not new but it was ignored by large companies, particularly in non-technology focused segments such as retail. While it is clear now, at the time large retailers had not yet awakened to the new reality that a company’s prowess in software might decide its future outcomes.

As an example, in my first year at TWLER (The Worlds Largest Electronics Retailer), store sales during December were not going well. Since many retailers make 50% or more of their annual sales in November and December, this spelled impending disaster. Traffic was down in stores, and the company’s reaction was to propose that the digital channel stop its free shipping offer. This clearly showed the company leadership’s inability to fathom online shopping. The logic was that if a customer could buy it online and have it shipped for free, they would not go to a TWLER store. Seems reasonable, but if the customer was shopping online and we did not offer free shipping, they would simply click to the next store, probably Amazon, and buy their electronics from them with free shipping. This customer was not going to a TWLER store that holiday, ever. We were actually saving sales for the company; it just wasn’t understood.

A minor digression on the state of the ATG system is necessary to understand what we were dealing with. The original ATG system was built in 2003, and at the time it was an excellent decision for a mid-sized retailer building its first ecommerce engine. But over time and numerous one-off projects, the codebase had morphed, intertwined and been generally neglected and abused. As an example, one ongoing project when I arrived was to widen the product detail page (PDP) and move the Add To Cart button from the left side of the page to the right side. This seemed fairly innocuous, but it took six months and well over one million dollars to accomplish. That seemed ridiculous to me, so as an architect I dug into why this was happening.

It turns out there were multiple reasons why this project was practically impossible to complete. To start with, there were nine separate versions of the PDP, each made for a different category such as TVs, Music, Computers, etc. The nine separate PDPs all had common origins in some ancestral PDP, but after years of projects aimed at the individual categories, they had all strayed in different ways, including using different Javascript frameworks and versions to accomplish dynamic page elements. These PDPs were written in JSP/Javascript and were each well over 10,000 lines long, with actual Java intermixed into the JSPs themselves. Imagine trying to figure out how to change nine different pages, all implemented slightly differently in monstrous JSP files, with no test automation to determine if you broke anything in the process.
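For readers who never lived through scriptlet-era JSPs, here is a tiny, contrived sketch of the style. The class and variable names are invented, but this is the flavor of logic that sat thousands of lines deep in each PDP:

```jsp
<%-- Contrived sketch of a scriptlet-heavy PDP fragment; the Product and
     Offer classes, attributes and markup are hypothetical. --%>
<%@ page import="java.util.List, com.example.catalog.Product, com.example.catalog.Offer" %>
<%
    // Business logic living directly in the view layer.
    Product product = (Product) request.getAttribute("product");
    List<Offer> offers = (List<Offer>) request.getAttribute("offers");
    String buttonSide = "TV".equals(product.getCategory()) ? "left" : "right";
%>
<div class="pdp-<%= buttonSide %>">
  <h1><%= product.getName() %></h1>
  <% if (offers != null && !offers.isEmpty()) { %>
    <span class="promo"><%= offers.get(0).getText() %></span>
  <% } %>
  <button onclick="addToCart('<%= product.getSku() %>')">Add to Cart</button>
</div>
```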

This sounds bad enough, but the executable for ATG was in the GB range, built as an ear, with a special Ant file which only one or two people understood (more on this later). It was necessary to build and run the entire ear to determine if the page changes worked since the JSP code was so intertwined with the server side code. However, it was impossible to actually run this ear on a developer’s machine because it also required a full working copy of the Oracle database. No one had actually figured out how to make all these things work yet on a single desktop or laptop machine.

Instead, there was one shared development server for the entire Dotcom division. This server was a large Unix box but still too small to serve thousands of developers trying to build and run the ATG codebase. This server alone routinely failed due to lack of disk space and not enough CPU. But, it was the only place to build and run code you had worked on in your IDE, so everyone had to deal with it.

If you weren’t crying yet, the next step was to actually deploy the code to the staging environment (skipping the integration environment altogether), because the reality was that the front end code only worked if it could access the Internet, as there were so many externally downloaded components on the page. Even though you deployed the code to the shared developer environment, you couldn’t actually run it there. The staging build happened once every night.

To sum this up, the normal front end development cycle is: change some code, save it locally, have it automatically picked up by your running app server, and test it. This cycle time should be in the seconds range so you can work quickly and efficiently through all the little tweaks necessary to make a UI page look good and work as expected. The cycle at TWLER was: change some code, save it locally, do your best to make sure the page compiled, check in the code, push to your developer environment, do your best to check that it compiled, wait for the overnight stage push (go home), come back the next morning and see if the change worked in stage (assuming the push didn’t fail, which it often did). Instead of a cycle time of seconds for each code change, the cycle time was one day. One entire 24-hour day!

Did I mention zero automated regression tests?

Now I bet you think that $1M was cheap. In fact, I still don’t know how anyone actually got any work done in these conditions, but I do know that the churn in the front end development team was enormous.

GOTO Part III

A Digital eCommerce Transformation – A Multipart Series

It took six long years, but we successfully transformed a monolithic, 10-year-old ATG based commerce system into a completely distributed, infinitely scalable eCommerce platform. I’ll call the place TWLER, or the world’s largest electronics retailer.

I joined TWLER in 2010 as a Hadoop Architect on the small team that was tasked with rewriting TWLER.com. I interviewed in May and was offered the role, but didn’t start until July. About two weeks before I started, the VP called me and let me know that the Hadoop project was cancelled, and that if I declined the offer they would understand. I had spent the previous two years learning about Hadoop and installing one of the first Hadoop clusters in the Twin Cities on a bunch of old Dell towers for a company called Peoplenet. The VP there had the idea, back in 2008, that they would build a remote data collection system that could take in one million messages per second. I tried to start up a Hadoop consulting arm for the consulting company I was working for at the time, Object Partners. However, it was a bit too early in the Twin Cities for companies to be interested in BigData and NoSQL. We spent a lot of time talking to companies about the technology, but no contracts were forthcoming. I put together an Introduction to Hadoop presentation and gave it numerous times, all to no avail. So when TWLER came calling with a Hadoop Architect position, I jumped at the chance to actually use Hadoop at a large company. All that to say, I was quite disappointed when the role fell through and seriously considered declining the position, as I had not yet resigned and, after two years of pushing the technology, I did want to see Hadoop in action. But I was quite tired of my second stint in consulting and ready to move on.

So I arrived at TWLER with no assigned role. At the time we were five architects under an Operations Vice President, and I had no idea of the organizational politics that were hindering a major system rewrite. The cancelling of one project should have been a clue to the future: funds were already being removed from this team.

We started out under the tutelage of Michael N., a renowned architect who had worked on the initial version of TWLER.com. Our first task was to set up a modern development pipeline using Chef in AWS. A reminder that this was 2010, when a large retailer doing anything in AWS was highly uncommon and Chef was practically brand new.

The pipeline we stood up at the time was fairly standard: Atlassian Stash for Git, Crowd for user management, Confluence for knowledge management, Jira for issue tracking, Bamboo for continuous integration, and Artifactory for artifact storage and a Maven repo. We used Chef to completely automate the deployment of these tools into AWS. While none of these was new to TWLER, they had not been combined together, made externally accessible, or offered to anyone throughout the company who wanted to use them.

We used this infrastructure to start an AWS-based project to put together a small site that could be used during outages. The site would allow customers to look up locations, products and prices, but full commerce capabilities would not be available. It would reside in the cloud, ready to be deployed within minutes. It says something that the first thing we built was an outage site; TWLER.com was not a stable platform at the time.

GOTO Part II

Internal Open Source Projects

What does it mean to start an open source project internal to an organization?  Does that make any sense?

Many large organizations have very large systems within them, systems which are mission critical to the delivery of their business model. These systems are often bottlenecks for development as, in some fashion, they cannot be avoided and some development work on them is needed to complete any enterprise-scale capability. They limit the amount of change available to an organization.

What if there were a way to unlock the capacity limit on these systems? There is: open source the project.

If you open source a project internal to a company, you are opening up the codebase for anyone in the company to work on. Instead of supplying a dedicated development team, you now need a dedicated team of system stewards: people who ensure the stability of the system and that the code being added meets the criteria of the project’s sponsors.

You can now do this fairly easily with Git based source control, where anyone in the company can write a module or patch and submit a pull request. The stewards review the pull request, judge whether the code takes the system in the direction of their roadmap for the project, and potentially accept the request into the main repo.
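Mechanically, the contributor side looks like any ordinary branch-and-pull-request flow. A minimal sketch, assuming an internal Git server and a hypothetical repo name:

```sh
# Sketch of the contributor workflow against a hypothetical internal repo.
git clone https://git.internal.example.com/commerce/item-catalog.git
cd item-catalog

# Work on a branch rather than the mainline the stewards protect.
git checkout -b feature/store-pickup-flag
# ...edit code, add tests...
git add .
git commit -m "Add store-pickup flag to item availability"

# Push the branch and open a pull request for the stewards to review.
git push origin feature/store-pickup-flag
```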

If done correctly, you’ve opened up the system to the teams with the greatest need while still maintaining control over the system and its direction. If done incorrectly, you’ll probably have the biggest mess of your life. To push an entire enterprise forward at higher velocity, the risk may be worth it.

Layered Cloud versus Hybrid Cloud Architecture

We had a great week at Openstack Summit in Portland.  See the article in Wired magazine for a short summary.  Or watch the Best Buy Openstack keynote.

One thing I learned from three days at the Openstack Summit is that I have always misconstrued the definition of Hybrid cloud architecture. When we started making plans for our cloud architecture, I always thought of it as a Hybrid cloud. At Openstack, there were numerous presentations on Hybrid cloud, and all of them revolved around using the cloud to provide additional scaling for an application that runs in the datacenter. In all cases, the datacenter architecture stack was simply recreated in the cloud and used for peak load. The database master is in the datacenter and a slave exists in the cloud. The Hybrid cloud architecture simply means using a cloud to elastically, horizontally scale an existing application.

When I originally thought about Hybrid cloud, I thought of an application that has one or more layers in the cloud and the remaining layers in the datacenter. I now call this a Layered Cloud architecture. In our case we built our new product browse capability in the cloud and kept the remaining application in the datacenter. All the data in the cloud was non-secure, basically public data, so there were little to no security issues. We are keeping the commerce pipeline in the datacenter simply because it is easier to keep the commerce data and transactions in our secure datacenter.
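One way to picture the Layered Cloud split is at the traffic-routing edge. Here is a minimal, hypothetical sketch (the hostnames and paths are invented, and the real routing was far more involved) that sends browse traffic to the cloud layer while the commerce pipeline stays in the datacenter:

```nginx
# Hypothetical edge routing for a Layered Cloud split; hostnames and paths invented.
server {
    listen 443 ssl;
    server_name www.example-retailer.com;
    ssl_certificate     /etc/nginx/certs/example-retailer.crt;
    ssl_certificate_key /etc/nginx/certs/example-retailer.key;

    # Browse layer: public, non-secure catalog data served from the cloud.
    location /site/products/ {
        proxy_pass https://browse.cloud.example-retailer.com;
    }

    # Commerce pipeline: cart and checkout stay in the secure datacenter.
    location /cart/ {
        proxy_pass https://commerce.dc.example-retailer.com;
    }
    location /checkout/ {
        proxy_pass https://commerce.dc.example-retailer.com;
    }
}
```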

This is a good example of assumptions clouding my view of reality. I’ve read plenty of articles and information about Hybrid cloud, but until I was sitting in a presentation having someone tell me about Hybrid cloud, I never noticed my definition was incorrect. Then, after recognizing this, I watched every presentation to determine which definition was used more frequently. Unfortunately for me, all the definitions were the same and they did not support my original view.