The early 2010s saw a massive shift, with the IT teams of most companies moving from monolithic architectures towards serverless or microservices architectures. As platforms grew bigger, the monolith simply became a difficult balancing act. Monolithic architectures were famously hard to scale and carried all the disadvantages of a single huge deployment: making any change to the application meant reworking the entire stack, which slowed updates to a glacial pace.
Shift from Monolithic to Microservices
Monoliths could be convenient during the early phase of a project, when the code base is easier to manage, but beyond that it became clear the industry was heading towards serverless. In 2009, Netflix became a pioneer of microservices architecture because of its growing pains: the video streaming company's traditional infrastructure eventually couldn't contain its skyrocketing demand. The company decided to migrate its IT operations from private data centres to the public cloud while also replacing its monolithic architecture with microservices.
While it wasn't widely understood then what exactly microservices meant, the approach was a welcome change because every service was deployed independently. Each service ran its own logic and database, and could be updated, tested, deployed and scaled of its own accord. This didn't immediately make systems less complex, but the complexity became more visible because the tasks were distinctly separated.
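The idea of services owning their logic and data independently can be sketched in a few lines. This is a minimal illustration with hypothetical names (`CatalogService`, `PlaybackService`), not Netflix's or Amazon's actual design; in production the services would run in separate deployments and communicate over the network rather than through a direct object reference.

```python
class CatalogService:
    """Owns its own private data store; deployable and scalable on its own."""

    def __init__(self):
        self._titles = {}  # this service's database: no other service reads it directly

    def add_title(self, title_id, name):
        self._titles[title_id] = name

    def get_title(self, title_id):
        return self._titles.get(title_id)


class PlaybackService:
    """A separate service: talks to the catalog only through its public API."""

    def __init__(self, catalog):
        self._catalog = catalog  # in production this would be an RPC or HTTP client

    def start_stream(self, title_id):
        name = self._catalog.get_title(title_id)
        if name is None:
            return "error: unknown title"
        return f"streaming {name}"


catalog = CatalogService()
catalog.add_title("t1", "Example Show")
player = PlaybackService(catalog)
print(player.start_stream("t1"))
```

Because each service hides its data behind an interface, either one can be redeployed or scaled without touching the other, which is the property the paragraph above describes.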
Microservices were also better suited to the startups that had mushroomed all over, since startups typically run smaller tech teams. Monoliths normally required less work than distributed systems but lacked the flexibility those teams needed. Yet despite these well-established arguments, the microservices-versus-monolith debate gathered steam again after Amazon Prime Video shifted its live video monitoring service from microservices to a monolith at the beginning of this month. Ironically, Amazon had been one of the first to jump aboard the microservices bandwagon.
Will Amazon’s decision pay off?
The Prime Video team posted a blog in March titled 'Scaling up the Prime Video audio/video monitoring service and reducing costs by 90%', explaining how the shift from a distributed microservices architecture to a monolithic style had helped them achieve scalability while also reducing costs by a wide margin, delivering essentially what serverless had promised to do.
“Moving our service to a monolith reduced our infrastructure cost by over 90%. It also increased our scaling capabilities. Today, we’re able to handle thousands of streams and we still have the capacity to scale the service even further. Moving the solution to Amazon EC2 and Amazon ECS also allowed us to use the Amazon EC2 compute saving plans that will help drive costs down even further. Some decisions we’ve taken are not obvious but they resulted in significant improvements,” the post said.
But does bucking the general trend place Amazon in danger of sacrificing flexibility for cost? Amazon execs have since been scrambling to explain the decision. Dr Werner Vogels, CTO at the retail giant, wrote on his 'All Things Distributed' blog, "Building evolvable software systems is a strategy, not a religion. And revisiting your architecture with an open mind is a must."
Vogels explained that the idea of a "one-size-fits-all" architecture is a false notion driven by trends. "There is not one architectural pattern to rule them all. How you choose to develop, deploy, and manage services will always be driven by the product you're designing, the skillset of the team building it, and the experience you want to deliver to customers (and of course things like cost, speed, and resiliency)," he said.
Adrian Cockcroft, a former AWS exec, also weighed in on the move. "In contrast to commentary along the lines that Amazon got it wrong, the team followed what I consider to be the best practice," Cockcroft said. "The result isn't a monolith, but there seems to be a popular trigger meme nowadays about microservices being oversold, and a return to monoliths," he stated.
Cockcroft believes there’s some truth to this. “I think this may have arisen from vendors who wanted to sell Kubernetes with a simple marketing message that enterprises needed to modernise by using Kubernetes to do cloud native microservices for everything,” he noted.
Marcin Kolny, a senior software development engineer at Prime Video, discussed how, ironically, skyrocketing AWS costs had hit the team too. He also went on to admit that the decisions made "may not work in all instances."
Last year, Amazon invested USD 7 billion across Amazon Originals, live sports and licensed third-party video content included with Prime, its earnings show. And like any tech company trying to tide over the current lows, Amazon revealed on its recent earnings calls that there was significant pressure on growth as clients tried to cut down cloud costs.
But more than anything else, maybe this just goes to show that the IT world is cyclical: something that has been set aside can suddenly turn into the trend to follow the next year.