Chances are, if you’re a developer or part of a DevOps team, you’ve had a polarizing conversation or two about containers versus serverless. In this post we recap a debate hosted by NetApp on the topic. Arguing for containers is Kevin McGrath, Chief Technology Officer, Spot by NetApp. On the side of serverless is Forrest Brazeal, Director of Content and Community at A Cloud Guru. We’ll cover the key arguments on both sides.
The case for containers – The perfect packaging construct
Before containers, every organization had its own way to package and deliver applications. Each team had its own languages, policies, and pipelines, customized to get applications to production and then scale them. With the advent of Docker containers, everybody could use a single construct to package and deploy applications. That construct defines how we package applications: we can put any code, along with whatever dependencies and libraries it needs, into a container, and others can then run that code in any environment they choose.
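To make the construct concrete, here is a minimal Dockerfile sketch (the `app.py` and `requirements.txt` names, and the Python base image, are illustrative, not from the debate):

```dockerfile
# Build on any base image -- the image itself is the contract
FROM python:3.12-slim

WORKDIR /app

# Dependencies travel with the code, not with the host
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .

# The same image runs unchanged in dev, staging, and production
CMD ["python", "app.py"]
```

The same `docker build` output can be pushed to a registry and pulled into any environment, which is the "universal packaging" point being made here.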
At the same time, we were able to cut down on code dependencies by decoupling code from the server layer. As a result, you can take these small objects (containers) and easily move them from dev to staging to production without worrying about underlying server dependencies like the operating system. Containers share the host kernel instead of bundling a full guest OS, which also makes them more lightweight than VMs, each of which needs its own guest operating system. This was a big paradigm shift: we finally had a universal way to deliver applications.
Containers became complex with orchestration
However, things were not all rosy with containers. We needed a way to run containers in production and at scale. This required new ways to handle container-to-container networking, stateful storage, and security. The new challenge became how to run containers despite all this complexity.
The solution was to use container orchestration tools like Mesos, Docker Swarm, or Kubernetes. It wasn’t long before the industry consolidated around a single orchestrator – Kubernetes. However, Kubernetes is operated differently from traditional VM management platforms like VMware. It is declarative, has a strong focus on automation, and relies heavily on open source tooling like Istio and Prometheus. All this meant that Kubernetes had a steep learning curve (read on to see how that changed).
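"Declarative" here means you describe the desired state and Kubernetes continuously converges the cluster toward it. As a sketch, a minimal Deployment manifest (all names and the image URL below are hypothetical) looks like this:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                 # illustrative name
spec:
  replicas: 3               # desired state: keep three pods running
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: example.com/web:1.0   # hypothetical image
          ports:
            - containerPort: 8080
```

You never tell Kubernetes *how* to start or replace a pod; if one dies, the controller notices the drift from `replicas: 3` and restores it. That automation is powerful, but learning to think this way is part of the steep curve mentioned above.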
Serverless is simpler, but sacrifices control
When serverless came into the mix, it seemed to address the complexity of running containers. With serverless, we simply hand over our code to somebody else and let them run it for us. While this removes some complexity, we lose control over the underlying layers. When we attempt to control things like dependencies, cold starts, custom libraries, and resource limits, we need vendor-specific custom tools and tactics that take the focus away from the applications themselves. In this sense, serverless starts to resemble containers: past a certain point, it becomes just as complex.
This led to a hybrid approach: running containers in a serverless fashion. The biggest lesson serverless offers containers is ‘how can we make containers as simple as possible to run?’ But we still cannot compromise on control. We still need access to the best hardware, networking, storage, and more, and that is closer to what containers offer than what serverless does.
Kubernetes brings the best of both worlds – simplicity and control
Fast forward to today: containers and container orchestration have become a lot easier. Kubernetes has become the de facto container orchestrator in the cloud and, thanks to its vibrant community, much simpler to manage. It delivers the ease of use that serverless promises, without compromising on control.
That said, for organizations choosing between containers and serverless, the questions to ask are:
- What utility are we getting out of the platform?
- What utility do our DevOps and operations teams need?
- Can our operations teams handle service failures?
When you start running millions of transactions per second, performance optimization, cost efficiency, and everything below the application level matter. With serverless, you still need to support everything you deploy even though you’re leaning on somebody else to run your code, and you need a backup strategy for when a service goes down. While it’s easy to get started, serverless cannot keep operations simple as your needs mature, especially in large organizations. Other disadvantages, such as vendor lock-in, a lack of open source tooling, and a bias toward modern applications over legacy ones, weigh the odds heavily in favor of containers.
Containers allow organizations to run both legacy and modern applications in a way that lets DevOps teams and engineers move fast together. They give teams the flexibility to choose any cloud or system to move to, and they let applications scale easily.
Of course, it helps to take lessons from serverless and apply them to container management where possible. But for big applications and large organizations, containers have genuinely changed the way applications are deployed to production and the way the cloud is used.
If you are looking for a hybrid of containers and serverless, AWS Fargate lets you run containers in a serverless environment, though you lose control over the underlying infrastructure and costs can be high. Another option (near and dear to us here at Spot by NetApp) is Ocean by Spot, which delivers a hands-free, serverless-like experience while offering deep control over the infrastructure layer, along with dramatic cost reductions and performance guarantees.
The case for serverless – Own less, build more
The first thing to note about serverless is that it isn’t as popular as containers. There are pockets of the industry where serverless is used with great success; Amazon, for example, says about half of its new development runs on Lambda. But serverless and FaaS as a code-shipping paradigm are not taking over the world the way some thought they might. AWS recently announced that Lambda will support container images as a packaging format, not just zip files, which is as good as conceding that containers have won the battle of packaging abstractions. That, however, isn’t the whole story.
The serverless mindset is larger than any packaging abstraction, and it is the biggest paradigm shift happening right now. It can be summed up as ‘own less, build more.’ There are many examples of this mindset in services like DynamoDB, Zapier, Twilio, Auth0, and Stripe. Each of these breaks off a little piece of the application workflow that organizations used to manage on their own and turns it into something they can simply consume, so they can focus on building what’s truly important rather than on managing and maintaining a stack.
The container mindset, on the other hand, is about delivering the same software that we did 10 years ago, just with a new skin of cloud best practices on top of it. For a lot of legacy use cases that’s great, but ultimately, containers are about repackaging the past, while serverless is about reimagining the future.
There are three counter-intuitive reasons why serverless is fantastic. Let’s look at each of them.
1. Serverless violates the second law of thermodynamics
In the traditional IT world, any line of code deployed to production begins to decay the moment it ships. It takes a constant stream of engineering and maintenance hours, poured into that code and the hardware it runs on, just to keep up with competitors. The serverless mindset turns that fundamental law of IT physics on its head: serverless services can actually get better underneath over time, not worse. Because much of the infrastructure is abstracted away and fully under the cloud vendor’s control, the vendor can make improvements informed by the aggregate use cases of thousands of customers and optimize performance for a wide range of workloads. It’s like having a great hive mind working on your behalf.
A great example of this is AWS DynamoDB. As a serverless database, DynamoDB originally ran under a billing model called provisioned capacity, where you estimate how much capacity you want underneath your database and pay for that capacity every month. Then AWS released a new way of paying for DynamoDB, called on-demand capacity, where you pay only for your actual usage. That can be expensive at high, sustained volume, but for a lightly used table, say in a development or staging environment, on-demand can save thousands of dollars a year. And all it takes is checking the on-demand capacity box: no engineering innovation required, no refactoring. The power of serverless makes it possible for DynamoDB tables to run faster and cost much less than before.
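A back-of-the-envelope sketch shows why the checkbox matters for quiet tables. The prices below are illustrative assumptions (roughly the published us-east-1 rates at one point in time; check current AWS pricing before relying on them), and the traffic numbers are invented for the example:

```python
# Rough comparison of DynamoDB billing modes for a low-traffic table.
# All prices and traffic figures here are illustrative assumptions.

HOURS_PER_MONTH = 730

def provisioned_monthly(rcu, wcu,
                        rcu_hour_price=0.00013, wcu_hour_price=0.00065):
    """Provisioned capacity bills for what you reserve, used or not."""
    return (rcu * rcu_hour_price + wcu * wcu_hour_price) * HOURS_PER_MONTH

def on_demand_monthly(reads, writes,
                      read_price_per_million=0.25,
                      write_price_per_million=1.25):
    """On-demand bills only for requests actually made."""
    return (reads / 1e6) * read_price_per_million + \
           (writes / 1e6) * write_price_per_million

# A staging table provisioned at 50 RCU / 50 WCU, but actually seeing
# only 2M reads and 0.5M writes per month:
reserved = provisioned_monthly(rcu=50, wcu=50)
actual = on_demand_monthly(reads=2_000_000, writes=500_000)

print(f"provisioned: ${reserved:.2f}/month")
print(f"on-demand:   ${actual:.2f}/month")
```

Under these assumed rates the idle provisioned capacity costs an order of magnitude more than paying per request, which is the kind of saving the on-demand checkbox unlocks across many dev and staging tables.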
2. Serverless is expensive (but that is good)
Serverless is expensive, and that is a good thing, because it’s just money, and money is cheaper than the extra time and brain power it takes to keep services up and running yourself. Developers tend to want to build rather than buy: to run the open source ELK stack themselves, for example, rather than use a managed logging service. But a managed logging service run by domain specialists who know everything about logging frees developers to focus on the core of their business.
Opportunities to efficiently trade dollars for engineering time are rare, and serverless makes that trade easier than it has ever been. Spend the money, free up the time, and build what provides more value long-term. This is another strong argument for serverless over containers.
3. Serverless locks you in
Serverless does involve vendor lock-in, or cloud lock-in. This is actually a good thing, and a huge selling point for serverless.
The truth is, everybody is locked in on something, whether that’s programming languages, architectural choices, business constraints, or regulatory constraints. Organizations are locked in on choices they made some 20 years ago, whether they like it or not. The question, then, is: what do you want to be locked in on?
AWS IAM is a great example of cloud lock-in: it undergirds everything done in the AWS cloud, much as Active Directory undergirded enterprise authentication 20 years ago. Active Directory made total sense in large enterprise contexts: everybody needed it, it was easy to find talent to support it, and it was well documented. Serverless is similar in that, despite the lock-in, you can take deep advantage of native integrations within the cloud platform. There are numerous triggers that plug into Lambda functions and other managed services, and the cloud provider has a vested interest in making it as easy as possible for all these services to talk to each other and play nice. That lets organizations move faster and build faster.
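A trigger, in practice, just means the platform delivers an event to your function. Here is a minimal sketch of a Lambda handler for an S3 upload trigger; the event fields follow the documented S3 notification shape, while the bucket and key names are hypothetical:

```python
# Minimal sketch of a Lambda handler wired to an S3 trigger.
# The nested event fields follow AWS's documented S3 notification
# format; the bucket/key values in the sample event are made up.

def handler(event, context):
    # Each record describes one object event (e.g. a file upload)
    processed = []
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        processed.append(f"{bucket}/{key}")
    return {"processed": processed}

# Locally, the same function can be exercised with a hand-built event:
sample_event = {
    "Records": [
        {"s3": {"bucket": {"name": "example-bucket"},
                "object": {"key": "uploads/report.csv"}}}
    ]
}
print(handler(sample_event, None))
```

The glue between the bucket and the function (permissions, retries, scaling) is all the provider’s problem, which is the native-integration advantage being described.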
Containers, on the contrary, don’t necessarily deliver the freedom from vendor lock-in that organizations think they do. In fact, lock-in often arrives through the back door: because containers are complex and difficult to manage, we end up putting our Kubernetes clusters back in the hands of cloud providers through managed services like EKS, AKS, and GKE. In each of those cases, an organization that does not want to deal with container orchestration lets a cloud vendor run containers for it, and that is not a portable, vendor-neutral solution. If and when you decide to move to another cloud, there will be a high cost to do so. It may look like portability up front, but it’s cloud lock-in on the back end.
The serverless mindset is about owning less and building more. Serverless runs your code on compute you don’t manage, freeing you to pursue your true business advantage: write a function, test it, deploy it, and let the cloud do what it does best.
Conclusion
As you can tell, there are perfectly sound arguments on both sides of this debate, and it is a close call whether containers or serverless is best for your organization. It would be apt to conclude with a comment from Cheryl Hung of CNCF, who was a panelist at the debate: ‘Think about day two from the beginning. Because it’s actually pretty easy to move on to either of these technologies today, but the day two stuff is wildly different, and that’s what you need to make a decision on.’
Make sure to catch the entire debate for the whole scoop, and to watch a lively discussion between the panelists.