Serverless CaaS: Rethinking App Infrastructure

As automation takes center stage in managing underlying cloud resources and supporting applications, organizations are completely rethinking how they leverage infrastructure. Extreme emphasis is being placed on removing the friction between developers and application deployment by abstracting and automating infrastructure requirements.

The success of AWS Lambda shows that development teams are more interested in delivering business value to end users than in worrying about the complexities of managing the underlying infrastructure. The level of trust that once stopped with OS management has now moved up the stack to include application deployment, event management, and scaling. With the right automation, this idea can be extended to support different types of applications, including containers. In this blog post, we will discuss how Serverless is reshaping application deployment.

The emergence of FaaS and CaaS

Functions as a Service (FaaS) platforms like AWS Lambda, Google Cloud Functions, and Azure Functions have changed developer expectations of modern application platforms. Developers don’t want to provision the underlying virtual machines or servers; they want to deploy their applications without the operational overhead. Cloud providers use automation to give developers easy hooks for deploying applications without having to manually provision and manage virtual machines. While FaaS offerings cater to running functions based on event triggers, there is further innovation in the market where automation is used extensively to support other application architectures using containers. AWS Fargate, Azure Container Instances, and Google Cloud Run are examples of services where developers can deploy their applications without having to worry about the underlying virtual machines. Hence, the idea of Serverless Containers as a Service (CaaS) is fast becoming a reality.
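
To make the FaaS model concrete, here is a minimal AWS Lambda handler in Python. The function name, event fields, and response shape are illustrative assumptions; Lambda only requires a callable that accepts an event and a context.

```python
import json

def handler(event, context):
    # 'event' carries the trigger payload (for example, an API Gateway request
    # or an S3 notification); 'context' exposes runtime metadata.
    name = event.get("name", "world")

    # Return a response in the shape API Gateway proxy integrations expect (illustrative).
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

There are no servers to size or patch; the platform provisions capacity per invocation and scales it with the event volume.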

The advantages of Serverless CaaS platforms are:

  • Improved productivity – No operational overhead associated with managing virtual machines. This not only reduces operational costs but also increases the velocity of application delivery by removing friction points through automation
  • Simple to scale – Since managing the underlying virtual machines is the cloud provider’s responsibility, scaling is far more seamless
  • Greater efficiency – With Serverless CaaS, users pay based on the container size rather than the virtual machine size, which is more cost-efficient than paying for virtual machine instances

The resource efficiencies offered by Serverless CaaS make it more attractive for developers, and enterprises are increasingly adopting Serverless CaaS to cut down operational costs and streamline the DevOps pipeline. Along with Functions as a Service, most cloud providers are also offering Serverless CaaS to meet the needs of containerized applications.

Examples of Serverless CaaS offerings

AWS Fargate

AWS Fargate allows you to run containers without managing the underlying virtual machines. In other words, it is the compute engine for running containers with AWS Elastic Container Service (ECS). This allows users to deploy their containers and pay only for the container capacity rather than the underlying virtual machine capacity. However, at this time, Fargate can only be used with ECS. With Kubernetes emerging as the standard for container orchestration, users are forced to choose between the advantages of Serverless infrastructure and Kubernetes for container orchestration. Right now, AWS Fargate and AWS Lambda are two separate services without any integration between the platforms.
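
As a rough sketch of the Fargate workflow, the boto3 snippet below registers a small task definition and launches it on an existing ECS cluster. The cluster name, subnet, IAM role ARN, and container image are placeholders, not values from this post.

```python
import boto3

ecs = boto3.client("ecs")

# Register a task definition sized by container, not by VM: 0.25 vCPU / 512 MiB.
task_def = ecs.register_task_definition(
    family="hello-fargate",
    requiresCompatibilities=["FARGATE"],
    networkMode="awsvpc",  # required for Fargate tasks
    cpu="256",
    memory="512",
    executionRoleArn="arn:aws:iam::123456789012:role/ecsTaskExecutionRole",  # placeholder
    containerDefinitions=[{
        "name": "web",
        "image": "public.ecr.aws/nginx/nginx:latest",  # placeholder image
        "portMappings": [{"containerPort": 80}],
        "essential": True,
    }],
)

# Run the task on Fargate; there are no EC2 instances to provision or patch.
ecs.run_task(
    cluster="demo-cluster",  # placeholder cluster name
    launchType="FARGATE",
    taskDefinition=task_def["taskDefinition"]["taskDefinitionArn"],
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-0123456789abcdef0"],  # placeholder subnet
            "assignPublicIp": "ENABLED",
        }
    },
)
```

Note that the task is sized in fractions of a vCPU and MiB of memory; billing follows the task size rather than any EC2 instance type.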

Azure Container Instances

Azure Container Instances (ACI), offered by Microsoft, provides a similar environment for containers on top of Azure. Unlike AWS Fargate, Microsoft supports Kubernetes through Azure Kubernetes Service (AKS). AKS users can use Azure Container Instances as the infrastructure fabric for running Kubernetes clusters. By embracing Kubernetes, Microsoft is giving customers a standards-based container orchestration tool. By integrating with Azure Logic Apps and Azure Functions, Microsoft is offering a continuum of services from containers to Functions as a Service.
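
A minimal sketch of launching an ACI container group with the azure-mgmt-containerinstance Python SDK follows. The subscription, resource group, region, and image are placeholder assumptions, and newer versions of the SDK expose the long-running call as begin_create_or_update.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.containerinstance import ContainerInstanceManagementClient
from azure.mgmt.containerinstance.models import (
    Container, ContainerGroup, ResourceRequests, ResourceRequirements,
)

# Placeholder subscription; DefaultAzureCredential picks up local Azure credentials.
client = ContainerInstanceManagementClient(DefaultAzureCredential(), "<subscription-id>")

container = Container(
    name="hello",
    image="mcr.microsoft.com/azuredocs/aci-helloworld",  # sample image, illustrative
    resources=ResourceRequirements(
        requests=ResourceRequests(cpu=1.0, memory_in_gb=1.0)  # billed per container size
    ),
)

group = ContainerGroup(
    location="eastus",  # placeholder region
    os_type="Linux",
    containers=[container],
)

# Create the container group in a placeholder resource group and wait for completion.
client.container_groups.begin_create_or_update("demo-rg", "hello-group", group).result()
```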

Google Cloud Run

Google Cloud Run is different from AWS Fargate and Azure Container Instances. While people compare Google App Engine Flexible Environment to AWS Fargate and Azure Container Instances, Google Cloud Run is much closer to the Serverless environments that Fargate and ACI represent. Cloud Run is a managed compute environment where stateless containers can be deployed and invoked using HTTP requests. In the spectrum from CaaS to FaaS, Cloud Run sits somewhere in the middle, letting customers deploy any language, any library, or any binary without having to worry about managing the underlying infrastructure. It offers more flexibility than FaaS but is limited to certain stateless workloads compared to CaaS.
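
The only real contract Cloud Run imposes is an HTTP server listening on the port passed in the PORT environment variable. The Flask app below is an illustrative minimal sketch of such a stateless container; any language or framework that serves HTTP would do.

```python
import os

from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello():
    # Stateless handler: no local state survives between requests or instances.
    return "Hello from a serverless container\n"

if __name__ == "__main__":
    # Cloud Run injects PORT (8080 by default) into the container environment.
    app.run(host="0.0.0.0", port=int(os.environ.get("PORT", 8080)))
```

Packaged into a container image, this can be deployed with `gcloud run deploy`, and Cloud Run scales instances up and down, including to zero, with request volume.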

Spot by NetApp Ocean

Spot Ocean is similar to AWS Fargate and Azure Container Instances, providing infrastructure to run Kubernetes or Amazon ECS clusters. It provides an abstraction for running these clusters without having to manage the underlying virtual machines. However, Spot Ocean provides a lot more pricing flexibility by allowing users to take advantage of Spot Instances and Reserved Instances while keeping on-demand instances as a backup. Since AWS Fargate and Azure Container Instances pricing is based on on-demand rates, Spot Ocean allows cost savings of up to 80% compared to those options. Spot Ocean abstracts the complexity associated with provisioning, auto-scaling, and management of Kubernetes worker nodes.

Read more about EC2 vs Fargate

Considerations for using CaaS

In this section, we will briefly highlight some considerations for picking CaaS.

  • The biggest advantage of CaaS lies in removing the operational overhead of managing underlying virtual machines. If you have a small operations team or if your organization has a DevOps culture where developers deploy their applications to production, CaaS is the right fit. This completely eliminates the operational costs of managing the VMs and accelerates application delivery
  • If your application can be containerized and doesn’t depend on the underlying networking of the virtual machines, CaaS is the right option. While CaaS fits the needs of all stateless applications, care should be taken to ensure that the container orchestration plane can support storage volumes in order to deploy stateful applications
  • While most event-driven functions are a fit for FaaS, CaaS provides a more versatile platform for running multiple workloads. FaaS is suitable for one-off tasks or workloads with long idle periods, while CaaS is more suitable for workloads running continuously. In other words, FaaS accepts limitations in exchange for a more fine-grained pricing model than CaaS and, hence, is restricted in the workloads it can support
  • If the cold start latency of FaaS is unacceptable for your workload, CaaS provides an optimal alternative. Using CaaS makes the cold start problem go away without incurring the higher costs of running and managing virtual machines; you only pay for the smallest container size needed to run the workload.
  • If your VMs are running with excess capacity despite being densely packed with containers, you are using resources inefficiently. You can gain better resource efficiency and cost savings by moving to CaaS
  • Containers, with their faster boot times, are perfect candidates to take advantage of different compute options like spot instances and reserved instances. With the right abstraction, using spot and reserved instances can offer significant cost savings while keeping on-demand instances as a fallback. CaaS running on top of such a heterogeneous compute environment is even more cost-effective (see the back-of-envelope sketch after this list)
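
To illustrate why per-container pricing and discounted compute add up, here is a back-of-envelope comparison. Every rate below is an invented, illustrative assumption, not a quote from any provider's price list.

```python
HOURS_PER_MONTH = 730

# Assumed on-demand rate for a 2 vCPU / 4 GiB VM that the workload only partly uses.
vm_hourly = 0.08

# Assumed serverless-container rates, billed on the size of the container itself.
caas_vcpu_hourly = 0.04
caas_gib_hourly = 0.005
container_vcpu, container_gib = 0.5, 1.0   # what the workload actually needs

# Assumed blended discount when the nodes run on spot/reserved capacity.
spot_discount = 0.70

vm_cost = vm_hourly * HOURS_PER_MONTH
caas_cost = (container_vcpu * caas_vcpu_hourly
             + container_gib * caas_gib_hourly) * HOURS_PER_MONTH
spot_vm_cost = vm_cost * (1 - spot_discount)

print(f"On-demand VM:                      ${vm_cost:7.2f}/month")
print(f"Container billed at its own size:  ${caas_cost:7.2f}/month")
print(f"Same VM on spot/reserved capacity: ${spot_vm_cost:7.2f}/month")
```

The numbers are made up, but the structure of the saving is the point: you pay only for what the container needs, and a node-level abstraction such as Ocean can layer spot and reserved discounts underneath.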

Conclusion

Modern application architectures require a more elastic infrastructure with a continuum of services from Containers as a Service to Functions as a Service. CaaS makes it easy to use infrastructure resources more efficiently while also giving you the flexibility to run many different workloads. While every cloud provider has a CaaS offering to help its customers deploy modern applications, Spot Ocean lets users deploy containerized applications on any cloud provider by tapping into spot instances and reserved instances. Spot Ocean provides both the resource efficiencies of CaaS and additional cost efficiencies through analytics and automation, and many market-leading organizations use it with great satisfaction.

So, why not give it a try yourself?