Which Serverless Platform Should You Use?

Important Functional and Monitoring Considerations when Selecting a Serverless Platform

We previously discussed core serverless concepts in our Intro to Serverless Computing. Here, we’ll look at the important considerations when choosing among the myriad serverless platforms available today, split between proprietary cloud providers and self-hosted open source solutions.

Open Source – OpenFaaS, Kubeless, Fn, OpenWhisk and numerous others; it’s a hot topic at the moment. Most of the open source offerings will run on Kubernetes, so they could run on Kubernetes as a Service (KaaS) in the cloud or on your internal Kubernetes cluster if you need to keep things in-house. Here’s a fun question: is running a serverless platform on your own servers an oxymoron?

All of these open source projects are still in their early days. None have released a version 1.0 yet, and there is currently no clear indication as to which one will be the most popular.

Runtime support for these open source platforms is broad, with a wide range of popular languages included along with the ability to build custom runtimes. Each function is typically deployed as a Docker container, and as long as that container meets the platform’s interface requirements, it will run. Serverless COBOL functions, anyone?
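To give a feel for how small that interface contract can be: OpenFaaS’s classic watchdog, for example, passes the request body to the function process on stdin and treats whatever the process writes to stdout as the response. The handler below is a minimal sketch of that style of contract; the function logic is purely illustrative.

```python
import io
import sys

def handle(req: str) -> str:
    # Illustrative business logic: echo the request back, uppercased
    return req.upper()

def main(stdin, stdout):
    # Watchdog-style contract: the request body arrives on stdin,
    # and the response is whatever the process writes to stdout
    stdout.write(handle(stdin.read()))

if __name__ == "__main__":
    # Simulate one invocation locally
    main(io.StringIO("hello serverless"), sys.stdout)
```

Because the contract is just stdin/stdout (or, for other platforms, a simple HTTP interface inside the container), any language that can read and write streams can be packaged as a function.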

With all these serverless platforms, observability is vital, as they add another layer of complexity on top of Kubernetes, an already complex platform. The smooth operation of both the serverless platform and Kubernetes is imperative to the smooth operation of the hosted functions. Some of these projects have already thought about observability and provide a Prometheus metrics endpoint. Fn has also included OpenTracing implementations for Zipkin and Jaeger.
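To give a flavour of what such a metrics endpoint serves, here is a minimal sketch of the Prometheus text exposition format. The metric names are made up for illustration; each platform exposes its own set.

```python
def render_metrics(invocations: int, errors: int) -> str:
    # Prometheus text exposition format: a HELP and TYPE line per
    # metric, followed by one sample line per series
    lines = [
        "# HELP function_invocations_total Total function invocations.",
        "# TYPE function_invocations_total counter",
        f"function_invocations_total {invocations}",
        "# HELP function_errors_total Total failed invocations.",
        "# TYPE function_errors_total counter",
        f"function_errors_total {errors}",
    ]
    return "\n".join(lines) + "\n"

# A platform would serve this payload over HTTP (conventionally at
# /metrics) for a Prometheus server to scrape on a schedule
print(render_metrics(42, 1))
```

Anything that scrapes Prometheus can then alert on function error rates or invocation counts without the platform doing anything further.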

Cloud Providers – The usual suspects are present: Amazon Web Services Lambda, Google Cloud Functions and Microsoft Azure Functions, and IBM has recently entered the space with a hosted version of OpenWhisk. Lambda from Amazon Web Services (AWS) has been around the longest and is the most mature offering; it already runs significant parts of Amazon’s Alexa service.

[Figure: serverless runtime support]

All these hosted offerings provide the same basic functionality: functions hosted in the cloud that cost nothing when idle and are billed for the compute time they consume while executing. All platforms provide a web user interface and CLI tools to manage the functions. Triggering of the functions can be plumbed into the cloud platform’s other services; AWS has the richest set available.
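The billing model is essentially a small per-request fee plus a charge for memory-seconds consumed. A back-of-the-envelope estimator might look like the following; the default rates here are illustrative only, not any provider’s actual price list, and billing granularity also varies by platform.

```python
def estimate_cost(invocations: int, avg_ms: float, memory_mb: int,
                  per_million_requests: float = 0.20,
                  per_gb_second: float = 0.0000166667) -> float:
    # GB-seconds = memory allocated (GB) x execution time (s), summed
    # over all invocations; rates above are illustrative placeholders
    gb_seconds = invocations * (avg_ms / 1000.0) * (memory_mb / 1024.0)
    request_cost = (invocations / 1_000_000) * per_million_requests
    return request_cost + gb_seconds * per_gb_second

# One million 200 ms invocations at 512 MB
print(round(estimate_cost(1_000_000, 200, 512), 2))  # → 1.87
```

The key property is that the cost scales to zero with usage, which is the economic argument for serverless over always-on servers.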

All the platforms provide basic monitoring and log aggregation facilities. AWS Lambda is the leader in observability with X-Ray, which provides end-to-end tracing across various AWS services. Google’s Stackdriver tracing is currently only available as a preview release and does not yet support automatic tracing for serverless functions. Microsoft Azure and IBM OpenWhisk do not offer any tracing capability.

Operating Heterogeneous Services

With such a wide choice of serverless platforms, the question is which one is best suited to your needs? The good news is that you don’t have to commit to just one. The Serverless Framework project provides both common tooling for managing functions and an Event Gateway for mapping events to functions.

Management Tooling

Using one definition file and one command-line tool, it is possible to deploy serverless functions to many providers in any language runtime supported by those providers. This level of automation makes moving functions from one provider to another less painful. However, functions are not truly portable, as there is currently no standard for function entry points, return values, or the libraries available at run time.
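To see why portability breaks down, compare the Python entry-point conventions of two providers for the same trivial function. This is a sketch of the two calling conventions; the Google Cloud Functions request object is stubbed rather than a real Flask request.

```python
# AWS Lambda's Python convention: an event dict plus a context object,
# with the HTTP response returned as a dict
def aws_handler(event, context):
    name = event.get("name", "world")
    return {"statusCode": 200, "body": f"Hello {name}"}

# Google Cloud Functions' Python convention: a single Flask-style
# request object, with the response body returned directly
def gcf_handler(request):
    name = request.args.get("name", "world")
    return f"Hello {name}"
```

The logic is identical, but the entry point, input shape and return type all differ, so moving a function between providers means rewriting its outer layer even when the tooling handles the deployment.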

Event Gateway

While each cloud provider has its own API gateway, they do not typically provide much convenience for multi-provider solutions, nor ease of portability. The Serverless Event Gateway provides a vendor-agnostic solution, offered both as a service and as a Docker image you can run wherever you want. Because this gateway is not tied to any vendor, it can receive events from any provider or external source and route them to any other provider or external destination.

Utilising a third-party gateway enables swapping out serverless endpoints with minimal configuration.

[Figure: Serverless Gateway flow]

For example, the client calls the Event Gateway via HTTP, and the event is initially routed to AWS Lambda and processed. With a simple change of configuration, the same client call could be routed to Google Cloud Functions instead; the client would not need to be reconfigured.
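The flow can be sketched as a routing table mapping an event type to a backend. The names below are hypothetical stand-ins, not the Event Gateway’s actual configuration syntax, and the backends are plain functions where the real gateway would make HTTP calls to provider endpoints.

```python
# Stand-ins for two providers; in reality these would be HTTP calls
# to a Lambda endpoint and a Cloud Functions endpoint respectively
backends = {
    "aws-lambda:hello": lambda payload: f"lambda processed {payload}",
    "gcf:hello": lambda payload: f"cloud functions processed {payload}",
}

# The gateway's routing table: event type -> backend function
routes = {"http.request": "aws-lambda:hello"}

def dispatch(event_type: str, payload: str) -> str:
    # Look up the configured backend for this event type and invoke it
    return backends[routes[event_type]](payload)

print(dispatch("http.request", "order-123"))   # handled by Lambda

# Swapping providers is a pure configuration change; the caller of
# dispatch() is untouched
routes["http.request"] = "gcf:hello"
print(dispatch("http.request", "order-123"))   # now handled by GCF
```

Keeping the routing table outside the client is exactly what makes the endpoint swap invisible to callers.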

The Future for Serverless

It is still a wild frontier out there, with many offerings and no real standards. Increasing the fragmentation of applications into discrete functions does offer advantages for CI/CD and compute resource efficiency, but at the cost of greater complexity and the risk of being tied to a platform.

With the open source offerings still very early in their development, reliability is not yet up to production standards. For example, I tried to deploy several of the projects to Google Kubernetes Engine using their supplied Helm charts, and only one deployed successfully.

Serverless Monitoring

The ability to observe the performance of both the serverless framework and the functions it is running is essential for production environments. The leader among the commercial offerings is Amazon, with CloudWatch and X-Ray. Among the open source projects, the leader is Fn, as it already includes both Prometheus metrics and Jaeger/Zipkin tracing.

Deploying an open source serverless platform to Kubernetes creates a number of Deployment, Pod and Container components.

[Figure: serverless deployment components]

The above example shows OpenFaaS with one function hosted. Most of the open source platforms currently use a separate Docker image for each function, resulting in a separate Deployment on Kubernetes.

[Figure: serverless function container]

With Instana’s support for Kubernetes cluster monitoring, all these Deployments are automatically detected and monitored. As standards for tracing through these platforms evolve, Instana will adopt them to provide fully automatic distributed tracing.

Serverless is still very much in its infancy and Instana is watching its first faltering steps with interest.
