On the rise of Java on AWS Lambda, and the automation of its observability


Instana has previously announced automated tracing of AWS Lambda applications in Node.js and Python. We are extending our best-in-class distributed tracing to include AWS Lambda functions written in Java and Go.

The rise of Java on AWS Lambda

In terms of the number of functions running on AWS Lambda, Node.js and Python have a large lead over the other runtimes. Among Instana customers, however (and we have every reason to believe this holds true across the entire AWS Lambda user base), Java has steadily increased its adoption, especially in terms of the collective duration of function runs. When talking with Instana customers about what is driving this trend, a few patterns emerged:

  • Familiarity of Java as a runtime
  • Cold starts are often not as bad as feared
  • Ease of porting code, especially for batch applications

Let’s look at these aspects in a little more detail.

Java is familiar to many developers

There is a large, dependable foundation of Java know-how in many enterprises and teams, large and small alike. While Node.js and Python are more traditionally serverless-y, developers are orders of magnitude more productive with tools they know than with arguably better, but unfamiliar, ones.

Cold starts are not as bad as feared

A cold start in AWS Lambda is the additional latency incurred by requests that are served by not-yet-initialized runtimes. As the workload of your Lambda functions ebbs and flows, more runtimes are spun up when needed, and idle ones are torn down. Depending on your workload and how long it takes to initialize your application, the impact of cold starts may be a deal breaker in latency-sensitive applications.

However, while many organizations initially harbor concerns about start-up times for Java Lambda functions and their impact on cold starts (that is, delays in serving some requests while the runtime allotted to them is initialized on demand), these concerns are broadly disproven in practice. Of course, the overhead of initializing your runtime varies widely with what the Java Virtual Machine is doing, especially in terms of loading libraries, but it is easy to write efficient Java Lambda applications on top of Spring Boot (with additional facilities optionally provided by Spring Cloud Function) or Micronaut.
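Part of why Java cold starts are often manageable is that initialization cost is paid once per runtime, not per request. A minimal sketch of this pattern, using a hypothetical `Handler` interface standing in for AWS’s `RequestHandler` (which also takes a `Context` argument, omitted here to keep the sketch self-contained):

```java
import java.util.HashMap;
import java.util.Map;

// Stand-in for com.amazonaws.services.lambda.runtime.RequestHandler,
// so the sketch compiles without the AWS Lambda Java libraries.
interface Handler<I, O> {
    O handleRequest(I input);
}

public class GreetingHandler implements Handler<String, String> {

    // Work done in the constructor (loading configuration, warming caches,
    // initializing clients) is paid once per cold start, not per request.
    private final Map<String, String> greetings = new HashMap<>();

    public GreetingHandler() {
        greetings.put("en", "Hello");
        greetings.put("de", "Hallo");
    }

    // Per-invocation work stays cheap: a single map lookup.
    @Override
    public String handleRequest(String language) {
        return greetings.getOrDefault(language, "Hello") + ", Lambda!";
    }
}
```

Keeping heavy setup in the constructor (or in static initializers) means subsequent warm invocations of the same runtime skip it entirely.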

Ease of porting code

Another interesting trend we witnessed concerns the nature of the software being run on AWS Lambda’s Java runtime. While new microservices with the benefit of a clean-room implementation seem to gravitate towards Node.js or Python (when there is no significant disparity in the development team’s familiarity with the runtimes; see the previous point), porting existing applications to Lambda tends to be far, far simpler than a rewrite, especially when the framework, e.g. Spring Boot, remains largely unchanged.

Interestingly, batch applications that infrequently crunch through datasets large and small are a frequent target to be ported to AWS Lambda due to its “pay-only-when-it-runs” pricing model, which is extremely advantageous over, for example, keeping an EC2 instance idle most of the time. Granted, the same “pay-only-when-it-runs” model could largely be achieved using Fargate (and at Instana we get to monitor a lot of Java applications running on Fargate), but Fargate still requires a significant amount of extra setup in terms of service autoscaling, while the “scale to zero” mechanic is built right into Lambda.

Technical challenges of monitoring Java applications on Lambda

Observability of Java applications is not a trivial matter, especially on a platform like Lambda, where there are:

  • Many functions running entirely distributed and cooperating to serve requests
  • No access to the runtime for debugging
  • Bugs that are hard to reproduce and troubleshoot locally

As argued in The Right Way of Tracing AWS Lambda Functions, the distributed complexity of Lambda functions and the lack of debugging tools make effective, no-sampling distributed tracing paramount.

Another important aspect is the ease of delivery of the instrumentation: the more functions you have, the more expensive it is to retrofit them for distributed tracing and other monitoring capabilities. At Instana, we have years of evidence supporting the incredible importance of striving for maximum ease of adoption of observability, and it shows, for example, in what our customers say about us.

Achieving Full Observability of Java on AWS Lambda

To achieve full observability and performance monitoring of Java applications running on Lambda, a different approach is required than in a more traditional application environment. By design, Lambda’s infrastructure is run entirely by AWS, leaving developers limited control when it comes to debugging performance issues. This makes observability even more critical to ensuring the best possible end-user experience while, at the same time, drastically restricting the options for troubleshooting.

When dealing with serverless infrastructures like Lambda, observability solutions must trace every single call end-to-end, as each call can be uniquely important. And the only proven way to ensure that every call is traced, no matter the frequency of deployment, is to:

  1. automate the delivery of the instrumentation, making it just a matter of configuration
  2. ensure that tracing happens across all systems, not just Lambda
  3. deliver instrumentation that has extremely low overhead (because you pay by how long your functions run, in increments of 100ms) and is native to Lambda functions
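The billing increment is why the overhead point matters so much: every millisecond of instrumentation overhead is paid on every invocation, and can tip a function into the next billing bucket. A minimal sketch of the rounding, assuming the 100ms increments mentioned above (`billedMs` is an illustrative helper, not an AWS API):

```java
public class BilledDuration {

    // Lambda bills the actual runtime rounded up to the next 100 ms
    // increment, so even a few milliseconds of tracing overhead can push
    // an invocation into the next billing bucket.
    static long billedMs(long actualMs) {
        return ((actualMs + 99) / 100) * 100;
    }
}
```

For example, a function that actually runs for 101 ms is billed as 200 ms, which is why low-overhead, Lambda-native instrumentation is non-negotiable.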

Full observability of Lambda is just a configuration away

Delivering instrumentation to a Java Lambda function is disarmingly simple with Instana: we make the instrumentation available within a Lambda layer, which we will keep updating every time our Java instrumentation is updated. All it takes to activate the instrumentation is setting the JAVA_TOOL_OPTIONS environment variable, which tells the Java Virtual Machine to activate the Instana instrumentation during the initialization phase. As the Java Virtual Machine loads the default Java libraries, your application code, and its dependencies, the Instana instrumentation is automatically applied, just once per Lambda runtime.
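In practice, the setup boils down to attaching the layer and setting the environment variable on the function. A sketch using the AWS CLI; the layer ARN and the JAVA_TOOL_OPTIONS value below are placeholders, and the real values come from the Instana documentation for your region and version:

```shell
# Attach the (placeholder) Instana layer and point the JVM at the agent.
# Replace the layer ARN and the agent path with the values from Instana's docs.
aws lambda update-function-configuration \
  --function-name my-java-function \
  --layers "arn:aws:lambda:us-east-1:123456789012:layer:instana-java:1" \
  --environment "Variables={JAVA_TOOL_OPTIONS=-javaagent:/opt/instana/agent.jar}"
```

Because JAVA_TOOL_OPTIONS is honored by the JVM itself, no code change or redeploy of the function artifact is needed.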

As an added benefit of using Instana’s native Java instrumentation, our Lambda monitoring does not require X-Ray at all, which means you get to eliminate that unpredictable operational cost completely. Instana’s Lambda monitoring price is based on how many functions you have actively serving requests at any one time, no matter how many requests they serve.

Understand Every Dependency Across all Platforms with Trace Continuity

When the instrumentation is applied, it automatically traces every distributed request flowing through Lambda in a way that is entirely compatible with other systems traced by Instana, easily visualizing every trace and every function. This enables developers to immediately understand what went wrong, where it went wrong, and how to fix it. With Instana’s Application and Service dependency map you can quickly see what the actual architecture is at any given point in time, even as services and functions scale up and down.

Instana’s always accurate Service Dependency Map

With Instana, all of your services, applications, and databases are automatically discovered, and each request is traced no matter where or how it runs. Developers have access to every request into Lambda functions and out into services, whether those services are running on-prem, in another cloud provider, or in any other SaaS offering. Instana captures all the distributed traces and contextual information, ensuring you always have the exact trace you need, when you need it.

Instana’s trace view, which includes all Lambda and non-Lambda workloads, is coming to Java

Start optimizing your Lambda applications today

Instana’s best-in-class Java tracing is available today for your AWS Lambda functions. It is blazing fast, entirely interoperable with anything else you monitor with Instana, and incredibly easy to configure in your environments.
Do yourself and your functions a favor and monitor your applications on AWS Lambda today. And if you do not have Instana just yet, a free trial, no strings attached, is just one click away.

Play with Instana’s APM Observability Sandbox

