Automated Observability for Cloud Run Applications

Cloud Run is a Container-as-a-Service compute platform that runs stateless containers, scaling them up and down with traffic and charging you only for the resources you use. Cloud Run also has an interesting Function-as-a-Service flavor: when no requests are being served, no containers run (unless you want them to). In other words, Cloud Run scales to zero.

Cloud Run is built on Knative. According to Google, this means you can “choose to run your containers fully managed with Cloud Run, in your Google Kubernetes Engine cluster, or in workloads on-premises with Cloud Run for Anthos.”


As with all serverless platforms, it is easy to scale application resources up and down, but extremely challenging to observe everything that is happening, especially when things break.

Cloud Run is easy for you, but what about your Observability provider?

It is really easy and satisfying to deploy your containerized applications to Cloud Run. Observing and monitoring them, however, is a different matter entirely, even though full visibility into your Cloud Run applications is vital. Building observability and monitoring for workloads is challenging when you do not manage or control the orchestration.

The lack of control over the orchestration, however, is just one of the issues affecting legacy Application Performance Monitoring (APM) tools that try to handle serverless technologies. They are ill-equipped to monitor and visualize Cloud Run-based applications because they far too often sample traces and require manual instrumentation. With Cloud Run’s ability to autoscale containers, an APM solution that relies on sampled or partial traces simply does not suffice. In essence, you are left with building blocks to assemble your own monitoring, when in reality it does not need to be difficult.

To properly monitor Cloud Run, or any other serverless technology, a complete end-to-end view is critical for optimizing performance and controlling cloud spend. To this end, Instana has brought our industry-leading distributed tracing technology to Cloud Run, making it easy to gain full visibility into your applications with no code changes.

Automated Observability & Monitoring for Cloud Run Applications

Instana supports monitoring of all fully managed Cloud Run workloads, and tracing of applications running on Cloud Run that are written in:

  • Go
  • Java
  • .NET Core
  • Node.js

Support for Python applications will be added in a matter of weeks.

Instana’s application performance monitoring for Cloud Run includes the following capabilities:

  • Monitor all your Cloud Run services and revisions running on Google’s managed infrastructure
  • Trace every single request going through and leaving your Cloud Run container
    • Out-of-the-box support for HTTP and gRPC triggers
    • Out-of-the-box support for Google Cloud Storage and Google PubSub, more coming soon
  • First-class support for Cloud Run’s serving mode, natively supporting every HTTP framework that Instana supports everywhere else with our best-in-class tracing technology, including Java’s Spring Boot, Vert.x, and Micronaut, Go’s net/http, Node.js’s Express.js, and .NET Core’s ASP.NET Core

We are currently focusing on fully managed Cloud Run. Containers running on Cloud Run on GKE can already be traced by Instana the same way we trace other Kubernetes-based workloads.

Immediate Observability and Monitoring

With Instana’s immediate observability and monitoring of Cloud Run applications, you always know that your applications are being fully monitored, even as containers are automatically scaled up or down. Instana immediately discovers new Cloud Run service revisions while providing users with the information they need to understand the performance of each new version. With minimal changes to how you build your container images, you’ll immediately understand the impact, good or bad, of every single deployment.

Instana also provides built-in infrastructure monitoring for Cloud Run, so users not only get automatic distributed tracing but also receive a complete overview of the containers powering the Cloud Run service.


^Cloud Run service revisions show up in the Infrastructure map, and the single instances show up as separate containers.


^Built-in dashboards for Cloud Run service revisions, showing you how your multiple containers work across the board.

Automated Distributed Tracing of Cloud Run Applications

Instana collects a distributed trace for every request. Instana’s distributed tracing is easily incorporated into the Docker image at build time, ensuring you never have unmonitored applications.
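
As a sketch of what that build-time step looks like, the addition amounts to an extra layer or two in your Dockerfile. The commented lines below are placeholders, not Instana’s actual image names or paths; use the exact lines from Instana’s in-product instructions for your runtime.

```dockerfile
# Minimal Node.js service image; the two commented Instana lines are
# placeholders - copy the exact equivalents from Instana's in-product
# instructions for your runtime.
FROM node:14-slim
WORKDIR /app
COPY package*.json ./
RUN npm install --production
COPY . .

# Placeholder: copy the Instana collector into the image at build time
# COPY --from=<instana-collector-image> /instana /instana
# Placeholder: preload the collector when the Node.js process starts
# ENV NODE_OPTIONS="--require <path-to-instana-collector>"

CMD ["node", "server.js"]
```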


^Cloud Run services have out-of-the-box dashboards in Instana that show their behavior in terms of serving requests. All of them. Because Instana collects every single trace.

Traces are correlated across Cloud Run, GKE, Google App Engine, Google Compute Engine, and apps running on other cloud providers or in your own data centers, as long as they are monitored by Instana. This ensures you always have an end-to-end view of every application request. Additionally, every trace is made viewable and searchable with Instana’s Unbounded Analytics.

Visualizing Every Cloud Run Application Dependency

Instana automatically visualizes every dependency in a service map that serves as a blueprint of your architecture. This dependency map details how your system is structured and highlights all service and application dependencies, making it easy to understand how all of your application components interrelate. These fully automated dependency maps are not limited to Google services, but map every dependency throughout the entire system regardless of where those dependencies originate.


^A service dependency map mixing Cloud Run, Google Cloud Pub/Sub, and an application running on Google Compute Engine.

Getting Started Observing and Monitoring Cloud Run Applications with Instana

Tracing Cloud Run applications is as easy as adding the Instana collector to your Docker image and adding two environment variables to your service revision. You will find all the steps documented in-product, one copy-paste away from deployment.
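
Assuming your instrumented image is already built and pushed, setting the two environment variables on a revision looks roughly like the command below. The service, image, and region are placeholders, and the variable names follow Instana’s serverless collector convention; verify both against the in-product instructions.

```shell
# Deploy (or redeploy) the service with the two Instana variables set.
# Service/image/region values are placeholders; confirm the variable
# names and values in Instana's in-product instructions.
gcloud run deploy my-service \
  --image gcr.io/my-project/my-service:latest \
  --platform managed \
  --region us-central1 \
  --set-env-vars "INSTANA_ENDPOINT_URL=<your-instana-endpoint>,INSTANA_AGENT_KEY=<your-agent-key>"
```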

^In-product guidance to get your Cloud Run containers traced with Instana.

We are also going to look into integrating with Google Cloud Buildpacks to further streamline the experience. As a side note: at Instana we love the concept and ease of use of Cloud Native Buildpacks, and we are very, very excited to get our hands on them 😉 .

If you do not already have an Instana instance, you can see how Instana’s automatic observability and monitoring works with Cloud Run by signing up for a free trial today.

Play with Instana’s APM Observability Sandbox


Start your FREE TRIAL today!

Instana, an IBM company, provides an Enterprise Observability Platform with automated application monitoring capabilities to businesses operating complex, modern, cloud-native applications no matter where they reside – on-premises or in public and private clouds, including mobile devices or IBM Z.

Control hybrid modern applications with Instana’s AI-powered discovery of deep contextual dependencies inside hybrid applications. Instana also gives visibility into development pipelines to help enable closed-loop DevOps automation.

This provides the actionable feedback clients need to optimize application performance, enable innovation, and mitigate risk, helping Dev+Ops add value and efficiency to software delivery pipelines while meeting their service and business level objectives.

For further information, please visit