Memory Leak Detection in Production Go Applications

October 14, 2016

Continuous Application Profiling in Production

Memory leaks are common in almost every language, including garbage-collected ones, and Go is no exception. A reference to an object, if not properly managed, may be left assigned even though the object is no longer needed, which prevents the garbage collector from reclaiming it. This usually happens at the application logic level, but can also be an issue inside an imported package.

Unfortunately, it is very hard to detect and fix memory leaks in development or staging environments: the production environment has different and more complex behavior, and many memory leaks take hours or even days to manifest themselves.

What is needed to find memory leaks in production

Go has a very powerful profiling toolset, pprof, which includes a heap allocation profiler. The heap profiler reports the size of the allocated heap and the number of objects per stack trace, i.e. the source code location where the memory was allocated. This information is critical, but a single profile is not sufficient: to detect an actual leak over a period of time, allocation profiles need to be recorded regularly and compared.
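As an illustration of that approach, here is a minimal sketch of recording heap profiles at a fixed interval with the standard runtime/pprof package so they can be compared offline later; the interval, the file naming scheme, and the recordHeapProfiles function are illustrative choices, not part of any particular tool.

```go
// A minimal sketch of recording heap profiles at a fixed interval so they
// can later be compared offline. The interval and file naming scheme are
// arbitrary illustrative choices.
package main

import (
	"fmt"
	"os"
	"runtime"
	"runtime/pprof"
	"time"
)

func recordHeapProfiles(interval time.Duration) {
	for i := 0; ; i++ {
		time.Sleep(interval)

		f, err := os.Create(fmt.Sprintf("heap-%03d.pprof", i))
		if err != nil {
			continue // in a real application, log the error
		}

		// Run a GC first so the profile reflects live objects only.
		runtime.GC()
		_ = pprof.WriteHeapProfile(f) // in a real application, check the error
		f.Close()
	}
}

func main() {
	go recordHeapProfiles(10 * time.Minute)
	// ... the rest of the application ...
	select {}
}
```

Two such profiles can then be diffed, for example with go tool pprof's -base option, to see which stack traces keep accumulating allocations over time.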

There are issues when using pprof against production environments:

  • The profiler’s HTTP handler, which accepts profiling requests, needs to be attached to the application’s HTTP server (or a dedicated one must be started), which means extra security measures should be taken to protect the listening port; see the sketch after this list.
  • Locating and accessing the application node’s host to run go tool pprof against may be tricky in container environments such as Kubernetes.
  • If the application has crashed or is unable to respond to pprof requests, no profiling is possible.
  • To get a historical, per-stack-trace view of heap allocations, regular manual pprof runs, interactive result analysis, and comparison are needed.
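For reference, this is what the standard manual setup from the first bullet typically looks like; binding to a loopback-only address is an assumption made here to keep the port off public interfaces, and 6060 is just the conventional example port.

```go
// A minimal sketch of exposing the standard net/http/pprof endpoints.
// Serving on localhost only is an assumption to limit exposure of the port.
package main

import (
	"log"
	"net/http"
	_ "net/http/pprof" // registers /debug/pprof/* handlers on the default mux
)

func main() {
	go func() {
		// Profiling endpoints are served on a loopback-only address.
		log.Println(http.ListenAndServe("localhost:6060", nil))
	}()

	// ... the rest of the application ...
	select {}
}
```

A single heap profile can then be fetched interactively with go tool pprof http://localhost:6060/debug/pprof/heap, which is exactly the manual, per-node workflow described in the list above.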

Using Instana AutoProfile™ for automatic memory leak detection and profiling

Instana AutoProfile completely automates the collection of heap allocation profiles, solving all of the above-mentioned issues. Instana’s Go Profiler, initialized in the application, continuously records and reports allocation profiles to the Dashboard.
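As a rough sketch only, initialization looks roughly like the following when using the Instana Go sensor; the package import path, the EnableAutoProfile option, and the service name used here are assumptions based on the go-sensor API rather than details confirmed by this article, so the Profiling documentation linked below remains the authoritative reference.

```go
// A hedged sketch, not an authoritative setup: the EnableAutoProfile option
// and the service name are assumptions; consult Instana's Profiling
// documentation for the exact, up-to-date initialization.
package main

import (
	instana "github.com/instana/go-sensor"
)

func main() {
	instana.InitSensor(&instana.Options{
		Service:           "my-go-service", // hypothetical service name
		EnableAutoProfile: true,            // turns on continuous profiling
	})

	// ... the rest of the application ...
}
```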

If you aren’t already an Instana user, you can get started with a free two-week trial. See Instana’s Profiling documentation for detailed setup instructions.

After restarting/deploying the application, the profiles will be available in the Dashboard in a historically comparable form.

Similar profile history is automatically available for the other profile types the profiler records.

CPU, memory, and GC metrics from the Go runtime are also automatically available in the Dashboard.


