Detecting Lock Contention in Go

March 11, 2016

Continuous Application Profiling in Production

Mutexes are a common source of contention, leading to performance degradation or deadlocks, and Go is no exception. Locating the root cause is often very challenging. In the obvious case where every goroutine is blocked, the runtime detects the deadlock and aborts with a fatal error. More often, though, the problems manifest themselves at the application logic level.

Let’s look at this simple example.

lock := &sync.Mutex{}

// goroutine1
go func() {
    lock.Lock()

    // here we make goroutine2 wait
    time.Sleep(500 * time.Millisecond)

    fmt.Printf("%v: goroutine1 releasing...\n", time.Now().UnixNano())
    lock.Unlock()
}()

// goroutine2
go func() {
    fmt.Printf("%v: goroutine2 acquiring...\n", time.Now().UnixNano())
    lock.Lock()
    fmt.Printf("%v: goroutine2 done\n", time.Now().UnixNano())
    lock.Unlock()
}()

time.Sleep(1 * time.Second)

The lock is obtained in the first goroutine, and the second goroutine has to wait for it.

Problems like this are unlikely to be caught during development, when the application sees no concurrent use, and will surface as a performance issue only in the production environment. As a side note, it is always a good idea to have automated performance regression testing in place that simulates concurrent live traffic.

Go ships with a built-in block profiling and tracing toolset for such situations: pprof. An application exposes the profilers over HTTP by importing the net/http/pprof package; the block profiler additionally has to be enabled with runtime.SetBlockProfileRate. Afterwards, different profiles can be requested by running go tool pprof http://localhost:6060/debug/pprof/block.

While pprof’s block profiler or tracer can be extremely helpful in identifying contention issues, there are a few obstacles to using pprof against a production environment:

  • The profiler’s HTTP handler, which accepts profiling requests, must be attached to the application’s HTTP server (or a dedicated one must be started), which means extra security measures are needed to protect the listening port.
  • Locating and accessing the application node’s host to run the go tool pprof against may be tricky in container environments such as Kubernetes.
  • If the application has a deadlock or is unable to respond to pprof requests, no profiling or tracing is possible. Profiles recorded before the problem was detected would be very helpful in cases like this.

For production and development environments, Instana provides automatic blocking-call profiling. It reports profiles to the Dashboard at regular intervals, where they are accessible in the Hot spots / Time section.

Getting started with Instana AutoProfile™

If you’re not already an Instana user, you can sign up for a free two-week trial. Then make sure profiling is turned on. See the profiling documentation for detailed setup instructions.


