Monitoring NGINX


Instana has the ability to collect both metrics and distributed traces of requests that pass through NGINX.

Enabling Metrics Collection

Once metrics are enabled as described below, Instana automatically begins to collect and monitor your NGINX processes.

Metrics for NGINX

For NGINX metrics collection, Instana uses the ngx_http_stub_status_module for remote metrics collection. To enable this, make sure that the module is enabled and available, and add the following location block to your NGINX configuration:

location /nginx_status {
  stub_status  on;
  access_log   off;
  allow 127.0.0.1; # Or the remote IP of the Instana host agent
  deny  all;
}

By default, the Instana agent searches for the location of the configuration file in any available process arguments; otherwise, it falls back to /etc/nginx/nginx.conf.
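The stub_status endpoint returns a small plain-text document. As a rough illustration of the counters the agent can derive from it, the following sketch parses a sample payload; the sample numbers are illustrative, not from a live server:

```python
# Sketch: parse the plain-text output of ngx_http_stub_status_module
# into basic counters. The sample payload below is illustrative.

def parse_stub_status(text):
    lines = text.strip().splitlines()
    metrics = {}
    # Line 1: "Active connections: N"
    metrics["active_connections"] = int(lines[0].split(":")[1])
    # Line 3: three counters for "server accepts handled requests"
    accepts, handled, requests = (int(v) for v in lines[2].split())
    metrics.update(accepts=accepts, handled=handled, requests=requests)
    # Line 4: "Reading: R Writing: W Waiting: Q"
    parts = lines[3].replace(":", "").split()
    metrics.update(reading=int(parts[1]), writing=int(parts[3]),
                   waiting=int(parts[5]))
    return metrics

sample = """Active connections: 291
server accepts handled requests
 16630948 16630948 31070465
Reading: 6 Writing: 179 Waiting: 106
"""
print(parse_stub_status(sample))
```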

Metrics for NGINX Plus

To enable NGINX Plus metric monitoring, make sure the ngx_http_api_module is installed and available, and add the following block to enable the module:

location /api {
    api write=off;
    allow 127.0.0.1; # Or the remote IP of the Instana host agent
    deny all;
}
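Unlike stub_status, the NGINX Plus API returns a JSON document per resource (for example, a connections resource). As a hedged illustration, the sketch below parses a hypothetical connections payload; the exact fields and versioned API path depend on your NGINX Plus release:

```python
import json

# Sketch: the NGINX Plus API exposes JSON per resource, e.g.
# GET /api/<version>/connections. The payload below is a hypothetical
# example of the shape such a response can take.
sample_response = """
{
  "accepted": 4968119,
  "dropped": 0,
  "active": 5,
  "idle": 117
}
"""

conn = json.loads(sample_response)
dropped_ratio = conn["dropped"] / conn["accepted"]
print(f"active={conn['active']} idle={conn['idle']} "
      f"dropped_ratio={dropped_ratio:.6f}")
```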

Metrics for Kubernetes NGINX Ingress

In Kubernetes NGINX Ingress version 0.23.0 and later, the server listening on port 18080 is disabled by default. For Instana to monitor this NGINX instance, restore the server by adding the following snippet to the ConfigMap:

http-snippet: |
  server {
    listen 18080;

    location /nginx_status {
      stub_status on;
      access_log  off;
      allow 127.0.0.1; # Or the remote IP of the Instana host agent
      deny  all;
    }

    location / {
      return 404;
    }
  }
For full details see the NGINX Ingress Release Notes.

Distributed Tracing

Distributed Tracing for NGINX / NGINX Plus / OpenResty

To install the technology preview in your own setup, you need to:

  1. Get the right binaries for your NGINX version
  2. Copy the binaries where your NGINX server can access them
  3. Edit the NGINX configurations
  4. Restart the NGINX process, or trigger a configuration reload by sending a reload signal

Download the Binaries

Our NGINX HTTP tracing modules are based on the nginx-opentracing v0.18.0 module, with customizations that enable additional functionality and easier usage.

The download links for the binaries we provide for the supported distributions of NGINX are available on the NGINX Distributed Tracing Binaries page.

Copy the Binaries

The two binaries you downloaded in the previous step must be placed on a filesystem that the NGINX process can access, both in terms of location and file permissions.

If NGINX is running directly on the operating system, as opposed to running in a container, it is usually a good choice to copy the two Instana binaries into the folder that contains the other NGINX modules. You can find where NGINX expects the modules to be located by running the nginx -V command and looking for the --modules-path configuration option; see, for example, this response on Stack Overflow.
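As a small illustration of reading the --modules-path option out of nginx -V output, the following sketch uses a hypothetical, abbreviated sample of that output:

```python
import re

# Sketch: extract the --modules-path configure argument from the output
# of `nginx -V`. The sample output below is abbreviated and illustrative.
sample_nginx_v = (
    "nginx version: nginx/1.25.3\n"
    "configure arguments: --prefix=/etc/nginx "
    "--modules-path=/usr/lib/nginx/modules --with-http_ssl_module"
)

def modules_path(nginx_v_output):
    match = re.search(r"--modules-path=(\S+)", nginx_v_output)
    return match.group(1) if match else None

print(modules_path(sample_nginx_v))  # → /usr/lib/nginx/modules
```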

In a containerized environment, this may mean adding them to the container image, or mounting the files as volumes into the container; see, for example, Docker's bind mounts documentation or how to mount volumes to pods in Kubernetes.

Edit the NGINX Configurations

# The following line loads the basic module Instana uses to get tracing data.
# You must use the version of this module built by Instana, rather than the
# one shipped with many NGINX distributions, as the Instana version contains
# modifications that are required for tracing to work.
load_module modules/<instana_opentracing_module>.so; # Use the file name of the downloaded module binary

# Whitelists the environment variables used for tracer configuration so
# that NGINX does not wipe them. This is only needed if instana-config.json
# contains an empty configuration ("{}") and the configuration is done
# via these environment variables instead.
env INSTANA_SERVICE_NAME;
env INSTANA_AGENT_HOST;

events {}

error_log /dev/stdout info;

http {
  error_log /dev/stdout info;

  # The following line loads the Instana libinstana_sensor library, which
  # receives the tracing data from the OpenTracing module and converts
  # it to Instana AutoTrace tracing data.
  # The content of instana-config.json is discussed below.
  opentracing_load_tracer /usr/local/lib/libinstana_sensor.so /etc/instana-config.json;

  # Propagates the active span context on upstream requests.
  # Without this configuration, the Instana trace ends at NGINX,
  # and the downstream systems (those to which NGINX routes the
  # requests) monitored by Instana generate new, unrelated traces.
  opentracing_propagate_context;

  # If you use upstreams, Instana will automatically use them as endpoints.
  upstream backend {
    server server-app:8080;
  }

  server {
    error_log /dev/stdout info;
    listen 8080;
    server_name localhost;

    location /static {
      root /www/html;
    }

    location ^~ /api {
      proxy_pass http://backend;
    }

    location ^~ /other_api {
      proxy_set_header X-AWESOME-HEADER "truly_is_awesome";

      # Using the `proxy_set_header` directive voids, for this location,
      # the `opentracing_propagate_context` defined at the `http` level,
      # so it needs to be set again here. This applies to every block
      # where `proxy_set_header` is found, including at `server` level.
      opentracing_propagate_context;

      proxy_pass http://backend;
    }
  }
}
Special case opentracing_propagate_context:

Besides at the main (http) level, the opentracing_propagate_context directive needs to be added to every block (server or location) that also sets a proxy_set_header directive. The reason is that OpenTracing context propagation is internally based on proxy_set_header, and is otherwise voided by it. This is a limitation of the NGINX module API.
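A minimal sketch of this rule (server and upstream names are illustrative):

```nginx
http {
  opentracing_propagate_context;

  server {
    listen 8080;

    location /plain {
      # No proxy_set_header here: the http-level propagation still applies.
      proxy_pass http://backend;
    }

    location /custom {
      proxy_set_header X-CUSTOM-HEADER "value";
      # proxy_set_header voids the inherited propagation, so re-declare it:
      opentracing_propagate_context;
      proxy_pass http://backend;
    }
  }
}
```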

The following is an example of instana-config.json:

  "service": "nginxtracing_nginx", # Change this line to give your NGINX service a different name in Instana
  "agent_host": <host_agent_address>, # Change this line with the IP address or DNS name of the Instana agent on the same host as your NGINX process
  "agent_port": 42699, # This is the default, and you should never change it unless instructed by the Instana support
  "max_buffered_spans": 1000

The configurations in the snippet above mean the following:

  • service: the name that will be associated with this NGINX process in the Instana backend. If unspecified, service names are calculated based on, for example, the HTTP host name or other means.
  • agent_host: the IP address or DNS name of the local host agent. You must change this configuration to match the network name of the Instana agent on the same host as the NGINX process.
  • agent_port: the port on which the NGINX tracing extension will try to contact the host agent. Note that this port is not configurable on the agent side; the NGINX tracing extension lets you configure it for setups that require port forwarding or port mapping, but you should not change it otherwise unless instructed by Instana support.
  • max_buffered_spans: the maximum number of spans (one per request) that the NGINX tracing extension keeps locally before flushing them to the agent; the default is 1000. The extension always flushes the locally buffered spans every second. Lower this setting to reduce the memory footprint of your NGINX server when it serves more than 1000 requests per second and you want the tracing data flushed faster.
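Note that the tracer reads instana-config.json as plain JSON, so inline comments are not allowed in the actual file. A quick sanity check of a candidate configuration might look like this (file contents are illustrative):

```python
import json

# Sketch: validate that an instana-config.json candidate is plain JSON
# and carries the expected keys. The contents below are illustrative.
candidate = """
{
  "service": "nginxtracing_nginx",
  "agent_host": "127.0.0.1",
  "agent_port": 42699,
  "max_buffered_spans": 1000
}
"""

config = json.loads(candidate)  # raises ValueError if not valid JSON
missing = {"service", "agent_host", "agent_port"} - config.keys()
print("missing keys:", missing or "none")
```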

The alternative is to configure the tracer via environment variables. Those take precedence, but the file instana-config.json is still required. To use them, do the following:

  • put an empty configuration {} into instana-config.json
  • do the whitelisting of the environment variables in the NGINX configuration as shown above
  • set the environment variables before starting NGINX

This method is especially useful to set the Instana agent host to the host IP in a Kubernetes cluster.

The following example Kubernetes deployment YAML part shows this method:

        - name: INSTANA_SERVICE_NAME
          value: "nginxtracing_nginx"
        - name: INSTANA_AGENT_HOST
          valueFrom:
            fieldRef:
              fieldPath: status.hostIP

For details see the Environment Variable Reference.

Support for other NGINX OpenTracing module builds

We do not support builds of the NGINX OpenTracing module from third parties, including those provided by NGINX itself. The reasons for requiring the Instana build of the NGINX OpenTracing module are technical. We do not support self-compilation (that is, building your own version), because figuring out what goes wrong in the compilation process across entirely different and unpredictable setups would put undue strain on our support. Similarly, we do not support the modules provided by F5: they lack functionality that our tracing needs, and they link dynamically against the standard C++ library, which in many cases would lead to segmentation faults. To avoid these, our build of the NGINX OpenTracing module statically links the standard C++ library, which gives us uniform testing and the benefits of modern C++ code even on older distributions.

Distributed Tracing for Kubernetes NGINX Ingress

Instana provides a technology preview based on init containers and YAML files to enable the Instana NGINX tracing for selected Kubernetes NGINX Ingress versions.

For details see:

Distributed Tracing for Kubernetes NGINX Ingress with Zipkin Tracer

The Kubernetes NGINX Ingress allows for distributed tracing via the OpenTracing project. As the Instana agent is also capable of ingesting Jaeger and Zipkin traces, the NGINX Ingress can be configured in such a way that traces are forwarded to Instana.

Note: While this setup is supported, Instana cannot take over the trace context from OpenTracing traces, meaning insight is limited to NGINX spans presented in isolation. Only when all services are traced via OpenTracing is the context retained and Instana shows the full distributed trace.

Note: Requires nginx-ingress version 0.23.0 or higher; earlier versions do not support variable expansion.

Note: All limitations of the support for Jaeger or Zipkin apply.

Configure NGINX Ingress for Instana Agent

The following configuration values need to be specified.

  • To the ConfigMap for the NGINX ingress, add the following:
    enable-opentracing: "true"
    zipkin-collector-host: $HOST_IP
    zipkin-collector-port: "42699"
  • To the NGINX Pod spec, add the following environment variable (it should already have POD_NAME and POD_NAMESPACE):
      - name: HOST_IP
        valueFrom:
          fieldRef:
            fieldPath: status.hostIP

This configuration uses the Kubernetes Downward API to expose the host IP as the environment variable HOST_IP, which the ConfigMap picks up. The port can be fixed to 42699, the agent port.

Also note that the service will be named with the default, nginx, unless it is overridden via the zipkin-service-name parameter, which can be configured in the ConfigMap.

For more information about the NGINX ingress and OpenTracing see the Kubernetes NGINX Ingress documentation.

Metrics collection

Configuration data

  • PID
  • Number of Worker Processes
  • Number of Worker Connections
  • Started at
  • Version
  • Build ( * )
  • Address ( * )
  • Generation ( * )
  • PPID ( * )

Performance metrics

  • Requests
  • Connections
  • Processes ( * )
  • SSL ( * )
  • Caches ( * )
  • Server zones ( * )
  • Upstreams ( * )

( * ) Available for NGINX Plus only.

Health Signatures

For each sensor, there is a curated knowledgebase of health signatures that are evaluated continuously against the incoming metrics and are used to raise issues or incidents depending on user impact.

Built-in events trigger issues or incidents based on failing health signatures on entities, and custom events trigger issues or incidents based on the thresholds of an individual metric of any given entity.

For information about built-in events for the NGINX sensor, see the Built-in events reference.


Troubleshooting

NGINX API is not accessible

Monitoring issue type: nginx_api_not_accessible

To resolve this issue, follow the steps described in Enabling Metrics Collection to configure the Instana agent to collect all NGINX metrics.

NGINX Status endpoint is not accessible

Monitoring issue type: nginx_status_not_accessible

To resolve this issue, follow the steps described in Enabling Metrics Collection to configure the Instana agent to collect all NGINX metrics.

NGINX API is not found

Monitoring issue type: nginx_api_not_found

To resolve this issue, follow the steps described in Enabling Metrics Collection to configure the Instana agent to collect all NGINX metrics.

NGINX Status is not found

Monitoring issue type: nginx_status_not_found

To resolve this issue, follow the steps described in Enabling Metrics Collection to configure the Instana agent to collect all NGINX metrics.

NGINX Config location not discovered

Monitoring issue type: nginx_config_location_not_discovered

To resolve this issue, follow the steps described in Enabling Metrics Collection to configure the Instana agent to collect all NGINX metrics.

Cpp Collector is not installed

Monitoring issue type: cpp_collector_not_installed

The NGINX process has not connected with the agent to send traces. This may be due to one of the following reasons:

  1. NGINX has not been configured properly for Distributed Tracing.
  2. The NGINX process cannot report to the host agent on the same host due to network connectivity issues.
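For the second case, a minimal TCP reachability check against the host agent's default port can help narrow things down. This sketch only verifies connectivity; it does not speak the agent protocol:

```python
import socket

# Sketch: a minimal TCP reachability check against the host agent's
# trace-ingestion port (42699 by default). Host and port are parameters;
# this only verifies that a connection can be established.
def agent_reachable(host, port, timeout=2.0):
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

print(agent_reachable("127.0.0.1", 42699))
```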