Installing the Host Agent on Kubernetes

Installation Methods

There are several methods to install the instana-agent onto a Kubernetes cluster. While manually installing the agent on a host monitors containers and processes, it does not collect Kubernetes data. We recommend installing the agent by using the Helm chart, the YAML file (DaemonSet), or the Operator.

Current Versions of Installation Methods

New versions of the Helm chart, YAML file, and Operator are released frequently. To pick up the latest fixes, improvements, and features, ensure that you are running the current version of whichever installation method you use.

This information can be found in the following locations:

Install Using the Helm Chart

The Helm chart installs the Instana agent as a DaemonSet on all schedulable nodes in your cluster.

  1. Sign in to Instana, click More -> Agents -> Installing Instana Agents -> Kubernetes.

    From this page, you'll need your host agent endpoint and your agent key.

  2. From the Technology list, select Helm chart.
  3. Enter the cluster name and (optionally) the agent zone.

The cluster name (INSTANA_KUBERNETES_CLUSTER_NAME) is the customized name of the cluster monitored by this DaemonSet.

    The agent zone (INSTANA_ZONE) is used to customize the zone grouping displayed on the infrastructure map.

    All of the other required parameters are pre-populated.

  4. Run the following command with Helm 3:

    kubectl create namespace instana-agent && \
    helm install instana-agent --namespace instana-agent \
    --repo https://agents.instana.io/helm \
    --set agent.key='<your agent key - as described above>' \
    --set agent.endpointHost='<your host agent endpoint - as described above>' \
    --set cluster.name='<your-cluster-name>' \
    --set zone.name='<your-zone-name>' \
    instana-agent

To configure the installation, specify the values on the command line by using the --set flag, or provide a YAML file with your values by using the -f flag.
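For example, the settings from the command above can be kept in a values file instead of repeated --set flags. The following is a sketch; the keys mirror the --set flags shown earlier, and all placeholder values must be replaced with your own:

```yaml
# values.yaml -- replace the placeholders with your own values
agent:
  key: "<your agent key>"
  endpointHost: "<your host agent endpoint>"
cluster:
  name: "<your-cluster-name>"
zone:
  name: "<your-zone-name>"
```

The file is then passed to Helm with -f, for instance: helm install instana-agent --namespace instana-agent --repo https://agents.instana.io/helm -f values.yaml instana-agent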

For a detailed list of all the configuration parameters, please see our Instana Helm Chart.

Instana Agent Service

Note: The functionality described in this section is available with the Instana Agent Helm chart v1.2.7 and above, and requires Kubernetes 1.17 and above.

The Helm chart has a special configuration option called --set service.create=true. This option creates a Kubernetes Service that exposes the following to the cluster:
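If you manage your installation through a values file rather than --set flags, the same option can be expressed as a small fragment (a sketch, assuming the chart key matches the flag shown above):

```yaml
# values.yaml fragment -- creates the instana-agent Kubernetes Service
service:
  create: true
```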

Install as a DaemonSet

To install and configure the Instana agent as a DaemonSet within your Kubernetes cluster, customize the instana-agent.yaml file to create the instana-agent namespace in which the DaemonSet is created. This enables you to tag agents for quick identification or to stop all of them by deleting the namespace.

  1. Sign in to Instana, click More -> Agents -> Installing Instana Agents -> Kubernetes.
  2. From the Technology list, select DaemonSet.
  3. Enter the cluster name and (optionally) the agent zone.

    The cluster name (INSTANA_KUBERNETES_CLUSTER_NAME) is the customized name of the cluster monitored by this DaemonSet.

    The agent zone (INSTANA_ZONE) is used to customize the zone grouping displayed on the infrastructure map. It also sets the default name of the cluster.

    All of the other required parameters are pre-populated.

  4. Click Copy and save the YAML file.
  5. Edit the YAML file, replacing the following dangling anchors with actual values:

    • *agentKey: A base64-encoded Instana agent key for the cluster to which the generated data should be sent. To encode it without a trailing newline, run:
    echo -n YOUR_INSTANA_AGENT_KEY | base64
    • *endpointHost: The IP address or hostname associated with the installation.
    • *endpointPort: The network port associated with the installation.
    • *clusterName: The name to be assigned to your cluster in Instana.
    • *zoneName: The agent zone to associate with the nodes of your cluster.
  6. To install Instana within your Kubernetes Cluster, run this command:
kubectl apply -f instana-agent.yaml
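A note on the *agentKey value in step 5: echo normally appends a trailing newline, which would become part of the encoded key and corrupt it. A round-trip check (a sketch, assuming GNU coreutils base64; printf avoids the newline) looks like this:

```shell
# Encode the key without a trailing newline
key='YOUR_INSTANA_AGENT_KEY'
encoded=$(printf '%s' "$key" | base64)
echo "$encoded"

# Decode to verify the round trip matches the original key
decoded=$(printf '%s' "$encoded" | base64 -d)
[ "$decoded" = "$key" ] && echo "round trip OK"
```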

Note: Any additional edits you make to the instana-agent.yaml require that the DaemonSet is recreated. To apply changes, run the following commands:

kubectl delete -f instana-agent.yaml
kubectl apply -f instana-agent.yaml

RBAC

To deploy on Kubernetes versions prior to 1.8 with RBAC enabled, replace rbac.authorization.k8s.io/v1 with rbac.authorization.k8s.io/v1beta1 as the RBAC API version.

To grant your user the ability to create authorization roles, for example in GKE, run this command:

kubectl create clusterrolebinding cluster-admin-binding \
--clusterrole cluster-admin --user $(gcloud config get-value account)

If you don't have RBAC enabled, you need to remove the ClusterRole and ClusterRoleBinding from the instana-agent.yaml file.

PodSecurityPolicy

To enable a PodSecurityPolicy for the Instana agent:

  1. Create a PodSecurityPolicy resource as defined in our Helm chart.
  2. Authorize that policy in the instana-agent ClusterRole. Note that RBAC has to be enabled with the ClusterRole and ClusterRoleBinding resources created as defined in the aforementioned instana-agent.yaml file.
  3. Enable the PodSecurityPolicy admission controller on your cluster. For existing clusters, it is recommended that policies are added and authorized before enabling the admission controller.
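Step 2 above amounts to adding one extra rule to the instana-agent ClusterRole. The following is a sketch; the policy name instana-agent-psp is a hypothetical placeholder for whatever you named the PodSecurityPolicy in step 1:

```yaml
# Additional ClusterRole rule that authorizes the agent to use the policy
- apiGroups: ["policy"]
  resources: ["podsecuritypolicies"]
  resourceNames: ["instana-agent-psp"]  # hypothetical policy name
  verbs: ["use"]
```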

Install Using the Operator

Instana provides a Kubernetes operator to install and manage the Instana Agent.

Also see the Operator Configuration section below, which describes configuration options that you can set via the Instana Agent Custom Resource Definition and environment variables.

Install Operator Manually

  1. Sign in to Instana, click More -> Agents -> Installing Instana Agents -> Kubernetes.
  2. From the Technology list, select Operator. This page will provide you with the essential configuration values you need to edit the custom resource yaml file in Step 4.
  3. Deploy the operator as follows:

    kubectl apply -f https://github.com/instana/instana-agent-operator/releases/latest/download/instana-agent-operator.yaml

    Now the operator should be up and running in the instana-agent namespace, waiting for an instana-agent custom resource to be created. Note that from version v1.x.x onward, each release of instana-agent-operator.yaml references the same version of the instana/instana-agent-operator container image; the latest tag for the Instana Agent Operator image is no longer updated in DockerHub or the Red Hat Registry. To get a new version of the Instana Agent Operator, update to the latest operator YAML from the Operator's GitHub Releases page, as mentioned above.

  4. Create the custom resource yaml file, following this template.

    Edit the template and replace at least the following values:

    • agent.key must be set with your Instana agent key.
    • agent.endpoint must be set with the host agent endpoint.
    • agent.endpoint.port must be set with the port of your host agent endpoint, generally "443" (wrapped in quotes).
    • agent.zone.name should be set with the name of the Kubernetes cluster that is to be displayed in Instana.
    • agent.env can be used to specify environment variables for the agent, for instance, proxy configuration. See possible environment values here. For instance:

      agent.env:
        INSTANA_AGENT_TAGS: staging
    • config.files can be used to specify configuration files, for instance, specifying a configuration.yaml:

      config.files:
        configuration.yaml: |
          # Example of configuration yaml template
      
          # Host
          #com.instana.plugin.host:
          #  tags:
          #    - 'dev'
          #    - 'app1'

      In case you need to adapt configuration.yaml, view the documentation here.

    Apply the edited custom resource:

    kubectl apply -f instana-agent.customresource.yaml

    The operator will pick up the configuration from the custom resource and deploy the Instana agent.

Uninstalling

To uninstall the Instana agent, remove the custom resource:

kubectl delete -f instana-agent.customresource.yaml

And to uninstall the operator:

kubectl delete -f https://github.com/instana/instana-agent-operator/releases/latest/download/instana-agent-operator.yaml

Operator Configuration

Custom Resource Values

The Instana Agent custom resource supports the following values:

  • agent.key: Instana agent key.
  • agent.endpoint: Host agent endpoint.
  • agent.endpoint.port: The port of your host agent endpoint, generally "443" (wrapped in quotes).
  • agent.zone.name: Name of the zone to display for entities discovered by these agents.
  • cluster.name: Name of this Kubernetes cluster to display in Instana.
  • agent.env (optional): Environment variables for the agent, for instance, proxy configuration. See possible environment values here.
  • agent.image (optional): Overrides the agent image (defaults to instana/agent:latest).
  • agent.imagePullPolicy (optional): Overrides the image pull policy (defaults to Always).
  • config.files (optional): Additional files to mount for configuration. Each entry in this object is mounted in the agent as a file in /root/<key>.
  • agent.downloadKey (optional): Download key for agent artifacts (usually not required).
  • agent.cpuReq (optional): CPU requests for the agent, in CPU cores.
  • agent.cpuLimit (optional): CPU limits for the agent, in CPU cores.
  • agent.memReq (optional): Memory requests for the agent, in Mi.
  • agent.memLimit (optional): Memory limits for the agent, in Mi.
  • opentelemetry.enabled (optional): Set to True to enable the OpenTelemetry ingestion endpoint for the agent (defaults to False).
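Putting the required values together, a minimal custom resource might look like the following sketch. The apiVersion and kind are assumed to match the operator's template linked above, and all placeholder values must be replaced:

```yaml
apiVersion: instana.io/v1beta1  # assumption: matches the operator's CRD template
kind: InstanaAgent
metadata:
  name: instana-agent
  namespace: instana-agent
spec:
  agent.key: "<your agent key>"
  agent.endpoint: "<your host agent endpoint>"
  agent.endpoint.port: "443"
  agent.zone.name: "<your-zone-name>"
  cluster.name: "<your-cluster-name>"
```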

Environment variables

Currently, it is also possible to configure the agent.image by specifying the RELATED_IMAGE_INSTANA_AGENT environment variable in the instana-agent-operator Deployment:

env:
  - name: "RELATED_IMAGE_INSTANA_AGENT"
    value: "instana/agent:latest"

The operator first looks at the agent.image parameter in the custom resource to determine the agent image. If that is not set, it checks the environment variable above. Finally, if neither is set, it uses the default instana/agent:latest.

Configure Network Access for Monitored Applications

Some types of applications need to reach out to the agent first. Currently, these are:

  • Node.js
  • Go
  • Ruby
  • Python
  • .NET Core

Those applications need to know the IP address on which the agent is listening. As the agent listens on the host IP automatically, use the following Downward API snippet to pass it to the application pod in an environment variable:

spec:
  containers:
    - env:
        - name: INSTANA_AGENT_HOST
          valueFrom:
            fieldRef:
              fieldPath: status.hostIP

Monitor master nodes

By default, we don't schedule our agent on Kubernetes master nodes, as we respect the default taint node-role.kubernetes.io/master:NoSchedule that is set on most master nodes. To override this, add the following toleration to the agent DaemonSet:

kind: DaemonSet
metadata:
  name: instana-agent
  namespace: instana-agent
spec:
  template:
  ...
    spec:
      tolerations:
        - key: "node-role.kubernetes.io/master"
          effect: "NoSchedule"
          operator: "Exists"
    ...

For more direct control, install the agent separately on the master nodes. Contact support for advice on your environment.

Monitor Kubernetes NGINX Ingress

For guidelines on how to configure the Kubernetes NGINX Ingress and our agent for capturing NGINX metrics, see the Monitoring NGINX page. Tracing of the Kubernetes NGINX Ingress is also possible via the OpenTracing project; see Distributed Tracing for NGINX Ingress for guidelines on how to set that up.

Secrets

Kubernetes has built-in support for storing and managing sensitive information. However, if you do not use that built-in capability but still need to redact sensitive data in Kubernetes resources, the agent secrets configuration has been extended to support that.

To enable sensitive data redaction for selected Kubernetes resources (specifically annotations and container environment variables), set the INSTANA_KUBERNETES_REDACT_SECRETS environment variable to true as shown in the following agent yaml snippet:

spec:
  containers:
    - env:
        - name: INSTANA_KUBERNETES_REDACT_SECRETS
          value: "true"

Then configure the agent with the desired list of secrets to match on as described in the agent secrets configuration.
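For reference, the secrets matcher in configuration.yaml follows the same shape as the commented example shipped in the agent's default configuration (see the example YAML file further below):

```yaml
# configuration.yaml -- values whose keys match any list entry are redacted
com.instana.secrets:
  matcher: 'contains-ignore-case'  # one of 'contains-ignore-case', 'contains', 'regex'
  list:
    - 'key'
    - 'password'
    - 'secret'
```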

Note that enabling this capability may decrease the performance of the Kubernetes sensor.

Report to Multiple Backends

To enable reporting to multiple backends from a Kubernetes agent, see the docker agent configuration.

Example YAML file

To run the agent as a DaemonSet in Kubernetes, here is an example instana-agent.yaml file.

Download this file and view the latest changelog.

---
apiVersion: v1
kind: Namespace
metadata:
  name: instana-agent
  labels:
    app.kubernetes.io/name: instana-agent
    app.kubernetes.io/version: 1.2.13
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: instana-agent
  namespace: instana-agent
  labels:
    app.kubernetes.io/name: instana-agent
    app.kubernetes.io/version: 1.2.13
---
apiVersion: v1
kind: Secret
metadata:
  name: instana-agent
  namespace: instana-agent
  labels:
    app.kubernetes.io/name: instana-agent
    app.kubernetes.io/version: 1.2.13
type: Opaque
data:
  key: *agentKey # Replace this with your Instana agent key, encoded in base64
  downloadKey: ''
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: instana-agent
  namespace: instana-agent
  labels:
    app.kubernetes.io/name: instana-agent
    app.kubernetes.io/version: 1.2.13
data:
  cluster_name: *clusterName
  configuration.yaml: |
  
    # Manual a-priori configuration. Configuration will be only used when the sensor
    # is actually installed by the agent.
    # The commented out example values represent example configuration and are not
    # necessarily defaults. Defaults are usually 'absent' or mentioned separately.
    # Changes are hot reloaded unless otherwise mentioned.
    
    # It is possible to create files called 'configuration-abc.yaml' which are
    # merged with this file in file system order. So 'configuration-cde.yaml' comes
    # after 'configuration-abc.yaml'. Only nested structures are merged, values are
    # overwritten by subsequent configurations.
    
    # Secrets
    # To filter sensitive data from collection by the agent, all sensors respect
    # the following secrets configuration. If a key collected by a sensor matches
    # an entry from the list, the value is redacted.
    #com.instana.secrets:
    #  matcher: 'contains-ignore-case' # 'contains-ignore-case', 'contains', 'regex'
    #  list:
    #    - 'key'
    #    - 'password'
    #    - 'secret'
    
    # Host
    #com.instana.plugin.host:
    #  tags:
    #    - 'dev'
    #    - 'app1'
    
    # Hardware & Zone
    #com.instana.plugin.generic.hardware:
    #  enabled: true # disabled by default
    #  availability-zone: 'zone'
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: instana-agent
  namespace: instana-agent
  labels:
    app.kubernetes.io/name: instana-agent
    app.kubernetes.io/version: 1.2.13
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: instana-agent
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  template:
    metadata:
      labels:
        app.kubernetes.io/name: instana-agent
        app.kubernetes.io/version: 1.2.13
        instana/agent-mode: "APM"
      annotations: {}
    spec:
      serviceAccountName: instana-agent
      hostIPC: true
      hostNetwork: true
      hostPID: true
      dnsPolicy: ClusterFirstWithHostNet
      containers:
        - name: instana-agent
          image: "instana/agent:latest"
          imagePullPolicy: Always
          env:
            - name: INSTANA_AGENT_LEADER_ELECTOR_PORT
              value: "42655"
            - name: INSTANA_ZONE
              value: *zoneName
            - name: INSTANA_KUBERNETES_CLUSTER_NAME
              valueFrom:
                configMapKeyRef:
                  name: instana-agent
                  key: cluster_name
            - name: INSTANA_AGENT_ENDPOINT
              value: *endpointHost
            - name: INSTANA_AGENT_ENDPOINT_PORT
              value: *endpointPort
            - name: INSTANA_AGENT_KEY
              valueFrom:
                secretKeyRef:
                  name: instana-agent
                  key: key
            - name: INSTANA_DOWNLOAD_KEY
              valueFrom:
                secretKeyRef:
                  name: instana-agent
                  key: downloadKey
                  optional: true
            - name: INSTANA_AGENT_POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
          securityContext:
            privileged: true
          volumeMounts:
            - name: dev
              mountPath: /dev
            - name: run
              mountPath: /run
            - name: var-run
              mountPath: /var/run
            - name: var-run-kubo
              mountPath: /var/vcap/sys/run/docker
            - name: sys
              mountPath: /sys
            - name: var-log
              mountPath: /var/log
            - name: var-lib
              mountPath: /var/lib/containers/storage
            - name: machine-id
              mountPath: /etc/machine-id
            - name: configuration
              subPath: configuration.yaml
              mountPath: /root/configuration.yaml
          livenessProbe:
            httpGet:
              path: /status
              port: 42699
            initialDelaySeconds: 300
            timeoutSeconds: 3
          resources:
            requests:
              memory: "512Mi"
              cpu: 0.5
            limits:
              memory: "512Mi"
              cpu: 1.5
          ports:
            - containerPort: 42699
        - name: leader-elector
          image: "instana/leader-elector:0.5.4"
          env:
            - name: INSTANA_AGENT_POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
          command:
            - "/busybox/sh"
            - "-c"
            - "sleep 12 && /app/server --election=instana --http=localhost:42655 --id=$(INSTANA_AGENT_POD_NAME)"
          resources:
            requests:
              cpu: 0.1
              memory: "64Mi"
          livenessProbe:
            httpGet: # Leader elector liveness is tied to agent, published on localhost:42699
              path: /com.instana.agent.coordination.sidecar/health
              port: 42699
            initialDelaySeconds: 300
            timeoutSeconds: 3
          ports:
            - containerPort: 42655
      volumes:
        - name: dev
          hostPath:
            path: /dev
        - name: run
          hostPath:
            path: /run
        - name: var-run
          hostPath:
            path: /var/run
        # Systems based on the kubo BOSH release (that is, VMware TKGI and older PKS) do not keep the Docker
        # socket in /var/run/docker.sock , but rather in /var/vcap/sys/run/docker/docker.sock .
        # The Agent images will check if there is a Docker socket here and, if so, adjust the symlinking before
        # starting the Agent. See https://github.com/cloudfoundry-incubator/kubo-release/issues/329
        - name: var-run-kubo
          hostPath:
            path: /var/vcap/sys/run/docker
        - name: sys
          hostPath:
            path: /sys
        - name: var-log
          hostPath:
            path: /var/log
        - name: var-lib
          hostPath:
            path: /var/lib/containers/storage
        - name: machine-id
          hostPath:
            path: /etc/machine-id
        - name: configuration
          configMap:
            name: instana-agent
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: instana-agent
  labels:
    app.kubernetes.io/name: instana-agent
    app.kubernetes.io/version: 1.2.13
rules:
- nonResourceURLs:
    - "/version"
    - "/healthz"
  verbs: ["get"]
- apiGroups: ["batch"]
  resources:
    - "jobs"
    - "cronjobs"
  verbs: ["get", "list", "watch"]
- apiGroups: ["extensions"]
  resources:
    - "deployments"
    - "replicasets"
    - "ingresses"
  verbs: ["get", "list", "watch"]
- apiGroups: ["apps"]
  resources:
    - "deployments"
    - "replicasets"
    - "daemonsets"
    - "statefulsets"
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources:
    - "namespaces"
    - "events"
    - "services"
    - "endpoints"
    - "nodes"
    - "pods"
    - "replicationcontrollers"
    - "componentstatuses"
    - "resourcequotas"
    - "persistentvolumes"
    - "persistentvolumeclaims"
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources:
    - "endpoints"
  verbs: ["create", "update", "patch"]
- apiGroups: ["networking.k8s.io"]
  resources:
    - "ingresses"
  verbs: ["get", "list", "watch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: instana-agent
  labels:
    app.kubernetes.io/name: instana-agent
    app.kubernetes.io/version: 1.2.13
subjects:
- kind: ServiceAccount
  name: instana-agent
  namespace: instana-agent
roleRef:
  kind: ClusterRole
  name: instana-agent
  apiGroup: rbac.authorization.k8s.io