Installing the Host Agent on OpenShift

Installation Methods

Installing the Instana agent on OpenShift is similar to installing it on Kubernetes, but some extra security steps are required. There are two methods for installing the instana-agent onto an OpenShift cluster: via a YAML file (DaemonSet) or via the Operator.

Current Versions of Installation Methods

New versions of the YAML file and the Operator are released fairly frequently. To keep up with the latest fixes, improvements, and new features, please ensure you are running the latest version of the YAML file or Operator.

Prerequisites

You need to set up a project for the Instana agent and configure its permissions.

Create the instana-agent project and set the policy permissions to ensure that the instana-agent service account is in the privileged security context constraint.

oc login -u system:admin
oc new-project instana-agent
oc adm policy add-scc-to-user privileged -z instana-agent
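
To verify that the service account was added to the privileged security context constraint, you can inspect the SCC (the exact output depends on your OpenShift version):

oc describe scc privileged | grep instana-agent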

Install as a DaemonSet

The Instana Agent can be installed into OpenShift by following the steps below:

First perform the prerequisite steps mentioned above.

By default, the Instana agent DaemonSet starts on all nodes labeled with type=infra. Label the nodes you want the agent to run on:

oc label node my-node type=infra
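
You can confirm which nodes carry the label with, for example:

oc get nodes -l type=infra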

OpenShift 3.9 needs an additional annotation in order to match node selectors:

oc annotate namespace instana-agent openshift.io/node-selector=""

A typical instana-agent.yaml file looks like the following:

Download this file and view the latest changelog.

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: instana-agent
  namespace: instana-agent
  labels:
    app.kubernetes.io/name: instana-agent
    app.kubernetes.io/version: 1.0.30
---
apiVersion: v1
kind: Secret
metadata:
  name: instana-agent
  namespace: instana-agent
  labels:
    app.kubernetes.io/name: instana-agent
    app.kubernetes.io/version: 1.0.30
type: Opaque
data:
  key: # echo YOUR_INSTANA_AGENT_KEY | base64
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: instana-agent
  namespace: instana-agent
  labels:
    app.kubernetes.io/name: instana-agent
    app.kubernetes.io/version: 1.0.30
data:
  configuration.yaml: |
    # Manual a-priori configuration. Configuration will be only used when the sensor
    # is actually installed by the agent.
    # The commented out example values represent example configuration and are not
    # necessarily defaults. Defaults are usually 'absent' or mentioned separately.
    # Changes are hot reloaded unless otherwise mentioned.

    # It is possible to create files called 'configuration-abc.yaml' which are
    # merged with this file in file system order. So 'configuration-cde.yaml' comes
    # after 'configuration-abc.yaml'. Only nested structures are merged, values are
    # overwritten by subsequent configurations.

    # Secrets
    # To filter sensitive data from collection by the agent, all sensors respect
    # the following secrets configuration. If a key collected by a sensor matches
    # an entry from the list, the value is redacted.
    #com.instana.secrets:
    #  matcher: 'contains-ignore-case' # 'contains-ignore-case', 'contains', 'regex'
    #  list:
    #    - 'key'
    #    - 'password'
    #    - 'secret'

    # Host
    #com.instana.plugin.host:
    #  tags:
    #    - 'dev'
    #    - 'app1'

    # Hardware & Zone
    #com.instana.plugin.generic.hardware:
    #  enabled: true # disabled by default
    #  availability-zone: 'zone'
    # Place agent configuration here
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: instana-agent
  labels:
    app.kubernetes.io/name: instana-agent
    app.kubernetes.io/version: 1.0.30
rules:
- nonResourceURLs:
    - "/version"
    - "/healthz"
  verbs: ["get"]
- apiGroups: ["batch"]
  resources:
    - "jobs"
    - "cronjobs"
  verbs: ["get", "list", "watch"]
- apiGroups: ["extensions"]
  resources:
    - "deployments"
    - "replicasets"
    - "ingresses"
  verbs: ["get", "list", "watch"]
- apiGroups: ["apps"]
  resources:
    - "deployments"
    - "replicasets"
    - "daemonsets"
    - "statefulsets"
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources:
    - "namespaces"
    - "events"
    - "services"
    - "endpoints"
    - "nodes"
    - "pods"
    - "replicationcontrollers"
    - "componentstatuses"
    - "resourcequotas"
    - "persistentvolumes"
    - "persistentvolumeclaims"
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources:
    - "endpoints"
  verbs: ["create", "update", "patch"]
- apiGroups: ["networking.k8s.io"]
  resources:
    - "ingresses"
  verbs: ["get", "list", "watch"]
- apiGroups: ["apps.openshift.io"]
  resources:
    - "deploymentconfigs"
  verbs: ["get", "list", "watch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: instana-agent
  labels:
    app.kubernetes.io/name: instana-agent
    app.kubernetes.io/version: 1.0.30
subjects:
- kind: ServiceAccount
  name: instana-agent
  namespace: instana-agent
roleRef:
  kind: ClusterRole
  name: instana-agent
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: instana-agent
  namespace: instana-agent
  labels:
    app.kubernetes.io/name: instana-agent
    app.kubernetes.io/version: 1.0.30
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: instana-agent
  template:
    metadata:
      labels:
        app.kubernetes.io/name: instana-agent
        app.kubernetes.io/version: 1.0.30
    spec:
      nodeSelector:
        type: "infra"
      serviceAccountName: instana-agent
      hostIPC: true
      hostNetwork: true
      hostPID: true
      containers:
        - name: instana-agent
          image: "instana/agent:latest"
          imagePullPolicy: Always
          env:
            - name: INSTANA_AGENT_LEADER_ELECTOR_PORT
              value: "42655"
            - name: INSTANA_ZONE
              value: "k8s-cluster-name"
            - name: INSTANA_AGENT_ENDPOINT
              value: "Enter the host your agent will connect to. (U.S./Rest of the World: ingress-red-saas.instana.io or Europe: ingress-blue-saas.instana.io)"
            - name: INSTANA_AGENT_ENDPOINT_PORT
              value: "443"
            - name: INSTANA_AGENT_KEY
              valueFrom:
                secretKeyRef:
                  name: instana-agent
                  key: key
            - name: JAVA_OPTS
              # Approximately 1/3 of container memory requests to allow for direct-buffer memory usage and JVM overhead
              value: "-Xmx170M -XX:+ExitOnOutOfMemoryError"
            - name: INSTANA_AGENT_POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
          securityContext:
            privileged: true
          volumeMounts:
            - name: dev
              mountPath: /dev
            - name: run
              mountPath: /run
            - name: var-run
              mountPath: /var/run
            - name: sys
              mountPath: /sys
            - name: var-log
              mountPath: /var/log
            - name: var-lib
              mountPath: /var/lib/containers/storage
            - name: machine-id
              mountPath: /etc/machine-id
            - name: configuration
              subPath: configuration.yaml
              mountPath: /root/configuration.yaml
          livenessProbe:
            httpGet:
              path: /status
              port: 42699
            initialDelaySeconds: 300
            timeoutSeconds: 3
          resources:
            requests:
              memory: "512Mi"
              cpu: 0.5
            limits:
              memory: "512Mi"
              cpu: 1.5
          ports:
            - containerPort: 42699
        - name: instana-agent-leader-elector
          image: "instana/leader-elector:0.5.4"
          env:
            - name: INSTANA_AGENT_POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
          command:
            - "/busybox/sh"
            - "-c"
            - "sleep 12 && /app/server --election=instana --http=localhost:42655 --id=$(INSTANA_AGENT_POD_NAME)"
          resources:
            requests:
              cpu: 0.1
              memory: 64Mi
          livenessProbe:
            httpGet: # Leader elector liveness is tied to agent, published on localhost:42699
              path: /com.instana.agent.coordination.sidecar/health
              port: 42699
            initialDelaySeconds: 300
            timeoutSeconds: 3
          ports:
            - containerPort: 42655
      volumes:
        - name: dev
          hostPath:
            path: /dev
        - name: run
          hostPath:
            path: /run
        - name: var-run
          hostPath:
            path: /var/run
        - name: sys
          hostPath:
            path: /sys
        - name: var-log
          hostPath:
            path: /var/log
        - name: var-lib
          hostPath:
            path: /var/lib/containers/storage
        - name: machine-id
          hostPath:
            path: /etc/machine-id
        - name: configuration
          configMap:
            name: instana-agent

The following container environment variables will need to be adjusted.

  • INSTANA_AGENT_KEY - This is the base64-encoded Instana key for the cluster to which the generated data should be sent:

    echo YOUR_INSTANA_AGENT_KEY | base64
  • INSTANA_AGENT_ENDPOINT - IP address or hostname associated with the installation.

Depending on your deployment type (SaaS or On-Premises) and region, you will need to set the agent endpoint appropriately. For additional details relating to the agent endpoints, please see the Host Agent Configuration.
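
If you prefer not to base64-encode the key and edit the Secret in the YAML by hand, the same Secret can also be created directly with oc (replace YOUR_INSTANA_AGENT_KEY with your actual agent key; if you use this approach, remove the Secret object from instana-agent.yaml before applying it):

oc create secret generic instana-agent --namespace instana-agent --from-literal=key=YOUR_INSTANA_AGENT_KEY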

In addition, it is recommended to specify the zone or cluster name for the resources monitored by this agent DaemonSet:

  • INSTANA_ZONE - used to customize the zone grouping on the infrastructure map (See Custom Zones). Also sets the default name of the cluster.
  • INSTANA_KUBERNETES_CLUSTER_NAME - customized name of the cluster monitored by this DaemonSet

Note: For most users, it is only necessary to set the INSTANA_ZONE variable. However, if you would like to group your hosts based on availability zone rather than cluster name, you can specify the cluster name using INSTANA_KUBERNETES_CLUSTER_NAME instead of the INSTANA_ZONE setting. If you omit INSTANA_ZONE, the host zone will be automatically determined from the availability zone information on the host.
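
For example, to group hosts by availability zone while still giving the monitored cluster an explicit name, the env section of the agent container could contain entries like the following (the values are placeholders):

- name: INSTANA_ZONE
  value: "us-east-1a"
- name: INSTANA_KUBERNETES_CLUSTER_NAME
  value: "my-openshift-cluster"

Once all placeholders in instana-agent.yaml are filled in, apply the manifest:

oc apply -f instana-agent.yaml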

Install Using the Operator

The installation of the operator on OpenShift is similar to Kubernetes, but with an additional installation method option and some prerequisites.

There are two ways to install the operator: via the Operator Lifecycle Manager (OLM) or manually.

Please perform the prerequisite steps before proceeding with installing the operator using one of the options mentioned above.

Install Operator Via OLM

  1. Install the Instana agent operator from OperatorHub.io, the OpenShift Container Platform, or OKD.
  2. If you don't already have one, create the target namespace where the Instana agent should be installed (see the example after this list). The agent does not need to run in the same namespace as the operator. Most users create a new namespace instana-agent for running the agents.
  3. Follow Step 4 in the Install Operator Manually section to create the custom resource for the Agent and install it.
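
For step 2, the target namespace can be created the same way as in the prerequisites, for example:

oc new-project instana-agent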

Operator Configuration

Configuration options for the operator can be set via the Instana Agent Custom Resource Definition and via environment variables.

Customizing

Depending on your OpenShift environment, you might need to do some customizing.

If you can't pull Docker images from Docker Hub, you need to add two image streams for the images that are used. Open the OpenShift Container Registry, go to the instana-agent namespace, and add the following image streams:

Name: instana-agent
Image: instana/agent

The resulting image stream should be: docker-registry.default.svc:5000/instana-agent/instana-agent

Name: leader-elector
Image: gcr.io/google-containers/leader-elector

The resulting image stream should be: docker-registry.default.svc:5000/instana-agent/leader-elector:0.5.4
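
The same image streams can also be created from the command line with oc import-image (a sketch; adjust the tags to the versions you actually deploy):

oc import-image instana-agent --from=instana/agent --confirm -n instana-agent
oc import-image leader-elector:0.5.4 --from=gcr.io/google-containers/leader-elector:0.5.4 --confirm -n instana-agent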

Use the respective new image streams in the YAML.

With the node-selector you can specify where the instana-agent DaemonSet should be deployed. Note that every worker host should have an agent installed. If you configure the node-selector, check whether there are any conflicts with the project nodeSelector and the nodeSelector defined in instana-agent.yaml.
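
A project-level node selector is stored as an annotation on the namespace, so a conflicting value can be spotted with, for example:

oc get namespace instana-agent -o yaml | grep node-selector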

Using the ConfigMap, you can set up the agent configuration that is necessary for proper monitoring.
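
For example, to tag the hosts monitored by the agents, the configuration.yaml key of the ConfigMap could contain the host plugin block shown (commented out) in the manifest above:

com.instana.plugin.host:
  tags:
    - 'dev'
    - 'app1'

Because configuration.yaml is mounted with subPath, changes to the ConfigMap typically only take effect after the agent pods are restarted.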

Secrets

See Kubernetes secrets for more details.

FAQ

Why is agent pod scheduling failing on OpenShift 3.9?

In OpenShift 3.9, applying a DaemonSet configuration can result in unscheduled agent pods. If you see an error message similar to the following:

Normal   SuccessfulCreate  1m    daemonset-controller  Created pod: instana-agent-m6lwr
Normal   SuccessfulCreate  1m    daemonset-controller  Created pod: instana-agent-vchgg
Warning  FailedDaemonPod   1m    daemonset-controller  Found failed daemon pod instana-agent/instana-agent-vchgg on node node-1, will try to kill it
Warning  FailedDaemonPod   1m    daemonset-controller  Found failed daemon pod instana-agent/instana-agent-m6lwr on node node-2, will try to kill it
Normal   SuccessfulDelete  1m    daemonset-controller  Deleted pod: instana-agent-m6lwr
Normal   SuccessfulDelete  1m    daemonset-controller  Deleted pod: instana-agent-vchgg

Then you are missing the additional annotation that allows the instana-agent namespace to schedule pods:

oc annotate namespace instana-agent openshift.io/node-selector=""
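
After adding the annotation, the DaemonSet controller should schedule the agent pods again, which you can verify with, for example:

oc get pods -n instana-agent -o wide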