Abstract: The following article is based on a set of slides presented to potential customers of Instana to explain how we view observability in relation to monitoring when architecting a solution and designing its features. My commentary on these slides is based on nearly 20 years' experience building everything from low-level code profilers up to a mirrored machine universe of software execution memories for the purpose of monitoring. Observability is an important aspect of any solution offering that helps customers adapt to increasing rates of change and ever greater complexity. But it must be framed and managed with regard to monitoring, controllability, and management if it is to be both efficient and effective.
Each day it seems there is yet another use case for observability. We’ve gone from data collection to data debugging and now testing in production. Observability is fundamentally a sensory system, so it is natural for some to confuse what it does with how it can be used.
Observability is not debugging, but it can help with the task of debugging. Observability is not testing, but it can help with the task of testing. Observability, as its name implies, is about perception – the perception of change to structures and processes within a system and the enclosing environment that is monitored out of necessity for quality and availability.
The question that often arises when discussing and comparing observability and monitoring as distinct and separate processes (and products and services) is whether one can exist without the other. But first, let's try to simplify the definitions by relating them to our own sense of self.
Observability, viewed as a multi-sensory system, is our eyes and ears. If you still like to hug physical servers and feel the vibrations of the disks, add touch to the set of sensations.
Another way of framing the relationship between observability and monitoring is in terms of cybernetic control. Monitoring is the process that connects observability and controllability. Controllability here being the ability to rectify the system when its inferred state deviates or needs to adapt to changes including those within the environment or the management process.
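The control loop described above can be sketched in code. This is a minimal illustration of the framing, not product behavior; the service shape, the latency threshold, and the scaling intervention are all hypothetical assumptions chosen for brevity.

```python
# A sketch of the cybernetic framing: observability senses, monitoring
# infers state against a desired state, controllability rectifies.
# All names and thresholds here are illustrative assumptions.

def observe(service):
    """Observability: emit a raw measurement (a record, not an interpretation)."""
    return service["latency_ms"]

def monitor(latency_ms, slo_ms=250):
    """Monitoring: connect observation to control by inferring system state."""
    return "degraded" if latency_ms > slo_ms else "healthy"

def control(service, state):
    """Controllability: intervene when the inferred state deviates."""
    if state == "degraded":
        service["replicas"] += 1  # e.g. scale out as the corrective action

service = {"latency_ms": 400, "replicas": 2}
state = monitor(observe(service))
control(service, state)
print(state, service["replicas"])  # → degraded 3
```

The point of the sketch is that `monitor` is the only function that compares an observation to a desired state; observability alone has no notion of "degraded".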
Observability is a tactical function, whereas monitoring is a strategic function. Monitoring directs attention and assigns significance beyond the local source scope of the data (or signals) emitted from the observability layer. The representations and models within the monitoring space differ from those within the observability layer, which is mainly concerned with measurement records – traces, logs, metrics, events, and signals. Monitoring gives context.
Observability does not have memory and hence cannot assign significance at a degree that serves effective monitoring and management. A memory is distinct from a record. Observability deals in records, whereas monitoring and other higher-order functions deal in memories. Records serve to reconstruct the memory, but they are not the memory. A series of bytes in an audio file is a record of sound, but the sound cannot be brought back into the present for listening without the ability to recall and reconstruct the memory of the sound recorded. I cannot emphasize enough the importance of this distinction, especially when observability products confuse the data they collect with a model for monitoring services.
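The audio analogy above can be made concrete. In this sketch the byte string is the record; nothing about it is "sound" until a decoder reconstructs samples that a playback system could render. The PCM format and sample values are illustrative assumptions.

```python
import struct

# The record: raw little-endian 16-bit PCM bytes, as they might sit in a file.
record = struct.pack("<4h", 0, 16383, 0, -16383)

def reconstruct(record_bytes):
    """Recall: turn the stored record back into samples (the 'memory')
    that a playback pipeline could actually render as sound."""
    n = len(record_bytes) // 2  # two bytes per 16-bit sample
    return list(struct.unpack(f"<{n}h", record_bytes))

print(record)               # opaque bytes – a record, not a sound
print(reconstruct(record))  # → [0, 16383, 0, -16383]
```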
Beyond the process of collecting data, observability products and services primarily offer storage and retrieval capabilities. This is very distinct from the recognition and recall required for effective monitoring and management of systems – a cognitive effort for both man and machine.
It is the cognitive processes within the monitoring solution space that elevate data, and the recording of such, to information of significance for the system under management.
When operational staff buy service management products and services marketed as “intelligent” they are in effect looking to purchase cognition in some form. They are looking to extend and augment their existing thinking, reasoning, remembering, imagining, or learning capacities.
A simple reactive system responds to a stimulus in a purely functional way. It is largely predictable, not adaptive. An intelligent system, on the other hand, attempts first to recognize the current context from past memories of situations and subsequent interventions. It is this memory of past, present, and future possibilities (planning) that gives meaning and value to data collection from observability tooling. Contextualization is always a process of (re)construction.
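The contrast between the two styles can be sketched as follows. The threshold, the memory depth, and the "sustained breach" rule are illustrative assumptions, not a description of any particular product.

```python
from collections import deque

def reactive(cpu):
    """Purely functional: same stimulus, same response, no memory."""
    return "alert" if cpu > 90 else "ok"

class Contextual:
    """Assesses the current stimulus against a short memory of the past,
    so that a lone spike is not treated the same as a sustained breach."""
    def __init__(self, depth=3):
        self.memory = deque(maxlen=depth)

    def assess(self, cpu):
        self.memory.append(cpu)
        sustained = (len(self.memory) == self.memory.maxlen
                     and min(self.memory) > 90)
        return "alert" if sustained else "ok"

c = Contextual(depth=3)
print([c.assess(v) for v in (95, 40, 95, 95, 95)])
# → ['ok', 'ok', 'ok', 'ok', 'alert'] – only the sustained breach alerts,
# whereas reactive(95) would alert on every spike
```

The design point: `reactive` maps each observation to a response in isolation, while `Contextual` (re)constructs a context from remembered observations before assigning significance.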
Observability offers up the raw materials to construct context, but the building of such is done by monitoring. That is not to say that data is blindly collected and thrown over the wall into the monitoring space. All effort, man or machine, comes with a cost that requires a balancing of efficiency and effectiveness. Monitoring serves this necessary requirement.
Observability without regard for monitoring does not scale. Humans cannot be expected to wade through billions of data records looking for patterns of significance. Doing so should be considered a last resort, and possibly a failing in tooling and management. The responsibility for (re)constructing an appropriate context lies with monitoring – monitoring that has been watching for far longer before a human operator is ever alerted to an incident.
Can there be observability without monitoring? No, if we consider what we do as a cognitive process consisting of perception, memory, and prediction. Here prediction is the ability to form inferences of the current objectified state, probable causes, and future consequences of actions.
I’ll end with a quote from Heinz von Foerster, the originator of second-order cybernetics, from his book Understanding Understanding.