Instana Blog

Date: January 6, 2020

Observability Normalized

Category: Engineering

Now and then, an observability or monitoring article is posted pitting agent-based and agentless architectures against each other. This is unfortunate, as it obscures the real underlying difference, which is less about the installation of local agent software and more about the efficient and effective communication of contextual state – normalized or not. We should not differentiate in terms of whether an agent is deployed, especially now that some companies elect to instrument parts of an application’s codebase manually using open source observability libraries. Instead, we should consider whether the observer, agent or library, is stateless or stateful with respect to what and how it observes, measures, composes, collects, and transmits observations.

With a stateless observer (and backend), each time an event occurs and calls into the instrumentation library, the entire contextual hierarchy must be transmitted to a backend – invariably a big dumb data sink in the cloud. The event payload for a stateless observer repeatedly carries many (identity state) tags pertaining to the enclosing context hierarchy, such as cluster, host, container, pod, process, and thread – flattened and bloated. This is especially true for those observability and monitoring vendors that came to market with a very simplistic “send all your event data to us” message. Data as opposed to context. Data as opposed to a model. Flat and fat.
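To make the repetition concrete, here is a minimal sketch (with invented field names) of the payload a stateless observer must transmit on every single event – the full identity hierarchy rides along each time, flattened into tags:

```python
# Hypothetical wide event from a stateless observer: the enclosing context
# hierarchy is resent, flattened into tags, with every event payload.
wide_event = {
    "event": "http.request",
    "duration_ms": 42,
    # Identity tags repeated on every single event:
    "cluster": "prod-eu-1",
    "host": "ip-10-0-4-17",
    "container": "cart-7f9c",
    "pod": "cart-7f9c-xk2p",
    "process": "java",
    "thread": "http-nio-exec-3",
}

# Only two fields carry new information; the rest is repeated context.
identity_tags = set(wide_event) - {"event", "duration_ms"}
print(len(identity_tags))  # 6 identity tags resent per event
```

Multiply those six identity tags by millions of events per minute and the "flat and fat" cost becomes obvious.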

A stateful observer, on the other hand, sends several distinct payload types to the backend as it discovers objects of interest (contexts) and detects changes in their state – there is a conversational dialog with the backend, building up a normalized representation of the reality the observer exists within and readily perceives. Contexts are mirrored in the backend, with transmissions including only what is needed to maintain that mirror. Event payloads need only include an identifier for the immediate enclosing context – fast and slim.
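The conversational dialog can be sketched as follows. This is an illustrative toy (class and field names are invented, not Instana's protocol): contexts are registered once, resent only on change, and events carry just the enclosing context's identifier:

```python
# Toy sketch of a stateful observer's dialog with its backend.
class StatefulObserver:
    def __init__(self):
        self.contexts = {}  # local mirror of what the backend already knows
        self.outbox = []    # payloads "transmitted" to the backend

    def discover(self, ctx_id, attrs):
        """Send a context payload only when it is new or its state changed."""
        if self.contexts.get(ctx_id) != attrs:
            self.contexts[ctx_id] = dict(attrs)
            self.outbox.append({"type": "context", "id": ctx_id, **attrs})

    def emit(self, ctx_id, event):
        """Events reference the enclosing context by id -- fast and slim."""
        self.outbox.append({"type": "event", "ctx": ctx_id, **event})

obs = StatefulObserver()
ctx = {"host": "ip-10-0-4-17", "pod": "cart-7f9c-xk2p"}
obs.discover("thread-3", ctx)   # context sent once
obs.discover("thread-3", ctx)   # unchanged: nothing transmitted
obs.emit("thread-3", {"event": "http.request", "duration_ms": 42})
obs.emit("thread-3", {"event": "http.request", "duration_ms": 17})
print(len(obs.outbox))  # 3 payloads: one context, two slim events
```

Two events cost one context payload plus two small event payloads, instead of two fully flattened wide events.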

Note: Just imagine how difficult it would be to construct a compelling narrative if a local observer were unable to identify and track forms within its local environment. A human form enters a bar with hair color brown, height 6 ft… then later… a human form orders a drink with hair color brown, height 6 ft… Can we be sure this is the same form? The job of an observer is to accurately associate both statements if they do indeed concern the same subject.

Wide events are, in many cases, just another way for a vendor to describe an unnormalized, semi-structured data collection encoding. This is attractive for the vendor, less so for their customers, as it removes the need for heavy up-front engineering work in contextual model design. Vendors need only allow an observer to send along as many arbitrary tags (or labels) as possible every time an event is transmitted. The costly engineering effort falls on the users (developers) of the instrumentation library and on those tasked with trying to put everything back together again by way of custom dashboards and multi-tagged, multi-dimensional rows and tables.

Customers of denormalized data backends nearly always end up spending a significant amount of time attempting to define a company-standard set of tags or labels before even reaching the most difficult and impractical task – linking dashboards together by way of multiple tag sets embedded in pages, widgets, and console references.
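The "company-standard tags" effort usually boils down to maintaining a translation table by hand. A small sketch of the problem, with invented tag keys: two teams describe the same identity under different names, and nothing lines up until someone curates an alias map:

```python
# Hypothetical alias map a platform team must write and maintain by hand
# before dashboards built on raw tags can be linked together.
TAG_ALIASES = {
    "hostname": "host",
    "node": "host",
    "k8s.pod": "pod",
    "pod_name": "pod",
}

def normalize(tags):
    """Rewrite each tag key to the company-standard name, if one exists."""
    return {TAG_ALIASES.get(k, k): v for k, v in tags.items()}

team_a = {"hostname": "ip-10-0-4-17", "k8s.pod": "cart-7f9c-xk2p"}
team_b = {"node": "ip-10-0-4-17", "pod_name": "cart-7f9c-xk2p"}

# Identical realities only match after manual curation of the alias map.
print(normalize(team_a) == normalize(team_b))  # True
```

A normalized context model makes this table unnecessary: identity is agreed upon once, in the model, rather than renegotiated tag-by-tag across teams.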

A proper context-based observability model, with normalized forms, makes it nearly effortless to navigate infrastructure and service maps – up and down layered stacks and across system and service boundaries. With good UX/UI design, moving between multiple contexts, both spatial and temporal, can be extremely productive in reducing the cognitive load on operations staff. When offered mostly a denormalized and heavily tagged data backend, engineering teams become entangled and lost in the data fog, losing sight of bounded contexts. Once such forms are lost to the human eye, the recognition of significant change follows suit.

