If You Can’t Model It You Can’t Manage It: 9 New Issues Impacting Your Cloud and Container-Based Applications Part 4

February 13, 2018

The Application Data Model is Important

Welcome to Part 4 of Instana’s “9 New Application Issues” series. In this post, we’ll explore application issue #6: “Inflexible data models in monitoring tools make it impossible to understand impact and causality when using containers, orchestration, serverless, etc.”

The underlying data model of an application monitoring and/or analytics tool will either set you up for success or doom you to failure as you manage your cloud, container, and microservices applications. That’s a bold statement, but the data model is a vital consideration when your applications are critical to the health of your business and your monitoring tool is what you rely on to identify and fix application problems. In reality, not all data models are created equal; and if you hope to use advanced analytics or AI to assist in application triage, then you MUST have an accurate model at all times.

What Application Data Model?

The data model is the master ledger of all the technical parts that make up each application: your physical, virtual, and logical components, as well as the relationships that exist between them. It is the single source of truth describing your application environment at any moment in time.

Example components:

  • Physical: Servers, network adapters, disks
  • Virtual: Containers, processes, VMs, JVMs
  • Logical: Clusters, transactions, services, applications

If your data model is missing components (like containers, data caches, or orchestration tools), you will only have partial information to work with, and the conclusions you draw from that data will be incomplete. If the data is stale (inaccurate due to rapid change), you also risk drawing the wrong conclusions, which wastes time and money.
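To make the idea concrete, here is a toy sketch of such a data model: components across the three layers, directed relationships between them, and a "last seen" timestamp so stale entries can be detected. The class and field names are illustrative only, not Instana’s actual model.

```python
import time
from dataclasses import dataclass, field

@dataclass
class Component:
    name: str
    layer: str  # "physical", "virtual", or "logical"
    last_seen: float = field(default_factory=time.time)

class AppModel:
    """Toy application data model: components plus directed relationships."""

    def __init__(self, staleness_sec=30.0):
        self.components = {}   # name -> Component
        self.edges = set()     # (parent, child) pairs, e.g. host -> container
        self.staleness_sec = staleness_sec

    def observe(self, name, layer):
        """Record that a component was seen just now (create or refresh)."""
        self.components[name] = Component(name, layer)

    def relate(self, parent, child):
        """Record a relationship, e.g. a container running on a host."""
        self.edges.add((parent, child))

    def stale(self, now=None):
        """Return components not observed within the staleness window."""
        now = time.time() if now is None else now
        return [c.name for c in self.components.values()
                if now - c.last_seen > self.staleness_sec]

# Build a tiny physical -> virtual -> logical chain.
model = AppModel()
model.observe("host-1", "physical")
model.observe("container-a", "virtual")
model.observe("checkout-service", "logical")
model.relate("host-1", "container-a")
model.relate("container-a", "checkout-service")
```

Anything the model has not seen recently shows up via `stale()`, which is exactly the "stale data" risk described above: a component that is still in the ledger but no longer reflects reality.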

Application Data Model

The Evolution of Data Models Over Time

A look back at history will help us understand why most monitoring data models are no longer effective. Let’s go back in time to explore the architectures and the data models that monitoring tools applied to them. Warning: the following is simplified to keep this blog post relatively short and digestible. There are many extra details that COULD have been added, but the core details are in place.

Client-Server: intelligent client applications contacted a centralized application server to get information to display or use. The clients themselves performed most of the computation, using the data returned from server requests. From a monitoring perspective, this was a simple data model: monitor the client, the server, the OS, and the important requests passing between client and server.
Scale of typical environment: 10s of components.
Rate of change: Slow (months)

3-tier: Almost every web application used to follow a 3-tier model: web server, application server, database server. Users connected to the web server with either a thin or a rich client (the web browser), while most of the application logic and processing occurred on the server side, with little to no computing on the client. This made the data model a bit more complicated, but still pretty simple overall. Generally, you could monitor the web server, app server, database server, web clients, and OS, trace transactions within the application server(s), and that would provide the details required to identify and solve problems.
Scale of typical environment: 10s – 100s of components.
Rate of change: Slow-ish (weeks)

SOA: Service-Oriented Architectures built upon the 3-tier concept by breaking the large monolithic code base of the application server into smaller (though still complex) chunks that provided specific business functionality. Most SOA-based applications still used the 3-tier web/app/database architecture to run each independent service, but linked the services together through a message bus or via API calls. This added another layer of complexity to the data model required to properly monitor and manage SOA applications: you had to account for everything already present in a 3-tier application, and then add the ability to model message buses, API calls, and distributed transactions (requests that flow across multiple services).
Scale of typical environment: 100s – 1000s of components.
Rate of change: Medium (days)

At this point, all hell is about to break loose: virtualization technologies are becoming mainstream and are being applied in various forms, including cloud and containers.

Microservices, Containers and Dynamic Application Models: similar to SOA, but broken down into automatically deployed services connected via lightweight communications (often RESTful APIs). Cloud computing, containers, and orchestration tools are often used to deploy, run, and manage microservice applications. Each virtualization technology in these environments has its own unique characteristics that must be accounted for, and newer technologies, such as serverless infrastructure, are regularly adopted alongside them. To be effective, your application monitoring data model must account for all changes occurring at all times; it must adjust almost instantly; it must be flexible enough to incorporate new technologies regularly; and it must understand the relationships between all physical, virtual, and logical constructs.
Scale of typical environment: 1000s – 100,000s of components.
Rate of change: Incredibly fast (seconds)
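When change happens in seconds, the model has to be updated event by event rather than rebuilt on a slow polling cycle. A minimal sketch of that idea, applying orchestration events to a live model as they arrive (the event shapes and names here are illustrative, not any real orchestrator’s API):

```python
def apply_event(model, event):
    """Update a live model (dict of component name -> metadata) from one event."""
    if event["type"] == "started":
        model[event["name"]] = {"layer": "virtual", "host": event["host"]}
    elif event["type"] == "stopped":
        # Remove the component the moment it disappears; a stale entry
        # would point troubleshooting at a container that no longer exists.
        model.pop(event["name"], None)

model = {}
events = [
    {"type": "started", "name": "cart-7f2", "host": "node-1"},
    {"type": "started", "name": "cart-9b1", "host": "node-2"},
    {"type": "stopped", "name": "cart-7f2"},  # container rescheduled elsewhere
]
for e in events:
    apply_event(model, e)

# model now reflects only the container that is actually running
```

The contrast with the earlier eras is the point: a monthly or weekly refresh was fine for client-server and 3-tier, but at container speed the model is only trustworthy if every start/stop event is folded in as it happens.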

The Times, They Are A-Changing

The data models used to manage the architectures of the past were quite static. They seldom changed, which was fine for managing the architectures of those times; today’s more dynamic apps have made that untenable. Managing tens or hundreds of thousands of rapidly changing components in your application environments demands an automated approach, where the understanding and observability of the environment is handled by artificial intelligence.

Instana CEO Mirko Novakovic has detailed how Instana’s dynamic data model (called the Dynamic Graph) is used in combination with artificial intelligence. It’s well worth taking 10 minutes to read and understand the relationship between dynamic data models and artificial intelligence.

Stay tuned to Instana’s Application Monitoring Blog to catch our discussion of application issue #7: “Alerts take too long to trigger making impact to business too costly”. We’ll discuss why it’s not okay to get notifications 5-10 minutes after a problem has started, even though that is still the norm in many organizations. There’s a much better way and we’ll cover it in detail.
