- Executive Summary: Manual Monitoring Slows Down Everything
- Change is the New Normal
- The Rise of Dynamic Applications
- The True Effort To Build Your Own Performance Monitoring
- Why Manual Performance Management Fails
- Implementing Automated Performance Management
- Conclusion: Maximizing Your CI/CD Agility
- About Instana
Executive Summary: Manual Monitoring Slows Down Everything
Continuous Integration and Continuous Delivery (together, CI/CD) have become the goal for a majority of organizations. Meanwhile, modern technologies like Docker and Kubernetes have become widespread in production application environments. The result of these trends is that applications and their infrastructure are increasingly dynamic: constantly changing to meet higher scalability requirements and fast-changing application functionality.
Yet the typical organization still relies on performance monitoring technology designed before CI/CD was the application delivery model. These solutions require significant manual effort from IT staff, hampering the ability of organizations to use staff time effectively, control IT spending, and gain meaningful visibility into their applications and infrastructure.
Adopting an automated performance management strategy is the only way organizations can break through these challenges and fully realize the opportunity presented by CI/CD-powered application environments. Embracing automation at every layer of performance management enables organizations to reduce costs while improving outcomes.
Change is the New Normal
It may sound like a cliché to observe that we live in a fast-changing world. From the perspective of IT organizations, however, the fast-changing nature of today’s application environments is extreme.
Here at Instana, as we engage with our customers, we see one clear mandate: a bias for speed. The faster a company can deliver new custom business applications, the more value IT is delivering. Effectively, this makes the ROI of IT go up!
This simple reality is driving the adoption of new techniques and technologies, particularly CI/CD (Continuous Integration & Continuous Delivery). CI/CD is an AUTOMATION exercise that enables delivery speed, better business capability, and ultimately improved quality for custom applications. The last stage in the CI/CD cycle is MONITORING. It follows, therefore, that the more automated and seamless the monitoring of a business application is, the easier it is to complete the CI/CD cycle and restart the loop of improving the application.

The Rise of Dynamic Applications
For application development, that need for speed has driven broad adoption of technology that enables rapid construction and delivery of new services.
In particular, the Continuous Integration and Delivery machine is being recognized as a primary process for enabling speed and quality. In addition, to augment that process, new architectures and technologies such as containers, microservices, serverless computing and Kubernetes orchestration are being investigated and adopted.
With more workloads moving to dynamic technologies (more than 29 billion Docker downloads in five years), there are new realities for the pace and scale of change within application environments.
More than half of organizations now use CI/CD company-wide, and the IT industry seems to be consolidating around the CI/CD terminology and methodologies.
Meanwhile, each of the major public cloud platforms (AWS, Azure and Google Cloud) now offers managed Kubernetes container services that are easily integrated into CI/CD delivery pipelines. It’s quite clear: there is a major change underway in the world of application construction and delivery.

Visualizing how highly distributed applications are structured and operating is challenging. (Image source: uber.com/blog)
The True Effort To Build Your Own Performance Monitoring
Since traditional APM and monitoring tools were not designed for dynamic applications, a new set of open-source technologies emerged to help development teams set up their own monitoring through manual coding. Whether providing performance metrics, tracing the paths of requests through an application, or exposing other details of code, these open-source monitoring tools create their own set of challenges when it comes to production performance monitoring.
To manually monitor the performance of an application, engineers must do the following:
- Write data collectors manually
- Manually code tracing to track distributed requests
- Configure a data repository manually
- Identify and designate dependencies manually (usually through reverse engineering)
- Manually select data to correlate
- Build dashboards to visualize correlation manually
- Configure alerting rules and thresholds manually
Of all the tasks listed above, the biggest problem is that the people performing them are usually the highest-skilled (and highest-paid) technicians and engineers in the company.
In short, while open source tools promise simplified performance management, there are too many manual tasks, resulting in deployment slow-downs, cost increases and additional manpower requirements. And that’s before you consider the opportunity cost of your precious developers writing monitoring code instead of business application code.
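To make that effort concrete, here is a minimal sketch of the kind of tracing code teams end up writing by hand, using the open-source OpenTelemetry Python API (the opentelemetry-api and opentelemetry-sdk packages). The service and function names are hypothetical; a real application needs some variant of this boilerplate in every service.

```python
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

# Manually configure where trace data goes. A console exporter keeps the
# sketch short; production setups must also wire up a real backend,
# sampling, resource attributes and more.
provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("checkout-service")  # hypothetical service name

def handle_checkout(order_id: str) -> None:
    # Every operation worth tracing must be wrapped by hand...
    with tracer.start_as_current_span("checkout") as span:
        span.set_attribute("order.id", order_id)  # ...and annotated by hand.
        with tracer.start_as_current_span("charge-card"):
            pass  # call out to the payment service here
```

Multiply this across every service, framework and code path, and the manpower cost described above becomes clear.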
Why Manual Performance Management Fails
Historically, manually setting up a monitoring system didn’t present a problem because neither the application code, nor the application infrastructure (middleware, app servers, etc.) changed very often. IT would provision a box, set its IP address, load some software, set up the monitoring and then never touch it again for years.
Nor was an application environment that was built upon traditional technologies likely to fluctuate rapidly in scale; the number of application instances and host servers running at any given moment typically did not change. As a result, manual configuration of the tech stack in static environments didn’t impact the organization’s ability to monitor and manage performance.
But when workloads move into dynamic environments based on technologies such as containers and on methodologies such as CI/CD, manual performance management strategies and build-it-yourself solutions break down. This is true for several reasons.
Constant Change
Change is the only constant in dynamic application environments. The structure of the application changes continuously at every layer. New hosts come online and disappear all the time, with containers making provisioning even more dynamic. Completely new APIs and services are built and provisioned by developers continuously, without checking with operations. Even the application code can change at any moment due to improvements or bug fixes. The closer a team gets to CI/CD, the more frequently changes occur.
As a result, performance management configuration, monitoring dashboards, dependency mappings and alerting rules must be able to evolve automatically to keep pace with the environment they are monitoring. Otherwise, IT teams lack the necessary accurate visibility into the environments they manage, which leaves the organization at great risk of failure that could impact end-users.
Complex Dependencies
These constant changes impact the actual dependencies between different components. Any specific service depends on a unique vertical stack of software as well as data (or processing) from other services.
Why is knowing dependencies important? Troubleshooting! Getting to the root cause of an issue in a complex environment is an exercise in dependency analysis. What’s causing slow or erroneous requests? Requests traverse many services across many frameworks and infrastructures, so to answer that question, understanding the structural dependencies of every request is invaluable. But as detailed above, those dependencies constantly change!
Attempting to interpret such dependencies manually is simply not feasible—especially when dependencies change quickly due to code deployments or infrastructure scaling. Even if human operators succeed in mapping dependencies manually at one moment in time, their mappings will quickly become outdated. What’s more, manual dependency interpretation is a huge resource drain and takes your best engineers to accomplish.
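To see why this is a job for software rather than people, consider a minimal sketch of deriving a service dependency map from distributed-trace data. The span model is deliberately simplified and the service names are hypothetical; the point is that the map can be rebuilt automatically from live traffic at any moment.

```python
from collections import defaultdict

# Simplified trace model: (span_id, parent_span_id, service_name).
# A real tracing system carries far more context per span.
spans = [
    ("s1", None, "api-gateway"),
    ("s2", "s1", "orders"),
    ("s3", "s2", "payments"),
    ("s4", "s2", "inventory"),
]

def build_dependency_graph(spans):
    """Derive caller -> callee service edges from parent/child span links."""
    service_of = {span_id: service for span_id, _, service in spans}
    edges = defaultdict(set)
    for _, parent_id, service in spans:
        if parent_id is not None and service_of[parent_id] != service:
            edges[service_of[parent_id]].add(service)
    return edges

print(dict(build_dependency_graph(spans)))
# e.g. {'api-gateway': {'orders'}, 'orders': {'payments', 'inventory'}}
```

Rerunning this over the live stream of traces keeps the map current; a human redrawing a diagram by hand cannot keep up.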
Rule Deterioration
When your environment changes constantly, the rules that your monitoring tools use to determine whether applications and services are healthy need to change continuously as well. If they don’t (which is likely to happen if you depend on manual intervention to update rules), the rules will quickly deteriorate. For example, when a new service is deployed or an orchestrator moves workloads, health check rules will cease to interpret environment dependencies accurately until the rules are updated.
If rules are not updated manually, monitoring alerts and insights will be based on outdated configurations. This deterioration undercuts visibility and increases the risk of an infrastructure or service failure.
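A minimal sketch of how this deterioration plays out, assuming a hand-maintained threshold table (the service names and thresholds are hypothetical, not any particular tool’s rule format):

```python
# Hand-maintained alert rules, tuned when they were first written.
STATIC_RULES = {
    "orders":   {"p99_latency_ms": 300},
    "payments": {"p99_latency_ms": 500},
}

def check(service: str, p99_latency_ms: float) -> str:
    rule = STATIC_RULES.get(service)
    if rule is None:
        # A service deployed after the rules were written is silently unmonitored.
        return "unmonitored"
    return "alert" if p99_latency_ms > rule["p99_latency_ms"] else "ok"

# A new "recommendations" service ships through the CI/CD pipeline, but
# nobody updates the table, so its failures never raise an alert:
print(check("recommendations", 2000.0))  # -> "unmonitored"
```

The inverse failure happens too: thresholds tuned for yesterday’s traffic fire false alarms after a scale-up, eroding trust in every alert the system produces.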
Manual Monitoring Impedes Speed
Tasks such as writing tracing and monitoring code are too time-consuming. So is interpreting monitoring information by hand and manually updating performance management rules whenever application code or environment architectures change.
Simply put, humans can’t support rapidly changing environments without automation. Attempting to manage performance manually will significantly slow down application release cycles. And for the business, it means poor use of the expensive expertise that IT staff represent.
Accelerating software delivery is paramount for organizations seeking to keep pace with fast-changing user demands. Business needs will only increase this demand, so looking forward, applications will certainly become MORE dynamic.
Implementing Automated Performance Management
If manual monitoring is so laborious, it raises an obvious question:
Why hasn’t every organization automated monitoring yet?
There are actually three reasons for this:
- Until recently, fully automatic monitoring technology did not exist.
- Software engineers tend to code their own solutions to problems, leading to the monitoring-as-code approach, which is functional in the short term, but is not maintainable or scalable.
- Teams try to leverage previous investments in legacy monitoring tools, hoping they will work in their new high-speed environments; unfortunately, they do not.
With these reasons in mind, let’s take a look at what capabilities should be part of a fully automated application performance monitoring solution.
- Automatic discovery and monitoring of the full infrastructure and application stack. A key element of the CI/CD process that is missing today and is slowing down the release process.
- Real-time complex dependency mapping, with automatic updates for any change. Critical for root-cause analysis to address performance issues quickly.
- Automatic, rapid identification of performance problems. Quickly identifying performance issues based on automatic rule configuration and monitoring is the only way to minimize false positives and avoid application delivery delays.
- Rule configuration and alerting that use machine learning and artificial intelligence to establish dynamic baselines for healthy application behavior, then identify anomalies based on those baselines. This eliminates the need for tedious configuration, monitoring and data interpretation by humans, while minimizing false-positive alerts.
- Automatic monitoring setup: agent deployment, code instrumentation, infrastructure discovery and more. Maximizing the data collected, while simultaneously minimizing the effort required from human admins to collect it.
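As a rough illustration of the dynamic-baseline idea (a simple statistical stand-in, not Instana’s actual algorithm), the sketch below flags a metric sample as anomalous when it deviates sharply from its own recent history rather than from a hand-tuned threshold:

```python
import statistics

def is_anomalous(history: list[float], latest: float,
                 window: int = 60, k: float = 3.0) -> bool:
    """Flag `latest` if it sits more than `k` standard deviations away
    from a baseline learned from the last `window` samples."""
    recent = history[-window:]
    if len(recent) < 2:
        return False  # not enough data to establish a baseline yet
    mean = statistics.fmean(recent)
    stdev = statistics.stdev(recent)
    if stdev == 0:
        return latest != mean
    return abs(latest - mean) / stdev > k

# The baseline adapts as the environment changes; no one edits a threshold.
history = [100.0 + i * 0.5 for i in range(60)]  # slowly rising latency
print(is_anomalous(history, 131.0))  # close to the trend -> False
print(is_anomalous(history, 400.0))  # sharp deviation    -> True
```

A production system would also learn seasonality and correlate anomalies across metrics, but even this sketch shows why thresholds need not be re-tuned by hand every time traffic patterns shift.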
Both legacy performance management tools and modern open source monitoring tools lack this automation functionality. While the exact work is different, both sets of tools require too much manual configuration, programming, setup and administration to optimize monitoring.
Conclusion: Maximizing Your CI/CD Agility
Is Your CI/CD as Agile as it Could Be?
Successful management of dynamic application environments doesn’t end with automated monitoring.
Your IT teams should constantly assess the extent to which they have successfully automated all performance management tasks by asking themselves the following questions:
- How long does it take to achieve sufficient visibility into application performance after you push out a new release?
- How long does it take to update monitoring rules when a new application or service deployment occurs?
- How much time and effort do your developers expend writing tracing code?
- How many performance or availability incidents are you missing per month or quarter?
- How are you handling alert storms (meaning a rapid stream of alerts in a short period)? Are you able to respond to each alert effectively without suffering alert fatigue? Can you trace alerts quickly to root causes so that you know when multiple alerts are stemming from the same underlying issue?
- Is your monitoring and performance management process as automated as the rest of the application delivery pipeline? If not, how can it be automated further?
Regardless of your IT team size, automation is critical for an effective performance management strategy so that you can achieve high-speed delivery of new business services.
Instana, the automatic performance management solution born in the age of microservices, cloud computing, and containers, can enable you to truly deliver on the promise of CI/CD.
Designed specifically to manage the performance of dynamic, CI/CD driven application environments, Instana leverages artificial intelligence and automation to deliver comprehensive, actionable insight with no manual effort.
About Instana
Instana, an IBM Company, provides a real-time, automated Enterprise Observability Platform with application performance monitoring capabilities for businesses operating complex, modern, cloud-native applications no matter where they reside: on premises or in public and private clouds, including mobile devices or IBM Z® mainframe computers. Users can control modern hybrid applications with Instana’s precise metrics, full end-to-end traces for all transactions, and AI-powered discovery of contextual dependencies inside hybrid applications.
Instana helps Site Reliability Engineers improve the reliability and resiliency of cloud-native applications by preventing issues from turning into incidents and by providing fast remediation when incidents occur. Instana also provides visibility into development pipelines to help enable closed-loop DevOps automation with actionable feedback for optimizing application performance, enabling innovation, mitigating risk, and managing cloud technology expenditures.
For more information, visit instana.com.