Analyze Traces & Calls
TABLE OF CONTENTS
- View traces
- Capture logs and errors
- Automatic aggregation of short exit calls
- Capture parameters
- Long running tasks
- Historical data
Examine traces in Unbounded Analytics, where you can investigate the traces and calls that Instana collects. To help you understand how an application behaves, Instana monitors each call as it comes into the system.
- On the sidebar, click Applications.
- On the Applications dashboard, select an application or service.
- On the application or services dashboard, click Analyze Calls.
- On the Analytics dashboard, you can analyze calls by application, service, and endpoint, breaking down the data by service, endpoint, and call name, respectively. Under Applications, select Calls or Traces.
- Click a group and then select a trace.
On the Analytics dashboard, traces or calls can be filtered and grouped using arbitrary tags. In Analyze Calls, filters can be connected using the AND and OR logic operators and grouped together with brackets. In Analyze Traces, only the AND operator is available.
There are two approaches to filter data:
- Query Builder
- Filter Sidebar
While both can be used on their own, they work best in combination.
Use the Query Builder at the top of the Analytics dashboard to filter the initial result set. By clicking Add filter, you can apply tags such as endpoint.name, along with infrastructure entity tags such as host.name, to both the source and the destination of a call. By default, a tag is applied to the destination. To change it to the source, click the selector before the tag name and select source. By combining source and destination, you can create queries such as Show me all the calls between these two services or Show me all the calls that are issued from my agent.zone 'production' towards a particular destination. The selection of source or destination is not available for call tags such as call.tag, which are properties of the call itself and are independent of the source or destination.
To apply grouping, click Add group and select one of the tags. The default grouping uses the endpoint name (endpoint.name) tag. To inspect the individual traces and calls that match the filters, you can either expand a group to peek into its results, or click Focus on this group, which removes the grouping and further filters the results by the value of the selected group. Tags can be applied to the call's source or destination, so you can express queries such as Show me all the calls towards this one service, broken down by caller. Calls that do not match any group are shown in a separate group; for instance, with agent.zone this is Calls without the 'agent.zone' tag. To remove these unmatched calls from the results, apply an additional filter with the is present operator.
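The behavior of unmatched groups can be illustrated with a small sketch (illustrative only; the call records and tag values are made up):

```python
from collections import defaultdict

# Hypothetical call records; not every call carries the agent.zone tag.
calls = [
    {"endpoint.name": "/checkout", "agent.zone": "production"},
    {"endpoint.name": "/search",   "agent.zone": "staging"},
    {"endpoint.name": "/health"},  # no agent.zone tag
]

def group_by(records, tag):
    """Group records by a tag; unmatched records land in their own group."""
    groups = defaultdict(list)
    for record in records:
        key = record.get(tag, f"Calls without the '{tag}' tag")
        groups[key].append(record)
    return dict(groups)

grouped = group_by(calls, "agent.zone")
print(sorted(grouped))  # the unmatched group appears alongside the others

# The "is present" operator drops the unmatched calls before grouping.
present = [c for c in calls if "agent.zone" in c]
print(sorted(group_by(present, "agent.zone")))  # → ['production', 'staging']
```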
Grouping by source and destination is also not available in Analyze Traces, as the available groups in that view are independent of the source or destination of any one particular call.
The preceding example filters by application Catalogue (user-journey) and lists the calls grouped by the endpoint name.
Using the results received by applying Query Builder filters, you can quickly drill down into the data by applying additional filters in the Filter Sidebar on the left side of the page.
Items within the same tag category are combined with logical OR; different tag categories are combined with logical AND. All selected filters in the Filter Sidebar are applied on top of any Query Builder filter with logical AND. The header at the top of the Filter Sidebar shows the total count of selected items across all tags and lets you quickly remove all applied sidebar filters.
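The combination logic can be sketched as a predicate (illustrative only; the tag names and values here are made up):

```python
# Sidebar filter state: {tag_category: set of selected values}.
# Values within one category combine with OR; categories combine with AND.
sidebar = {
    "service.name": {"discount-svc", "catalogue-svc"},  # OR within the tag
    "call.error":   {"true"},                           # AND across tags
}

def matches(call, filters):
    """A call matches when every tag category accepts one of its values."""
    return all(call.get(tag) in values for tag, values in filters.items())

calls = [
    {"service.name": "discount-svc", "call.error": "true"},
    {"service.name": "discount-svc", "call.error": "false"},
    {"service.name": "payment-svc",  "call.error": "true"},
]

print([matches(c, sidebar) for c in calls])  # → [True, False, False]
```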
Attention: Multiple selections for a single tag are currently not supported in Analyze Traces.
In the example above, we filter by application Catalogue (user-journey) in the Query Builder AND service discount-svc selected in the Filter Sidebar.
To quickly group by one of the Filter Sidebar tags, click the grouping button displayed on the right side of each tag that is suitable for grouping. This is a quick way to configure grouping in the Query Builder, as described earlier. In the same way, you can lift the grouping again by clicking the ungroup button on the tag that is currently used for grouping.
Trace and call latency can be inspected using the Latency Distribution chart. When you select a latency range on the chart, the filters above the chart are adjusted accordingly, and the results in the table below are updated to show only traces or calls within the specified latency range.
View traces
To display a trace view, on the Analytics dashboard select a group, and then click the trace. Selecting a call displays the call in the context of its trace.
The summary details of a trace include:
- The trace name (usually an HTTP entry).
- The name of the service it occurred on.
- The type or technology.
The core KPIs:
- Sub calls to other services.
- The number of erroneous calls.
- The number of errors within the trace.
- The number of warnings within the trace.
- The total latency.
The trace timeline displays the following:
- When the trace started.
- The chronological order of services called throughout the trace.
The call chains hang from the root element (span). On simple three-tier systems, you typically see a depth of four levels. In contrast, on systems with a distributed service or microservices architecture, you can expect much longer icicles. When a trace has long sub-calls or periodic call patterns, such as one HTTP call per database entry, the timeline gives you an excellent overview of the call structure.
To view details of a span, click the span within the timeline graph. To view where the time was spent within a specific call, hover over the call displayed on the timeline graph. The call details include Self (time spent within the call itself), time spent waiting on another call, or time spent on the network.
The services list under the timeline graph summarizes all the calls per service, showing the number of calls, the aggregated time, and the errors that occurred. Each service has its own colour (in this example, shop = blue, productsearch = green). Select a service to view its details in the Applications and Services dashboard.
The trace tree displays the structure of the upstream and downstream service calls, along with the type of the call. To explore specific calls, expand and collapse individual parts of the trace tree. Select a call to view its details in the services and endpoints dashboard.
To display the call detail sidebar, select a call in the timeline graph. The details displayed include the source and destination of the call, errors, a status code, along with the stack trace.
Capture logs and errors
Instana automatically captures errors when a service returns a bad response, or when a log entry with WARN level (or similar, depending on the framework) is detected.
Automatic aggregation of short exit calls
Instana always endeavors to give you the best understanding of service interactions, while also minimizing impact on the actual application. Certain scenarios, however, require Instana to drop data in order to achieve that.
A very common problem in systems is the so-called 1+N query problem: code performs one database call to get a list of items, followed by N individual calls to retrieve each item. The problem can usually be fixed by performing a single call that joins the related data.
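A minimal sketch of the pattern and its fix, using a hypothetical SQLite schema:

```python
import sqlite3

# In-memory demo schema: orders and their line items (hypothetical tables).
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (id INTEGER PRIMARY KEY);
    CREATE TABLE items  (order_id INTEGER, name TEXT);
    INSERT INTO orders VALUES (1), (2), (3);
    INSERT INTO items  VALUES (1, 'a'), (1, 'b'), (2, 'c'), (3, 'd');
""")

# 1+N pattern: one query for the list, then one query per order.
orders = [row[0] for row in conn.execute("SELECT id FROM orders")]
items_per_order = {
    oid: [r[0] for r in conn.execute(
        "SELECT name FROM items WHERE order_id = ? ORDER BY name", (oid,))]
    for oid in orders
}  # 1 + N = 4 round trips in total

# Fix: a single JOIN query replaces the N follow-up calls.
joined = {}
for oid, name in conn.execute(
        "SELECT o.id, i.name FROM orders o "
        "JOIN items i ON i.order_id = o.id ORDER BY o.id, i.name"):
    joined.setdefault(oid, []).append(name)

print(items_per_order == joined)  # → True; same data, one round trip
```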
The icon next to the call name indicates how many requests were batched together. The call details match those of the most significant service invocation, for example the request with the highest duration or with errors. The duration and error count of the shown call are aggregated from all batched calls.
The aggregation of service interactions only happens within the following constraints:
- Highly frequent and repetitive access patterns of a similar type
- Individual service invocations take less than 10 ms
- Time between invocations is less than 10 ms
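Taken together, the constraints can be sketched as a simple batching rule (an illustrative sketch only, not Instana's actual implementation):

```python
# Simplified sketch of batching short, repetitive exit calls.
MAX_DURATION_MS = 10   # individual invocations must take less than this
MAX_GAP_MS = 10        # time between invocations must be less than this

def batch_calls(calls):
    """Group consecutive (start_ms, duration_ms) calls into batches."""
    batches = []
    for start, duration in sorted(calls):
        prev = batches[-1][-1] if batches else None
        mergeable = (
            duration < MAX_DURATION_MS              # current call is short
            and prev is not None
            and prev[1] < MAX_DURATION_MS           # previous call is short
            and start - (prev[0] + prev[1]) < MAX_GAP_MS  # small gap
        )
        if mergeable:
            batches[-1].append((start, duration))
        else:
            batches.append([(start, duration)])
    return batches

calls = [(0, 2), (3, 2), (6, 2), (50, 30), (90, 1), (93, 1)]
print([len(b) for b in batch_calls(calls)])  # → [3, 1, 2]
```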
Capture parameters
Due to impact concerns, the tracing sensors of Instana do not currently capture method parameters or method return values automatically. To capture additional data on demand, use the SDKs.
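For illustration only, the Span class and traced decorator below are hypothetical stand-ins, not Instana's SDK API; the idea is that the SDK lets you attach selected parameters as tags on a custom span:

```python
import functools

class Span:
    """Hypothetical stand-in for an SDK-created custom span."""
    def __init__(self, name):
        self.name, self.tags = name, {}

    def set_tag(self, key, value):
        self.tags[key] = value

RECORDED = []  # collected spans; a real SDK would report them to the agent

def traced(capture=()):
    """Decorator that records the named keyword parameters as span tags."""
    def wrap(fn):
        @functools.wraps(fn)
        def inner(**kwargs):
            span = Span(fn.__name__)
            for param in capture:                 # capture is opt-in per parameter
                span.set_tag(f"param.{param}", kwargs.get(param))
            RECORDED.append(span)
            return fn(**kwargs)
        return inner
    return wrap

@traced(capture=("user_id",))                     # password is deliberately omitted
def login(user_id, password):
    return f"session-for-{user_id}"

login(user_id="u42", password="secret")
print(RECORDED[0].tags)  # → {'param.user_id': 'u42'}
```

Only explicitly listed parameters are captured, which keeps sensitive values such as passwords out of the trace data.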
Long running tasks
Due to timeouts, high load, or any number of other environmental conditions, some calls can take significant time to respond. Traces can contain tens or even hundreds of such calls. Because Instana does not wait until all calls have responded before delivering tracing information, long running spans are replaced with a placeholder. When the span finally returns, the placeholder is replaced with the correct call information.
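The placeholder mechanism can be sketched roughly like this (a simplified illustration, not Instana's implementation):

```python
# Simplified sketch: a trace is reported before all of its spans finish.
PLACEHOLDER = {"name": "<pending>", "duration_ms": None}

trace = {}  # span_id -> span data as currently known

def report_span_start(span_id):
    # Long running span is initially shown as a placeholder.
    trace[span_id] = dict(PLACEHOLDER)

def report_span_end(span_id, name, duration_ms):
    # When the span finally returns, the placeholder is replaced.
    trace[span_id] = {"name": name, "duration_ms": duration_ms}

report_span_start("a1")
print(trace["a1"])  # → {'name': '<pending>', 'duration_ms': None}
report_span_end("a1", "SELECT items", 4200)
print(trace["a1"])  # → {'name': 'SELECT items', 'duration_ms': 4200}
```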
Historical data
Instana stores all traces and calls for 7 days. Past this period, our retention strategy retains statistically significant traces and calls to prevent unbounded storage growth.
Traces and calls that rarely occur may not be represented in such scenarios.