The word “telemetry” has traditionally been associated with industrial automation. Smart meters, aircraft control systems, and self-driving cars all generate real-time metrics that, when gathered, monitored, and stored remotely, provide the raw material for a wide range of insights.
Telemetry systems for smart meters gather usage data that feeds billing, capacity planning, and customer communications. Those for aircraft control systems power autopilot, landing, and hydraulics systems and provide notifications when parts or systems are about to fail. Self-driving cars rely on real-time analysis of hundreds of thousands of metrics per minute, gathered from cameras and sensors. From this data, internal guidance algorithms generate split-second responses to changing traffic conditions.
There is, however, a distinct type of telemetry, often called “operational metrics,” generated by IT environments and the applications that run on them. Virtually every component underlying an application produces time-series metrics; some produce logs or similar unstructured and semi-structured data as well. Such data has traditionally served IT support: Application Support and Operations teams need visibility into the technology environments they manage to monitor performance and availability and to guide them during troubleshooting.
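To make the distinction between these two data shapes concrete, the sketch below contrasts a structured time-series metric sample with a semi-structured log event from the same hypothetical component. The field names, values, and formats are illustrative assumptions, not any particular product’s schema.

```python
import json
from datetime import datetime, timezone

# A structured time-series metric: a numeric value tied to a timestamp
# and identifying dimensions. (Illustrative schema, not a product format.)
metric_sample = {
    "timestamp": datetime(2016, 5, 12, 14, 30, 5, tzinfo=timezone.utc).isoformat(),
    "metric": "cpu.utilization.pct",
    "value": 87.4,
    "dimensions": {"host": "web-03", "service": "checkout"},
}

# A semi-structured log event from the same component: free text plus
# embedded key=value pairs that must be parsed before analysis.
log_event = '2016-05-12T14:30:05Z web-03 checkout ERROR status=503 latency_ms=2140 msg="upstream timeout"'

print(json.dumps(metric_sample, indent=2))
print(log_event)
```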
From this perspective, traditional performance monitoring/management solutions are essentially data collection and analytics systems, optimized to analyze and report on application execution in the context of the operational metrics supporting the application. Ideally capable of processing, in real time, the diverse data generated across the execution environment, these products are designed to “understand” the application ecosystem and, in doing so, to automate notifications and responses when problems occur.
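At its simplest, that monitoring pattern reduces to a loop: collect a metric, evaluate it against a baseline, and notify when a threshold is breached. The following minimal sketch illustrates the pattern only; the threshold, the metric source, and the notify function are all hypothetical stand-ins.

```python
import random
import time

CPU_ALERT_THRESHOLD = 90.0  # hypothetical static threshold, in percent

def collect_cpu_utilization() -> float:
    """Stand-in for an agent poll; real collectors read /proc, SNMP, APIs, etc."""
    return random.uniform(50.0, 100.0)

def notify(message: str) -> None:
    """Stand-in for a paging or ticketing integration."""
    print(f"ALERT: {message}")

# The classic collect -> evaluate -> notify monitoring loop.
for _ in range(5):
    value = collect_cpu_utilization()
    if value > CPU_ALERT_THRESHOLD:
        notify(f"CPU utilization {value:.1f}% exceeds {CPU_ALERT_THRESHOLD}%")
    time.sleep(0.1)  # real agents poll on a schedule, e.g., every 60 seconds
```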
As IT organizations begin to deploy a new generation of modernized applications, however, many find that incumbent performance management platforms cannot meet all of their application monitoring requirements. Applications are increasingly hosted on virtual rather than physical infrastructure, either on premises or in the public cloud. Recently, even virtual machines have begun to look like legacy technology: approximately 15% of companies are already using container-based technologies, such as Docker, to deliver production services.
As modern applications become increasingly componentized and more loosely coupled, three things start to happen. First, the volume of data they generate grows exponentially, primarily because each component emits its own telemetry. Second, the types of data these systems generate, consisting primarily of structured and semi-structured machine data, may not be recognized or correctly processed by incumbent performance management solutions. Third, because existing in-house toolsets were never designed to process massive amounts of data at scale, monitoring gaps emerge that make it difficult to cost-effectively support component-based, containerized, and API-connected applications.
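Back-of-the-envelope arithmetic shows why the volume grows so quickly once an application is decomposed. The figures below are illustrative assumptions, not measurements from any study:

```python
# Illustrative, assumed figures: a three-tier monolith vs. the same
# application decomposed into 120 containerized components.
def datapoints_per_minute(components: int, metrics_per_component: int,
                          samples_per_minute: int) -> int:
    return components * metrics_per_component * samples_per_minute

monolith = datapoints_per_minute(components=3, metrics_per_component=50,
                                 samples_per_minute=6)
microservices = datapoints_per_minute(components=120, metrics_per_component=50,
                                      samples_per_minute=6)

print(f"monolith:      {monolith:,} data points/min")       # 900
print(f"microservices: {microservices:,} data points/min")  # 36,000
print(f"growth factor: {microservices // monolith}x")       # 40x
```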
While some performance management vendors are adding basic log analysis to existing solutions, another approach is gaining momentum. Sumo Logic, a Software as a Service (SaaS) log management provider, has introduced a SaaS-based Unified Logs and Metrics (ULM) platform. The platform represents a new approach to Application Data Analytics: Sumo Logic’s creators have applied many of the principles behind the highly successful discipline of Business Intelligence (BI) to IT operational support. In addition, the platform supports analysis and reporting functions that can deliver significant value to a wide variety of both IT and Line of Business (LOB) applications. This Enterprise Management Associates (EMA) white paper profiles Sumo Logic’s ULM platform.
As companies begin to deploy application components over a wide range of diverse platforms and technologies, the resulting polyglot of metrics, logs, and messaging creates a torrent of data. IT professionals report that they are drowning in data but still lack the information they require to streamline the process of application support.
Analytics are the key to solving this challenge. As application ecosystems become more complex and dynamic, the value of solutions capable of analyzing a wide range of data types and enormous volumes of machine data in real time will only grow. Indeed, EMA research shows correlation/analytics tools topping the “wish list” of products for 2016, for exactly the reasons outlined in this paper.
Within today’s rapidly changing business and IT landscapes, Sumo Logic continues to enhance its Application Data Analytics platform. Native support for analyzing quantitative (metrics-based) and action/messaging (semi-structured and unstructured) data in context with one another positions Sumo Logic as distinctive in its class. By delivering the continuous, real-time execution insights necessary for building, running, and securing modern applications, these capabilities extend the value of existing APM installations while also standing on their own as a robust management analytics solution.
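The core idea behind unifying logs and metrics can be sketched as a time-based join: when a metric turns anomalous, the log events from the same component and time window are surfaced alongside it, so the “what” (the numbers) and the “why” (the messages) are read together. The sketch below illustrates that general concept only; the data structures, field names, and threshold are illustrative assumptions, not Sumo Logic’s internal design.

```python
from datetime import datetime, timedelta

# Illustrative in-memory samples; in practice both streams arrive continuously.
metrics = [
    {"ts": datetime(2016, 5, 12, 14, 30), "host": "web-03",
     "name": "error_rate_pct", "value": 12.5},
]
logs = [
    {"ts": datetime(2016, 5, 12, 14, 29, 40), "host": "web-03",
     "line": 'ERROR status=503 msg="upstream timeout"'},
    {"ts": datetime(2016, 5, 12, 14, 31, 2), "host": "web-03",
     "line": "WARN retry queue depth=900"},
    {"ts": datetime(2016, 5, 12, 14, 45, 0), "host": "web-07",
     "line": "INFO deploy complete"},
]

WINDOW = timedelta(minutes=2)  # assumed correlation window

# Time-based join: for each anomalous metric, surface log events from the
# same host within +/- WINDOW, putting numbers and messages in one view.
for m in (m for m in metrics if m["value"] > 5.0):
    related = [
        entry for entry in logs
        if entry["host"] == m["host"] and abs(entry["ts"] - m["ts"]) <= WINDOW
    ]
    print(f"{m['ts']} {m['host']} {m['name']}={m['value']}")
    for entry in related:
        print(f"  {entry['ts']} {entry['line']}")
```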