
March 9, 2020  Bruno Kurtic

In A Fast Changing World, Peer Benchmarks Are A GPS

As businesses transform their traditional business models into digital ones and aggressively compete for turf in the digital economy, their constant pursuit of a competitive edge drives technology, process, and architectural innovation. As a result, roughly every 18 months a technology paradigm shift comes about that enables better agility, lower cost, improved quality of service, better intelligence, and more.

Over the last 20 years, we have seen many of these technology shifts arise. During the early 2000s, agile development came into the mainstream, along with the adoption of SaaS and the emergence of widely adopted open source technologies. Toward the end of that same decade, big data stacks and cloud services (IaaS and PaaS) both became mainstream. CI/CD and microservices architectures entered the scene during the early 2010s, followed soon after by the proliferation of containers and data center operating systems. In the last few years, Kubernetes has become a standard, and serverless technologies, along with general-purpose adoption of ML, are gaining ground.

Changes and choices show no sign of slowing down. The open source movement seems to sprout promising new technologies on a regular basis, and cloud providers deliver new services even more frequently. Amazon Web Services alone offers more than 150 enterprise-class services, and GCP and Azure each offer a comparable number.

If you accept that these technologies are emerging in response to the need of digital businesses to be more competitive, then it follows that businesses that don’t effectively adopt them are at a competitive disadvantage. Not a huge leap of logic. We do know, from a survey we ran, that two-thirds of enterprises today believe that lack of skills is the top barrier to successful adoption of new technologies.

It makes sense that this would be the case: humans don’t acquire complex skills quickly, and much of technology skill comes from practice, trial and error so to speak. As such, the solution to the skills gap is neither simple nor fast.

Benchmarks are everywhere in our lives. Most of us forget that the values for normal heart rate or body temperature are actually benchmarks. We know which cars are fuel efficient by comparing them to other cars’ fuel efficiency and to governmental fuel efficiency standards. We know what’s fashionable by observing what others are wearing. So much of our learning comes from observing what others do and how they do it.

Can benchmarks like this be translated to technology usage? Plenty of such benchmarks exist, but it is not easy to discern which can be relied upon, because it is often unclear how universal they are, how they were derived, and whether they apply to our specific scenarios. Some benchmarks are well understood and validated, such as the rule of thumb that a web transaction should take less than 100ms to ensure a good user experience. Others, such as expected web traffic error rates, are harder to apply because they depend on specific context and scenarios and are more difficult to validate.

It would, however, be very useful to short-circuit the trial-and-error stage when adopting new technologies, such as a new type of database or a new cloud service. It would be helpful to know how the database should be deployed, how much of the available resources it should consume during typical operation, and how long insert or read operations should take, particularly if the benchmarks are derived in real time from actual databases in production. When adopting cloud services such as AWS, it would be helpful to know the average rate of specific types of security threats observed across other users, so we can tell whether our own usage is “normal” or requires tighter security measures.
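As a minimal sketch of what such a comparison might look like, the snippet below derives a baseline range from hypothetical peer read latencies and flags our own measurement when it falls outside that range. The sample data, percentile bounds, and function names are illustrative assumptions, not Sumo Logic’s actual methodology.

```python
# A minimal sketch (not Sumo Logic's implementation) of the idea behind
# peer benchmarks: derive a baseline from metrics observed across many
# deployments, then flag our own measurement when it falls outside it.
# The latencies and percentile bounds below are purely illustrative.

from statistics import quantiles


def peer_baseline(peer_values, lower=5, upper=95):
    """Return the range covering the bulk of peer observations."""
    pct = quantiles(peer_values, n=100)  # 99 cut points: 1st..99th percentile
    return pct[lower - 1], pct[upper - 1]


def compare_to_peers(own_value, peer_values):
    """Classify our own measurement against the peer-derived range."""
    low, high = peer_baseline(peer_values)
    if own_value > high:
        return f"{own_value} ms is above the peer range ({low:.0f}-{high:.0f} ms): investigate"
    if own_value < low:
        return f"{own_value} ms is below the peer range ({low:.0f}-{high:.0f} ms): unusually fast"
    return f"{own_value} ms is within the peer range ({low:.0f}-{high:.0f} ms)"


# Hypothetical read latencies (ms) gathered from comparable databases.
peer_read_latencies_ms = [4, 5, 5, 6, 6, 7, 7, 8, 9, 10, 11, 12, 14, 18, 25]
print(compare_to_peers(31, peer_read_latencies_ms))
```

In practice the peer distribution would come from a continuously updated service rather than a hard-coded list, but the comparison itself stays this simple: a measurement only means something when you know where the crowd sits.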

To put this in context, per Sumo Logic’s Continuous Intelligence Report, the median Sumo Logic customer uses 15 AWS services. Each of these services has between 50 and 100 configuration settings, making it difficult to get them right the first time and putting the operational and security posture of new digital services at risk. Often, such configuration errors remain undiscovered well into production use, causing availability issues, performance degradation, cost overruns, or security exposure. Benchmarks, in our view, can help DevSecOps users design better applications by leveraging the wisdom of the crowd consuming the services they rely on, as sketched below. Likewise, on-call staff can use benchmarks to pinpoint (or eliminate) root causes based on the experience of others and accelerate recovery.
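For illustration only, here is one way crowd-derived configuration benchmarks could surface risky settings: compare our own configuration against the values most peers use and flag strong disagreements. The service, setting names, and peer values are hypothetical and are not drawn from actual AWS defaults or Sumo Logic data.

```python
# Hypothetical sketch: flag configuration settings that deviate from a
# strong peer consensus. All settings and values below are made up.

from collections import Counter


def crowd_consensus(peer_configs):
    """For each setting, find the value most peers use and how common it is."""
    # Assumes all peer configs share the same set of settings.
    consensus = {}
    for setting in peer_configs[0]:
        values = Counter(cfg[setting] for cfg in peer_configs)
        value, count = values.most_common(1)[0]
        consensus[setting] = (value, count / len(peer_configs))
    return consensus


def flag_deviations(own_config, peer_configs, min_agreement=0.8):
    """List settings where we differ from a consensus held by most peers."""
    flags = []
    for setting, (value, share) in crowd_consensus(peer_configs).items():
        if share >= min_agreement and own_config.get(setting) != value:
            flags.append(f"{setting}: ours={own_config.get(setting)!r}, "
                         f"{share:.0%} of peers use {value!r}")
    return flags


# Illustrative S3-style bucket settings from a handful of peer deployments.
peers = [
    {"block_public_access": True, "versioning": True, "encryption": "aws:kms"},
    {"block_public_access": True, "versioning": True, "encryption": "AES256"},
    {"block_public_access": True, "versioning": False, "encryption": "AES256"},
    {"block_public_access": True, "versioning": True, "encryption": "AES256"},
    {"block_public_access": True, "versioning": True, "encryption": "AES256"},
]
ours = {"block_public_access": False, "versioning": True, "encryption": "AES256"}
print(flag_deviations(ours, peers))
```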

Sumo Logic has been investing in benchmarking technologies through our Global Intelligence Service (GIS). We recently released a powerful new GIS for AWS CloudTrail and have enhanced our GIS for AWS GuardDuty. We invite you to try it and let us know what other benchmarks could help you accelerate adoption of new technologies, whether from a selection, deployment, operations, or security perspective.

Bruno Kurtic

Founding Chief Strategy Officer

Bruno leads strategy and solutions for Sumo Logic, pioneering machine-learning technology to address growing volumes of machine data across enterprise networks. Before Sumo Logic, he served as Vice President of Product Management for SIEM and log management products at SenSage. Before joining SenSage, Bruno developed and implemented growth strategies for large high-tech clients at the Boston Consulting Group (BCG). He spent six years at webMethods, where he was a Product Group Director for two product lines, started the West Coast engineering team, and played a key role in the acquisition of Active Software Inc. Bruno also served at Andersen Consulting’s Center for Strategic Technology in Palo Alto and founded a software company that developed handwriting and voice recognition software. Bruno holds an MBA from the Massachusetts Institute of Technology (MIT) and a B.A. in Quantitative Methods and Computer Science from the University of St. Thomas in St. Paul, MN.

