Almost every day we hear of another data breach, and there has been no shortage of headline-hitting breaches here in Australia, such as Medibank and Optus. The old-school method of ransomware, encrypting whatever data it can reach and demanding a ransom for the decryption key, has evolved. It is now a more insidious and far-reaching problem: bad actors exfiltrate data and demand a ransom to keep from exposing that data publicly.
From an organisation's perspective, this 'new' method is an even bigger nightmare than before. Instead of facing a service outage caused by encrypted data, organisations now face the exposure of sensitive customer data, falling foul of regulations and incurring the wrath of customers.
When the data of almost half of Australia's population of 25.89 million is exposed across just two breaches (9.7 million Medibank customers and 2.1 million Optus customers), the government is suddenly going to sit up and take notice.
This is the case in Australia, where legislation was introduced providing the following:
The maximum penalty for serious or repeated breaches of privacy laws will rise from AUD$2.2 million to the greater of AUD$50 million; three times any benefit obtained from the misuse of data; or 30 percent of adjusted revenue in the relevant period.
Just let that sink in. A company with revenue of AUD$1 billion in the 12 months before a breach could face a fine of up to AUD$300 million.
Why companies need to prioritise detection
To reduce the risk of being fined following a breach (not to mention the reputational impact), organisations must take immediate steps to improve their cyber security capabilities. A fresh risk assessment, followed by improving or implementing the necessary people, processes and technologies, is a good place to start: everything from preventative controls (whether physical controls or user education) to detection capabilities (technology and threat hunting) and, of course, remediation and reporting (automation and processes).
As a security practitioner of more than 20 years, I am a firm believer in the phrase: "There are two types of organisations: those that know they have been breached and those that don't know it yet." There is no silver bullet. I believe it is impossible to stay one step ahead of an attacker; the game changes too frequently. The very best you can hope for is to stay in that attacker's shadow.
Detection
That brings me to a technology I'm familiar with: detection. Let's assume that you already know email is one of the biggest threats to any organisation (it features frequently in Verizon's annual Data Breach Investigations Report, the DBIR) and that you have good, but not perfect, email threat detection. You also have 2FA/MFA (two-factor/multi-factor authentication), because you don't want a user with privileged access and no multi-factor authentication compromised by a phishing email, do you? But 2FA/MFA can easily be bypassed (have a read up on Evilginx, a man-in-the-middle phishing framework that captures session cookies). If attackers do get in, the best we can hope for is to detect them as quickly as possible!
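To make that concrete, here is a minimal sketch (not a production detection, and not any vendor's rule) of one heuristic that can help catch an Evilginx-style session hijack: alert when a session token is suddenly used from a different ASN than the one it was issued to, shortly after issue. The `ASN_DB` lookup table and the event shape are assumptions for illustration; a real deployment would resolve ASNs from a GeoIP/ASN database.

```python
from datetime import datetime, timedelta

# Toy ASN lookup table -- in practice you would resolve ASNs
# with a GeoIP/ASN database or an enrichment service.
ASN_DB = {
    "203.0.113.10": "AS-HOME-ISP",
    "198.51.100.7": "AS-CLOUD-HOSTING",  # suspicious origin
}

def detect_session_hijack(events, window=timedelta(minutes=30)):
    """Flag a session token reused from a different ASN soon after issue.

    `events` is an iterable of dicts with keys:
    session_id, src_ip, timestamp (datetime).
    """
    first_seen = {}  # session_id -> (asn, timestamp at issue)
    alerts = []
    for ev in sorted(events, key=lambda e: e["timestamp"]):
        asn = ASN_DB.get(ev["src_ip"], "unknown")
        sid = ev["session_id"]
        if sid not in first_seen:
            first_seen[sid] = (asn, ev["timestamp"])
            continue
        issued_asn, issued_at = first_seen[sid]
        if asn != issued_asn and ev["timestamp"] - issued_at <= window:
            alerts.append(
                f"session {sid}: issued via {issued_asn}, "
                f"reused from {asn} within {window}"
            )
    return alerts

events = [
    {"session_id": "abc123", "src_ip": "203.0.113.10",
     "timestamp": datetime(2023, 3, 1, 9, 0)},
    {"session_id": "abc123", "src_ip": "198.51.100.7",
     "timestamp": datetime(2023, 3, 1, 9, 5)},  # token replayed elsewhere
]
print(detect_session_hijack(events))
```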
SIEM (Security Information and Event Management) has been around longer than I have been in the game, but it's not a magic bullet. I spent many years as an analyst with no centralised logging or correlation; I was a Human SIEM (H-SIEM), pivoting from one security tool to another: maybe starting with a malware alert, then jumping to an email or web gateway console, then remotely connecting to a workstation or server to inspect security logs. It was cumbersome and time-consuming, but I generally managed to build my timeline of activity and then declare an incident.
Then I worked in places with a SIEM. These legacy behemoths ingested all sorts of logs and spat out huge numbers of alerts, sometimes millions per day. They did some kind of meaningless correlation, but inevitably I became the H-SIEM again: drilling down, pivoting, searching and cursing because the logs I needed weren't there. They were too expensive to ingest (due to volume), or the SIEM had hit the limit of what it could handle. I wish I had had the benefit of modern security analytics back then. This brings me to two critical points: the quality of log sources and how quickly I can get a meaningful detection.
Log sources
Historically, SIEM was used to meet compliance requirements and to centralise and store logs in one place. However, it always had challenges around cost, so not all logs were ingested. Then cloud came along, and with it new log sources with unfamiliar connections and collection methods that weren't syslog! The rapid adoption of cloud technologies means more logs, and more logs mean more noise. However, not all logs are created equal: there are those I want to correlate across, and those I only need to ingest for compliance and, possibly, to search forensically in case of an incident.
Tip number one: make sure your SIEM solution allows you to ingest everything you need but does not penalise you by making you pay top dollar for logs you may only look at occasionally. Sumo Logic allows you to ingest at different tiers: some data can be correlated and visualised in real time, while other data can be searched infrequently (for forensics) at a fraction of the ingest cost.
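As a rough illustration (a generic sketch, not Sumo Logic's actual configuration syntax), tier routing amounts to classifying each log source before ingestion. The categories and tier names below are assumptions chosen for the example.

```python
# Toy tier router: decide, per log source category, whether data
# should be ingested for real-time correlation or parked in a
# cheaper, search-on-demand tier. Tier names are illustrative only.
TIER_RULES = {
    "auth": "realtime",       # identity logs drive detections
    "edr": "realtime",        # endpoint telemetry drives detections
    "netflow": "infrequent",  # high volume, mostly forensic value
    "debug": "infrequent",    # compliance/troubleshooting only
}

def route(source_category: str) -> str:
    """Return the ingest tier for a log source category."""
    return TIER_RULES.get(source_category, "infrequent")

for category in ("auth", "netflow", "saas_audit"):
    print(f"{category:10s} -> {route(category)}")
```

The design point is simply that the routing decision happens up front, so the logs you rarely read never incur real-time analytics pricing.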
Tip number two: log data is meaningless without context. There is a huge benefit to a security analyst in having real-time context when looking at an alert and the underlying logs. Give me an IP address's ASN info in the event so I don't have to pivot to a new browser tab and do an IP lookup. Give me user context so I don't have to open Active Directory or Okta to see who the user is and what their potential risk is. All of this takes time. The longer I take to investigate, the further I fall from the attacker's shadow.
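Here is a minimal sketch of what that enrichment might look like at ingest time. The lookup tables stand in for a real ASN database and a directory service such as Active Directory or Okta; all names and field shapes are hypothetical.

```python
# Hypothetical enrichment step: decorate a raw event with ASN and
# user context before it ever reaches an analyst's queue.
ASN_INFO = {"198.51.100.7": {"asn": "AS-CLOUD-HOSTING", "org": "cloud hosting"}}
DIRECTORY = {"jsmith": {"title": "DBA", "privileged": True, "risk": "high"}}

def enrich(event: dict) -> dict:
    """Return a copy of the event with ASN and user context attached."""
    enriched = dict(event)
    enriched["asn"] = ASN_INFO.get(event["src_ip"], {"asn": "unknown"})
    enriched["user_context"] = DIRECTORY.get(event["user"], {})
    return enriched

raw = {"user": "jsmith", "src_ip": "198.51.100.7", "action": "login"}
print(enrich(raw))
# The analyst sees ASN and user risk inline -- no pivoting required.
```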
That brings me to critical point number two.
Detection time
As I mentioned above, having a SIEM that generates potentially thousands (if not millions) of alerts a day is of no value to your organisation whatsoever. As a SOCologist (there’s a new word for you), I am a firm believer in the following:
No more than ten alerts per analyst per day/per shift
Ten alerts per analyst across an eight-hour shift allows just 48 minutes per alert to validate, investigate and triage. That is incredibly tight. I once had an incident that kept me at my desk for 36 hours straight, but I digress. Unless you are running a SOC with thousands of analysts per shift, the noisy SIEMs I mentioned above are of zero value.
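The arithmetic is worth spelling out. Here is a quick back-of-the-envelope calculation showing why alert volume dwarfs analyst capacity; the figures are illustrative.

```python
# Back-of-the-envelope: minutes of analyst attention per alert.
SHIFT_MINUTES = 8 * 60  # one eight-hour shift

def minutes_per_alert(alerts_per_shift: int, analysts: int = 1) -> float:
    """Analyst-minutes available per alert, spread evenly."""
    return (SHIFT_MINUTES * analysts) / alerts_per_shift

print(minutes_per_alert(10))            # 48.0   -- tight but workable
print(minutes_per_alert(1_000, 5))      # 2.4    -- triage theatre
print(minutes_per_alert(1_000_000, 5))  # 0.0024 -- physically impossible
```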
Your SIEM solution mustn't give your security team alert fatigue and burnout; these are real issues. A good modern SIEM, such as Sumo Logic's Cloud SIEM Enterprise, produces high-fidelity, actionable alerts, using smart technologies to auto-correlate, prioritise and enrich, and makes investigation and remediation quick by presenting an alert with a timeline of activity that can be validated in minutes rather than hours. The closer you are to the attacker's shadow, the easier it is to kick them out the door!
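To illustrate the idea of auto-correlation in the abstract (a toy sketch of the general technique, not how Cloud SIEM Enterprise is actually implemented), related low-level signals can be grouped by entity and promoted to a single timeline-ordered alert once their combined severity crosses a threshold. All signal names and scores below are invented for the example.

```python
from collections import defaultdict

# Toy correlation: group low-level signals by entity and promote
# the group to one actionable alert when severity accumulates.
PROMOTE_THRESHOLD = 10

signals = [
    {"entity": "jsmith", "name": "phishing link clicked", "severity": 3, "t": 1},
    {"entity": "jsmith", "name": "login from new ASN", "severity": 4, "t": 2},
    {"entity": "jsmith", "name": "mass file download", "severity": 5, "t": 3},
    {"entity": "printer01", "name": "port scan", "severity": 2, "t": 4},
]

by_entity = defaultdict(list)
for s in signals:
    by_entity[s["entity"]].append(s)

for entity, group in by_entity.items():
    if sum(s["severity"] for s in group) >= PROMOTE_THRESHOLD:
        # One alert, with a ready-made timeline the analyst can validate.
        print(f"ALERT for {entity}:")
        for s in sorted(group, key=lambda s: s["t"]):
            print(f"  t={s['t']}: {s['name']} (severity {s['severity']})")
```

Only `jsmith` crosses the threshold, so the analyst gets one alert with the full story attached rather than four disconnected events.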
Now is the time to make sure your cyber security posture is fit for purpose. An investment of time and funding in an up-to-date risk assessment, validation of the findings and improvements where necessary could be the difference between modest preventative spending and a potentially crippling multi-million-dollar regulatory fine.
The phrase ‘Any publicity is good publicity’ isn’t always true in the world of data protection and cyber security.
For further information, read our Paccar case study to learn how our customers use Cloud SIEM Enterprise to improve their security monitoring and overall security posture.