5 Considerations For Post-Breach Security Analytics
Some of the most important security analytics tasks that organizations perform must be done with the pressure of a running clock and exacting standards for how data is preserved and manipulated. Unlike day-to-day log analysis, post-breach inspection of security data requires special considerations in the collection and handling of information following a compromise.
1. Collecting Relevant Data
The ticking clock is one of the most crucial factors to remember when conducting analytics on forensic data. First, investigators need to figure out what went wrong in order to stop active compromises and prevent further damage. Second, minimizing the breach notification window with ample public information is crucial from a regulatory, legal and PR perspective.
“When a breach has been detected, it’s really important to have instant visibility from multiple viewpoints because you need to actually understand the breach, scope out the damage and remediate,” says Lucas Zaichkowsky, enterprise defense architect for AccessData.
Some of the types of data that can come into play within a forensic analysis include log files from multiple sources, information on affected endpoints such as structured file data or data in memory, as well as volatile data such as open network connections or running processes on systems, says J.J. Thompson, managing partner at Rook Consulting.
“You’re going to want to collect anything that is in scope for the incident, so you’re going to want to make sure you collect all of the system logs, database logs and network logs that you can possibly get your hands on,” he says, “and make sure that those are accessible and available for future analytics. That’s step one.”
Depending on where initial log review leads the incident response team, that’s where deeper collection of data within host logs will occur. This stands in contrast with standard security operations analytics, where collection of host data happens “significantly less frequently,” Thompson says.
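That first collection pass can be sketched as a small script that copies each in-scope log into an evidence directory and records where and when each item was taken. This is a minimal illustration of the idea, not a forensic tool; the `collect_logs` function name and manifest fields are hypothetical.

```python
import shutil
from datetime import datetime, timezone
from pathlib import Path

def collect_logs(sources, evidence_dir):
    """Copy in-scope log files into an evidence directory and
    return a manifest noting where and when each was collected."""
    evidence_dir = Path(evidence_dir)
    evidence_dir.mkdir(parents=True, exist_ok=True)
    manifest = []
    for src in map(Path, sources):
        dest = evidence_dir / src.name
        shutil.copy2(src, dest)  # copy2 preserves file timestamps
        manifest.append({
            "source": str(src),
            "stored_as": str(dest),
            "size_bytes": dest.stat().st_size,
            "collected_at": datetime.now(timezone.utc).isoformat(),
        })
    return manifest
```

In practice the source list would span system, database and network logs from every host in scope, and the evidence directory would live on storage that stays accessible for later analytics.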
[How do you know if you’ve been breached? See Top 15 Indicators of Compromise.]
2. Make Data Collection Possible
Unfortunately, many organizations struggle to gain timely visibility into security data because they failed to prepare data collection mechanisms in advance of an incident that would offer an immediate lens into what happened within the breached infrastructure.
“A lot of times people will find out what they need to collect once they see the indicators of compromise and realize that collecting that information from then on is kind of a moot point,” says Chris Novak, global managing principal of investigative response for Verizon Enterprise Solutions, who recommends that organizations test themselves with mock incidents and walk through a collection scenario before their hair is on fire. “A mock incident is a way to really have those teachable moments as to what exactly it is that you need to be prepared for.”
In addition to shortfalls in data collection mechanisms, the mock incident may uncover a frequently lacking piece of foundational information: namely, an up-to-date network diagram. Novak says he’s frequently surprised at how many organizations might have a fully detailed rendering of the physical building a data center is hosted in while lacking a network map counterpart.
3. Preserve Data For Longer Than You Think You’ll Need It
As organizations think about what types of data to routinely collect, they should also be mindful of retaining it long enough, as a precaution, to allow a backward look at the data deep enough to pinpoint the initial compromise. According to Zaichkowsky, the longest gap he’s witnessed between initial discovery of a compromise and the forensic trail back to the initial infiltration of ‘victim zero’ was 456 days.
“That’s a long attack lifecycle that they need to be able to reconstruct what happened,” he says.
As a rule of thumb, he recommends organizations retain at least a year’s worth of relevant log data, with three months’ worth of it online and ready to search at a moment’s notice.
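That rule of thumb can be expressed as a simple tiering check that classifies a log record by its age. The function name and thresholds below are illustrative assumptions drawn from the one-year / three-months-online guidance, not a product feature:

```python
def retention_tier(age_days, online_days=90, retain_days=365):
    """Classify a log record by age under a keep-one-year,
    three-months-online rule of thumb (thresholds are assumptions)."""
    if age_days <= online_days:
        return "online"   # searchable at a moment's notice
    if age_days <= retain_days:
        return "archive"  # retained, but restored on demand
    return "expired"      # eligible for deletion, absent a legal hold
```

A pipeline using a check like this could route fresh logs to a searchable index and older ones to cheaper cold storage, while still meeting the year-long lookback window.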
In addition to this precautionary groundwork, once a breach has been discovered, the retention windows on the in-scope data should lengthen considerably. After an investigation is complete, organizations should secure and archive the collected data in case it is needed for a rainy day. That could mean legal purposes, but also the chance that the compromise went deeper than initially thought.
“A lot of times companies will go through the process, remediate and then when they find three months later that the attack has resumed, they realize the attacker is still in the system but all of the relevant data was deleted after the investigation,” he says.
4. Establish A Chain Of Custody
As Zaichkowsky mentioned, analytics of forensic data leads to inspection of data that’s rarely looked at on a day-to-day basis. As an investigation team digs into collecting volatile and legally sensitive data, it must think not only about preserving data in a way that leads to swift mitigation of risk, but also about preserving evidence in a legally admissible way.
“Things typically start with the preservation of the evidence: not powering off systems so we can collect volatile data and maintaining a proper chain of custody,” he says.
Establishing chain of custody is imperative for cases where litigation or legal proceedings could occur. The key is being able to document how the data was obtained, by whom and when, and maintaining the integrity of the data to prove it was never tampered with during the investigation process, says Thompson.
“It’s really about making sure that you can show counsel that this evidence was obtained using forensically sound mechanisms, it was not altered and you have that evidence available for opposing counsel, advisors, consultants and experts to analyze it themselves and see if they come to the same conclusions,” he says.
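The who, when and how documentation Thompson describes can be sketched as a minimal custody record: one immutable entry per collected item, carrying a digest that later proves the item was never altered. The `CustodyRecord` structure and its field names are hypothetical illustrations of the idea, not a legal standard:

```python
import hashlib
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)  # frozen: entries cannot be edited after creation
class CustodyRecord:
    """One entry in a chain-of-custody log: what was collected,
    by whom, how, and a digest proving it hasn't changed since."""
    item: str
    collected_by: str
    method: str
    sha256: str
    collected_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

def record_custody(path, collected_by, method):
    """Hash a collected file and create its custody record."""
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    return CustodyRecord(item=str(path), collected_by=collected_by,
                         method=method, sha256=digest)
```

A real custody log would also capture each subsequent transfer of the evidence, with every handler re-verifying the digest on receipt.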
Typically, the best practice is to pull the entire binary or data set in full, duplicate it and keep a hashed copy prior to running analytics on the working copy, in order to show the data hasn’t been altered in any way, Zaichkowsky says.
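That duplicate-and-hash practice might look like the following sketch, which hashes the preserved original, makes the working copy, and confirms the two are bit-for-bit identical before any analysis begins; `make_working_copy` is a hypothetical name:

```python
import hashlib
import shutil

def make_working_copy(original, working):
    """Duplicate an evidence file and confirm, by hash comparison,
    that the working copy matches the original bit for bit."""
    def sha256(path):
        h = hashlib.sha256()
        with open(path, "rb") as f:
            # Read in 1 MB chunks so large disk images don't exhaust memory.
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    reference = sha256(original)        # digest of the preserved original
    shutil.copy2(original, working)     # analysts work only on this copy
    if sha256(working) != reference:
        raise RuntimeError("working copy does not match original")
    return reference
```

The returned digest would be stored alongside the custody documentation, so any party can later re-hash the original and demonstrate it was never touched during the investigation.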
5. Go Down The Rabbit Hole Without Getting Lost
With evidence bagged and tagged and data ready for analysis, the hard work still lies ahead for investigators, who must roll up their sleeves and inspect the data. While the mantra for forensic data collection is to gather as much as could tie to the incident, that scope needs to be tightened once it is time to run analysis.
“Usually what happens is you have massive scope creep and an overconsumption of that forensics data—you collect so much you feel like you have to analyze the same amount,” says Novak, who instead recommends customers use an ‘evidence-based’ approach to the investigation. “How did you recognize the problem? Start there and only expand as much as you need.”
Thompson agrees, stating that organizations should let the indicators of compromise lead the investigation into the paths of analysis. One way he gets his analysts to tighten focus is an exercise in which they literally draw a box on a piece of paper and hand-write the components that led them to believe there was a compromise. The idea is to draw lines and brainstorm within that box, much the way a detective would work through evidence in a physical crime case. With that picture in front of them, it is easier to list the investigative techniques to start with, so the analyst can jump down potential rabbit holes without ever getting lost.
“That really helps them keep on track so that they don’t end up veering off course,” he says.