When You Know Too Much: Protecting Security Data from Security People
Modern security tools are growing increasingly capable, scanning millions of devices and gathering intelligence on billions of events each day. While the goal is to piece together more information for threat intelligence, it also raises the question of how all that data is secured.
“There’s so much more data today, more than there has ever been,” says Rebecca Herold, founder and CEO of The Privacy Professor consultancy. “And organizations never delete it, so they’re always adding more, with more devices and more applications.”
Further, she adds, information is now collected, stored, and accessed in many more locations. Many companies lack control over the employee-owned devices used to reach key data.
Malicious insiders are a real and growing threat to companies, especially those holding vast amounts of sensitive data. Twitter and Trend Micro are two examples on a long – and growing – list of organizations whose insiders have abused legitimate access to enterprise systems and information.
With sensitive data streaming in, it is imperative that security companies reconsider how they store it and who can reach it.
For many businesses, this demands a closer look at the IT department, which Herold says is often given too much access to data, even in the largest firms. IT pros who develop and test new applications are often given full access to production data for testing.
“This is a huge risk in a couple of big ways,” she notes. Giving developers and coders access to production data exposes sensitive information and carries it into potentially risky environments. “Oftentimes, what is being done with those applications could leak the data, depending on what the system or app they’re building does,” Herold says.
Inappropriately sharing data with unauthorized entities creates a vulnerability, but that isn’t the only consequence. It also violates a growing number of data protection laws and regulations that say companies can only use personal data for the purposes for which it’s collected. Using data to test new applications and updates generally isn’t one of these purposes, she adds.
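The risk Herold describes is often addressed by masking production data before it ever reaches a test environment. Below is a minimal sketch of that idea in Python; the record layout and field names are entirely hypothetical, and real masking pipelines use vetted tooling rather than hand-rolled code:

```python
import hashlib
import random

# Hypothetical production record; field names are illustrative only.
PROD_RECORD = {
    "user_id": 48213,
    "email": "jane.doe@example.com",
    "ssn": "123-45-6789",
    "purchase_total": 149.99,
}

def pseudonymize(value: str, salt: str = "test-env-salt") -> str:
    """Replace a sensitive string with a stable, irreversible token."""
    return hashlib.sha256((salt + value).encode()).hexdigest()[:12]

def mask_for_testing(record: dict) -> dict:
    """Produce a test-safe copy: identifiers are tokenized, the SSN is
    dropped outright, and numeric fields are lightly perturbed so
    aggregate behavior stays realistic for testing."""
    masked = dict(record)
    masked["email"] = pseudonymize(record["email"]) + "@test.invalid"
    masked.pop("ssn", None)  # never needed in a test environment
    masked["purchase_total"] = round(
        record["purchase_total"] * random.uniform(0.9, 1.1), 2
    )
    return masked

test_record = mask_for_testing(PROD_RECORD)
```

The point is that developers still get realistically shaped data to build and test against, while the regulated personal details never leave production.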
Herold also points out how it’s “still a pretty common practice,” especially among IT and development teams, to share a single user ID and password for each system. Anyone with those credentials can log in, make changes, tweak data, or remove it. The problem: if something happens to the data, there is no way to know who was responsible.
“When you have multiple people using the same user ID, you completely remove the accountability for those using that ID,” she explains. Without a clear tie between a person and specific user ID, it’s hard to ascertain whether someone used that ID to steal key information. Failing to implement controls could make it easier for an insider to get away with data theft.
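The accountability Herold is describing comes down to audit records that name an individual. A toy sketch of such a record (the field names and the `jsmith` account are illustrative assumptions, not any particular product’s log format):

```python
import json
import datetime

def audit_event(user_id: str, action: str, target: str) -> str:
    """Emit one structured audit line tied to an individual credential.
    With a shared team login, `user_id` would be identical for everyone,
    and this record would say nothing about who actually acted."""
    event = {
        "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "user": user_id,   # individual credential, not a shared account
        "action": action,
        "target": target,
    }
    return json.dumps(event)

line = audit_event("jsmith", "DELETE", "customer_records")
```

Because every line carries a distinct user ID, investigators can tie a destructive or suspicious action back to one person instead of an entire team.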
Those who can access sensitive data should have their access monitored, says Herold, and individual IDs make it possible to track which employees obtain certain types of data or share it outside the organization. Data backups are one area insiders can take advantage of, yet one organizations often overlook when deciding which data to protect.
“I’ve seen so many organizations who have strong controls on their data that they use for production, for their daily work activities, but then their backups are pretty much left wide open,” Herold says. Access to backup data often isn’t strictly limited, leaving many employees in a position to obtain corporate secrets or personal information.
Kelly Sheridan is the Staff Editor at Dark Reading, where she focuses on cybersecurity news and analysis. She is a business technology journalist who previously reported for InformationWeek, where she covered Microsoft, and Insurance Technology, where she covered financial …