
Hackers Leverage GDPR to Target Airbnb Customers

Fraudsters are taking advantage of new EU privacy laws to demand personal information from Airbnb users.

A new phishing scam capitalizes on the upcoming General Data Protection Regulation (GDPR) to trick Airbnb customers into sharing personal and financial data, Redscan reports. The scale of the campaign is unknown, though it likely targets email addresses taken from the open Web.

Targets receive an email designed to appear as though it’s from Airbnb, addressing them as a host of the service. The message says hosts can’t accept new bookings or contact potential guests until they accept a new privacy policy. If they click the malicious link, targets are prompted to enter personal information, including payment card details and account data. Everything they enter goes to the attackers.

It’s worth noting Airbnb is sending emails to alert users about GDPR-related changes; however, the legitimate notifications include more detail and none ask for users’ credentials.

The European Union’s new privacy law, designed to grant people more control over the information they share online, goes into effect on May 25. In the weeks leading up to its rollout, companies are sending messages to gain users’ consent to stay on their mailing lists — and hackers are taking advantage of the trend. Read more details about this campaign here.

Dark Reading’s Quick Hits delivers a brief synopsis and summary of the significance of breaking news events. For more information from the original source of the news item, please follow the link provided in this article.

Article source: https://www.darkreading.com/threat-intelligence/hackers-leverage-gdpr-to-target-airbnb-customers/d/d-id/1331715?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Twitter Alerts Users to Change Passwords Due to Flaw that Stored Them Unprotected

Social media giant discovered bug in an internal system that inadvertently stored passwords in plain text.

Happy World Password Day: Twitter today urged its 330 million users to change their passwords after it discovered a flaw that caused one of its internal logs to store passwords “unmasked.”

Twitter has since fixed the bug and said it has no knowledge of a breach or abuse of the information, but it is asking users to create new passwords just in case. The company protects passwords via the bcrypt hashing function, which replaces the actual password with a random-looking string of characters (a hash) that is then stored on Twitter’s servers.
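For readers curious what that looks like in practice, here is a minimal sketch of bcrypt hashing using the open-source Python bcrypt library; it is illustrative only, as Twitter has not published its implementation details.

```python
import bcrypt

password = b"correct horse battery staple"  # example credential; never log this value

# Hash the password with a per-user random salt; only this hash should be stored.
hashed = bcrypt.hashpw(password, bcrypt.gensalt())
print(hashed)  # e.g. b'$2b$12$...' -- reveals nothing useful about the password

# At login time, verify a submitted password against the stored hash.
print(bcrypt.checkpw(password, hashed))  # True

# Twitter's bug was the opposite of this flow: the plaintext was written to an
# internal log *before* hashpw() ran, so the log held raw passwords.
```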

“Due to a bug, passwords were written to an internal log before completing the hashing process. We found this error ourselves, removed the passwords, and are implementing plans to prevent this bug from happening again,” Twitter said in a blog post about the exposed passwords. 

“Out of an abundance of caution, we ask that you consider changing your password on all services where you’ve used this password,” the company said.

Read more here on what happened and how to change your Twitter password.

 

Dark Reading’s Quick Hits delivers a brief synopsis and summary of the significance of breaking news events. For more information from the original source of the news item, please follow the link provided in this article.

Article source: https://www.darkreading.com/vulnerabilities---threats/twitter-alerts-users-to-change-passwords-due-to-flaw-that-stored-them-unprotected/d/d-id/1331716?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

RSA CTO: ‘Modernization Can Breed Malice’

In his InteropITX 2018 keynote, Zulfikar Ramzan predicted the future of cybersecurity, outlined the drivers shaping it, and explained how enterprise IT should react.

InteropITX 2018 — Las Vegas — In a room packed with business technology executives, RSA CTO Zulfikar Ramzan discussed the reality of today’s cybersecurity landscape and the threats IT organizations should keep top of mind as they adopt new technologies like machine learning.

“No organization exists as an isolated entity,” said Ramzan in his keynote presentation. “The ripples of chaos can spread farther and faster now, as technology connects us in remarkably astonishing ways … in cybersecurity, they’re quite prevalent.”

To illustrate his point, he cited several recent security incidents that expanded far beyond the players’ expectations. The Target breach, one of the largest in history, happened because threat actors obtained a single password belonging to a third-party HVAC vendor. Makers of baby cameras saw their devices conscripted into a massive DDoS attack that resulted in the takedown of major websites. An attack on the DNC caused people to question the foundations of democracy, he said.

Now, board members can see their careers fall apart following a cyber incident. In a world like that, Ramzan noted, we need to think less about security trends and more about the drivers: modernization of technology, malice of threat actors, and mandates forcing organizations to tie their business value to the strength of their security posture.

“Innovation can invite exploitation. Modernization can breed malice,” Ramzan said.

Consider the evolution of ransomware, a comparatively old threat that has grown with the modernization of payment technology. The advent of digital payment systems has given attackers the means to collect more money from increasingly large groups of victims. Some hackers couple their attacks with 24/7 customer support to help their victims pay up.

“When threat actors start talking about customer support, we are in a brand-new world,” explained Ramzan, who calls this mindset the “hacker industrial complex.” It’s not only the advanced actors his audience will have to worry about either, but the average attackers who are happy to do “a simple smash-and-grab” to generate a lot of money in a short timeframe.

Ramzan turned the conversation to artificial intelligence and machine learning, two hot topics among the InteropITX enterprise audience. AI has been around for a long time, and it has been used in the context of security for a long time, he explained. It’s used to combat spam, online fraud, malware, and malicious network traffic.

“But we’re just at the beginning of what AI can actually do,” he continued.

(Image: Interop)

What worries Ramzan about AI and machine learning is putting all data in one place for technology to analyze it. It’s not the theft of data that concerns him, but the manipulation of that data. If a threat actor can access and modify an organization’s data, chances are nobody will notice it. Few people understand the mathematics of how these technologies work.

“Machine learning wasn’t designed to deal with threat actors,” he explained. “But if you’re going to think about technology becoming ubiquitous, you have to think about the risks.”

But how to address the risks? Ramzan warned of the danger in adopting a “no vendor left behind” policy when shopping for security tools. The industry “is effectively a hot mess,” he said. With some 2,000 vendors in the security space, there is a need to consolidate and innovate. IT pros should figure out which vendors provide the most value, and focus on them.

He closed out his keynote by explaining how to react when security incidents occur. “Plan for the chaos you can’t control,” he noted, pointing to the “ABCs” of incident response planning.

The first: Availability. When forming an incident response plan, you should only use resources that are already available to your organization. “An incident response plan isn’t a wish list,” said Ramzan. “Don’t put empty fire extinguishers in every hallway.”

Budget is second. Security breaches come with unexpected costs, he noted. You may need legal help, for example, and if you don’t have an in-house team you’ll need to hire an outside law firm. “Response plans must have budget authority,” said Ramzan. Without it, “effectively, it’s just a fairy tale.”

The final factor is Collaboration. During an incident, most areas of an organization inevitably get involved. Security teams will be identifying the root cause of the attack while the IT team patches infrastructure and quarantines networks. If customers are affected, the sales team will be involved; if sales is involved, then marketing may be involved as well.

Success in cybersecurity will depend on an enterprise’s ability to gauge the risks that lie ahead, he concluded. “Adapt quickly and adopt technology in a way that fosters and fuels innovation.”


Kelly Sheridan is the Staff Editor at Dark Reading, where she focuses on cybersecurity news and analysis. She is a business technology journalist who previously reported for InformationWeek, where she covered Microsoft, and Insurance Technology, where she covered financial …

Article source: https://www.darkreading.com/threat-intelligence/rsa-cto-modernization-can-breed-malice/d/d-id/1331721?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Google and Amazon put an end to censorship-dodging domain fronting

In a matter of weeks, the precious domain fronting technique used by some privacy services to hide themselves from government censors has started to crumble.

Things started going wrong in April when Google abruptly blocked fronting on its cloud App Engine platform, immediately causing problems for the popular privacy messenger app Signal, which had quietly been using it since 2016.

As privacy advocates fretted, Google was adamant that fronting had always been an accidental feature:

Domain fronting has never been a supported feature at Google but until recently it worked because of a quirk of our software stack. We’re constantly evolving our network, and as part of a planned software update, domain fronting no longer works. We don’t have any plans to offer it as a feature.

Three weeks on and Amazon has given Signal the knock-back in a brusque email the app developer has made public:

We are happy for you to use AWS Services, but you must comply with our Service Terms. We will immediately suspend your use of CloudFront if you use third party domains without their permission to masquerade as that third party.

Losing two of the biggest Content Delivery Networks (CDNs) on the planet that allowed fronting was never going to be a good day.

But how did something that few internet users have heard of become such an issue?

If an app like Signal were to connect its users to the service’s servers directly, all censors would have to do is perform a quick DNS lookup to figure out which IP addresses to block.
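To get a sense of how little effort that takes, here is a quick sketch of such a lookup in Python; the hostname is a placeholder rather than Signal’s real endpoint.

```python
import socket

# A censor only needs the hostname an app connects to in order to harvest the
# IP addresses to blocklist. "chat.example.org" is a hypothetical endpoint.
hostname = "chat.example.org"

for family, _, _, _, sockaddr in socket.getaddrinfo(hostname, 443):
    print(sockaddr[0])  # each resolved IP address is a candidate for blocking
```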

Several countries have been doing this for a while – Egypt, Oman, Qatar, and UAE – hence Signal’s shift to alternatives such as fronting.

Google is right to describe the way fronting works on its service as a quirk, in that it exploits the unusual way it and Amazon process HTTPS connections to their clouds.

An HTTPS request identifies the computer it’s being sent to in up to three places: an IP address, a hostname in optional Server Name Indication (SNI) metadata, and a hostname in an HTTP host header.

The IP address and SNI header are sent in the clear, and are visible to censors, whereas the HTTP host header is shielded from prying eyes by TLS encryption.

Google and Amazon allowed services to make a TLS connection to a server that acts as a front to many others. The host header that’s invisible to censors and connects users to the service they want is processed separately, after the encrypted TLS connection is made.

Censors can only see apps communicating with Google or Amazon’s IP addresses, forcing them to choose between blocking absolutely everything using those IP addresses, just to stop the one service they want to censor, or blocking nothing.
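In practical terms, a fronted request uses one hostname for everything the network can see and a different hostname inside the encrypted Host header. Here is a rough sketch using the Python requests library; both domains are hypothetical, and now that the big CDNs have disabled fronting a request like this will simply be refused.

```python
import requests

# Outer layer: the DNS lookup, destination IP, and TLS SNI all point at the
# innocuous "front" domain hosted on the CDN -- this is all a censor can see.
front_url = "https://front-site.example.com/"

# Inner layer: the Host header, protected by TLS encryption, names the real
# (blocked) service sharing the same CDN infrastructure.
resp = requests.get(front_url, headers={"Host": "blocked-service.example.net"})
print(resp.status_code)
```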

 

An example of where this cat-and-mouse game can end up is the recent attempt by the Russian internet regulator (and later Iran) to block the Telegram privacy app, which reportedly caused major disruption to 19 million IP addresses while having surprisingly little effect on the service.

It appears Telegram’s resistance to the Russian censors was down not to fronting but to the more old-fashioned and labour-intensive technique of IP hopping (shifting to new IP addresses as fast as censors block them). This left the censors chasing Telegram across subnets, with predictably heavy-handed results.

It’s not hard to see why this incident might put Google and Amazon off IP hopping’s big brother, domain fronting, even as Google claims it is all for privacy in other respects.

Another problem comes in the shape of cybercriminals, who’ve been using fronting for years to hide their command and control infrastructure – indeed, that may have given legitimate services the idea in the first place.

As fronting fades, privacy services will need to find other ways to keep ahead of censors, for example attempting to use ‘refraction networking’ at ISP level.

Alternatively, more users may turn to Tor or VPNs. The collapse of fronting has hurt privacy apps, but the battle to circumvent government censorship will go on.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/ubAJv3rJse0/

Facebook’s getting a clear history button

In these days of post-Cambridge Analytica/Cubeyou privacy-stress disorder, privacy advocates, members of Congress and users have been telling Facebook that we want more than the ability to see what data it has on us.

We want a button to nuke it all to kingdom come.

Well, get ready: it’s in the works, due out in a few months. Mind you, analytics will still be foisted upon us, and Facebook will still share those analytics with apps and websites. The difference, the company says, is that it will strip our identifying information from the data.

On Tuesday, Facebook VP and Chief Privacy Officer Erin Egan offered an example of aggregated analytics that the platform’s still going to share with apps and websites, even after the Clear History button debuts in a few months:

We can build reports when we’re sent this information so we can tell developers if their apps are more popular with men or women in a certain age group.

Also on Tuesday, CEO Mark Zuckerberg gave some details about the Clear History button during F8, the company’s annual developer conference. For one thing, he warned, your Facebook experience is going to suffer:

To be clear, when you clear your cookies in your browser, it can make parts of your experience worse. You may have to sign back in to every website, and you may have to reconfigure things. The same will be true here. Your Facebook won’t be as good while it relearns your preferences.

Zuckerberg said that after going through Facebook’s systems, and having gone through the wringer during several days of testimony before Congress, “this is an example of the kind of control we think you should have.”

It’s something privacy advocates have been asking for – and we will work with them to make sure we get it right.

One thing I learned from my experience testifying in Congress is that I didn’t have clear enough answers to some of the questions about data. We’re working to make sure these controls are clear, and we will have more to come soon.

So, what about people who get tracked even when they’re not logged into – or even members of – Facebook?

Motherboard asked. A Facebook spokesperson pointed the publication to a “Hard Questions” post about the data the platform collects when we’re not using Facebook. The post defines Facebook analytics simply as helping websites and apps “better understand how people use their services.”

Gabe Weinberg, CEO and founder of the search engine DuckDuckGo – which has been blocking trackers since January – has cited figures showing that Facebook’s trackers appear on 24% of the top million pages. A recent study from Ghostery, a maker of private-browsing and -search technology, put it in the same ballpark, finding that 15% of all page loads are monitored by 10 or more trackers and that 27.1% of them are Facebook’s – second only to Google for tracking.

Egan explained that Clear History will enable people to see the websites and apps that use features such as the Like button or Facebook Analytics in order to share data about us with Facebook and to tailor their content and ads. Clear History will bring the ability to delete the information from our accounts and to turn off Facebook’s ability to store data associated with our accounts. Egan:

If you clear your history or use the new setting, we’ll remove identifying information so a history of the websites and apps you’ve used won’t be associated with your account.

As far as sharing aggregated analytics goes, Egan said that Facebook can do it without storing user information in a way that’s associated with individuals’ accounts. And, as always, she said, “we don’t tell advertisers who you are.”

Facebook will build the feature with input from privacy advocates, academics, policymakers and regulators, including on how the platform plans to remove identifying information and on “the rare cases” where it needs information for security purposes. Egan said that the company has already launched a series of roundtables in cities around the world.

Let’s hope the rollout of Facebook’s Clear History button goes a bit more smoothly than the hate speech button debacle: Facebook on Tuesday accidentally released a button that let users label anything as hate speech.

Or hey, how about the hate-speech button’s cousin: the ill-fated fake-news flag? That went over about as well as a lead balloon. It was akin to waving a “bright red flag” in front of the faces of people with entrenched views, Facebook said: the button backfired and prompted such people to visit the disputed content more and to share it far and wide.

Facebook mothballed it in December after determining that the button didn’t do squat to stop the spread of fake news.

Better luck this time! We may all want this button yesterday, but if it takes a few months to get it right, so be it. While you’re waiting, take a look at our guide to protecting your data on Facebook.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/QuGoWslWX6o/

Using Docker and Windows Server Containers? There’s a patch for that

Microsoft has emitted a patch to fix a critical vulnerability in a wrapper used to launch Windows Server Containers from Go.

The issue (CVE-2018-8115) is a nasty one, allowing remote code execution when importing a container image due to a failure of the library to validate what was on the way in.

Exploiting the issue could be a challenge, as Microsoft stated:

“An attacker would place malicious code in a specially crafted container image which, if an authenticated administrator imported (pulled), could cause a container management service utilising the Host Compute Service Shim library to execute malicious code on the Windows host.”

No administrator would ever import an image without knowing its provenance, right?
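The actual fix lives in Go inside hcsshim, but the underlying class of bug – trusting file paths inside an imported archive – is an old one. As a hedged illustration (not Microsoft’s patch), here is the kind of path validation an importer needs, sketched in Python:

```python
import os
import tarfile

def safe_import_layer(archive_path: str, dest_dir: str) -> None:
    """Extract a container layer, refusing entries that would escape dest_dir."""
    dest_dir = os.path.realpath(dest_dir)
    with tarfile.open(archive_path) as layer:
        for member in layer.getmembers():
            target = os.path.realpath(os.path.join(dest_dir, member.name))
            # A malicious image can embed names like "../../evil.dll"; without
            # this check the importer writes outside the intended directory.
            if os.path.commonpath([dest_dir, target]) != dest_dir:
                raise ValueError(f"unsafe path in image layer: {member.name}")
        layer.extractall(dest_dir)
```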

The wrapper itself, the Windows Host Compute Service Shim library, appeared back in January 2017 with Microsoft’s launch of the Host Compute Service (HCS), a management API for Windows Server Containers and the likes of Docker.

Along with HCS, the new caring, sharing Microsoft also unleashed two open-source wrappers on GitHub to save devs having to worry about dealing directly with the C API. It is the one written in Go for Docker (hcsshim) where the issue lies.

The vulnerability was found in February 2018 by Michael Hanselmann, who said: “I reported the issue to Microsoft’s security response center and Docker in February 2018 using responsible disclosure. Both were involved in resolving the issue.”

Hanselmann has promised a proof-of-concept of the exploit by 9 May, so testing and applying the patch before then would seem prudent.

The US Computer Emergency Readiness Team issued an advisory yesterday suggesting that administrators got on with this sooner rather than later. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/05/03/docker_for_windows_vuln/

4 Critical Applications and How to Protect Them


Since critical apps are, well, critical, security teams must take preventive measures to keep attackers from exploiting their vulnerabilities.

Critical applications are often so baked into the day-to-day tempo of an organization that users forget their importance — until they go down. The first defining characteristic of a critical application is how much an enterprise relies on it. By their nature, critical apps have enormous data stores and multifaceted processing engines, are spread globally, and are deeply integrated into other dependent application services.

Here are four of the most complex and vulnerable critical applications:

Financial Apps
Financial applications are often built around the unique requirements of an organization. Banks have thousands of applications, all critical to revenue and business operations. But consider accounting applications, which are also often intricate and tailored to the particular industry of the organization. Nearly all financial applications are subject to regulation because they hold, process, and move critical data, which must remain confidential and untampered with. Often you will see internet commerce systems with direct ties to financial systems to process customer payments. All of these are potential ingress points for attackers.

Medical Apps
Hospitals are usually assemblages of independent, smaller clinics, doctors’ offices, and diagnostic facilities. Their applications exist in the same manner: deeply vertical and highly variable. This means lots of applications with different levels of security and reliability all sitting side by side exchanging confidential medical data. It’s not surprising to find an old Windows XP box connected to a drug-dispensing machine. Some systems are so specialized that the software may have been developed by a single researcher who supports the program as a side project (if at all). This is also an environment where patient safety trumps all other requirements, sometimes even security, so you can see things like network protocols that embed patient identification into the packet itself to ensure medical information is never mixed up.

Messaging Systems
Another overlooked but critical application category is email and communication systems. Messaging systems need to touch everyone as well as accept connections from the outside. Mail systems are notorious dumping grounds for years of yet-to-be-classified-but-probably-should-be-secret documents and private conversation threads. Email systems are also often the gateway to authentication, with password resets landing in people’s inboxes. An analysis of the California Attorney General breach notifications for 2017 showed that 5% of reported significant data breaches were directly attributed to credential exposure via email compromise. Email messages often stand in as the primary identity on the Internet. A compromised email account can be a leverage point for a variety of insidious scams targeting both your customers and internal employees.

Legacy Systems
Legacy systems could fit into any of the earlier categories, although most of them are specialized applications, often heavily customized. Think of airline reservation systems, customer management software, and one-off unique software. Legacy systems exact an excessive burden through their high operating costs and incompatibility with modern systems and security tools. The most difficult and insecure of these systems have existed in a long period of stasis, rarely updated because they are written in archaic programming languages.

Managing the Common Risks
One of the first steps is to become aware of which critical apps you have and where they live. As part of a forthcoming report on protecting applications, F5 commissioned a survey with Ponemon that found 38% of respondents had “no confidence” in knowing where all their applications existed. These large, sprawling, and critical systems have common vulnerabilities that attackers can exploit.

  • Credential Attacks: Many older applications do not have robust authentication systems, leading to mismatches with modern authentication requirements. If a critical app doesn’t support stronger authentication, or can’t hand off to an access directory server, then an authentication gateway server can be used. These are proxies that stand in front of the critical application and provide superior authentication schemes. All access to the critical app flows through the gateway, which in turn passes the legacy credentials to the critical app invisibly. Even apps limited to weak passwords can thereby be fronted with newer authentication technologies like federation, single sign-on, and multifactor authentication. For this to be effective, you need network segregation to enforce it.
  • Segregation from Exploits and Denial-of-Service Attacks: Segregation with firewalls and virtual LANs reduce inbound network traffic to the few limited protocols necessary for the application to function. Since some legacy or specialized apps aren’t patchable or have limited hardening capability, a firewall restricts connection attempts to those vulnerable services. Easily exploited services such as Telnet, FTP, CharGen, and Finger can all be blocked from external access. It’s not perfect, but at least you’ve reduced your attack surface. In some cases, smarter firewalls with intrusion prevention capability or virtual patching can also help.
  • Encryption to Prevent Network Interception: A malicious insider or an attacker that’s already breached your network is a potential threat, so any internal traffic carrying confidential information should be protected. If the critical app doesn’t support a secure transport protocol, then a TLS or VPN gateway can be used (see the sketch after this list). Like the authentication gateway, these sit in front of the critical app and encapsulate all traffic passing through into an encrypted tunnel. These should also be used for all external links from the application, even to trusted third parties.
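To make the gateway idea concrete, here is a deliberately minimal TLS termination proxy sketched in Python: clients connect over TLS, and the gateway relays the decrypted traffic to a legacy plaintext application on an internal address. The addresses and certificate paths are placeholders, and a real deployment would use a hardened, supported proxy rather than this sketch.

```python
import socket
import ssl
import threading

LISTEN_ADDR = ("0.0.0.0", 8443)   # TLS endpoint exposed to clients (placeholder)
LEGACY_APP = ("10.0.0.5", 8080)   # plaintext legacy application (placeholder)

def pump(src, dst):
    """Relay bytes one way until either side closes."""
    try:
        while data := src.recv(4096):
            dst.sendall(data)
    finally:
        src.close()
        dst.close()

def handle(client):
    # For each TLS client, open a plaintext connection to the legacy app and
    # shuttle traffic in both directions.
    backend = socket.create_connection(LEGACY_APP)
    threading.Thread(target=pump, args=(client, backend), daemon=True).start()
    pump(backend, client)

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.load_cert_chain("gateway.crt", "gateway.key")  # the gateway's own certificate

with socket.create_server(LISTEN_ADDR) as srv:
    with ctx.wrap_socket(srv, server_side=True) as tls_srv:
        while True:
            conn, _ = tls_srv.accept()
            threading.Thread(target=handle, args=(conn,), daemon=True).start()
```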

Get the latest application threat intelligence from F5 Labs.

Raymond Pompon is a Principal Threat Researcher Evangelist with F5 Labs. With over 20 years of experience in Internet security, he has worked closely with federal law enforcement in cybercrime investigations. He has recently written IT Security Risk Control Management: An …

Article source: https://www.darkreading.com/partner-perspectives/f5/4-critical-applications-and-how-to-protect-them/a/d-id/1331698?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

No Computing Device Too Small For Cryptojacking

Research by Trend Micro shows IoT and almost all connected devices are targets for illegal cryptocurrency mining.

Pretty much any computing device — however low powered — appears to be becoming a target for cybercriminals trying to make money through illegal cryptocurrency mining.

An investigation by security vendor Trend Micro shows how underground markets are awash in cryptocurrency malware, including variants targeted at devices with relatively low processing power, such as consumer IoT products, smartphones, and routers.

Though mining for cryptocurrency is a computationally intensive and power-consuming task, several of the crypto mining malware samples that Trend Micro observed appear dedicated to exploring whether any connected device, however underpowered, can still be exploited for financial gain.

“IoT devices have less computing power, but are also less secured,” says Fernando Merces, a senior threat researcher at Trend Micro. “In some cases there may be thousands of them publicly exposed, so the amount of devices compromised is important here.”

It is unclear how many IoT devices an attacker would need to infect with mining software in order to profit from cryptomining, Merces says. A lot would depend on the type of device infected and the cryptocurrency being mined. “[But] a big botnet with a few thousands of devices seems to be attractive to some criminals, even though some of them disagree.”

Not all of the cryptocurrency malware that Trend Micro observed is for mining. Several of the tools are also designed to steal cryptocurrency from bitcoin wallets and from wallets for other digital currencies like Monero. But a lot of the activity and discussions in underground forums appear centered on illegal digital currency mining. And it is not just computers that are under threat but just about any internet-connected device, Trend Micro says.

“The underground is flooded with so many offerings of cryptocurrency malware that it must be hard for the criminals themselves to determine which is best,” Merces says in a Trend Micro report on the topic this week.

The sheer number of cryptocurrency mining software tools currently on sale in underground forums makes it hard to categorize and study all of them. Prices for these tools range from under $5 for Fluxminer, an Ethereum miner, to $1,000 for some miners like Decadence, a software product for mining Monero digital currency. The varying price points reflect the different features that are available with different malware samples. A product like Decadence for instance starts at just $40 but can cost up to $1,000 when features like graphics processing unit support, a web-based control panel, remote access capabilities and encryption services are added.

One of the latest offerings is a Monero cryptocurrency mining tool called DarkPope priced at around $47. The malware is designed to surreptitiously use hijacked computers for mining purposes, and to send earnings to a digital wallet owned by the attacker. Among other things, the authors of DarkPope offer round-the-clock support for the tool, according to the Trend Micro report.

Somewhat ironically, despite the abundance of mining malware, there’s little evidence that threat actors are making any major profits from them, at least presently. Though some other vendor reports have described threat actors as having the potential to make upwards of $180,000 per year or $500 a day from cryptomining, Trend Micro says the company is currently not aware of criminals making large amounts of money from illegal cryptomining. But the potential for doing so certainly exists, Merces says.

“Though our research doesn’t specifically focus on the profit, other research has proven this is possible,” Merces says. “It is all situation-dependent with the number and type of devices, as well as the type of cryptocurrency being mined,” he says. With enough processing power being leveraged, criminals can indeed make substantial profits from cryptomining, he says.

“Cryptomining is fast becoming one of the top threats to individuals and organizations as cybercriminals look to compromise systems for use in mining,” Merces says. “The main difference here is threat actors don’t compromise systems looking to steal data or drop ransomware, they want the computing resources the machine can provide for their cryptomining activities.”


 

Jai Vijayan is a seasoned technology reporter with over 20 years of experience in IT trade journalism. He was most recently a Senior Editor at Computerworld, where he covered information security and data privacy issues for the publication. Over the course of his 20-year …

Article source: https://www.darkreading.com/attacks-breaches/no-computing-device-too-small-for-cryptojacking/d/d-id/1331712?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

GDPR Requirements Prompt New Approach to Protecting Data in Motion

The EU’s General Data Protection Regulation means that organizations must look at new ways to keep data secure as it moves.

The EU’s General Data Protection Regulation (GDPR) will take effect on May 25, a response to data breaches and demands for greater oversight of the security of personally identifiable information (PII). As shown by the recent Equifax and Cambridge Analytica debacles, the risks to PII are real as digital transformation makes all interaction data usable and the Internet of Things (IoT) causes an explosion of new data sources.

GDPR is the latest of numerous laws around the use of PII. These laws often vary by jurisdiction, industry, and data type, making for a complex puzzle for enterprise data governance. For companies with large global customer bases, compliance with the strictest regulation across the customer base ends up being the prudent course, as it can be difficult to apply only geographic-specific restrictions to PII.

Adding to this complexity is the fact that digital data takes many forms. Efforts to analyze data to improve the business end up distributing PII throughout the enterprise. This business imperative to move and change data means that organizations must look at novel ways to keep it safe and secure.

This complexity bears out in Gartner’s prediction that, by the end of 2018, more than half of organizations affected by GDPR won’t be in compliance. Given the high stakes of noncompliance, organizations must have technology and processes in place to protect PII.

Keeping the Genie in the Bottle
Many organizations already have solutions that scan for and protect personal “data at rest.” However, in the time between when the data arrives and when it’s masked or encrypted, it might have already been shared. And, with the growth of real-time stream processing, the time between arrival and sharing compresses to almost nothing. In short, the genie may be out of the bottle before you even know you have PII. 

Additionally, any arriving PII is moved across data stores and computing platforms for a valid business reason and to be available for use. A balance must be established between data protection and data availability. This balance can be achieved through governance zones that allow different levels of access based on the type of data and the type of user; however, achieving this adds another layer of complexity to data protection and compliance.

The problem of big data sources and data drift (where fields are added or data types are changed without notice) further complicates matters. New data sources such as IoT devices, API data, and log files are added all the time in the name of digital transformation and business agility, and they may include PII. Plus, many of these data sources — such as unstructured data sources — are governed by others or only loosely governed, and so are subject to data drift. As a result, a data protection solution that is compliant on day 1 may be noncompliant by day 3.

Data Protection Should Start When Data Is Born
The pressures of real-time data, data sharing, and data drift mean that sole reliance on “scan at rest” across every data store is risky. Discovering PII and mitigating compliance exposure must start at the point of data ingestion. A multilayered strategy that includes both incoming pathways and the data stores is optimal.

First, inspect live data for patterns. Your chief vulnerability is sensitive data that you don’t expect to see but that arrives anyway, either because it’s impossible to keep track of every data effort across the company or simply because of data drift. To catch this data, you must scan the contents of your data flows, inspect the data, and compare it to known or likely PII patterns. Some form of probabilistic matching capability will let you catch both known patterns and those that may be new or specific to your industry or company.
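A stripped-down version of that first layer might look like the following Python sketch, which checks records in a stream against a few simple PII regexes and masks anything that matches before the data moves on. The patterns and field handling are illustrative, not a complete or product-specific rule set.

```python
import re

# Illustrative patterns only; real deployments need locale- and industry-specific rules.
PII_PATTERNS = {
    "email": re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def scrub_record(record: dict) -> tuple[dict, list[str]]:
    """Return the record with PII masked, plus labels of the patterns that fired."""
    hits, cleaned = [], {}
    for field, value in record.items():
        text = str(value)
        for label, pattern in PII_PATTERNS.items():
            if pattern.search(text):
                hits.append(f"{field}:{label}")
                text = pattern.sub("<REDACTED>", text)
        cleaned[field] = text
    return cleaned, hits

# Example: one record arriving on an ingest stream.
record = {"user": "jane", "note": "reach me at jane@example.com"}
safe, alerts = scrub_record(record)
print(safe)    # {'user': 'jane', 'note': 'reach me at <REDACTED>'}
print(alerts)  # ['note:email']
```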

Second, you must be able to act on that data as soon as the PII pattern is detected and have a wide variety of actions to take. Then you can customize the approach based on the potential uses of the data.

Third, due to the need to classify the use of different data types as well as different user groups, the ideal approach should be based on centrally driven policy management that is integrated with how you protect data at rest to ensure completeness. Enterprise risk teams should set up security service-level agreements for data and expect the system to alert on violations and stop insecure data delivery before it happens.

Tooling Over Coding
Monitoring and discovering sensitive data in-stream can be very difficult. As GDPR takes effect, solutions must mature from ad hoc or DIY approaches focused on data at rest to tooling that can discover and track data starting with its first appearance. Moving protection from data stores out to the point of first detection is a critical step that will help ensure the integrity and security of PII.


Rick Bilodeau is a marketing leader with deep experience with enterprise data, networking, and security innovators. Before joining StreamSets, Rick led outbound marketing functions for B2B technology companies Qualys (IT security), iPass (enterprise mobility), and 3Com …

Article source: https://www.darkreading.com/endpoint/gdpr-requirements-prompt-new-approach-to-protecting-data-in-motion/a/d-id/1331655?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

6 Enterprise Password Managers That Lighten the Load for Security

EPMs offer the familiar password wallet with more substantial administrative management and multiple deployment models.

Image Source: Blackboard via Shutterstock

Companies may try to promote good password hygiene among users, but obstacles remain fairly profound, even after years of prodding by IT and security managers.

What better time than World Password Day to explore this issue?

Frank Dickson, a research director within IDC’s Security Products research practice, says given the threat level and the reality that the average individual user can have 130 or more unique accounts, it’s unrealistic to expect that all those passwords can be managed manually.

Dickson says the only way to successfully solve the password problem is for companies to deploy identity and access management tools. For organizations starting from scratch, he says enterprise password management (EPM) systems are a very good first step.

Keep in mind that passwords are still a thorny problem for many companies. According to Forrester Research, of enterprise organizations that have suffered at least one data breach from an external attack, cybercriminals used stolen user credentials to carry out 31% of the attacks.

The cost of a single breach runs high, as does the cost of managing passwords. Forrester’s Merritt Maxim, a principal analyst, says several large U.S.-based organizations across different verticals spend more than $1 million annually on just password-related support costs. And while SAML-based single sign-on (SSO) tools can alleviate the password burden, Maxim says many organizations rely on a hybrid heterogeneous computing environment that very often does not support SAML. This means companies still have to rely on password-based authentication for many of their systems.

Maxim says some security teams also rely on a shared spreadsheet or Word document to store and track passwords, especially for privileged accounts. Such practices have become a major security risk because malicious insiders can compromise these documents.

“The other thing to remember is that [lost or stolen] passwords also have an indirect effect on employee productivity,” Maxim says. “Every minute an employee spends unable to access a system because of a lockout is lost productivity.”

EPMs can help, says Maxim, because they offer the familiar password wallet model with more substantial administrative management tools, as well as multiple deployment models.

The following list is based on interviews with Dickson and Maxim. We tried to stick with pure-play EPMs as opposed to SSO or PAM products.  

 

 

Steve Zurier has more than 30 years of journalism and publishing experience, most of the last 24 of which were spent covering networking and security technology. Steve is based in Columbia, Md.

Article source: https://www.darkreading.com/endpoint/6-enterprise-password-managers-that-lighten-the-load-for-security-/d/d-id/1331711?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple