
Introducing the Digital Transformation Architect

Bet-the-company transformation that expands the attack surface requires close alignment and leadership across executive, IT and security teams.

For companies today, digital transformation poses a “do-or-die” proposition — in many cases, literally.

Two-thirds of organizations are actively pursuing a transformation, and annual worldwide spending on the technologies and services that drive these transformations is projected to reach nearly $2 trillion by 2022, up from $1.25 trillion this year, according to separate research reports from TechTarget and IDC.

The stakes are high because established companies are being pressured by startups that seek to disrupt markets by exclusively promoting mobile apps — instead of physical stores/locations — to target customers. In response, large brands are inventing immersive apps and online services to deliver new features and redefine the customer experience.

They’re doing so because they feel they have no choice, as staying on the sidelines could result in their eventual demise: Nearly two-thirds of C-suite, IT, and business decision-makers feel that a failure to launch new digital services will lead to reduced revenue, and 55% say it will eliminate their company’s competitive differentiation, according to research from Oracle. Half say a lack of these services will cause both a loss of customers and brand perception/relevance. It’s no wonder, then, that 85% believe that the launch of new digital services is critical to their business strategy.

What’s more, they’re in a hurry to get into the game: Nine of 10 prioritize speed to market, with half of decision-makers believing they should be able to launch a new digital service in just a few days, according to the Oracle findings. As a result, the entire commerce landscape has evolved to the point where three-quarters of companies either offer “inherently” digital subscription services (like Netflix or Airbnb) or digital subscription services positioned around physical products (such as connected cars, home security systems, or Internet of Things-connected services).

Although consumers benefit from the intensified competition, the bet-the-company nature of these transformations demands alignment and leadership across executive, IT, security, and other functions. To address this need, a job role called “digital transformation architect” is emerging as a business reality — a senior professional who serves as a hub for CISOs, CIOs, CEOs and the rest of the C-suite, marketers, and developers. This architect objectively weighs these experts’ input in pursuing the transformation mission while making sure that overarching strategies and execution are not tripped up by unexpected security and risk issues.

To be sure, acting as the prime ambassador for digital strategic goals while seeking to minimize cyber threats requires a skillful balancing act. Here are two core areas on which these architects must focus to lead their organizations to a digital transformation that is not only successful but secure.

1. Enforce Access and Identity
In the online world, trust is too often broken, such as when cyber thieves swipe user credentials and hijack accounts for their own gain. By deploying effective identity and access management (IAM) programs, transformation architects put a stop to the exploitation of their customer-facing digital presence and offerings. As defined by Gartner, IAM is the security discipline that enables the “right individuals to access the right resources at the right times for the right reasons.” It ensures appropriate access to resources across increasingly heterogeneous technology environments while meeting increasingly rigorous compliance standards, according to Gartner.
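To make Gartner's definition concrete, here is a minimal, hypothetical sketch (in Python) of the kind of policy check an IAM layer performs. The roles, resources, and time windows are invented for illustration; a real IAM deployment would layer authentication, consent management, and audit logging on top of rules like these.

```python
from datetime import datetime, time

# Hypothetical policy table: role -> allowed resources and an allowed access window.
POLICIES = {
    "support_agent": {"resources": {"crm", "ticketing"}, "window": (time(8, 0), time(20, 0))},
    "customer":      {"resources": {"account_portal"},   "window": (time(0, 0), time(23, 59))},
}

def is_access_allowed(role: str, resource: str, when: datetime) -> bool:
    """Allow access only to the right resource, at the right time, for a known role."""
    policy = POLICIES.get(role)
    if policy is None:
        return False
    start, end = policy["window"]
    return resource in policy["resources"] and start <= when.time() <= end

# A customer using the account portal mid-morning is allowed; the same
# customer poking at the CRM is denied.
print(is_access_allowed("customer", "account_portal", datetime(2019, 5, 20, 9, 30)))  # True
print(is_access_allowed("customer", "crm", datetime(2019, 5, 20, 9, 30)))             # False
```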

Digital transformation architects play a prime role in helping organizations tailor identity and access safeguards according to risk tolerance and requirements. Depending on organizations’ industry, customer base, back-end security layers, and regulatory responsibilities for spotting fraud or intrusions, architects confer with security and IT team leaders to make sure new digital interfaces and investments do not stretch risk beyond what is necessary to measurably capitalize on transformation opportunities.

2. Keep the Consumer/User Engaged
Yes, organizations must invest in IAM tools so that only authorized users are accessing their products and services. It just takes one significant breach, after all, to inflict devastating brand reputational damage and the resulting lost customers and revenue.

However, if businesses set up too many authorization barriers, they risk overwhelming their users, and the ensuing friction can lead to customer churn. Thus, digital transformation architects are tasked with overseeing the development of authentication requirements that are as unobtrusive as possible to minimize that friction. They must go beyond traditional (and often vulnerable) approaches such as enforced password complexity, tokens, CAPTCHAs, and PIN codes.

Digital transformation amounts to a very big bet. It takes large investments to reinvent a company through new apps and online services — a transition that expands the attack surface and, therefore, invites greater risk. Yet introducing too many protective measures to “tighten up” the environment will turn away the very customers who drive success. That’s why the architects must work with both the security side and business units to engage users while safeguarding their experiences. With this, the path to a fully realized transformation appears much clearer and easier to navigate — for the architect, the CISO, the CEO, and everyone else with a stake in the game.

As VP of Products at BehavioSec, Jordan Blake is responsible for the product strategy and vision of the company’s cyber safety solutions. His more than 20-year career in product management includes both consumer and enterprise roles with security industry leaders such as … View Full Bio

Article source: https://www.darkreading.com/endpoint/introducing-the-digital-transformation-architect-/a/d-id/1334618?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Microsoft Builds on Decentralized Identity Vision

The company elaborates on its plan to balance data control between businesses and consumers by giving more autonomy to individuals.

Microsoft wants to give people more control over their digital identities. In doing so, it aims to shift the power between consumers and the businesses currently holding most of their data.

Organizations have the bulk of control over users’ information, and people are becoming more aware of it. More than 75% think companies need to protect their information — a 16% increase from last year — and 68% strongly agree it’s their responsibility to protect their information. More are taking action by changing passwords and enabling multifactor authentication (MFA) after learning of a breach.

Still, more can be done, and Microsoft this week shared updates on its plan to reshape the future of identity. In February 2018, it outlined this vision and explained its investment in using blockchain and distributed ledger technologies to create decentralized digital identities. Rather than having people give broad consent to apps and services and spread their identities across providers, Microsoft wants them to have an “encrypted digital hub” for storing identity data.

“Our goal is to create a decentralized identity ecosystem where millions of organizations, billions of people, and countless devices can securely interact over an interoperable system built on standards and open source components,” writes Daniel Buchner, program manager in Microsoft’s Identity Division, in an update published Monday.

In a separate blog post published today, Joy Chik, corporate vice president for Microsoft Identity, explained the role of businesses in helping to achieve this goal. She argues that in a world where people have greater control over information, businesses must be more intentional about the type of information they collect, where it comes from, where it’s stored, and how much of it they collect.

“They accept information from individuals that an independent authority has verified, like citizenship verified by a government agency or education level verified by a university,” she writes. With these verifiable credentials, people can prove who they are without the business holding all of their sensitive data. This puts less liability on organizations and gives people control. Further, businesses can choose to store data with people rather than keeping it themselves.

“The individual, in essence, becomes a data controller,” she adds. “This changes the relationship — and the balance of power — within organizations.”

As part of a decentralized identity (DID) system, public keys and identifiers can be linked to distributed ledger tech (Bitcoin, Ethereum, and others) that complies with standards set by the community via the Decentralized Identity Foundation (DIF) and W3C Credentials Community Group. But while these ledgers are useful for the foundation of decentralized identifiers, they should not be used to store personal identity data, Microsoft says. This demands different storage. Its solution is Identity Hubs, unveiled in early March, which are decentralized, off-chain personal data stores that give people control over identity info, official documents, app data, and more.
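To picture what such an identifier looks like in practice, below is a simplified, purely illustrative DID document rendered in Python, loosely modeled on early W3C draft formats. The method name, identifier, key material, and hub endpoint are all made up, and Microsoft's actual ION documents may use different field names and structure.

```python
import json

# Illustrative only: a simplified DID document loosely modeled on early W3C drafts.
# The method name ("ion"), identifier, key material, and endpoints below are invented.
did_document = {
    "@context": "https://www.w3.org/ns/did/v1",
    "id": "did:ion:EiClkZMDxPKqC9c-example",
    "publicKey": [{
        "id": "did:ion:EiClkZMDxPKqC9c-example#key-1",
        "type": "Secp256k1VerificationKey2018",
        "publicKeyHex": "02b97c30de767f084ce3080168ee293053ba33b235d7116a3263d29f1450936b71"
    }],
    "authentication": ["did:ion:EiClkZMDxPKqC9c-example#key-1"],
    # Service endpoints are how an off-chain personal data store (an "Identity Hub")
    # gets linked to the identifier instead of being written to the ledger itself.
    "service": [{
        "id": "did:ion:EiClkZMDxPKqC9c-example#hub",
        "type": "IdentityHub",
        "serviceEndpoint": "https://hub.example.com/"
    }]
}

print(json.dumps(did_document, indent=2))
```

The point of the structure is that only the identifier and its public keys need to be anchored to a ledger; personal data stays behind the service endpoint, in the user's own hub.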

Since early 2018, Microsoft has been building on its vision with contributions to emerging industry standards and development of open source components, explains Alex Simons, vice president of program management for Microsoft’s Identity Division, in Monday’s blog post. This week Microsoft announced an early preview of the Identity Overlay Network (ION), a DID network based on Sidetree, a blockchain-agnostic protocol for building DID networks that Microsoft built in partnership with other DIF members, including Transmute and ConsenSys.

ION is a public, permissionless open network that anyone can use to create DIDs and manage their public key infrastructure (PKI) state. The code for its reference node is still under development, Microsoft says, and there are still aspects to be implemented before it’s ready to be tested on the Bitcoin mainnet. In the coming months, Microsoft will be working with open source contributors and players in the identity community to publicly launch ION on Bitcoin’s mainnet.

Kelly Sheridan is the Staff Editor at Dark Reading, where she focuses on cybersecurity news and analysis. She is a business technology journalist who previously reported for InformationWeek, where she covered Microsoft, and Insurance Technology, where she covered financial … View Full Bio

Article source: https://www.darkreading.com/endpoint/microsoft-builds-on-decentralized-identity-vision/d/d-id/1334723?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Attackers Are Messing with Encryption Traffic to Evade Detection

Unknown groups have started tampering with Web traffic encryption, causing the number of fingerprints for connections using Transport Layer Security to jump from 19,000 to 1.4 billion in less than a year.

Online attackers are trying to obscure their encrypted traffic in an attempt to evade detection, using a technique known as “cipher stunting,” according to Internet infrastructure and security firm Akamai.

Cipher stunting modifies the fingerprint of communications encrypted with Secure Sockets Layer (SSL) and Transport Layer Security (TLS). Akamai, which fingerprints encrypted traffic as one way to identify attacks on its customers, found that the number of variations of the initial handshake request — known as the Client Hello packet — has recently exploded, from the usual thousands of variants in August 2018 to more than a billion in February. When used legitimately, each unique variant represents a different combination of encryption software, browser, operating system, and configuration of the encryption package. The change is on a “scale never seen before by Akamai,” the company said in an analysis.

While variations could be due to legitimate software behavior or some sort of software defect, the most likely explanation is that attackers are attempting to evade detection or appear as a large number of different systems, says Moshe Zioni, director of threat research at Akamai.

“We were able to deduce with high certainty that it is a Java-based tool that made most of these permutations,” he says. “The existence of such a thing means this was an intentional attempt to hide on the part of the threat actor … in the greater scheme, this is a good evasion technique.”

The surge in variations of the Client Hello packets is the latest iteration of the cat-and-mouse game between attackers and defenders. Because SSL and TLS are so popular — 82% of malicious traffic uses encrypted communications, according to Akamai — many companies use fingerprinting as one of the techniques to classify traffic. Because the content of the communications is encrypted, defenders can only make use of the initial handshake between the client and server, which is in plaintext.

“This is a great illustration of one of the limitations of fingerprinting,” says Shuman Ghosemajumder, chief technology officer of Shape Security, a website security firm. “If you are trying to fingerprint a device in any of many different ways, the first thing that an attacker will do is randomize that characteristic.”

The goal for attackers is to make a single machine, or network of machines, look like hundreds of thousands or millions of users’ devices, he says.

Fingerprinting encrypted communications using characteristics of the initial handshake between client and server is at least a decade old. In 2009, Qualys researcher Ivan Ristic described ways of fingerprinting clients and browsers from their SSL characteristics. In a talk at DerbyCon 2015, Lee Brotherston, at the time a senior security adviser at Leviathan Security Group, described how defenders could use TLS fingerprinting to better detect threats.

“The thing I like about TLS fingerprinting is that people don’t tend to update how their crypto is set up that regularly. Even the big browsers with their regular releases have the same crypto between versions, and when I looked at malware, it never really changed their crypto signatures,” Brotherston said at the time.

Akamai routinely looks at the initial Client Hello packets sent by a client as part of the process of establishing a secure connection between a browser client and a server. The packets allow anyone with access to the network to fingerprint, and later identify, a particular client. The fields included in the Client Hello packet are the TLS version, the session ID, cipher-suite options, and extensions and compression methods.

“Observing the way clients behave during the establishment of a TLS connection is beneficial for fingerprinting purposes so we can differentiate between attackers and legitimate users,” Akamai stated in its analysis. “When we conduct fingerprinting, we aim to select components of the negotiation sent by all clients.” 
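The article does not describe Akamai's exact hashing scheme, but the widely used open source JA3 method gives a feel for how such a fingerprint is built from the plaintext Client Hello: the parameter values are concatenated in a fixed order and hashed. The sketch below follows the JA3 convention with hypothetical handshake values.

```python
import hashlib

def ja3_style_fingerprint(tls_version, ciphers, extensions, curves, point_formats):
    """Hash the plaintext Client Hello parameters into a compact fingerprint.

    This mirrors the open source JA3 approach: decimal values joined by '-'
    within each field, fields joined by ',', then an MD5 digest of the string.
    """
    fields = [
        str(tls_version),
        "-".join(str(c) for c in ciphers),
        "-".join(str(e) for e in extensions),
        "-".join(str(c) for c in curves),
        "-".join(str(p) for p in point_formats),
    ]
    return hashlib.md5(",".join(fields).encode()).hexdigest()

# Hypothetical Client Hello values; a real tool would parse these off the wire.
print(ja3_style_fingerprint(
    tls_version=771,                 # TLS 1.2
    ciphers=[49195, 49199, 52393],
    extensions=[0, 10, 11, 13, 23],
    curves=[29, 23, 24],
    point_formats=[0],
))
```

Randomizing any one of those inputs per connection changes the digest entirely, which is exactly why deliberate tampering makes the number of observed fingerprints explode.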

Attackers are trying to pollute the waters. In August 2018, Akamai collected 18,652 distinct fingerprints on its global network, representing a fraction of a percent of all possible fingerprints. Starting in early September 2018, however, attackers started to randomize the characteristics of the cipher packets. By February of this year, the variation had hit 1.4 billion, the company stated.

Much of the randomization occurred on traffic attempting to use login credentials stolen from other sites to take over accounts of Akamai clients.

With the explosion of random fingerprints, defenders will have problems classifying specific malware, but will still be able to detect TLS encryption requests that are behaving badly, Akamai’s Zioni says.

“There is a relatively small and finite set of SSL/TLS stack implementations available today,” he said. “We see strong correlation between randomization and malicious activity.”

Veteran technology journalist of more than 20 years. Former research engineer. Written for more than two dozen publications, including CNET News.com, Dark Reading, MIT’s Technology Review, Popular Science, and Wired News. Five awards for journalism, including Best Deadline … View Full Bio

Article source: https://www.darkreading.com/threat-intelligence/attackers-are-messing-with-encryption-traffic-to-evade-detection/d/d-id/1334726?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

New Intel Vulnerabilities Bring Fresh CPU Attack Dangers

Four newly discovered vulns from the speculative-execution family bring Meltdown-like threats to Intel’s processors.

A new family of speculative execution side-channel vulnerabilities has been found in Intel CPUs, and researchers and vendors are split over how severe the flaws are and how easy they are to exploit.

Even the name of the vuln family is a subject of disagreement among researchers, ranging from colorful to prosaic: ZombieLoad, Fallout, RIDL (Rogue In-Flight Data Load), YAM (Yet Another Meltdown), and Intel’s name for the family of flaws, MDS (Microarchitectural Data Sampling). 

Researchers from security firms Cyberus, BitDefender, Qihoo360, and Oracle, along with academic researchers from TU Graz, Vrije Universiteit Amsterdam, the University of Michigan, the University of Adelaide, KU Leuven, Worcester Polytechnic Institute, and Saarland University, discovered the flaws and came up with the related exploits. All of the researchers were exploring the same conceptual issues – side-channel vulnerabilities – but found the new family in a different area of the CPU than where the previously identified side-channel vulns, Spectre and Meltdown, operate.

The researchers followed responsible disclosure practices and held off on publicly releasing their work – some for as much as a year – while Intel developed firmware to remediate the issues.

Bogdan (Bob) Botezatu, director of threat research and reporting for Bitdefender, says the difference between these MDS vulnerabilities and earlier speculative-execution flaws like Spectre and Meltdown is the difference between a buffer and a cache.

“A buffer is an area of the CPU where operations are executed in transit,” he explains, while a cache is memory where data or instructions are stored in anticipation of being called. This difference in the affected CPU area is why the phrase “data in transit” is being used with the new vulnerabilities: Data in a buffer is being used in an operation, while data in a cache is at rest, waiting to be called into use.

While Spectre and Meltdown could look at data sitting in a special part of storage, this latest generation can grab data that’s in the middle of a process.

As with all examples of this type of vulnerability, user programs are not supposed to be able to access this data except through very specific calls through the operating system, and then only to the buffers associated with their defined and assigned user space. Researchers have found, though, that carefully constructed calls can gain access to the data — and in doing so can side-step security layers put in place to protect users from one another.

“It’s leaking all the data that user space should not have access to,” says Botezatu. For example, in a multi-tenant environment – such as on servers at a cloud-hosting provider – it would be possible for software running as part of one user’s space to gain access to data in another user’s space, he says.

An Intel spokesperson confirmed the nature of the vulnerability but noted that exploiting MDS, like exploiting any Meltdown-category vulnerability, is quite complex and likely beyond the capability of most malware developers.

The software exploiting the vulnerability would have to be running on the same core as the targeted victim, execute in an adjacent thread, and then either exfiltrate large quantities of data hoping for a useful byte, the spokesperson said, or repeatedly load and flush the desired data.

Botezatu concurred that the attack would be difficult for the average hacker to pull off. “These kinds of attacks are not something that I would expect that your average ransomware operator would use to infect millions of people. This is mostly the kind of attack that a very, very determined threat actor with a pretty big target will use to gain information or to gain access,” he says.

While most of the “use cases” for this type of exploit involve multi-tenancy environments in cloud or virtualized server data centers, MDS is subject to other exploit types. Chris Wysopal, CTO at Veracode, says it could also be exploited in browsers. “Another case is browsers running untrusted JavaScript. A malicious website could compromise private data on a system that renders a page with malicious JavaScript,” Wysopal says.

Some vendors, including Microsoft, have suggested that disabling hyper-threaded execution on servers might be required for remediating the vulnerability, but Intel says this should not be necessary, since simply disabling hyper-threading does not by itself provide protection.

Intel released a patch for MDS this week. Microsoft and Apple have also included microcode patches in recent Windows and macOS updates, and Linux patches have been issued as well. Intel also fixed the flaw in new CPUs it released last month.
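For administrators who want to verify that a given Linux host has picked up the new mitigations, recent kernels expose a per-flaw status file under sysfs. The sketch below simply prints those files; it assumes a Linux kernel new enough to report the mds entry, and it says nothing about Windows or macOS hosts.

```python
from pathlib import Path

# On Linux kernels carrying the May 2019 patches, mitigation status for this class
# of flaws is exposed under sysfs. Older kernels may not list an 'mds' entry.
VULN_DIR = Path("/sys/devices/system/cpu/vulnerabilities")

def report_cpu_mitigations():
    if not VULN_DIR.is_dir():
        print("No sysfs vulnerability reporting on this system.")
        return
    for entry in sorted(VULN_DIR.iterdir()):
        status = entry.read_text().strip()
        print(f"{entry.name:20s} {status}")

if __name__ == "__main__":
    # Example output line: "mds    Mitigation: Clear CPU buffers; SMT vulnerable"
    report_cpu_mitigations()
```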

One near certainty is that there will be a continuing stream of speculative execution side-channel vulnerabilities found now that academia has discovered the category of issues that exists as part of the CPU architecture.

“Expect to see more of this class of vulnerabilities. Meltdown and Spectre sparked a new area of research, and there are most likely more architectural flaws waiting to be discovered,” says Jimmy Graham, senior director of product management for vulnerability management at Qualys.

Curtis Franklin Jr. is Senior Editor at Dark Reading. In this role he focuses on product and technology coverage for the publication. In addition he works on audio and video programming for Dark Reading and contributes to activities at Interop ITX, Black Hat, INsecurity, and … View Full Bio

Article source: https://www.darkreading.com/vulnerabilities---threats/new-intel-vulnerabilities-bring-fresh-cpu-attack-dangers-/d/d-id/1334728?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

GDPR Drives Changes, but Privacy by Design Proves Elusive

One year later, the EU mandate’s biggest impact has been to focus more attention on data protection and privacy, security analysts say.

In the year since it went into effect, the European Union’s General Data Protection Regulation (GDPR) has heightened awareness of data privacy issues and driven some important changes in how US companies handle consumer data. However, most organizations appear to be a long way off from implementing GDPR’s core requirement for a privacy-by-design model for data protection, security experts say.

“As we wrap the first year of GDPR, most businesses progressed on accountability,” says Jean-Michel Franco, a GDPR and data privacy specialist at Talend.

Many organizations have set up or refreshed their legal framework for data privacy, improved defenses against data breaches, and begun managing user consent more rigorously.

“But significant gaps toward compliance are generally still to be addressed,” Franco says. Chief among them is the challenge many organizations face in capturing and reconciling all the data they have about their customers and employees and implementing the rights to data access and other rights available to consumers under GDPR, he says.

A Sweeping Mandate
GDPR went into effect May 25, 2018. The statute is designed to ensure that organizations handling private data on EU residents take proper measures to protect that data against misuse. It provides for administrative penalties of up to 4% of an organization’s annual revenue or up to 20 million euros ($22.4 million), whichever is higher, for infringements.

The law requires covered entities to minimize data collection, get explicit permission for collecting data, and explain to consumers in unambiguous language why they are collecting the data, how they will use it, and with whom they might share it. Organizations have up to 72 hours, in most cases, to report a data breach affecting consumer data to the appropriate data authority in their country.

GDPR gives consumers considerable control over how organizations that collect their data go on to use it. Among other things, the statute gives consumers the right to ask organizations for a copy of all their data and to request corrections to the data. Importantly, it also requires businesses to ensure that any personal information that they collect on individuals is portable so that it can be easily transferred to another entity if a consumer requests it. A right-to-be-forgotten clause allows users to ask companies that have their personal data to erase it.

One year after the law went into effect, most of the changes that US companies handling EU data have made to comply with it have been the relatively easy ones. The harder, more meaningful changes needed to implement privacy by design and privacy by default remain a long way off, security experts say.

“Besides the complaints filed against the obvious suspects like Google, Facebook, and Instagram, we’ve definitely seen a number of changes to how companies ensure data privacy,” says Dov Goldman, director of risk and compliance at Panorays. Many organizations have updated their privacy policies, implemented consent for pop-ups, and made available tools for more user control over their data.

Scratching the Surface
“That being said, these enhancements have primarily been limited surface treatments and much less of the extensive ‘privacy by design’ envisioned by the regulators,” Goldman says.

“It used to be you only heard about cookies this much from a group of Girl Scouts,” adds Paul Russert, vice president of product marketing at SecurityFirst. “But [since] May 25, 2018, you can’t visit a website without being told that they use cookies and have updated their privacy policy.” For many companies, this has proved to be a quick approach to informing users about their data collection practices and to get explicit consent.

One big change that GDPR has fostered is that it has forced companies to widen the definition of personal data that needs to be protected, according to the International Association of Privacy Professionals (IAPP).

US state laws have varying definitions of what constitutes personally identifiable information from a breach disclosure standpoint. An individual’s date of birth alone, for instance, is often not considered private data. It is only when that data is leaked in combination with an individual’s first and last name or last initial that the data is considered personally identifiable — and that, too, not in all states.

Data elements regulated under GDPR include postal address, email address, racial and ethnic information, religious belief, sexual orientation, and criminal records, the IAPP said in a recent blog post. “US privacy professionals working for compliance with the GDPR are having to broaden the scope of their privacy programs to account for this wide definition of personal information,” the IAPP said.

Heightened Focus on Data Protection
GDPR has elevated data privacy and protection to a boardroom-level discussion at organizations covered by the statute. The statute’s requirements for prompt breach disclosure and the hefty penalties it imposes on infringing companies have pushed companies into paying closer attention to what data they already have, where that data resides, and how and why they are collecting more data.

“GDPR has brought executive-level focus to cyber-risk, including risk posed by third-party data processors,” says Jake Olcott, a vice president at BitSight. For many organizations, GDPR is forcing a greater focus on gaining real-time visibility into the data-handling practices of outsourcers and other third parties with whom they share data, Olcott says.

GDPR has also prompted a lot of breach disclosures. An infographic that the European Commission (EC) published in February showed that around 41,500 data breaches were reported to data privacy authorities after GDPR went into effect last May. Global law firm DLA Piper assessed the number to be a much higher 59,000 data breaches, based on its own research.

The EC said that between May 25, 2018, and the end of January this year, data privacy authorities had received as many as 95,100 complaints under GDPR from individuals across EU member nations. The EC said it had enforced the rules on cross-border companies — such as social media platforms — a total of 255 times since GDPR went into effect.

The biggest fine under GDPR through the end of January 2019 was one assessed against Google for 50 million euros ($56 million) for failing to get consent from users before displaying ads. The only two other GDPR-related fines assessed by then were a 20,000-euro penalty against a German social network operator and a 5,280-euro penalty against an Austrian cafe for unlawful video surveillance.

Daniel Barber, co-founder and CEO at DataGrail, a privacy management platform, says GDPR has spurred calls for similar mandates in other parts of the world, including the US, where as many as 10 states are considering privacy reforms. “It is undeniable that GDPR has been a catalyst and has served as a template for a worldwide wave of impending privacy regulation,” Barber says.

Even California’s Consumer Privacy Act (CCPA) — which becomes effective early next year — has similarities to GDPR, although it was motivated by very different reasons, Barber notes. For instance, both statutes provide for increased data control and transparency for consumers. However, CCPA, which was spawned in the wake of the Facebook/Cambridge Analytica data-sharing scandal, also includes other requirements around how organizations share consumer data, he notes.

Hard Work Ahead
For all the attention that GDPR has focused on data privacy, most organizations are nowhere close to having data architectures that integrate privacy by design or privacy by default.

For example, despite GDPR’s right-to-be-forgotten clause, many companies continue to amass data that is no longer needed, a survey by Varonis found earlier this year.

The security vendor discovered that, on average, a stunning 72% of folders — representing over 50% of a company’s data — are stale. Ninety-five percent of the companies in the Varonis survey had 100,000 or more files containing stale data on employees and customers, heightening the risk of noncompliance with GDPR’s right-to-be-forgotten clause, among other things.
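Varonis does not describe its methodology in the article, but a first-pass inventory of stale data can be as simple as walking a file share and flagging anything untouched for longer than a chosen retention period. The sketch below does exactly that; the one-year threshold and the share path are illustrative assumptions, and modification time alone is a crude proxy that a real review would supplement with access history, content classification, and legal holds.

```python
import os
import time

STALE_AFTER_DAYS = 365  # illustrative retention threshold, not a GDPR-mandated figure

def find_stale_files(root, stale_after_days=STALE_AFTER_DAYS):
    """Yield files under `root` that have not been modified within the threshold."""
    cutoff = time.time() - stale_after_days * 86400
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                if os.path.getmtime(path) < cutoff:
                    yield path
            except OSError:
                continue  # file vanished or is unreadable; skip it

# Example: list candidates for review under a hypothetical file share.
for stale_path in find_stale_files("/srv/fileshare"):
    print(stale_path)
```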

Most companies have also made progress in obtaining user consent, says Pankaj Parekh, chief product and strategy officer at SecurityFirst, but they continue to struggle with more fundamental GDPR requirements for the processing of personal data and for ensuring the security of that processing.

Organizations need a scalable way to ensure that data is handled according to an individual’s wishes and only for the purpose for which the data was collected and only for the time needed to complete the specific function for which it was collected, Parekh says. “One of the biggest problem areas from an operational point of view has been to understand and prove that protected data is always protected,” he notes. That means knowing where the data is, understanding how critical the data is, and tracking it as it moves through the enterprise, Parekh says.

The GDPR requirement that organizations only use third-party processors that can provide sufficient guarantees about their data protection measures is another thorny area, as is the need for them to maintain a record of their own processing activities, adds Panorays’ Goldman.

“Few companies have dealt effectively with some of the thorniest issues, including the accountability demanded by the regulation with regard to third-party data processors and the requirement for notification to the supervisory authority within 72 hours of a breach being discovered,” he says.

There are other issues as well. Most companies covered under the mandate are unable to provide individuals with access to their personal data as required under GDPR. A survey that Talend conducted found that 70% of companies were unable to provide access to personal data despite claiming to offer such access in their privacy notices, Franco says.

Organizations also have to deal with the cost and manpower implications of GDPR compliance. Most companies are spending at least six figures on technology and consulting services, and 25% are spending $1 million or more on compliance, Barber from DataGrail says. In addition, there are the human costs. “Most companies assigned dozens of employees to dozens of meetings while getting ready for GDPR,” Barber says, referring to a recent survey that DataGrail conducted. “Privacy management decision-makers frequently spent at least 80 hours personally preparing for GDPR.” The DataGrail survey showed that companies also had to deal with hundreds of privacy rights requests, spanning dozens of business systems and third-party services.

Much of the work to achieve and sustain compliance with GDPR requires companies to better understand which business systems hold regulated data and to update internal procedures when systems are added or begin collecting additional data, Barber says.

The EU’s relatively light enforcement of GDPR may be encouraging some organizations to hold off on major changes. Besides the $56 million fine on Google, all of the other penalties combined so far have been less than 400,000 euros, Goldman says.

One likely reason could be that GDPR enforcement is local, meaning that data regulators from each EU member state are responsible for oversight and enforcement in their country. With the exception of the UK Information Commissioner’s Office with its 500 employees, most other data regulators generally have been understaffed and therefore unable to enforce the statute more vigorously.

“If this trend continues, it will mean that companies won’t work towards GDPR compliance because they will believe that it won’t be enforced,” Goldman says. But given the sweeping privacy trends and concerns, it is unlikely regulators will let that happen, he says.

Jai Vijayan is a seasoned technology reporter with over 20 years of experience in IT trade journalism. He was most recently a Senior Editor at Computerworld, where he covered information security and data privacy issues for the publication. Over the course of his 20-year … View Full Bio

Article source: https://www.darkreading.com/attacks-breaches/gdpr-drives-changes-but-privacy-by-design-proves-elusive/d/d-id/1334729?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Supreme Court says secret UK spy court’s judgments can be overruled after all

Britain’s Supreme Court said today that rulings from a secretive UK spy tribunal can now be appealed against after a legal challenge from pressure group Privacy International.

Decisions of the Investigatory Powers Tribunal (IPT), which rules on legal cases involving surveillance powers and the British spy agencies (MI5, MI6 and GCHQ), were previously immune from being appealed against. If you didn’t like the IPT’s verdict, tough – there was no way of asking a more senior judge to take a second look at it.

Section 67(8) of the Regulation of Investigatory Powers Act meant that, unlike any other law court in the UK, decisions of the IPT “shall not be subject to appeal or be liable to be questioned in any court”.

Lord Carnwath, one of the Supreme Court judges who delivered the majority verdict of the five-strong judicial panel, ruled today that even with that legal wording, decisions that were “legally invalid” could still be questioned. The common law, he said, has a strong presumption against “ouster”, which is the legal idea of stopping the High Court from overturning junior courts’ judgments by having its own judges review them.

“I am unimpressed by arguments based on the security issues involved in many (though not all) of the IPT’s cases,” said Lord Carnwath. “As this case shows, the tribunal itself is able to organise its procedures to ensure that a material point of law can be considered separately without threatening any security interests.”

The judge also commented in an aside in the 113-page judgment that MPs cannot make laws that stop the High Court from enforcing the law: “Parliament cannot entrust a statutory decision-making process to a particular body, but then leave it free to disregard the essential requirements laid down by the rule of law for such a process to be effective.”

Privacy International barrister Dinah Rose QC successfully argued that, contrary to what the spy agencies’ lawyers said, the ordinary courts have plenty of ways of protecting secret and sensitive material when carrying out judicial reviews.

Megan Goulding, a lawyer working for PI’s fellow pressure group Liberty, which intervened in support of PI, said in a statement: “Putting oversight of the intelligence agencies – with their sweeping intrusive powers under the Snooper’s Charter – beyond the review of ordinary courts, is not just undemocratic, but a sinister attempt to reduce the safeguards that protect our rights.”

What kicked the whole case off was Privacy International starting a legal challenge against GCHQ hacking, which Britain’s lax laws allow the spy agency to do more or less whenever and to whomever it pleases with few meaningful controls. When PI lost in the IPT, it tried to take the matter to judicial review in the High Court, only for the spy agencies to pull out the section 67(8) trump card. Both the High Court and the Court of Appeal agreed with the spy agencies, leaving PI to take its fight to the Supreme Court.

Dissenting from Lord Carnwath were Lords Sumption and Wilson, who said section 67(8) was clear and that Parliament had obviously intended to ensure the IPT could not be judicially reviewed or otherwise appealed against.

You can read the full judgment here (PDF). ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/05/15/supreme_court_ipt_judicial_reviews_green_light/

We like transparency and we’re a CA, hackers hack all night and we log all day

Let’s Encrypt has wheeled out a new certificate transparency log called Oak, which is funded for a year by the certificates arm of Sectigo (formerly known as Comodo).

As well as the obvious corporate social responsibility impact for Sectigo, it helps ease pressure on an increasingly important piece of internet security infrastructure, the firm told El Reg.

Certificate transparency logs, or CT logs, at their simplest are records of whom SSL certificates were issued to. The idea is to minimise the number of “mistakenly issued certificates or certificates that have been issued by a certificate authority (CA) that’s been compromised or gone rogue,” as the Certificate Transparency project explains.

In addition, public CT logs allow domain owners and users alike to check whether SSL certs have been issued by mistake. All of that is baked into browsers, though the basic infrastructure is still there for verification with the Mk.I human eyeball.
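For the curious, the monitoring side of certificate transparency is plain HTTP and JSON: RFC 6962 defines endpoints such as get-sth (the signed tree head) and get-entries that any client can poll. The sketch below queries a placeholder log URL; substitute the published base URL of the log you want to watch, such as one of the Oak shards.

```python
import json
from urllib.request import urlopen

# RFC 6962 defines a simple HTTP API that every public CT log exposes.
# The base URL below is a placeholder; swap in the documented URL of a real log.
LOG_BASE = "https://ct-log.example.com"

def get_signed_tree_head(log_base=LOG_BASE):
    """Fetch the log's current signed tree head: its size, timestamp and root hash."""
    with urlopen(f"{log_base}/ct/v1/get-sth") as resp:
        return json.load(resp)

if __name__ == "__main__":
    sth = get_signed_tree_head()
    print("entries logged:", sth["tree_size"])
    print("root hash     :", sth["sha256_root_hash"])
```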

There are relatively few CT log providers who can handle the extremely high volume of requests that comes with maintaining a log specified in Firefox, Chrome, Edge or IE, with Let’s Encrypt itself telling The Register it had issued “approximately half a billion certificates at this point.”

Sectigo senior fellow Tim Callan told us:

“What if one of these log providers decides to stop doing this? We’re in a deeply bad situation. If one of the log providers decides to stop doing this and someone has an outage… that seems like an untenable situation.”

He continued: “Imagine you’re standing in water up to your chin. Then compare that to standing in water up to your forehead. It’s only six inches but it makes a big difference.”

Sectigo added, in a statement: “Google Chrome requires all new certificates to be submitted to two separate logs, so multiple log options are imperative to our operation… Let’s Encrypt often issues more than one million certificates each day, so we wanted to design a CT log that is optimized for high volume.”

This is where the Oak log, freshly sponsored by Sectigo for a year, comes in. It is built on Google’s Trillian software running on AWS, with Kubernetes for container orchestration and job scheduling.

Oak has been submitted for inclusion in the approved log lists for Google Chrome and Apple Safari, Sectigo said. After 90 days of successful monitoring, “we anticipate our log will be added to these trusted lists and that change will propagate to people’s browsers with subsequent browser version releases.” ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/05/15/lets_encrypt_ct_log/

Windows 10 Migration: Getting It Right

The transition to Windows 10 doesn’t need to be a sprint. Organizations can still take advantage of the security in Windows 7 while gaining added management flexibility from the newer OS.

Organizations worldwide are still coming to grips with the migration from Windows 7 to Windows 10. Although many are already capitalizing on the transition as a chance to strengthen their overall IT and better protect endpoints for individual users, others are stalling.

Earlier this year, Microsoft announced that 184 million commercial PCs are still running Windows 7 across the world — and that’s excluding the People’s Republic of China. But as Windows 7 extended support draws to a close in 2020, it’s important for IT professionals to prepare and become better informed about the implications of the migration for their business today.

With this in mind, we’ve identified some of the key things that organizations should consider when transitioning to Windows 10.

Recognize Modern Security Challenges
Windows 10 is considered the most robust Windows operating system so far; therefore, it’s little surprise that countless organizations trust in Microsoft’s cloud-based modern management approach to facilitate heightened security and agile IT capabilities.

But mobile device management solutions mean that employees must have administrator rights to do their jobs on a daily basis — a potential security risk. So, while Microsoft is enabling organizations to deploy Windows 10 support and adopt modern management more easily, it’s important that businesses understand that the operating system alone is unable to protect businesses from evolving threats.

To protect their organizations, CSOs, CISOs, and other IT security professionals need to think more strategically when migrating to Windows 10.

For example, in a survey of 500 global IT and cybersecurity professionals last year, vulnerable endpoints were the top security concern about migrating from Windows 7 to Windows 10 for 40% of respondents. Meanwhile, respondents in all regions except the United Arab Emirates said that the biggest challenge in securing remote workers and employees who use their own devices on Windows 10 was ensuring that endpoints are secure.

These concerns are not misplaced, with many breaches arising due to employees working remotely and enjoying access to data from their own devices. To help mitigate this threat, CISOs should remove admin rights wherever possible and implement a thorough training program to ensure that employees understand why this is happening, along with the correct steps that must be taken to continually mitigate the threat of exposed endpoints.

Privilege or No Privilege?
There have been two main types of account — administrator and standard user — in every version of Windows to date, and Windows 10 is no exception. But with the knowledge that removing admin rights could mitigate 80% of all critical Microsoft vulnerabilities reported in 2017, the specific security threat that overprivileged admin users pose to their businesses is clear.

Fortunately, the removal of admin privileges from employees is relatively simple on Windows 10. However, although this process does result in improved security, it can present some usability challenges. Because many day-to-day tasks and applications require admin rights, their loss can hamper a workforce’s efficiency in carrying out their responsibilities.
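As a small illustration of the audit side of this work, a deployment or login script can check whether it is running with elevated rights before doing anything else. The sketch below uses a Windows-only shell32 call via Python's ctypes; it is an example of the kind of check involved, not a stand-in for a privilege management product.

```python
import ctypes
import sys

def running_as_admin() -> bool:
    """Windows-only check: does the current process hold administrator rights?"""
    try:
        return bool(ctypes.windll.shell32.IsUserAnAdmin())
    except AttributeError:
        # Not on Windows (no windll), so the question doesn't apply here.
        return False

if __name__ == "__main__":
    if running_as_admin():
        print("Warning: this session has admin rights; consider a standard account.")
        sys.exit(1)
    print("Running as a standard user.")
```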

This is a conundrum for businesses, which must aim for maximum security but also avoid locking too many users out of the systems they need. IT and security leaders must weigh this balancing act on a case-by-case basis and, if they do remove admin rights, ask which of their existing practices should be tweaked to avoid the challenges associated with them.

Getting the User Experience Right
Although Microsoft rolls out updates to its operating system twice yearly, its modern management still doesn’t allow for a distributed set of employees to install key applications in a secure, user-friendly way. For example, when admin rights are taken away, IT staff can have difficulties in accessing the network and helping users to install software — ultimately detracting from the overall user experience.

But IT leaders should note that the transition to Windows 10 doesn’t need to be a sprint. For example, by evaluating which devices require an upgrade, they can use previous operating systems for some areas of the business while simultaneously implementing Windows 10 for others. This will enable organizations to benefit from the security in Windows 7, for example, while also benefiting from the flexibility of newer systems.

Conclusion
The migration to Windows 10 is an opportunity for organizations worldwide to upgrade their Windows management. But it’s vital that the flexibility that the new operating system offers is balanced with measures to maintain an organization’s security against evolving threats. By thinking carefully about the points outlined in this post, IT leaders can plan a smooth transition to Windows 10.

Kevin Alexandra is an experienced Technical Consultant who has been working in the IT industry since he was 13. Kevin combines his passions of technology, learning, and sharing to help BeyondTrust customers globally navigate the ever-changing space so they can make informed, … View Full Bio

Article source: https://www.darkreading.com/endpoint/windows-10-migration-getting-it-right/a/d-id/1334611?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Two Ransomware Recovery Firms Typically Pay Hackers

Companies promising the safe return of data sans ransom payment secretly pass Bitcoin to attackers and charge clients added fees.

A new report sheds light on the practices of two US data recovery firms, Proven Data Recovery and MonsterCloud, both of which paid ransomware attackers and charged victims extra fees.

ProPublica researchers were able to trace four payments from a Bitcoin wallet controlled by Proven Data to a wallet controlled by the operators of SamSam ransomware, which caused millions of dollars in damages to cities and businesses across the US. Payments to this wallet, and another connected to the attackers, were banned by the US Treasury Department due to sanctions on Iran, explained former Proven Data employee Jonathan Storfer to researchers.

Proven Data claims to unlock ransomware victims’ data using its own technology. Storfer and an FBI affidavit say otherwise: The company instead paid ransom to obtain decryption tools. MonsterCloud, another data recovery firm that claims to employ its own recovery practices, also pays ransoms — without telling the victims, some of which are law enforcement offices.

Proven Data chief executive Victor Congionti did tell ProPublica that paying ransom “is standard procedure” at the company and that it often pays attackers at the request of clients. But Storfer explains how the company developed a relationship with the attackers and, as a result, was able to receive extensions on payment dates and even get discounts on ransoms. SamSam operators would advise their victims to contact Proven Data for help with submitting payment.

The report draws attention to a dilemma that businesses face when hit with ransomware: It’s easy to frown on paying the ransom in theory; it’s different when your data is held hostage.

It’s neither illegal to hide strategies for decrypting data nor illegal to pay attackers, the report points out. But paying ransom while pretending otherwise to a client could fall under deceptive business practices banned by the Federal Trade Commission Act, former FTC acting chairman Maureen Ohlhausen said. The FTC has not cited MonsterCloud or Proven Data, the researchers note.

Read the full report here.
Dark Reading’s Quick Hits delivers a brief synopsis and summary of the significance of breaking news events. For more information from the original source of the news item, please follow the link provided in this article. View Full Bio

Article source: https://www.darkreading.com/endpoint/two-ransomware-recovery-firms-typically-pay-hackers/d/d-id/1334721?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Facebook sues app developer Rankwave over data misuse

It sounds a lot like Facebook has gotten itself into (or encouraged, and is now pretending to be aghast about) another Cambridge Analytica-ish data privacy fiasco.

Facebook announced on Friday that it’s filed a lawsuit against a South Korean social media analytics firm called Rankwave, alleging that the company abused Facebook’s developer platform’s data and that Rankwave has refused to cooperate with the platform’s mandatory compliance audit and Facebook’s request that it delete data.

Facebook has already suspended Rankwave’s apps and any accounts associated with the company. Now it’s asking the court to make Rankwave comply with a data audit and delete whatever Facebook data it has, as well as to cough up the $9.8m it made off selling data it never should have, as Facebook tells it.

From its announcement:

By filing the lawsuit, we are sending a message to developers that Facebook is serious about enforcing our policies, including requiring developers to cooperate with us during an investigation.

The suit, filed in California Superior Court for the County of San Mateo, says that beginning around 2010, Rankwave started developing apps on Facebook’s platform in order to sell advertising and marketing analytics and models, in violation of Facebook’s policies and terms. It operated at least 30 apps on the Facebook platform, according to the complaint.

Those apps included both business-to-business (B2B) and consumer apps. Businesses, including a South Korean department store, a tourism organization and a baseball team, used the B2B apps to track and analyze activity such as Likes or comments on their Facebook pages.

As far as the consumer apps for Facebook users go, this is where it starts to sound like Cambridge Analytica (et al.), which vacuumed up users’ data without permission via what came off as innocent online quizzes.

Facebook detailed one such app from Rankwave, called the Rankwave App. For six years, up until March 2018, the app offered to analyze a Facebook user’s popularity on the platform by crunching data about the interactions they got on their posts. The analytics company claimed that the app calculated users’ “social influence score” by “evaluating your social activities” and receiving “responses from your friends.”

In other words, another seemingly fun, innocent Facebook app that was quite serious about sucking up user data for profit. The app could pull data about users’ Facebook activity that included such things as location check-ins – handy to determine that you’ve just checked in to a given place and then to target you with appropriate ads. Targeted marketing doesn’t sound as sinister as the political ads that were targeted at users with the help of Cambridge Analytica’s thisisyourdigitallife personality quiz, but it’s in the same ballpark with regards to tempting users to fork over data.

Rankwave’s site has apparently been taken down, but the Android version is still available on Google’s Play store.

Serious, or sloooooooow?

In spite of Facebook’s push to get across the notion that it’s “serious about enforcing our policies,” this lawsuit instead highlights how fast and loose it’s played with user privacy and user data.

Facebook says that it got antsy about Rankwave in June 2018, after the company had been purchased by a Korean entertainment company in May 2017 for about US $9,800,000 (11b South Korean won). For whatever reason, however, it didn’t reach out to Rankwave until January 2019.

Facebook says that as far as it can tell, starting at least as early as 2014, Rankwave allegedly stopped complying with the company’s policies about only using user data in order to enhance its app and instead started using it to line its pockets, by providing consulting to advertisers and marketers: a use that’s prohibited by Facebook Policy 6.1 on data collection and use. Some clauses from that policy:

Only use an entity’s data on behalf of the entity (i.e., only to provide services to that entity and not for your own business purposes or another entity’s purposes).

Don’t let people other than those acting on an entity’s behalf (ex: its employees) access the entity’s data.

But as TechCrunch points out, there was nothing furtive about what Rankwave was up to. It openly promoted services that blatantly flouted Facebook policies, casting doubt on how well Facebook has been policing its developers and the apps they run on Facebook Platform. Many critics are suggesting that Facebook buried the news with a late-Friday announcement of the lawsuit in order to avoid calling attention to its failures to protect user data.

One excuse after another

Facebook says it began asking Rankwave for proof that it was in compliance with its policies in January 2019. We want to hear back by 31 January, it said.

Facebook to Rankwave on January 29: “Hellooooooo? Your response is due in two days.” Response: the sound of silence.

On 13 February, Facebook sent a cease and desist letter, telling Rankwave that it was then in violation of Policy 7.9, since the company had allegedly failed to prove it was in compliance with Facebook’s policies. Tell us who got at that data, by purchase or by other means, and send us your access logs to boot, Facebook demanded. Give us the data back, delete and destroy it, and give us access to your storage devices so we can confirm it’s really erased.

On 17 February, Rankwave finally poked its head out of its shell, Facebook says. It said its CTO had resigned and that the company needed more time to respond. OK, fine, you’ve got until the 21st, Facebook said.

Rankwave’s next response, on 20 February: we didn’t violate your policies. Rankwave allegedly ignored the audit request and claimed that it hadn’t had access to any of its Facebook apps since 2018.

Wrong-o, Facebook claims: one of Rankwave’s B2B apps was chugging along until at least last month. More letters flew back and forth, and then on 25 February, Rankwave said sorry, we need nine more days: our bosses are all visiting Spain right now.

Fine, you’ve got until 9 March, Facebook responded, but that’s it, no more extensions. Well, it’s two months later, and Facebook still hasn’t received anything, it said in the suit.

Can’t fix this with (just) a fine

Facebook says that monetary damages aren’t enough to fix this. Rankwave’s “misconduct” has tarnished Facebook’s reputation, public trust and goodwill. The company has also had to spend time investigating and redressing this mischief, it says.

Facebook is seeking an injunction to keep Rankwave from accessing its platform, to get Rankwave to respond to Facebook’s requests for proof of compliance (including a forensic data audit), and to force Rankwave “to delete any and all Facebook data as appropriate after Rankwave complies with” the audit requirement.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/Vy-PQB2boV4/