
Intel to slap hardware lock on Management Engine code to thwart downgrade attacks

Intel’s Coffee Lake and Cannon Lake x86 processors can be fortified by computer manufacturers to block, in hardware, attempts to downgrade, exploit, and potentially neuter Chipzilla’s built-in creepy Management Engine.

In June, Positive Technologies security researchers Mark Ermolov and Maxim Goryachy privately reported to Intel a trio of exploitable bugs – CVE-2017-5705, 5706, and 5707 – in the powerful Management Engine’s firmware.

Last month, in response and ahead of Ermolov and Goryachy’s public presentation of their research at Black Hat Europe, Chipzilla published eight vulnerability notices: the tech giant admitted its Management Engine (ME), Server Platform Services (SPS), and Trusted Execution Engine (TXE) could be attacked to give miscreants access to the controversial hidden administrative layer – effectively granting God-mode on the computer.

As such, patches to kill off the security holes in the code are gradually being made available for organizations and people to download and install. Unfortunately, the ME’s reliance on writeable firmware means any fixes can be reversed: miscreants can reprogram the flash chips on the motherboard to undo the changes.

It’s pretty much game over if someone can gain enough physical access to a machine to rewrite its solid-state storage, of course. However, in later revisions of its firmware, Intel may be able to thwart tools – such as me_cleaner – that forcibly neuter the Management Engine, and it may become impossible to roll back the firmware to a version that can be nuked.

A recent confidential Intel Technical Advisory posted to GitHub stated that starting with ME version 12, the chip’s Security Version Number (SVN), which gets incremented with updates to prevent rollbacks, “will be saved permanently in Field Programmable Fuses (FPFs) as a means to mitigate physically downgrading Intel ME [firmware] to a lower SVN.”

FPFs, once set, become read-only memory (ROM) and cannot be easily altered. And the presence of this immutable value provides Intel’s security measures with a way to validate firmware versions in order to avoid a version rollback.

The cryptographic keys used to protect Intel ME data are also bound to the SVN, to deny an attacker access after a downgrade.
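
The scheme described above boils down to a monotonic version check plus SVN-bound key derivation. Here is a minimal Python sketch of that idea – not Intel’s actual code; the function names, the fused value and the key-derivation construction are illustrative assumptions:

    import hashlib
    import hmac

    FUSED_SVN = 3  # minimum SVN burned into Field Programmable Fuses; it can only ever grow

    def accept_firmware(image_svn: int) -> bool:
        """Refuse any firmware image whose SVN is below the fused minimum."""
        return image_svn >= FUSED_SVN

    def derive_data_key(root_key: bytes, svn: int) -> bytes:
        """Bind the data-protection key to the SVN, so a downgraded image
        cannot recreate the keys used by newer firmware (hypothetical derivation)."""
        return hmac.new(root_key, b"me-data-key|svn=%d" % svn, hashlib.sha256).digest()

    print(accept_firmware(2))  # False: downgrade attempt rejected
    print(accept_firmware(4))  # True: same or newer SVN accepted

Because the fused SVN can only grow, even an attacker with a flash programmer cannot make older firmware acceptable again, and data sealed under a newer SVN stays out of a downgraded image’s reach.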

According to Intel’s advisory, ME versions 8 through 11 can be physically downgraded using a flash programmer, such as DediProg, that has been connected to the chip’s flash memory.


The FPF-based protection in version 12 and onward prevents that, though there’s still room for physical tampering and fault-injection attacks.

The anti-rollback feature is disabled by default; Intel hardware partners – PC and server makers – can enable it using Intel’s Flash Image Tool (FIT) and ship the machines out to customers. Intel said it strongly recommends enabling the feature and may soon enable it by default.

In an email to The Register, Todd Weaver, founder and CEO of Purism – which makes privacy-focused Librem laptops in which the Intel Management Engine has been mostly disabled through unofficial means, mainly by wiping away a chunk of its data and activating what appears to be a hidden kill switch – said Chipzilla’s software-based anti-rollback protection can be bypassed. The proposed Management Engine version 12 hardware-based protection is better, he said, but it doesn’t change the fundamental problem with the technology.

“The ME [Management Engine] hardware still ships on all Intel CPUs; the ME firmware (where this Positive Technologies security exploit is at) is still required by Intel,” he said. “If users do not want the ME at all, there is no current Intel based CPU option.”

Weaver said his company petitioned Intel last year to sell chips without the ME and continues to advocate for that. Purism, he said, continues to work on reverse engineering the Management Engine because Intel has shown no interest in an ME-free option for its x86 processors.

“Mitigating risk with usable solutions is something Purism strives for, and currently a great way to remove this ME local access threat is by running TPM to measure the ME region, and have Coreboot + Heads to ensure the first bit can enter a proper measured boot process,” he said.
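
For the curious, “measuring the ME region” in a Coreboot + Heads setup amounts to hashing that part of the flash and folding the result into a TPM Platform Configuration Register. A rough Python sketch of the idea follows – the file name and offsets are made up, and real measurements are taken by firmware and the TPM rather than a script:

    import hashlib

    def extend_pcr(pcr: bytes, measurement: bytes) -> bytes:
        # TPM-style extend: new PCR value = SHA-256(old PCR || measurement)
        return hashlib.sha256(pcr + measurement).digest()

    def measure_region(path: str, offset: int, length: int) -> bytes:
        # Hash a slice of a dumped flash image; these offsets are purely illustrative
        with open(path, "rb") as f:
            f.seek(offset)
            return hashlib.sha256(f.read(length)).digest()

    pcr = bytes(32)  # PCRs start out as all zeroes
    pcr = extend_pcr(pcr, measure_region("flash_dump.bin", 0x3000, 0x1FD000))
    print(pcr.hex())  # compare against a known-good value to spot ME tampering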

The other option appears to be setting the ME’s HAP bit, which disables but doesn’t remove the ME, in order to comply with the US government’s High Assurance Platform (HAP) program, an NSA-developed IT security framework.

Intel did not immediately respond to a request to clarify the range of chips affected by its technical advisory. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/12/13/intel_management_engine_gets_hardwarebased_lock/

Put down the eggnog, it’s Patch Tuesday: Fix Windows boxes ASAP

Microsoft has kicked out its December batch of software security fixes, the final Patch Tuesday of 2017.

Redmond has addressed 32 CVE-listed vulnerabilities in Edge, Windows, and Office, as well as a hole in Internet Explorer last seen in the early-oughts. Get patching as soon as possible.

Leading this month’s Patch Tuesday charge is CVE-2017-11927, a bug in Windows that can be exploited by an attacker to snatch a victim’s NTLM hash, which could be cracked offline to reveal their password. A mark would have to be tricked into clicking on a link to a malicious website, SMB share, or UNC path, which would trigger exploitation via the little-used ITS protocol, a format used for serving compiled HTML help (CHM) files.

“In theory, you shouldn’t be able to access remote content using ITS outside of the Local Machine Zone thanks to a 2005 update,” explained Dustin Childs from Trend Micro’s Zero Day Initiative.

“It appears that has been circumvented by this bug, as it allows attackers who trick users into browsing to a malicious website or to malicious SMB destinations to leak info.”

As is often the case, scripting engine flaws in Microsoft Edge and Internet Explorer make up 17 of the 19 vulnerabilities rated by Microsoft as “critical” risks. Those flaws will allow remote code execution by way of a specially-crafted website: browsing a dodgy page could end up leaving you with malware or spyware on your machine.

The remaining critical issues are CVE-2017-11888, another remote code execution flaw caused by a memory corruption error in Edge, and CVE-2017-11937, the remote code execution flaw in the Malware Protection Engine that Microsoft addressed with an out-of-band patch last week.

For the second straight month, Microsoft is also patching a security bypass flaw in Device Guard (CVE-2017-11899) that lets unsigned files pass themselves off as signed. This means malicious programs can masquerade as legit software.

“This is exactly the sort of bug malware authors seek, as it allows them to have their exploit appear as a trusted file to the target,” noted Childs.

CVE-2017-11885, a remote code execution vulnerability in the Routing and Remote Access feature of RPC for servers, also raised eyebrows among security experts.

“Make sure you are patching systems that are using RRAS, and ensure it is not enabled on systems that do not require it, as disabling RRAS will protect against the vulnerability,” explained Gill Langston of Qualys.

“For that reason it is listed as exploitation less likely, but should get your attention after patching the browsers.”

Office users will want to update the suite to address a remote code execution flaw in Excel (CVE-2017-11935) – yes, an evil spreadsheet can execute arbitrary malware on your system when opened – and information disclosure vulnerabilities in PowerPoint (CVE-2017-11934) and Office (CVE-2017-11939).

Microsoft Exchange Server has received an update for a spoofing bug (CVE-2017-11932) and SharePoint Server was patched against an elevation of privilege attack (CVE-2017-11936).

Adobe, meanwhile, has just one patch to issue this month, a fix for a business logic error (CVE-2017-11305) in Flash Player that could allow for a reset of the global settings preferences file. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/12/13/december_patch_tuesday/

iOS jailbreak exploit published by Google

The story’s not quite as bad as it sounds at first – a bang-up-to-date iPhone is already safe against this exploit.

But it’s still an interesting tale, so here goes.

Google Project Zero bug-hunting expert Ian Beer recently registered an account on Twitter, and his first tweet, back on 5 December 2017, has already clocked up 752 retweets and more than 1800 likes. [2017-12-12T12:38Z]

Beer said:

If you’re interested in bootstrapping iOS 11 kernel security research keep a research-only device on iOS 11.1.2 or below. Part I (tfp0) release soon.

It turns out he was referring to exploit code that takes advantage of a vulnerability dubbed CVE-2017-13861, patched by Apple in its recent iOS 11.2 update, published on 2 December 2017.

That was the update in which Apple fixed the KRACK Wi-Fi vulnerability for users of older iPhones, having managed to patch it only for the iPhone 7 and later to start with.

It turns out, however, that KRACK wasn’t the only reason to apply the iOS 11.2 patches.

Apple wasn’t joking when it described CVE-2017-13861 in the iOS 11.2 security bulletin with these words:

Impact: An application may be able to execute arbitrary code with kernel privilege.

Description: A memory corruption issue was addressed with improved memory handling.

Beer has now gifted the jailbreaking community a proof-of-concept for this very bug, proving that it’s not just a theoretically exploitable vulnerability.

Of course, if you’ve already updated to iOS 11.2, you’ve closed this particular hole, so you’re safe against Beer’s attack code.

Jailbreakers often run a few versions behind the bleeding edge, specifically to leave known vulnerabilities open in the hope that exploits will later be found – with Apple’s strict walled garden approach to the iOS ecosystem, updates are designed to be a one-way street so that you can never later downgrade.

So, if you keep bang up to date with Apple’s patches, you’ll be more secure in general, but at the cost of future flexibility if you suddenly decide you want to join the jailbreaking scene, in a bit of a security Catch 22.

Jailbreaking has a bad name, because it’s associated not only with freedom but also with piracy, unlawful copying and the purposeful bypassing of security that was originally put in place to protect intellectual property.

For the record, we don’t recommend jailbreaking, at least for phones you use in a work environment, and indeed our Sophos Mobile Control product provides a way to keep jailbroken and otherwise non-compliant devices off your organisation’s network.

For a busy system administrator, jailbroken iPhones (and their countercultural cousins, rooted Android phones) are yet another layer of security uncertainty that’s easier to live without, especially in a world where Europe’s new GDPR framework is fast approaching.

Having said that, there are numerous perfectly good reasons for jailbreaking, such as:

  • Repurposing an old device after Apple stops supporting it.
  • Applying a third-party security fix if independent researchers get to it before Apple.
  • Enjoying yourself because, hey, it’s your phone and you paid for it out of your own after-tax income.
  • Conducting security research – like the work Ian Beer does – that requires debugging access that Apple won’t give you out of the box.

So, although we advise against jailbreaking in general, we’ll repeat what we’ve said before:

As always[…], “Patch early, patch often.”

But we nevertheless wish that Apple would come to the jailbreaking party, even though we’d continue to recommend that you avoid untrusted, off-market apps.

We suspect that Apple would benefit both the community and itself by offering an official route to jailbreaking – a route which could form the basis of independent invention and innovation in iDevice security by an interested minority.

What to do?

We said it above: patch early, patch often.

Don’t hang back in the hope of later jailbreaks unless you have a well-formed reason for doing so.

There’s also the intriguing question, “Should Google Project Zero have dropped this exploit so soon after the update?”

Ironically, keeping up to date on Apple’s iOS platform is much easier than in Google’s Android world, where hundreds of different phone vendors, suppliers and carriers all need to knit their own updates once the Android source code is patched.

Not every iOS user is up-to-date, however.

So, even though Ian Beer has done the jailbreaking and the research community a favour, Google’s proof of concept exploit could also be seen as a bit of a Christmas present to the crooks out there, giving them a vector to attack the 30%-40% of Apple iOS users who aren’t up-to-date yet.

Where do you stand on this? Let us know below…



Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/xMA7bbT2OLc/

Microsoft Dynamics 365 sandbox leaked TLS certificate’s private parts

Another day, another credential found wandering without a leash: Microsoft accidentally left a Dynamics 365 TLS certificate and private key where they could leak, and according to the discoverer, took 100 days to fix the bungle.

Matthias Gliwka, a Stuttgart-based software developer, discovered the slip while working with the cloud version of Redmond’s ERP system.

Writing at Medium, Gliwka said the TLS certificate was exposed in the Dynamics 365 sandbox environment, designed for user acceptance testing.

Unlike the development and production servers, the sandbox gives admins RDP access, and “that’s where the fun begins”.

Access from any sandbox environment yields “a valid TLS certificate for the common name *.sandbox.operations.dynamics.com and the corresponding private key — by the courtesy of Microsoft IT SSL SHA2 CA!”.

With the certificate (which can be exported with fairly basic tools) and the private key, Gliwka said that any man-in-the-middle can see user communications in the clear, and can modify that content without detection.
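
To see why a leaked wildcard certificate and key are so potent, it helps to look at what a server actually presents to clients. This standard-library Python sketch fetches and prints a host’s certificate; the tenant hostname used here is a made-up example under the wildcard named in the article:

    import socket
    import ssl

    def presented_cert(host: str, port: int = 443) -> dict:
        ctx = ssl.create_default_context()
        with socket.create_connection((host, port), timeout=10) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                return tls.getpeercert()

    # Hypothetical tenant hostname under the wildcard mentioned in the article
    cert = presented_cert("example.sandbox.operations.dynamics.com")
    print(cert["subject"])          # the common name, e.g. a wildcard entry
    print(cert["subjectAltName"])   # every hostname that certificate covers

Any name covered by the certificate’s common name or subjectAltName entries could be impersonated by whoever held the matching private key.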

Gliwka detailed extensive communications with Microsoft to explain the issue, and after his efforts to get the problem fixed proved fruitless, he contacted German tech freelancer Hanno Böck to get coverage.

Böck tried filing a bug ticket with Mozilla’s bug tracker (since browsers track which certificates are trustworthy), and that got Microsoft moving. Gliwka wrote that the hole was plugged on 5 December – quite some time after his original notification to Microsoft on 17 August. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/12/11/dynamics_365_sandbox_leaked_tls_certificates/

Argy-bargy Argies barge into Starbucks Wi-Fi with alt-coin discharges

Starbucks has joined the long and growing list of organizations that have inadvertently and silently mined alt-coins on customers’ computers for mystery miscreants.

A sharp-eyed quaffer in a branch of the frothy-coffee-flavored-milk franchise noticed something was amiss when he signed on to the cafe’s free Wi-Fi service. Sitting in a shop in Argentina’s bustling capital, Buenos Aires, this month, startup boss Noah Dinkin spotted a ten-second delay in connecting to the internet via the Wi-Fi, and that time was used to fire up a copy of Coin Hive’s Monero-mining JavaScript in his browser.

Thus, when Dinkin and his fellow latte slurpers joined the cafe’s wireless, something on the network was maliciously injecting Coin Hive’s code into their web browsers so that for at least ten seconds or so, their PCs and other devices would toil away crafting Monero coins for whoever was masterminding the scam.

Coin Hive’s software is freely available, and when run in a webpage it uses the visiting computer’s spare CPU cycles to mine the digital currency Monero – a young alt-coin that can still feasibly be crafted by laptops and handhelds. One XMR is right now worth $304.88; when we last looked, a month or so ago, it was about $90.

The idea was that, rather than rely on ad clicks and views, website owners pocket revenue by running coin-mining code in visitors’ web browsers: the extracted digital money being funneled back to webmasters via Coin Hive. However, hackers have seized the software with gusto, and are silently embedding or injecting the JavaScript into shedloads of compromised websites – from big names to little sites – and trousering all the produced cryptocurrency for themselves.
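
If you want a crude sanity check on a hotspot you don’t trust, one approach is to fetch a plain-HTTP page through it and look for known Coin Hive markers in the response. A minimal Python sketch follows; the marker strings are assumptions based on the publicly known Coin Hive and AuthedMine script names, and a determined injector could of course obfuscate its way past them:

    import urllib.request

    # Marker strings based on the publicly documented Coin Hive / AuthedMine scripts
    MARKERS = ("coinhive.min.js", "CoinHive.Anonymous", "authedmine.com")

    def looks_injected(url: str) -> bool:
        # Fetch a plain-HTTP page; an injecting hotspot can only tamper with cleartext
        with urllib.request.urlopen(url, timeout=10) as resp:
            body = resp.read().decode("utf-8", errors="replace")
        return any(marker in body for marker in MARKERS)

    print(looks_injected("http://example.com/"))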


High-profile crypto-jacking victims have included CBS’s Showtime website, the Pulitzer Prize-winning Politifact, the Ultimate Fighting Championship’s pay-per-view ufc.tv site, and Google Chrome extensions. At least 30,000 sites sneaked in Coin Hive code, and it has also started popping up on smartphones, killing battery life and overheating the handsets.

Many security tools and ad-blocking packages now routinely block Coin Hive’s JavaScript. In response, the developers of the miner created AuthedMine, a version that won’t run unless a webpage’s visitor explicitly agrees to donate their hardware and electricity. Of course, it’s still possible to use Coin Hive’s stealthy script and service.

Back to Starbucks, and the American giant said, after some argy-bargy, its Argy ISP has killed off the mining code.

“As soon as we were alerted of the situation in this specific store last week, we took swift action to ensure our internet provider resolved the issue and made the changes needed in order to ensure our customers could use Wi-Fi in our store safely,” said a Starbucks rep in a statement on Monday afternoon.

Not only should you be on the lookout for secret crypto-miners in hacked websites, you should also keep an eye out for shenanigans by network providers. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/12/12/starbucks_wifi_crypto_mining/

Kaspersky dragged into US govt’s trashcan as weaponized blockchain agile devops mulled

President Donald Trump has signed the National Defense Authorization Act for 2018, which includes a ban on products from Kaspersky Lab running in US government agencies.

Section 1634 of the law specifies that:

No department, agency, organization, or other element of the Federal Government may use, whether directly or through work with or on behalf of another department, agency, organization, or element of the Federal Government, any hardware, software, or services developed or provided, in whole or in part, by—

(1) Kaspersky Lab (or any successor entity);

(2) any entity that controls, is controlled by, or is under common control with Kaspersky Lab; or

(3) any entity of which Kaspersky Lab has majority ownership.

All of Uncle Sam’s agencies have been given until October 1, 2018, to banish Kaspersky’s wares from their systems. The US Secretary of Defense Jim Mattis has a deadline too: he has 180 days to conduct a review on how to remove Kasperskyware from government systems, and then produce a report on how to get the job done. If the Pentagon uses all that time, its guidance is going to land only about three months before the date of expected expunging, which could make life interesting.


Kaspersky Lab may laugh this one off: its stuff has already mostly been erased by some US government agencies, and it has closed its Washington DC office in anticipation of federal sales efforts being futile.

Plenty of other cyber-defense stuff

The Kaspersky ban is just one of the “cyberspace-related matters” in Section C of the act. Section 1646 calls for “a description of potential offensive and defensive cyber applications of blockchain technology and other distributed database technologies” along with “an assessment of efforts by foreign powers, extremist organizations, and criminal networks to utilize such technologies.”

Section 1633 outlines a requirement for the US president to “develop a national policy for the United States relating to cyberspace, cybersecurity, and cyber warfare” covering a list of specified areas.

There’s also a review of “the role of cyber forces in the military strategy, planning, and programming of the United States” and another review of whether US military staff have had sufficient and/or adequate cyber security training.

Section 1642 gives “the Commander of the United States Cyber Command” the job of revisiting procurement practices for cyber-tools, including “consideration of agile or iterative development practices, agile acquisition practices, and other similar best practices of commercial industry.”

The Register eagerly anticipates the USA’s future blockchain-powered, DevOps-driven cyber defence policy and will report on the various reports as they emerge. We’ve also asked Kaspersky Lab to comment on the Act and will update this story if the biz has anything of substance to say. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/12/12/us_government_bans_kaspersky/

Microsoft Azure AD Connect Flaw Elevates Employee Privilege

An improper default configuration gives employees unnecessary administrative privilege without their knowledge, making them ideal targets for hackers.

Microsoft today issued a security advisory to alert users to an improper default configuration in Azure AD Connect, which increases the number of “stealthy admins” on corporate networks and makes businesses more vulnerable to targeted attacks.

The flaw – which was discovered by Preempt researchers during a customer network review – was covered in a separate advisory from Microsoft’s monthly Patch Tuesday updates, also issued today. Microsoft released 34 security fixes in its December batch of security updates, which affect Windows, Office, Office Services and Web Apps, Exchange Server, the Microsoft Malware Protection Engine, Internet Explorer, Edge, and ChakraCore.

Microsoft’s advisory for Azure AD Connect was published for an unpatchable issue related to the security configuration settings for the Active Directory Domain Services (AD DS) account used by Azure AD Connect when syncing to a directory. Default settings often give non-administrative employees permissions they don’t need.

Preempt researchers found many employees on their customers’ networks had some type of unnecessary administrative privilege, which came from unintentional inclusion in a protected administrative group. Active Directory audit systems often miss “stealthy admins,” or admins who have higher domain privileges as a direct result of domain discretionary access control list (DACL) configuration.

“Usually, stealthy admins are created by accident,” explains Preempt researcher Yaron Ziner. Employees are granted certain access for a legitimate purpose; for example, because a piece of software requires particular privileges to install, or because someone is in charge of resetting passwords.

Several kinds of permission can give stealthy admins full domain admin power. Stealthy admins may be non-administrative users who can add users to security groups, which would let them make themselves a domain admin at any point. Another example is the ability to replicate a domain, which includes the ability to read password hashes from the domain controller.

“The more privilege an account has, the higher the risk and easier the attacker’s job is going to be,” says Preempt cofounder and CEO Ajit Sancheti. Stealthy admin accounts are often less monitored than full domain admins despite their level of privilege. If an attacker gains access, they can determine the level of privilege and use it to their advantage.

Digging further into the issue, the researchers learned businesses were prone to having more stealthy admins when they installed Microsoft Office 365 with Azure AD Connect in on-premise environments, and used Azure AD Connect to connect between on-premise and the cloud.

More than 50% of Preempt clients were affected by the flaw in the MSOL Azure AD Connect service account created when the software is installed with Express settings. Azure password sync, which is used to sync passwords between on-prem networks and cloud services, requires domain replication permissions.

“When you provision Office 365 in the organization, the first thing you need to do is sync the on-prem directory with the cloud directory,” says Ziner. When Azure AD Connect is installed, it creates a service (MSOL) account that syncs directories to read on-prem passwords. This is a “stealthy admin” account: it can access passwords but doesn’t have strong security measures.

Further, this account would not have AdminSDHolder protection, meaning other non-privileged users can reset its password and gain access. In many networks, the researchers report, the service account was a primary attack path for attackers with Account Operator permissions to increase their privilege and become full domain administrators. 

Consider a scenario in which a help desk employee has permission to reset non-admin passwords but lacks admin privileges. Because the MSOL account is part of the Built-in Users container, and the help desk team has permissions to reset passwords for that container, the help desk employee has full access to domain passwords and other higher privileges.

The discovery of MSOL as a stealthy administrator is “just one instance of a stealthy account that can lead to domain compromise,” says Sancheti.

Businesses can protect themselves from the threat by first reviewing the stealthy admins in their network. Once they know who the stealthy admins are, they should determine whether the additional permissions are necessary and, if not, remove them.
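
As a starting point for such a review, the sketch below uses the third-party ldap3 Python package to list the members of the Built-in Account Operators group, one common source of stealthy admins. The domain controller name, credentials and distinguished names are placeholders, and a thorough review would also need to walk the DACLs Preempt describes:

    from ldap3 import ALL, NTLM, Connection, Server

    # Placeholder domain controller, credentials and DNs; adjust for your own directory
    server = Server("dc01.example.local", get_info=ALL)
    conn = Connection(server, user="EXAMPLE\\auditor", password="change-me",
                      authentication=NTLM, auto_bind=True)

    conn.search("CN=Account Operators,CN=Builtin,DC=example,DC=local",
                "(objectClass=group)", attributes=["member"])
    for entry in conn.entries:
        for member in entry.member:
            print(member)  # does this account really need password-reset rights?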

Microsoft’s Security Advisory 4056318 advises admins to avoid using the Account Operators group since, by default, members of this group have reset-password permissions for objects under the Users container.

The company also recommends moving the AD DS account used by Azure AD Connect, and other privileged accounts, into an Organization Unit that is only accessible by highly trusted admins. When giving reset password permissions to specific users, limit their access to only user objects they are supposed to manage.


Kelly Sheridan is Associate Editor at Dark Reading. She started her career in business tech journalism at Insurance Technology and most recently reported for InformationWeek, where she covered Microsoft and business IT. Sheridan earned her BA at Villanova University.

Article source: https://www.darkreading.com/cloud/microsoft-azure-ad-connect-flaw-elevates-employee-privilege/d/d-id/1330616?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Security Compliance: The Less You Spend the More You Pay

The costs of complying with data protection requirements are steep, but the costs of non-compliance are even higher, a new study shows.

Like the old saying about an ounce of prevention being better than a pound of cure, complying with data protection requirements can be expensive, but the financial consequences of non-compliance can hurt a lot more.

Research firm Ponemon Institute recently interviewed 237 individuals from 53 multinational organizations on the economic impact of their compliance-related activities.

The study, sponsored by Globalscape, looked at the costs that organizations have incurred or are incurring in meeting the requirements of mandates such as the EU General Data Protection Regulation (GDPR), the Payment Card Industry Data Security Standard (PCI DSS), and the Health Insurance Portability and Accountability Act (HIPAA). The results were then compared with the findings from a 2011 Ponemon survey on the same topic. The differences were stark and telling.

Average costs of compliance have increased 43%, from around $3.5 million in 2011 to just under $5.5 million this year, while non-compliance costs surged from $9.4 million to $14.8 million during the same period.

On average, organizations found non-compliant with data protection obligations these days can expect to fork out at least 2.71 times more than it would have cost them to be compliant in the first place – roughly the ratio between the $14.8 million and $5.5 million averages above. Overall, non-compliance costs for organizations in the study ranged from $2.2 million at the low end to over $39 million at the high end.

The findings are important at a time when many organizations are under pressure to meet various compliance objectives. One of the most pressing among them is GDPR, which becomes enforceable in May. A striking 90% of the participants in the Ponemon study pointed to GDPR as the most difficult regulation to meet. A previous study this year by Dimensional Research showed that many organizations – regardless of size – expect to spend north of $1 million on GDPR compliance. More than eight in 10 expect to spend at least $100,000.

For the latest study, the Ponemon Institute considered expenses related to activities such as data protection and enforcement, audits and assessments, policy development, and training when calculating compliance costs. Non-compliance costs included those associated with business disruption and related productivity losses, fines, penalties, and settlement costs.

“The overall cost of compliance versus non-compliance was surprising,” says Peter Merkulov, chief technology officer at Globalscape. The delta between the two numbers underscores the need for enterprises to be vigilant about protecting data, he says. “The repercussions of not doing so are clearly pretty damaging from a cost perspective.”

Larry Ponemon, founder of the Ponemon Institute, adds that a data breach is not the only time non-compliance becomes an issue. “In our model, a data breach is a major source of non-compliance cost, but there are a lot of other reasons non-compliance can become an issue for an organization,” he says.

A cloud vendor that provides services to federal agencies, for instance, is obligated to ensure that government data doesn’t end up in the hands of unauthorized people. A vendor that falls short of that obligation and is found out can face a lot of issues, including fines and mandated workflow changes, even though no data breach was involved. Another example would be a security exploit that results in a denial of service. “You don’t actually lose data here, but you basically suffer a cost because you lack availability and a lot of downtime, and that’s where you can see revenue losses,” Ponemon says.

For most enterprises, the costs associated with buying and deploying data security and incident response technologies account for the bulk of their compliance-related expenditure. On average, organizations in the Ponemon and Globalscape survey spent $2 million on security technologies to meet compliance objectives. The study found that businesses today are spending, on average, about 36% more on data security technologies and 64% more on incident response tools than in 2011.

Indirect costs, such as those associated with administering a compliance program – everything from building the architecture and governance process to the salaries of people in charge of compliance, internal audits, and assessments – can add up. On average, such costs make up 40% of compliance expenditure, while direct costs such as payments to consultants and auditors typically account for another 32%. Opportunity costs – which include things like an organization’s inability to execute a business initiative because of compliance concerns – accounted for the remaining 28% in the study.

Financial companies tend to spend a lot more – $30.9 million annually – on compliance initiatives than entities in other sectors. Organizations in the industrial and energy/utilities sectors also have relatively high compliance-related expenses, at $29.4 million and $24.8 million a year respectively.

Industries that collect, store, and share some of the most sensitive data generally tend to have higher compliance costs, Merkulov says. “It would only make sense that they would need to comply with more complex regulations and put more proactive measures in place to protect and manage this data.” Transportation, technology, and healthcare are also high on the list for similar reasons.

On the other end of the scale in the Ponemon and Globalscape study were media companies, with $7.7 million in compliance costs annually.

Unsurprisingly, larger enterprises spend more on compliance – and non-compliance – than smaller organizations. But companies with fewer than 5,001 employees tend to have substantially higher per-employee costs than organizations with large headcounts.

Generally, organizations with effective security programs that spend more per employee on compliance efforts tend to spend less on costs related to non-compliance.

The same was true of centralized governance and audits: enterprises that have a centralized data governance program and conduct more regular audits generally end up spending less on compliance costs than others, the report showed.


Jai Vijayan is a seasoned technology reporter with over 20 years of experience in IT trade journalism. He was most recently a Senior Editor at Computerworld, where he covered information security and data privacy issues for the publication.

Article source: https://www.darkreading.com/risk/security-compliance-the-less-you-spend-the-more-you-pay/d/d-id/1330622?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Spies are watching… on LinkedIn

Germany’s spy agency – Bundesamt für Verfassungsschutz (BfV) – has published eight of the most active profiles it says are used on LinkedIn to contact and lure German officials for espionage purposes.

No surprises here – the young professionals the profiles portray are hot, enticing, and fake. BfV alleges that they’re just fronts used by Chinese intelligence to gather personal information about German officials and politicians.

Hans-Georg Maassen, chief at Germany’s intelligence agency (BfV), on Sunday alleged that Chinese intelligence has used LinkedIn to target at least 10,000 Germans, possibly to recruit them as informants.

Reuters quoted the BfV:

Chinese intelligence services are active on networks like LinkedIn and have been trying for a while to extract information and find intelligence sources in this way, [including seeking data on users’ habits, hobbies and political interests].

China denies it all.

Speaking in Beijing on Monday, Chinese Foreign Ministry spokesman Lu Kang said that the allegations are “completely groundless” accusations that amount to “chasing the wind and clutching at shadows.”

We hope the relevant German organizations, particularly government departments, can speak and act more responsibly, and not do things that are not beneficial to the development of bilateral relations.

The BfV identified faked profiles including:

  • “Rachel Li”, identified as a “headhunter” at “RiseHR”
  • “Alex Li”, a “Project Manager at Center for Sino-Europe Development Studies”
  • “Laeticia Chen”, a manager at the “China Center of International Politics and Economy” whose attractive photo was reportedly swiped from an online fashion catalog, according to a BfV official.

Reuters found that some of the profiles were connected to senior diplomats and politicians from several European countries, but that’s it: there’s no way to find out whether any further contact had taken place beyond initial social media “adds.”

According to the Financial Times, the BfV’s report is the result of a nine-month survey of social networks that began in January.

Maassen classified China’s work on LinkedIn as a “broad attempt to infiltrate parliaments, ministries and administrations.”

Chinese intelligence services are using new strategies of attack in the digital space. Social networks, especially LinkedIn, are being used in an ambitious manner to gather information and for recruitment.

The BfV said that establishing contact through social media has been on the agenda of foreign intelligence services for some time:

Information about habits, hobbies and even political interests can be generated with only a few clicks. Chinese intelligence agencies in particular are active on networks like LinkedIn.

According to German media reports, the Chinese intelligence services used fake profiles to contact members of the German and European parliaments, as well as senior military officials and representatives of foundations, lobby groups and consultancies.

Once contact was made, the spies would try to launch a professional exchange of views and information, followed by invitations to conferences and other events in China.

LinkedIn’s owner, Microsoft, on Monday announced that it had deleted any fake Chinese user profiles that were in violation of its Terms of Service.

How to fend off LinkedIn lusciousness

  • Don’t friend strangers. If you haven’t met someone in person, don’t accept their request to connect, even if they are a super-hot piece of crumpet.
  • Be careful what you share on social media. Work-related details are a goldmine for phishers, or potential spies.
  • Report imposter profiles. If you suspect a profile is fake, report it to LinkedIn.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/80sSWU_XMRQ/

Coinbase: don’t expect to trade your cryptocurrency at busy times

Brian Armstrong, CEO of digital currency exchange Coinbase, says he “couldn’t be more excited about the explosion of interest in digital currencies.”

But the explosion comes at a potential price, he commented in a 7 December blog post. All the excitement, largely driven by the rocketing price of Bitcoin and its brethren, was creating, “extreme volatility and stress on our systems,” he said.

And what could that stress do?

We wanted to remind customers that access to Coinbase services may become degraded or unavailable during times of significant volatility or volume. This could result in the inability to buy or sell for periods of time. Despite ongoing increases in our support capacity, our customer support response times may be delayed, especially for requests that do not involve immediate risks to customer account security.

No kidding. On the day of Armstrong’s post, that is exactly what happened. TechCrunch reported that Coinbase, “had a rough day… as the exchange buckled under the pressure of a particularly hefty day of trading.”

The site was “down for maintenance” for large chunks of the day, frustrating customers looking to buy, sell or merely access their account.

Not that it did any apparent harm to Coinbase’s popularity. TechCrunch added that also on that same day, the Coinbase app temporarily took over the No. 1 downloaded free app spot in the App Store.

Armstrong wrote that the company is responding to the growth by investing heavily in expanding trading capacity. He said Coinbase has increased its support team by 640%; launched phone support in September and increased by 40 times the number of transactions it is processing.

But he acknowledged that even that expansion would not prevent downtime. And he was long on generalities and short on specifics about that. He referred to “significant” volume or volatility, “periods of time” when customers would not be able to buy or sell, and “delays” in customer support response.

Things may get even more volatile in the coming months, given that on Sunday, Bitcoin launched on the CBOE futures exchange in Chicago, allowing investors to bet on whether its prices will rise or fall. The rival Chicago Mercantile Exchange (CME) is expected to begin listing it next week.

According to some analysts, this will “legitimize” Bitcoin. But it is difficult to predict what will happen when investors can bet on the future price of Bitcoin without actually owning any of it. As Naked Security and others have reported many times, Bitcoin and other digital currencies are not regulated by any country’s central bank, and therefore have no universally recognized exchange rate.

Indeed, the US Commodities and Futures Trading Commission (CFTC), which approved the CBOE and CME launches, also warned investors about the, “potentially high level of volatility and risk in trading these contracts.”

Or, as Philip Coggan wrote in his “Buttonwood” column in The Economist last week, “this blogger remains convinced it is a bubble.”

Indeed its exponential rise only reinforces the argument. The beauty of bitcoin is that its intrinsic value is impossible to determine and that makes any value plausible to true believers. This is not the same as saying there is no merit in electronic currencies or blockchain technology; of course there is. But the range of prices which can be found on cryptocompare shows this is a narrow, illiquid market.

All of which simply reinforces the reality that digital currency is risky by its nature. For a lot of people, when it comes to their money, boring is better (and much safer) than exciting.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/fqrfCIiMS74/