STE WILLIAMS

Facebook shines a little light on ‘shadow profiles’

Mark Zuckerberg, CEO of supposed surveillance titan Facebook, has apparently never heard of shadow profiles.

Of all the things learned during Zuckerberg’s questioning by a succession of politicians in Congress this week, for privacy campaigners this was one of the most unexpected.

We have Congressman Ben Luján to thank for a discovery that might come to hang around Zuckerberg as he battles to save his company’s image.

Having asked Zuckerberg about the company’s practice of profiling people who had never signed up for the service, Luján said:

So, these are called shadow profiles – is that what they’ve been referred to by some?

Replied Zuckerberg:

Congressman, I’m not, I’m not familiar with that.

For anyone unsure of its meaning, shadow profiles are the data Facebook collects on people who don’t have Facebook accounts.

Zuckerberg’s ignorance was presumably limited to the term and its usage rather than the concept itself, since Facebook offers non-members the ability to request their personal data.

It seems that all web users are of interest to Facebook for security and advertising.

During the exchange Zuckerberg explained that Facebook needs to know when two or more visits come from the same non-user in order to prevent scraping:

…in general we collect data on people who have not signed up for Facebook for security purposes to prevent the kind of scraping you were just referring to … we need to know when someone is repeatedly trying to access our services

A little later he implied that non-users are also subject to data gathering for targeted advertising:

Anyone can turn off and opt out of any data collection for ads, whether they use our services or not

You can opt out of targeted advertising by Facebook and a plethora of other advertisers using the Digital Advertising Alliance’s Consumer Choice Tool, or by blocking tracking cookies with browser plugins.

While not in widespread public use, the term ‘shadow profiles’ has been kicking around privacy circles for some time, where the practice is regarded as a big deal.

In 2011, an Irish privacy group filed a complaint about shadow profiling – the collection of data including, but not limited to, email addresses, names, telephone numbers, addresses and work information from non-members.

More recently, in the latest instalment in a long-running privacy case, a Belgian court ordered Facebook to stop profiling non-members in the country or face a daily fine.

The problem of shadow profiles for Zuckerberg is that it blows a hole in some of the arguments he has used to defend the way Facebook collects data on web users, not least that it’s all about security.

But what about the large number of people who encounter Facebook somewhere and aren’t scraping anything?

This includes non-members who encounter it through the ubiquitous ‘like’ button, or by downloading Facebook-connected apps such as WhatsApp or Instagram.

On top of that are technologies such as Facebook Pixel, a web targeting system embedded on lots of third-party sites, which the company has in the past trumpeted as a clever way to serve people (including non-members) targeted ads.

As Luján pointed out, non-members won’t have signed a privacy consent form, nor would they know to delete data they weren’t even aware was being collected.

Ironically, one of the ways the world has learned how Facebook collects and analyses data on non-members was through data breaches such as the one that hit the company in 2013.

A journalist at the time summed it up rather well:

You might never join Facebook, but a zombie you – sewn together from scattered bits of your personal data – is still sitting there in sort-of-stasis on its servers waiting to be properly animated if you do sign up for the service.

So, not having a Facebook account is not an effective way to avoid its data harvesting. Facebook is always watching, analysing and learning, even when it is nowhere to be seen.

But is Facebook the only one? With just about everyone’s online business models dependent on extensive data gathering and targeted advertising, perhaps Zuckerberg might console himself with the thought that he likely won’t be the last tech executive hauled up and asked questions about this topic.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/54h-trw--C4/

From Bangkok to Phuket, they cry out: Oh, Bucket! Thai mobile operator spills 45k people’s data

TrueMove H, the biggest 4G mobile operator in Thailand, has suffered a data breach.

Personal data collected by the operator leaked into an Amazon Web Services S3 cloud storage bucket. The leaked data, which includes images of identity documents, was accessible to world+dog before the mobile operator finally acted to restrict access to the confidential files yesterday, 12 April.

The issue was uncovered by security researcher Niall Merrigan, who told us he had tried to disclose the problem to TrueMove H, but said the mobile operator had been slow to respond.

The researcher told El Reg that he’d uncovered around 45K records that collectively weighed in at around 32GB. Merrigan attempted to raise the issue with TrueMove H, but initially made little headway beyond an acknowledgement of his communication.

When he asked for the contact details of a security response staffer, representatives of the telco initially told him to ring its head office. Only some two weeks later – after El Reg began asking questions on the back of Merrigan’s findings – was he told that his concerns had been passed on.

In the meantime, other security researchers have validated his concerns.

“There were lots of driving licences and I think I saw a passport,” said security researcher Scott Helme. “I guess they have to send ID for something and the company is storing the photos in this bucket, which can be viewed by the public.”

El Reg approached TrueMove H about the incident. The mobile operator responded last month with a holding statement, saying it was investigating the matter, and we hung fire on publication until the data was no longer public-facing.

Please kindly be informed that this matter has been informed to a related team for investigation. If they have any queries or require any further information from you, they will contact [you] later.

Merrigan said the exposed data was still available up until yesterday, when it was finally made private, allowing the security researcher to go public with his findings. A blog post by Merrigan explaining the breach – and featuring redacted screenshots of the leaked identity documents – can be found here. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/04/13/thai_mobile_operator_data_breach/

When SecureRandom()… isn’t: JavaScript fingered for poking cash-spilling holes in Bitcoin wallets

Concerns about a flawed crypto library that could allow Bitcoin theft have been revived following a post to a Bitcoin mailing list last week.

David Gerard, a UK-based Unix admin and blockchain technology watcher, raised concerns in a blog post on Thursday.

“The popular JavaScript SecureRandom() library … isn’t securely random,” he wrote, pointing to an anonymous post to a Bitcoin mailing list a week ago that revisited the issue.

The post attributes the shortcomings of the code to JavaScript’s lack of type safety. A bug causes the code to fail to utilize the browser’s window.crypto API and to fall back on the cryptographically inadequate Math.random() API.
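
For readers who want to see the shape of the problem, here is a minimal TypeScript sketch – illustrative only, not the actual jsbn/SecureRandom() source, and the function names are made up – of the silent-fallback anti-pattern alongside a version that refuses to fall back:

```typescript
// Illustrative sketch of the anti-pattern: if the lookup of the browser's
// crypto API fails for any reason, keys quietly end up derived from the
// predictable Math.random().
function weakRandomBytes(length: number): Uint8Array {
  const out = new Uint8Array(length);
  const cryptoApi = (globalThis as any).crypto;
  if (cryptoApi && typeof cryptoApi.getRandomValues === "function") {
    cryptoApi.getRandomValues(out); // cryptographically strong
  } else {
    for (let i = 0; i < out.length; i++) {
      out[i] = Math.floor(Math.random() * 256); // predictable - never use for keys
    }
  }
  return out;
}

// Safer pattern: no fallback. Fail loudly if a CSPRNG isn't available.
function strongRandomBytes(length: number): Uint8Array {
  const cryptoApi = (globalThis as any).crypto;
  if (!cryptoApi || typeof cryptoApi.getRandomValues !== "function") {
    throw new Error("No cryptographically secure RNG available");
  }
  const out = new Uint8Array(length);
  cryptoApi.getRandomValues(out);
  return out;
}
```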

Via Twitter and on the mailing list, Mustafa Al-Bassam, a doctoral researcher in computer science at University College London, said that the problem lies with a pre-2013 version of jsbn, a JavaScript crypto library.

This particular crypto flaw has been publicly known since at least 2013. And Bitcoin Core developer Greg Maxwell discussed the issue during a 2015 presentation.

The perils of fallback

In response to the dustup, Filippo Valsorda, a cryptographer working for Google, advised against implementing any kind of fallback when generating keys.

Matthew Green, an assistant professor of computer science at Johns Hopkins and cryptography expert, in a phone call with The Register concurred. “Fallback is always kind of lousy idea,” he said.

Green explained that the problem with the code might extend not just to older wallet apps utilizing weak key generation but also to Bitcoin addresses generated at the time.

“If you generated your Bitcoin address using this code, you could potentially have crackable, predictable keys that could be exploited to steal money,” he said.

Green said it can be difficult to tell how browsers and apps generate keys because it’s not always apparent and there’s significant variation.

Google’s Chrome browser was affected by the issue until 2015.

The result of the subpar random number generation, Gerard says, is that crypto keys generated using this code are predictable enough to crack through brute force, in perhaps a week.

Gerard in his post declares “most web wallets” for storing cryptocurrency are affected by this flaw but doesn’t name any specific ones. But, if we’re lucky, it may be rather fewer than that.

In an email to The Register, he clarified that while he doubts anything developed recently is vulnerable, apps using keys generated back then may be.

What’s at risk?

Asked for examples, he said possibly affected digital wallets include Bitaddress (pre-2013), Bitcoinjs (pre-2014), and anything using older GitHub repos that implement SecureRandom().

Bitcoin contributor Dave Harding expressed skepticism about the motives of the person who revived the issue on the Bitcoin mailing list, pointing to the individual’s rather dubious choice of remailers and the inclusion of a Bitcoin address in the message, presumably to solicit donations.

“So, although the issue is legit (but ancient), I myself suspect this person was just out to stir up a little drama or money,” he said in an email to The Register.

As it happens, the price of Bitcoin surged on Thursday.

Harding acknowledged that some Bitcoin private keys generated in web browsers years ago are not as secure as they could be.

“Likely the least secure keys have already been compromised and the users’ funds stolen; some other keys may have been secure enough at the time but can still be compromised in the future,” he said.

He advised those with concerns to contact their wallet vendor and noted Bitcoin.org maintains a list of digital wallets without known security issues. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/04/12/javascript_crypto_library_fingered_for_weak_wallets/

Cloudflare promises to tend not two, but 65,535 ports in a storm

Cloudflare made its name proxying traffic for web servers, on network ports 80 (HTTP) and 443 (HTTPS), as a defense against denial of service attacks and their ilk.

On Thursday, the online security biz broadened its ambitions by extending its watch over the remaining possible TCP/IP network ports under IPv4.

Cloudflare introduced a service called Spectrum, saying its distributed denial of service protection, load balancing and content acceleration service now extends to 65,533 more ports.

Though the math adds up – 65,533 plus two, for ports 80 and 443, equals 65,535, covering the full spectrum of ports from 1 to 2^16 - 1 – there’s a bit of fudging here. Cloudflare previously proxied a handful of other ports beyond those used for websites, even if it only accommodated two for Cloudflare Apps.

But quibbling aside, the upshot is that all sorts of other TCP-based protocols can be shielded, shifted and sped up, at least for Cloudflare enterprise customers.

That means services running on other ports like email servers, SSH, IoT devices, and gaming servers – apart from those affiliated with neo-Nazi hate speech – can take cover behind a prophylactic proxy.

Gaming service Hypixel, the target of DDoS attacks from the Mirai botnet, has been among the organizations testing the service.

“Before Spectrum, we had to rely on unstable services and techniques that increased latency, worsening user’s experience,” Hypixel’s CTO Bruce Blair said. “Now, we’re able to be continually protected without added latency, which makes it the best option for any latency and uptime sensitive service such as online gaming.”

Making the system work proved to be a minor technical feat. The BSD sockets API underpinning Cloudflare’s edge Linux servers was not amenable to being configured to accept inbound connections on any port. The company’s engineers could have used a bind system call on each of the 65535 server ports, but the technical consequences made that option unworkable.
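
To see why, here is a rough Node/TypeScript sketch – emphatically not Cloudflare’s code, and the port range is illustrative – of the rejected one-listener-per-port approach. Each port costs a bind call and a file descriptor, which is what makes repeating this 65,535 times on every edge server unattractive:

```typescript
import * as net from "node:net";

// Rejected approach, in miniature: one listening socket per port.
const FIRST_PORT = 9000; // illustrative range; the real problem spans 1-65535
const LAST_PORT = 9010;

for (let port = FIRST_PORT; port <= LAST_PORT; port++) {
  const server = net.createServer((socket) => {
    // In a real proxy this would forward traffic on to the origin.
    socket.end(`handled on port ${port}\n`);
  });
  server.listen(port, "127.0.0.1");
}
```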

Instead, the techs used Cloudflare’s firewall to analyze IP packets and decide whether to keep them, in conjunction with the relatively obscure TPROXY iptables module to handle the socket dispatch for incoming packets.

“With its help we can perform things we thought impossible using the standard BSD sockets API, avoiding the need for any custom kernel patches,” explained Cloudflare network engineer Marek Majkowski in a blog post. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/04/13/cloudflare_will_take_any_port_in_a_storm/

‘Well intentioned lawmakers could stifle IoT innovation’, warns bug bounty pioneer

IoT security regulations could stifle innovation without addressing the security problems at hand, a well-respected security researcher controversially argues.

Compromised IoT devices were press-ganged into the Mirai botnet and infamously used in a DDoS attack that left many of the world’s most famous sites unreachable back in October 2016. The attack is exhibit one in the case for regulation against IoT device manufacturers who ship insecure kit.

Infosec luminaries such as Bruce Schneier have been pressing the case for regulation since then, but not all security researchers agree.

Katie Moussouris, founder and chief exec of Luta Security and the veteran infosec researcher who created Microsoft’s bug bounty programme, argued that there was a danger that well-intentioned lawmakers could stifle innovation.

Moussouris singled out as ill-conceived US proposals that would prohibit government agencies from buying IoT kit with known vulnerabilities.

Trying to stop the government from purchasing such kit is misconceived, particularly in the absence of agreement on what constitutes a serious bug, she said. Bugs are continually being found in all manner of devices – it’s a question of looking hard enough – so does that make everything insecure?

“Should the best practice in IoT be the same as that for general computing?” Moussouris said, citing the example of medical IoT devices that might be implanted in patients to make her point that the issue of patching, updates and default controls is more complicated than some might suggest.

Not all regulation is bad

Other participants in a panel on IoT security at CYBERUK 2018 in Manchester on Monday were more amenable to the concept of regulation, such as establishing a kite mark for IoT security in much the same way as there is already certification for electrical compatibility.

James Martin, of the British Retail Consortium, said that incentives and harms don’t line up in the case of the damage caused by the Mirai botnet and other IoT threats. Consumers with insecure devices might lose a little bandwidth on their home connections, but it is the big sites hit by denial of service attacks that are really affected.

We are in a weird scenario where thousands of insecure smart kettles can be dangerous at a national security level because they might be used to attack components of the national infrastructure, according to Martin.

Pushing against default passwords on routers may be appropriate and the government could act to create a commercial imperative for manufacturers, Martin said, before conceding that regulations were an “imprecise lever”.

Moussouris pointed out that regulating IoT devices that often can’t be patched – but don’t pose a particular risk – creates problems in itself, because it is liable to add to the landfill problem.

The debate comes as the Department for Digital, Culture, Media and Sport (DCMS) invites submissions to its Security by Design review into IoT security. The consultation is due to finish on 25 April, and experts such as Ken Munro have already expressed skepticism about whether it will result in effective sanctions, as previously reported.

DCMS representative Emma Green echoed Moussouris’ thinking in saying that the UK government “doesn’t want to hinder innovation”, adding that it regarded regulation as a “backstop”.

This line provoked a retort from noted IoT device hacker Ken Munro, a pioneer in hacking everything from smart kettles and kids’ toys to smart cars. “Security can enable IoT if done right,” Munro said. “Unfortunately, most IoT vendors don’t.” ®

Bootnote

The Internet of Things Cybersecurity Improvement Act Moussouris references would set baseline security criteria for federal procurements of connected devices. These would include the absence of hard-coded passwords and freedom from known security vulnerabilities.

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/04/12/iot_regs/

The Good, the Bad & the Disruptive: Bots on the Wild, Wild Web

Not all bots are bad — some are downright helpful — so you can’t block them entirely.

Botnets, with odd names like Mirai and Dorkbot, give bots a bad name. That being said, it’s important to understand that just because something is called a bot doesn’t mean it’s bad. We use the term to refer to almost anything from the spider programs that search engines like Google use to make a searchable index of your site to the most malicious tools hackers use to steal data. The reality is that most tools we call bots are beneficial to the workings of your site and the Internet. Intent and usage make a huge difference, even for the most helpful bots.

IT and security teams must be prepared to manage bots with their eyes open. According to Akamai’s Fourth Quarter, 2017 State of the Internet/Security Report, bot traffic accounts for more than 30% of all pure Web traffic (excluding video streaming). Web teams can’t afford to ignore the impact that bots have on their systems, nor can they block bots entirely.

There are also some bots that absolutely, positively must be defended against. Credential abuse bots, those that check an email and password against your site login page, are responsible for 43% of all login attempts, according to the Akamai data. There are literally billions of fake login attempts happening every month. And if you’re running a hotel or airline website, a significant majority (83%!) of all logins against your site are driven by malicious bots.

Background Radiation
Bots and botnets are all too easy for many organizations to ignore. They’re part of the constant noise that every site sees, something that quickly gets filtered out of what we pay attention to. But that’s part of what makes bot traffic so dangerous. Credential abuse attacks have been around since shortly after the first Telnet and Secure Shell servers were exposed to the Internet, and scanners looking for services will probably be some of the last systems to go offline decades from now.

The modern incarnations of account checkers and credential abuse bots have evolved significantly since those early days. Even a few years ago, one of the common ways to detect and protect against these tools was to use rate limits. If you saw dozens or hundreds of login attempts from a single host, you could block that host and move on. In response, bot developers have moved from single- or few-host models to using whole botnets to host their tools. They circumvent rate limits by using a few nodes of the botnet at a time against a single target, cycling through IP addresses so a single site only sees an IP once in a great while.
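
The older per-host defence is simple enough to sketch. The TypeScript below is a generic sliding-window limiter (an illustration, not any vendor’s implementation) – and it is precisely the kind of check that a botnet rotating through thousands of source IPs slips underneath:

```typescript
// Sliding-window rate limiter keyed by source IP. A botnet spreading its
// attempts across many IPs stays below maxAttempts per IP and never trips it.
class LoginRateLimiter {
  private attempts = new Map<string, number[]>(); // ip -> attempt timestamps (ms)

  constructor(
    private readonly maxAttempts = 20,
    private readonly windowMs = 60_000
  ) {}

  allow(ip: string, now: number = Date.now()): boolean {
    const recent = (this.attempts.get(ip) ?? []).filter(
      (t) => now - t < this.windowMs
    );
    recent.push(now);
    this.attempts.set(ip, recent);
    return recent.length <= this.maxAttempts;
  }
}

// Usage: reject the login (or demand a CAPTCHA) when allow() returns false.
const limiter = new LoginRateLimiter();
console.log(limiter.allow("203.0.113.7")); // true until the window fills up
```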

A few years ago, it might have taken half an hour for a system exposed to the Internet to see the first login attempts show up in the logs. In today’s world, it’s often just seconds after the system goes live that the first attacks happen. It’s not because attackers are so great that they detect your systems when they first come online. Instead, it’s because there’s so much bot traffic — acting as a kind of background radiation bouncing around without a specific target — that any system is going to be hit by random attacks from the very start.

Solving the Problem
Organizations looking to solve the problem of distributed credential abuse by these botnets can look to fraud prevention for a model. For instance, credit card companies have to look across multiple organizations for out-of-the-ordinary behavior. It sounds easy when stated like this, but the reality is that it requires visibility into traffic at a global scale and can’t be done by looking at the traffic of just one organization.

If you’ve ever received a call from your credit card company telling you that it put a hold on your account because it saw a charge coming from Ukraine just minutes after you purchased gas in Boston, you have an idea how this works. By themselves, neither the gas purchase nor the purchase from Eastern Europe are necessarily suspicious. But when intelligence from the two events is combined, a clear picture showing the impossibility of traveling from one location to the other is revealed.
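
In code, that “impossible travel” check boils down to comparing the distance between two events with the time between them. The TypeScript below is a hedged sketch – the event shape and the 900 km/h threshold are assumptions for illustration, not how any particular card issuer does it:

```typescript
interface CardEvent {
  lat: number;       // latitude in degrees
  lon: number;       // longitude in degrees
  timestamp: number; // ms since epoch
}

// Great-circle distance between two events, in kilometres (haversine formula).
function haversineKm(a: CardEvent, b: CardEvent): number {
  const R = 6371; // mean Earth radius in km
  const toRad = (d: number) => (d * Math.PI) / 180;
  const dLat = toRad(b.lat - a.lat);
  const dLon = toRad(b.lon - a.lon);
  const h =
    Math.sin(dLat / 2) ** 2 +
    Math.cos(toRad(a.lat)) * Math.cos(toRad(b.lat)) * Math.sin(dLon / 2) ** 2;
  return 2 * R * Math.asin(Math.sqrt(h));
}

// Flag pairs of events that would require travelling faster than an airliner.
function isImpossibleTravel(a: CardEvent, b: CardEvent, maxKmh = 900): boolean {
  const hours = Math.abs(b.timestamp - a.timestamp) / 3_600_000;
  if (hours === 0) return haversineKm(a, b) > 0; // two places at the same instant
  return haversineKm(a, b) / hours > maxKmh;
}
```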

Credential abuse bots can be detected in a similar manner. Traffic from an IP that hits your site once then moves on might not trigger any defenses, because a single event isn’t enough to draw conclusions. But if you can combine information from multiple organizations and see a pattern of that IP hitting a series of sites, a clear pattern of abusive behavior can be determined. In turn, this allows for signaling to other organizations, letting them deny the attacking IP automatically.
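
A bare-bones version of that cross-organization correlation might look like the following TypeScript – the data shapes and thresholds are assumptions for the sketch, and a real system would work on shared telemetry at far larger scale:

```typescript
interface LoginAttempt {
  ip: string;        // source address
  site: string;      // which participating site saw the attempt
  timestamp: number; // ms since epoch
}

// Flag IPs that quietly probe many different sites inside the window, even
// though no single site saw enough traffic from them to raise an alarm.
function flagDistributedAbusers(
  attempts: LoginAttempt[],
  minDistinctSites = 5,
  windowMs = 24 * 60 * 60 * 1000,
  now = Date.now()
): string[] {
  const sitesPerIp = new Map<string, Set<string>>();
  for (const a of attempts) {
    if (now - a.timestamp > windowMs) continue; // outside the window
    const sites = sitesPerIp.get(a.ip) ?? new Set<string>();
    sites.add(a.site);
    sitesPerIp.set(a.ip, sites);
  }
  const flagged: string[] = [];
  sitesPerIp.forEach((sites, ip) => {
    if (sites.size >= minDistinctSites) flagged.push(ip);
  });
  return flagged; // candidates to signal to other organizations
}
```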

Bad guys are getting smarter and distributing their attacks. In response, defenders have to maintain controls that rely on what similar organizations are seeing. An attack that looks like it’s just part of the background radiation of the Internet takes on an entirely different meaning when you can draw patterns from shared information. Account takeover bots try to hide their activity in the noise of the bots that you do want to have access to your site, which means you need to cut through the noise to find them.

Martin McKeay is a Senior Security Advocate at Akamai, having joined the company in 2011. As a member of Akamai’s Security Intelligence Team, he is responsible for researching security threats, customer education, and industry intelligence. With more than 15 years …

Article source: https://www.darkreading.com/threat-intelligence/the-good-the-bad-and-the-disruptive-bots-on-the-wild-wild-web/a/d-id/1331511?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Businesses Calculate Cost of GDPR as Deadline Looms

Surveys highlight the financial burden of GDPR as companies scramble to meet the May 25 deadline.

Businesses are under pressure to achieve General Data Protection Regulation compliance ahead of the May 25 deadline, only 45 days away. As they do, new research highlights the steps they’re taking to adhere to new regulations, how much work they have left, and what this wave of changes is costing them.

The bigger the enterprise, the more it’s spending on GDPR, reports Netsparker. To learn how non-EU corporations are preparing for GDPR, the Web application security firm polled 302 C-level executives at US companies. Overall, they found, businesses are taking GDPR more seriously than PCI and HIPAA; 99% are “actively involved” in the process to become compliant.

It’s an expensive process. Ten percent of Netsparker’s respondents say they will spend more than $1 million to become GDPR compliant. Nearly 24% will spend between $100,000 and $1 million, 35.8% will spend from $50,000 to $100,000, and 20% will spend between $10,000 and $50,000.

Part of the cost stems from hiring GDPR professionals. About 63% of respondents have a dedicated team addressing compliance issues; however, 28% had to hire a third-party firm or service to help out. For many, GDPR has prompted reorganizations and new employees.

Nearly 57% of businesses have re-engineered their internal systems and procedures to achieve compliance, and 47.7% have re-engineered their internal security teams. Fifty-five percent have recruited new employees specifically to handle GDPR compliance. The number of people a company needs to achieve compliance correlates with the size of the business overall.

Most (82%) of the companies surveyed have a data protection officer (DPO) on staff, and 77% plan to hire a replacement DPO before the May 25 deadline. Nearly 19% of businesses have hired more than 10 new employees, 36.8% have hired between 6 and 10, and 31.5% have hired between 2 and 5.

The Cost of Compliance

The financial concerns around GDPR are not limited to preparing for the new regulations; companies are also worried about the consequences of noncompliance. Starting May 25, all organizations that handle personal data belonging to European citizens must be GDPR compliant or risk fines of up to €20 million or 4% of their global annual revenue, whichever is higher.
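
The arithmetic of that ceiling is simple enough to show directly; the turnover figures below are purely illustrative:

```typescript
// GDPR's upper fine tier: the greater of a fixed EUR 20m or 4% of annual
// worldwide turnover.
function maxGdprFineEur(annualTurnoverEur: number): number {
  return Math.max(20_000_000, 0.04 * annualTurnoverEur);
}

console.log(maxGdprFineEur(400_000_000));   // 20000000 - the fixed EUR 20m floor applies
console.log(maxGdprFineEur(2_000_000_000)); // 80000000 - 4% of turnover dominates
```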

To learn more about global awareness around business concerns, NetApp conducted a survey of 1,106 C-suite employees, CIOs, and IT managers in the US, UK, France, and Germany. All respondents are involved with IT buying decisions and represent companies of 100+ employees.

More than half of NetApp’s respondents in the US (51.5%) and the UK (56%) believe noncompliance with GDPR could lead to reputational damage. Half of US respondents say it could lead to revenue loss. About 40% of respondents in both the US and the UK say the financial penalties of noncompliance could put their company’s survival at stake.

Organizations are torn on the impact of GDPR’s fines. In Netsparker’s survey, 54.3% of respondents say businesses will be more hesitant to report data breaches because of the fines. However, 53.6% say businesses will no longer hide data breaches as a result of GDPR.

More than two-thirds of survey respondents believe they will be fully compliant by the May 25 deadline, Netsparker reports. About half (49%) are 75% of the way through the process; another 37% are halfway there. Ten percent say they’ve only done 25% of the work.

NetApp, which polled a larger number of global respondents, had a range of responses on GDPR readiness. In the US, 23.6% of respondents have no concerns about meeting the GDPR deadline, and 39.7% are slightly concerned. In the UK, 25.9% have no concerns and 52.1% are slightly concerned. Comparatively fewer companies were extremely concerned about meeting the deadline, across the US and Europe.

However, it’s worth noting that motivation is rising as May 25 quickly approaches. In Europe, the Middle East, and Africa, NetApp reports, levels of concern have decreased by 9% over the past 15 months. A separate survey from Scale Venture Partners found the two biggest drivers of security program changes in 2017 were high-profile breaches and the GDPR’s May deadline.

Kelly Sheridan is the Staff Editor at Dark Reading, where she focuses on cybersecurity news and analysis. She is a business technology journalist who previously reported for InformationWeek, where she covered Microsoft, and Insurance Technology, where she covered financial …

Article source: https://www.darkreading.com/risk/businesses-calculate-cost-of-gdpr-as-deadline-looms/d/d-id/1331527?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Android Patches Can Skip a Beat

Researchers have found that some Android devices are skipping patches and lying about it.

When a device isn’t patched to the most current OS level, that’s bad from a security standpoint. When the device lies to you about it, claiming up-to-date software while remaining unpatched, it’s much, much worse. “Much worse” is the state many Android owners find themselves in, according to two years of research by Karsten Nohl and Jakob Lell of Security Research Labs (SRL).

Nohl and Lell found that Android patching is a crazy quilt of practices, ranging from fully up to date, to woefully behind on patches, to – in the worst cases – woefully behind while telling users that they are up to date. The problem for users is that there’s no good way to tell which camp a device falls into.

According to an article in Wired, SRL tested the firmware of 1,200 phones, from more than a dozen phone manufacturers, for every Android patch released in 2017. They found that a single vendor — Google — provided every patch for every device. All the other vendors, from a list that ranged from Samsung and Motorola to ZTE and TCL, missed at least some of the available patches. Worse, a smattering of devices from each of these vendors failed to install patches even though they told the user that software had been updated.

Now, there can be legitimate reasons for a user, whether individual or company, to skip a patch or delay its rollout. Patches may break individual corporate apps, change device or app behavior, or cause massive device slowdowns. The point is that the choice of whether to install a given patch or update rightly rests with the user, not the vendor.

There can also be legitimate reasons for a vendor to skip a patch or update. Android is an ecosystem spanning a staggering number of different hardware platforms, each of which must reach its own separate accord with changes to the operating system. If a vendor finds that a particular patch is incompatible with its hardware, then it can sit out a round and make up any security issues in later versions.

When a vendor chooses not to provide an update but revises the software date to make it appear that a patch has happened, it becomes much harder to justify the vendor’s behavior. The false sense of security the revised OS date provides is especially pernicious at a time of malware that can literally destroy a device.

There are techniques by which a user can manually check for applied updates, but such techniques require methods that many users will not be comfortable using and most enterprise IT shops will find onerous. And there’s no great way to know whether a particular device will be affected by any given patch that might be missed.
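
One such manual technique – sketched below in TypeScript for Node, on the assumption that adb is installed and USB debugging is enabled, and not the researchers’ own methodology – is simply to ask the device what patch level it claims. The catch, as SRL’s research shows, is that the claim itself may not be trustworthy:

```typescript
import { execFileSync } from "node:child_process";

// Reads the security patch level the device *claims* to have, via adb.
// ro.build.version.security_patch is a standard Android build property,
// e.g. "2018-04-05". A claimed date doesn't prove the fixes are present.
function claimedSecurityPatchLevel(): string {
  const output = execFileSync("adb", [
    "shell",
    "getprop",
    "ro.build.version.security_patch",
  ]);
  return output.toString().trim();
}

console.log(`Device claims patch level: ${claimedSecurityPatchLevel()}`);
```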

In the Wired article, Nohl touts defense in depth as the only realistic protection against the sort of vulnerabilities that may be created by a spoofed update. Defense in depth is a presumption for most corporate IT security schemes. It may well be that paranoia should be added to the toolbox if Android devices are in the pockets of corporate employees.

Curtis Franklin Jr. is Senior Editor at Dark Reading. In this role he focuses on product and technology coverage for the publication. In addition he works on audio and video programming for Dark Reading and contributes to activities at Interop ITX, Black Hat, INsecurity, and …

Article source: https://www.darkreading.com/android-patches-can-skip-a-beat/d/d-id/1331528?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Update now! Microsoft’s April 2018 Patch Tuesday – 65 vulns, 24 critical

With the Windows 10 1803 Spring Creators Update delayed at the eleventh hour for unknown reasons, admins and end users will still receive plenty of updates in the April 2018 Patch Tuesday.

The big picture is 65 security fixes assigned CVE numbers, 23 of which (plus a separate Adobe Flash flaw) are rated critical, with no true zero-days among them.

A critical 66th CVE on the list should already have been fixed a week ago through an emergency patch that Microsoft issued for a remote code execution (RCE) vulnerability (CVE-2018-0986) in the Microsoft Malware Protection Engine (MMPE).

Affecting Security Essentials, Intune Endpoint Protection, Windows Defender, Exchange Server 2013/2016, and Forefront Endpoint Protection 2010, this patch should have been applied automatically via MMPE itself.

A breakdown of the remaining 22 critical flaws shows:

  • Seven memory corruption vulnerabilities in the Chakra Scripting Engine (Edge’s JavaScript interpreter).
  • Five RCE flaws in Microsoft Graphics’ Windows font library.
  • Four affecting Internet Explorer.
  • Four affecting the scripting engine also used by Internet Explorer.
  • One affecting Windows 10’s Edge browser.
  • One RCE in the Windows VBScript engine.

The five font-themed flaws attracted warnings from experts, including Dustin Childs of vulnerability research company Zero Day Initiative:

Since there are many ways to view fonts – web browsing, documents, attachments – it’s a broad attack surface and attractive to attackers.

A final interesting flaw is CVE-2018-0850, rated “Important” and affecting Microsoft Outlook.

The flaw was reported by CERT/CC’s Will Dormann way back in November 2016; the update patches it, but not entirely, he said:

This update prevents automatic retrieval of remote OLE objects in Microsoft Outlook when rich text email messages are previewed. If a user clicks on an SMB link, however, this behavior will still cause a password hash to be leaked.

Spectre chip flaws

In parallel news, AMD has issued a Windows microcode update addressing the Spectre variant 2 chip flaw (CVE-2017-5715) that Naked Security covered last week in relation to older Intel microprocessors.

For Windows 10 users, this works in tandem with a Microsoft update (look for “April 2018 Windows OS updates”), installed in conjunction with each PC manufacturer’s BIOS updates. Linux mitigations were released earlier in 2018, AMD said.

TL;DR: in the four words of Naked Security’s security update mantra: patch early, patch often.

Note. The Microsoft Knowledge Base (KB) update number you see depends on your Windows version and build number. The latest Windows 10 build is 16299.371 (1709), for which the update appears as KB4093112.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/BQlJiLPJPB8/

UK defines Cyber DEFCON 1, 2 and 3, though of course doesn’t call it that

The UK government has launched a new cyber attack categorisation that is designed to improve response to incidents – sadly it doesn’t go up to 11.*

Categorisation into bands ranging from six down to one (the most severe) will span the full range of incidents, from localised attacks against individuals or SMEs up to a “national cyber emergency”.

The NCSC said it has responded to more than 800 significant incidents since October 2016, and its incident responders will now classify attacks into six specific categories rather than the previous three.

The changes, which are effective immediately, are aimed at improving consistency in incident response, as well as gearing the UK up to make better use of resources – ultimately leading to more victims receiving support.

The incident category definitions delineate which factors trigger a specific classification, which organisation responds, and what actions it would take.

Paul Chichester, the NCSC’s director of operations, told us: “This new joint approach, developed in partnership with UK law enforcement, will strengthen the UK’s ability to respond to the significant, growing and diverse cyber threats we face.

“The new system will offer an improved framework for dealing with incidents, especially as GDPR and the NIS Directive come into force shortly.”

The framework encompasses cyber incidents in all sectors of the economy, including central and local government, industry, charities, universities, schools, small businesses and individuals.

Ollie Gower, deputy director at the National Crime Agency, added: “This new framework will ensure we are using the same language to describe and prioritise cyber threats, helping us deliver an even more joined up response.

“I hope businesses and industry will be encouraged to report any cyber attacks they suffer, which in turn will increase our understanding of the cyber threat facing the UK.”

Any cyber attack which may have a national impact should be reported to the NCSC immediately. This includes cyber attacks which are likely to harm UK national security, the economy, public confidence, or public health and safety. Depending on the incident, the NCSC may be able to provide direct technical support.

People or businesses suffering from a cyber attack below the national impact threshold should contact Action Fraud, the UK’s national fraud and cyber crime reporting centre, which will respond in accordance with the new incident categorisation.

Information processed by the new framework will ultimately be used to generate a more comprehensive national picture of the cyber threat landscape.

The announcement comes on the final day of NCSC’s flagship conference CYBERUK 2018. ®

Bootnote

Disappointingly, the newly introduced classification system doesn’t go up to 11. Nor does it have a hors category, like the most difficult mountain climbs of the Tour De France. Hors signifies climbs that are “beyond categorisation”.

There’s no colour coding in the new system – so there’s no brown alert either.

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/04/12/uk_cyber_alert_revamp/