
US intelligence can’t break vulnerability hoarding habit

US intelligence agencies are, according to the White House, about to become more “transparent” about the process they follow in deciding what to do with software vulnerabilities they find.

That is, deciding whether to notify vendors so the vulnerabilities can be patched, or keep them secret so they can be used to probe or attack criminal or hostile nation-state systems.

In a blog post this past Wednesday, White House Cybersecurity Coordinator Rob Joyce disclosed a new version of the highly controversial Vulnerabilities Equities Process (VEP) – the method used to decide what the government does with the bugs.

Joyce acknowledged what he called “the tension” – which is more of a ferocious debate – over letting vendors know about the vulnerabilities or hoarding them for use against “extremely capable actors whose actions might otherwise go undiscovered and unchecked.”

The challenge is to find and sustain the capability to hold rogue cyber actors at risk without increasing the likelihood that known vulnerabilities will be exploited to harm legitimate, law-abiding users of cyberspace.

The new VEP charter, he said, will reduce that tension through more transparency and better representation of the interests of all “stakeholders.”

The document is getting mixed reviews. Heather West, senior policy manager and Americas principal at Mozilla, wrote on the company blog: “we’re excited to see the White House make progress on this important issue.”

Kate Charlet, Sasha Romanosky and Bert Thompson, writing on the Lawfare blog, called it, “a long-needed step… toward increasing transparency on this controversial process.”

But Bruce Schneier, CTO of IBM Resilient Systems and a regular critic of government hoarding of software vulnerabilities, was much less impressed, calling it, “the same old policy with some new transparency measures – which I’m not sure I trust,” given that:

The devil is in the details, and we don’t know the details – and it has giant loopholes that pretty much anything can fall through.

He cited one of those giant loopholes in the language of the new VEP charter:

The United States Government’s decision to disclose or restrict vulnerability information could be subject to restrictions by partner agreements and sensitive operations. Vulnerabilities that fall within these categories will be cataloged by the originating Department/Agency internally and reported directly to the Chair of the ERB (Equities Review Board).

The details of these categories are outlined in Annex C, which is classified. Quantities of excepted vulnerabilities from each department and agency will be provided in ERB meetings to all members.

But Joyce contends that giving stakeholders a seat at the table, allowing more debate and making those who operate the VEP accountable will give citizens, “confidence in the integrity of the process that underpins decision making about discovered vulnerabilities.”

West called the White House’s move “similar” to what is proposed in the PATCH (Protecting our Ability to Counter Hacking) Act, a bipartisan bill pending in Congress that, among other things, calls for government to hold secret vulnerabilities for a much shorter time. She said the VEP charter ought to include that.

Joyce talked about a six-month window for retaining a vulnerability (the charter itself says a year), with quicker reconsideration for a particularly sensitive vulnerability or one about which there isn’t broad agreement on retaining. This reconsideration is critical: just because something is useful today doesn’t make it useful in six months – and indeed, the longer it is kept, the more likely that someone else has discovered it too.

But all the discussion about how to decide what to keep secret and what to disclose is irrelevant if you can’t keep the secret flaws secret.

Over the last few years we’ve seen several high-profile breaches of government agencies, the most prominent of which remains that of former National Security Agency (NSA) contractor Edward Snowden, who in 2013 released documents that proved the agency was spying on US citizens.

But he’s not the only one. Since the summer of 2016, a hacker group called the Shadow Brokers has been releasing a cache of top-secret NSA spying capabilities – software bugs that the agency, obviously, failed to keep secret.

And starting earlier this year, Wikileaks began releasing no-longer-secret tools used by the CIA to exploit not only foreign targets but also the technology of giants like Microsoft and Apple to enable surveillance.

Those exposed flaws have been used to damage millions of people and thousands of businesses; the WannaCry ransomware attack is just one example.

Joyce had little to say about all that, other than to acknowledge that government:

…also has an important responsibility to closely guard and protect vulnerabilities as carefully as our military services protect the traditional weapons retained to fight our nation’s wars.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/Zjd4sMdOzBM/

Amazon Echo and Google Home patched against BlueBorne threat

The Amazon Echo and Google Home are being marketed to the world as the “smart speakers” to put helpful, voice-assisted Internet of Things (IoT) AI into people’s homes.

This week we had wearying confirmation that they also, less helpfully, distribute the same security failings into people’s homes as every other device.

Specifically, Amazon and Google have quietly patched flaws in these devices to protect them against BlueBorne, a haul of eight Bluetooth security vulnerabilities reported by Armis Labs in September:

BlueBorne affects ordinary computers, mobile phones, and the expanding realm of IoT devices. The attack does not require the targeted device to be paired to the attacker’s device, or even to be set on discoverable mode.

Nobody knew Amazon and Google’s products were affected until Armis announced the following issues, which mercifully should already have been automatically patched for the Echo’s 15 million users and the Google Home’s five million users respectively:

For the Echo range:

  • A remote code execution vulnerability in the Linux kernel (CVE-2017-1000251)
  • An information leak in the SDP (Service Discovery Protocol) Server (CVE-2017-1000250)

If left unpatched, the first of these can allow an attacker to gain full control of an Echo device (demonstrated in a proof-of-concept video), while the second exposes it to what Armis described as a Heartbleed-style attack on the encryption keys used to secure wireless communication.

Updated Echo devices will be running software version 591448720, which can be checked by following the company’s instructions.

For Google Home:

  • An information leak vulnerability in the Android Bluetooth stack (CVE-2017-0785) that could be used to run a DoS (Denial of Service) attack on the device.

The updated Google Home software version is 1.28.99956 (1.28.100429 for the Home Mini). Instructions on how to find this are on Google’s support pages.

Armis has also released a Google Play Store app that will scan for devices vulnerable to BlueBorne.
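For readers who want a rough inventory of what is broadcasting nearby before worrying about specific CVEs, a minimal sketch along these lines is easy to put together. This one assumes Python and the third-party bleak package, only sees Bluetooth LE advertisers, and does not test for the BlueBorne flaws themselves:

    # A rough inventory sketch, not an official Armis or vendor tool: list the
    # Bluetooth LE devices advertising nearby so you know what might need a
    # firmware check. Assumes Python 3 and the third-party "bleak" package
    # (pip install bleak); it sees BLE advertisers only and does NOT test for
    # the BlueBorne CVEs themselves.
    import asyncio
    from bleak import BleakScanner

    async def inventory(seconds: float = 10.0) -> None:
        devices = await BleakScanner.discover(timeout=seconds)
        for dev in devices:
            # dev.address is the MAC address; dev.name may be None
            print(f"{dev.address}  {dev.name or '<unnamed>'}")

    if __name__ == "__main__":
        asyncio.run(inventory())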

In fairness to Amazon and Google, BlueBorne is a family of vulnerabilities affecting a technology used by huge numbers of Bluetooth devices across many product classes, including computers, phones and other IoT devices.

What this episode hints at is the potential damage a vulnerability in this kind of device (now being bought by businesses as well as home users) could cause, were it successfully exploited.

Take for instance last month’s glitch in Google’s Home Mini that caused a device to secretly record its owner’s conversations for two days. That was a product design issue, but it spelled out the surveillance potential of these devices.

Armis is also worried about the general sprawl of the Internet of Things itself:

Unlike in the PC and mobile world, in which two or three main OSs control the absolute majority of the market, for IoT devices, no such dominant players exist.

The point being that in a fragmented market, vendors can struggle to work out whether an issue affects them or not.

Perhaps, then, the Echo and Home are at the positive end of the spectrum because, unlike too many IoT devices, at least they can be updated without the user having to do anything. But what happens when they are declared obsolete a few years from now and their makers have moved on to greater things?

History tells us that some of the Echo and Home speakers being bought today will still be out there somewhere. The simple but troubling truth is that while these always-listening products will eventually become obsolete, their vulnerabilities will hang around indefinitely.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/yDmOel2OFgo/

User experience test tools: A privacy accident waiting to happen

Researchers working on browser fingerprinting found themselves distracted by a much more serious privacy breach: analytical scripts siphoning off masses of user interactions.

Steven Englehardt (a PhD student at Princeton), Arvind Narayanan (a Princeton assistant professor) and Gunes Acar (KU Leuven) published their study at Freedom to Tinker last week. Their key finding is that session replay scripts are indiscriminate in what they scoop up, user permission is absent, and there’s evidence that the data isn’t always handled securely.

Session replay is a popular user experience tool: it lets a publisher watch users navigating its site to work out why they leave and what needs improving.

As the authors wrote in their analysis: “These scripts record your keystrokes, mouse movements, and scrolling behavior, along with the entire contents of the pages you visit, and send them to third-party servers. Unlike typical analytics services that provide aggregate statistics, these scripts are intended for the recording and playback of individual browsing sessions, as if someone is looking over your shoulder.”

Speaking to Vulture South, Englehardt said the trio decided to analyse fingerprinting by injecting a unique value into Web pages to see where personal information was being sent.

“We didn’t really expect to find” the session replay companies, he said.
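That “plant a unique value and watch where it travels” approach is straightforward to approximate with off-the-shelf browser automation. A minimal sketch – not the researchers’ actual tooling – assuming the Playwright Python package, a hypothetical signup page and a hypothetical form selector:

    # A minimal sketch of the "plant a unique value, watch where it travels"
    # technique described above -- not the researchers' actual tooling. Assumes
    # the Playwright package (pip install playwright; playwright install
    # chromium); the target URL and form selector are hypothetical.
    import uuid
    from playwright.sync_api import sync_playwright

    CANARY = uuid.uuid4().hex                  # unique value we plant in the page
    TARGET = "https://example.com/signup"      # hypothetical page with a form

    def inspect(request):
        # Flag any outgoing request that carries our canary value.
        body = request.post_data or ""
        if CANARY in request.url or CANARY in body:
            print("canary sent to:", request.url)

    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()
        page.on("request", inspect)
        page.goto(TARGET)
        page.fill("input[name=email]", f"{CANARY}@example.com")  # hypothetical selector
        page.wait_for_timeout(5000)            # give analytics scripts time to phone home
        browser.close()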

The next surprise, he said, is how deep the session replay scripts dig.

Anonymity? They’ve heard of it

“You might think these recordings are anonymous, but some of the companies we studied are offering the option to identify the user — so you know that Richard viewed your site, along with his e-mail address”, Acar told The Register.

One reason this happens, they explained, is that as publishers increasingly put content behind secured paywalls, user activity becomes hard to follow.

Englehardt said the page the user is viewing “might only exist behind the login”, meaning that to capture a session for replay to the publisher, the third-party company has to “scrape the whole page”.

The scripts they studied come from companies including Yandex, FullStory, Hotjar, UserReplay, Smartlook, Clicktale, and SessionCam – all engaged in the kind of wholesale recording described above.

They also found replay scripts capturing checkout and registration processes.

The extent of that data collection can cause “sensitive information such as medical conditions, credit card details and other personal information displayed on a page to leak to the third-party as part of the recording”, they wrote.

There is also the potential for data to leak to the outside world, when the customer views the replay, because some of the session recording companies offer their playback over unsecured HTTP.

“Even when a Website is HTTPS, and the information is sent [to the session replay company] over HTTPS, when the publisher logs in to watch the video, they watch on HTTP”, Englehardt said.

That meant network-based third parties could snoop on the replay.

Companies whose publisher dashboards were served over unsecured HTTP included Yandex, Hotjar, and Smartlook.

The study also found the session replay scripts commonly ignore user privacy settings.

The EasyList and EasyPrivacy blocking lists don’t block FullStory, Smartlook, or UserReplay scripts, although “EasyPrivacy has filter rules that block Yandex, Hotjar, ClickTale and SessionCam.”

“At least one of the five companies we studied (UserReplay) allows publishers to disable data collection from users who have Do Not Track (DNT) set in their browsers,” the study said. ®
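On the publisher side, honouring that signal is not hard. A minimal sketch, assuming Flask and a hypothetical index.html template, of gating the replay vendor’s snippet on the visitor’s DNT header:

    # A minimal sketch, assuming Flask and a hypothetical index.html template:
    # only hand the template a go-ahead for the replay vendor's <script> tag
    # when the visitor has not set the Do Not Track header.
    from flask import Flask, render_template, request

    app = Flask(__name__)

    @app.route("/")
    def index():
        dnt = request.headers.get("DNT", "0") == "1"
        # The (hypothetical) template includes the session-replay snippet only
        # when replay_allowed is True.
        return render_template("index.html", replay_allowed=not dnt)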

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/11/20/session_replay_exfiltration/

Germany slaps ban on kids’ smartwatches for being ‘secret spyware’

The German telecoms regulator has banned the sale of children’s smartwatches that allow users to secretly listen in on nearby conversations.

The move is the latest in a string of actions taken by the Federal Network Agency, or Bundesnetzagentur, against devices that allow people to snoop on each other.

The agency said the smartwatches, aimed at 5 to 12-year-olds, come with a SIM card and limited telephony functions – allowing nearby chatter to be monitored from afar, which goes against Germany’s strict anti-surveillance laws.

“The watches are regarded as unauthorised transmitting equipment,” said Bundesnetzagentur president Jochen Homann.

He added that investigations had found “that parents were using them to eavesdrop on teachers in lessons”.

The agency said it had already taken action against “several offers on the internet” and indicated that owners weren’t off the hook yet.

The agency added that where it knows who has bought such devices, it will ask those buyers for evidence that the devices have been destroyed.

“It is recommended for parents to take responsibility for destroying the devices themselves and to keep proof of this.”

For instance, they could send the device to a waste management station for destruction and then get a letter of proof, or take photos that “clearly show the destruction of the spyware in question” to show that the device is inoperative.

In February this year, the agency banned Genesis Toys’ Cayla doll as an illegal surveillance device because it comes with a microphone that captures children’s speech.

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/11/20/kids_smartwatches_branded_secret_spyware_slapped_with_ban_in_germany/

It was El Reg wot won it: Bing banishes bogus Brit bank banner ad

Microsoft has axed a Bing search result advert that masqueraded as a legit online banking website – but was in fact a sophisticated phishing operation.

Searching for “TSB” – as in the UK’s TSB Bank – on the Great Britain edition of Bing would bring up, right at the top of the page, a search ad for a phishing website described as “TSB – Welcome to TSB UK – Online Personal Account”. Clicking on the link would direct marks to a phishing page pretending to be the bank’s login portal, we’re told.

A Reg reader told us he tried to report the fraudulent ad to Microsoft, and to TSB, yet the advert remained on search result pages. So he turned to us, we prodded Redmond, and over the weekend the ad and the account that created it were blackholed. Hooray.

The Redmond software giant confirmed Monday morning, US Pacific Time, it had pulled the ad. “Following the recent alert of a phishing scam, targeting customers of TSB with the use of a fraudulent website, we can confirm Microsoft has shut down the account responsible,” Microsoft said.

“We will continue to monitor the situation for any further activity.”

Because it was a legitimate ad purchased on Bing, the link would appear as the top result ahead of the actual TSB page. What’s worse, the phishing site used a typosquatting URL and a valid SSL certificate to make the forgery appear even more authentic.
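Spotting that kind of lookalike hostname is a routine brand-monitoring job. A minimal sketch, using only the Python standard library and made-up example hostnames rather than the real phishing domain:

    # A minimal sketch of the lookalike-hostname check that would have flagged
    # this ad: anything containing the brand string that isn't on the brand's
    # own allow-list is suspect. The example hostnames are made up, not the
    # real phishing domain. Standard library only.
    BRAND = "tsb"
    ALLOWED_EXACT = {"tsb.co.uk"}
    ALLOWED_SUFFIXES = (".tsb.co.uk",)

    def suspicious(hostname: str) -> bool:
        host = hostname.lower().rstrip(".")
        if host in ALLOWED_EXACT or host.endswith(ALLOWED_SUFFIXES):
            return False                 # genuinely the bank's own domain
        return BRAND in host             # brand name on a foreign domain

    for host in ["online.tsb.co.uk", "tsb-persqnal-banking.co.uk", "example.com"]:
        print(host, "->", "suspicious" if suspicious(host) else "ok")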

Screenshot: Bing before the weekend, with the dodgy ‘persqnal’ TSB bank ad at the top of the results

Screenshot: Bing after the weekend, with the malicious ad gone and the real bank as the top search hit

“We’ve tried reporting it, but Bing makes it almost impossible to do,” our tipster told us.

“One of my colleagues typed ‘TSB’ into Internet Explorer causing a search on Bing. She nearly entered her details [into the phishing website]. The advert on the site is one of the most realistic scam sites I’ve seen, with a proper co.uk domain, an actual SSL cert, and even a mobile site.”

While El Reg gets the assist in this case, Redmond said it encourages anyone who suspects a phishing site showing up on their Bing results to report the issue to them directly:

We understand that scamming is deeply worrying for consumers and ensure we provide all necessary support if they see something of concern. If anyone witnesses any suspicious activity, we encourage them to inform us at: https://advertise.bingads.microsoft.com/en-us/resources/policies/report-spam-form. When completing, it is important that ‘phishing’ is selected in the form menu, to ensure the issue is brought to our attention promptly, so appropriate actions can be taken.

Once you have reported a dodgy banner ad to Microsoft, it wouldn’t hurt to kick a note over to [email protected] just in case the complaint gets stuck in a queue somewhere. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/11/20/microsoft_bing_dodgy_tsb_bank_ad/

New Guide for Political Campaign Cybersecurity Debuts

The Cybersecurity Campaign Playbook, created by the bipartisan Defending Digital Democracy Project (D3P), provides political campaigns with tips for securing data and accounts.

Robby Mook, Hillary Clinton’s campaign manager for her 2016 presidential bid, has helped create a new cybersecurity guide for political campaigns in his current role at the bipartisan Defending Digital Democracy Project (D3P) at Harvard Kennedy School’s Belfer Center for Science and International Affairs.

The Cybersecurity Campaign Playbook, published today, is a “living and breathing” guide to assist US political campaigns, candidates, and their families in better securing their data and online accounts, according to the D3P. The group – whose co-directors are Mook, Eric Rosenbach, and Matt Rhoades, former campaign manager for Mitt Romney – worked with Debora Plunkett, former director of the NSA Information Assurance Directorate, who gathered input from cybersecurity leaders, business executives, and attorneys for the guide.

Facebook CISO Alex Stamos, Google’s manager of information security Heather Adkins, and CrowdStrike CTO Dmitri Alperovitch were among the cybersecurity leaders who contributed to the playbook. The guide recommends, among other best practices, that campaigns institute user training, commercial cloud services, strong passwords, and an incident response plan.

“Cybersecurity is an issue that every campaign professional now needs to take seriously, but it can be daunting for people who aren’t IT professionals,” said Mook, a senior fellow with the Belfer Center. “This playbook gives candidates and campaign staff without a technical background the tools to take responsibility for their cybersecurity strategy and significantly reduce risk.”

See more information on the D3P Playbook here.


Article source: https://www.darkreading.com/risk/new-guide-for-political-campaign-cybersecurity-debuts/d/d-id/1330458?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

DDoS Attack Attempts Doubled in 6 Months

Organizations face an average of eight attempts a day, up from an average of four per day at the beginning of this year.

A rise in DDoS-for-hire services and unsecured IoT devices is driving a sharp increase in the average number of daily DDoS attack attempts.

Organizations encounter an average of eight DDoS attack attempts per day, up from four attempts a day at the start of the year, according to a newly published Corero Network Security report that tracks DDoS trends from Q2-Q3 2017.

“The growing availability of DDoS-for-hire services is causing an explosion of attacks,” said Ashley Stephenson, CEO of Corero.

Corero’s report also points to botnets such as Reaper, which leverage the growing pool of unsecured IoT devices as weapons for larger DDoS attacks, and cites a return of Ransom Denial of Service (RDoS) threats in the third quarter.

Read more about the DDoS report here.


Article source: https://www.darkreading.com/mobile/ddos-attack-attempts-doubled-in-6-months/d/d-id/1330460?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

North Korea’s Lazarus Group Evolves Tactics, Goes Mobile

The group believed to be behind the Sony breach and attacks on the SWIFT network pivots from targeted to mass attacks.

The Lazarus Group, the North Korean hacking team thought to be behind last year’s attacks on the SWIFT financial network and the devastating data breach at Sony in 2014, appears to be broadening its range of targets.

Security vendor McAfee says there are signs that the group has deviated from its usual highly targeted attacks and is now using mobile malware to potentially go after a broader, but still geographically focused, swath of victims.

Researchers at the company recently discovered a malicious Android application in the wild that looks very much like the handiwork of the Lazarus Group. The malware is disguised to look like The Bible, a legitimate Android APK from a developer called GodPeople, available on Google Play, that translates the Bible into Korean. The Lazarus Group’s malware primarily targets Android smartphone and tablet users in South Korea.

There’s little that’s remotely holy about the fake application, however: when a user downloads the APK file, it installs a backdoor on the device and effectively turns it into a remote-controlled bot.

The backdoor – in the Executable and Linkable Format (ELF) – is similar to several executable files that have previously been associated with the Lazarus Group. So, too, are the command-and-control infrastructure and the tactics and procedures associated with the new malware.

Researchers at McAfee haven’t seen the malicious application on Google Play itself, and they aren’t sure how the malware is being distributed in the wild. It’s also not clear if this is the first time that the Lazarus Group has operated on a mobile platform. But based on the code similarities between the Android malware and the group’s previous exploits, there’s little doubt that the Lazarus Group is now operating in the mobile world, McAfee says.

The evolution is significant because it means that a lot more people could potentially become victims of the group. Market research firm Statista has estimated the number of mobile users in South Korea at around 40 million this year and growing, with around 79% of those users running Android. So far, though, distribution of the malware has been very low, and it is possible that the intended target is GodPeople itself because of its history of supporting religious groups in North Korea, says Raj Samani, chief scientist at McAfee.

“GodPeople is sympathetic to individuals from North Korea, helping to produce a movie about underground church groups banned in the North,” Samani says. “Previous dealings with the Korean Information Security Agency on discoveries in the Korean peninsula have shown that religious groups are often the target of such activities in Korea.”

While this particular Android malware sample appears to be targeted purely at South Korean users, the Lazarus Group has already demonstrated its ability to strike outside of the region. The Sony attacks and the 2016 theft of tens of millions of dollars from multiple banks around the world via the SWIFT network have established Lazarus as a formidable threat actor with deep resources and nation-state backing.

The group’s evolution to mobile as an attack vector in South Korea can be easily adapted to other regions of the world, Samani says. All that the attackers need to do is use the core of the backdoor, change the command and control servers where the malware has to report, and insert it into another app. “Malicious actors are adapting their techniques,” Samani says. “As we migrate to mobile, it is likely we will see them develop mechanisms to steal the information from these platforms.”

From a design standpoint, the Android backdoor is similar to other Lazarus code samples that McAfee and others have previously analyzed. Once installed on an Android device, the malware tries to communicate with one of several command-and-control servers whose addresses it contains. The control servers are located in multiple countries including the US, India, South Korea, Argentina, and Nigeria.

Once a connection has been established, the malware collects and transfers device information to the control server and stands by to execute a series of commands.

“Once the attackers have the backdoor installed, a variety of actions can be taken on the compromised device to keep it active for a longer period of time. Many of the commands in the backdoor are related to uploading, downloading and browsing of files,” Samani notes.
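Vendor write-ups of campaigns like this one typically ship file hashes as indicators of compromise. A minimal sketch of checking a suspect APK against such a list – the hash below is a placeholder, not one of McAfee’s published indicators:

    # A minimal sketch of checking a suspect APK against published file-hash
    # indicators of compromise. The hash below is a placeholder, not one of
    # McAfee's real indicators for this campaign. Standard library only.
    import hashlib
    import sys

    KNOWN_BAD_SHA256 = {
        "0" * 64,      # placeholder -- substitute hashes from the vendor advisory
    }

    def sha256_of(path: str) -> str:
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                digest.update(chunk)
        return digest.hexdigest()

    if __name__ == "__main__":
        value = sha256_of(sys.argv[1])
        verdict = "matches a known-bad hash" if value in KNOWN_BAD_SHA256 else "no IOC match"
        print(value, verdict)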



Article source: https://www.darkreading.com/attacks-breaches/north-koreas-lazarus-group-evolves-tactics-goes-mobile/d/d-id/1330463?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Amazon to fix Key home security vulnerability

Amazon last month introduced Amazon Key: a combination of a smart lock and an internet-enabled camera called Cloud Cam. The lock gives delivery drivers access to your home so they can leave packages inside while you monitor them through the camera. At least, that’s what happens in the best case – in the worst of all possible scenarios, you could be opening up your home to random strangers.

The problem, which Amazon is thankfully fixing, is Amazon Key’s vulnerability to the easy-peasy technique of jamming the camera with a deauthentication attack.

It was discovered by security researchers at Rhino Security Labs. Ben Caudill, the founder of the Seattle-based security firm, on Wednesday posted a proof-of-concept video that shows the interior of a home; the door unlocking; a deliveryman delivering a parcel as per normal; a screen showing a gush of deauthentication commands; a paralyzed webcam image that’s frozen on the image of a nice, safely closed door; and another screen that shows the delivery guy waltzing in again, undetected and heading for your Christmas goodies.

The camera doesn’t capture the potentially nefarious second entry; nor does the Key app log it.

Caudill told Wired that this is quite the goof, given that the whole point of Amazon Key is to secure your stuff from porch pirates:

The camera is very much something Amazon is relying on in pitching the security of this as a safe solution. Disabling that camera on command is a pretty powerful capability when you’re talking about environments where you’re relying heavily on that being a critical safety mechanism.

What should happen is that delivery people lock the door with their app. But for this attack, they instead run a program on their laptop or on what Rhino’s researchers suggest could be a simple handheld device anyone can build out of a Raspberry Pi minicomputer and an antenna that sends deauthentication commands – Rhino calls them deauthorization packets – to the home’s Cloud Cam.

As Wired points out, the spoofed commands don’t exploit a bug in Cloud Cam, per se; rather, they exploit a weakness to which practically all Wi-Fi devices are vulnerable.
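Spotting such an attack in progress is possible, though, because deauthentication frames are visible to anything listening in monitor mode. A defence-only sketch, assuming Python with the scapy package and a wireless card already switched into monitor mode (the interface name “wlan0mon” is hypothetical):

    # A defence-only sketch: count 802.11 deauthentication frames and shout if
    # one sender floods a single destination. Assumes the scapy package and a
    # wireless card already switched into monitor mode; "wlan0mon" is a
    # hypothetical interface name.
    from collections import Counter
    from scapy.all import sniff
    from scapy.layers.dot11 import Dot11, Dot11Deauth

    counts = Counter()

    def watch(pkt):
        if pkt.haslayer(Dot11Deauth):
            # addr2 = transmitter of the deauth frame, addr1 = its target
            key = (pkt[Dot11].addr2, pkt[Dot11].addr1)
            counts[key] += 1
            if counts[key] == 10:        # arbitrary threshold for this sketch
                print(f"possible deauth flood: {key[0]} -> {key[1]}")

    sniff(iface="wlan0mon", prn=watch, store=False)

A handful of deauth frames is normal housekeeping on a busy network; it is the sustained flood aimed at one device that gives this attack away.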

For example, back in 2015 we saw employees of Marriott, which manages operations at the Gaylord Opryland, own up to using a Wi-Fi monitoring system to contain and/or deauthenticate packets sent to targeted access points, disrupting guests’ access to their own hotspots. The company was fined $600,000 for it and, after a fight, threw in the towel on the practice.

We also saw a guy who wanted his own, personal cone of silence get charged with a felony for using a jammer to get people on the train to stop talking on their phones during his commute.

Jamming is common, and it’s most definitely illegal: you’re tampering with a public utility. That’s why the Chicago silence-craving guy got charged with a felony (later reduced to a misdemeanor): both the public and emergency services rely on wireless access.

But while it’s a common attack, it’s unnerving that Amazon’s camera doesn’t have any protocols to respond to going offline. It just keeps showing a user the last frame it recorded before it froze, with no alarms or alerts to flag its paralysis.

Caudill:

As a partially trusted Amazon delivery person, you can compromise the security of anyone’s house you have temporary access to without any logs or entries that would be unusual or suspicious.

Amazon has told news outlets that it currently notifies customers if Cloud Cam is offline for an “extended period”.

According to CBS News, as of Friday, Amazon was planning to put a software update out “later this week,” to “more quickly provide notifications if the camera goes offline during delivery” and to make sure the “service will not unlock the door if the Wi-Fi is disabled and the camera is not online.”

Amazon also said this type of attack is “unlikely.” It’s not a security issue, in their view, and besides, they thoroughly vet their delivery drivers.

Caudill’s take: this attack costs chump change, and it’s achievable by anybody within Wi-Fi range – which certainly includes delivery people. And more to the point, the whole idea of the $249 Amazon Key package is to open people’s doors to people who specifically aren’t burglars or creeps.

Based on the simplicity of the attack, with $20 and some freely available software you can implement this yourself. It’s not a trivial attack.

Let’s hope Amazon gets this sorted soon.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/6zc4gC3A-oU/

It’s 2017, and command injection is still the top threat to web apps

The Open Web Application Security Project will on Monday, US time, reveal its annual analysis of web application risks, but The Register has sniffed out the final draft of the report and can reveal that familiar attacks still top its charts, while more exotic exploits are on the rise.

A late pre-release version of the Project’s report [PDF] compiles more than 40 data submissions from application security companies, plus the results of an industry survey of 500 respondents.

This year’s Top 10 risks in order were:


  • A1 – Injection
  • A2 – Broken Authentication
  • A3 – Sensitive Data Exposure
  • A4 – XML External Entities (XXE)
  • A5 – Broken Access Control*
  • A6 – Security Misconfiguration
  • A7 – Cross-Site Scripting (XSS)
  • A8 – Insecure Deserialization
  • A9 – Using Components with Known Vulnerabilities
  • A10 – Insufficient Logging and Monitoring

*Two 2013 entries merged
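Injection holds the top spot largely because of code that hands user input to a shell. A minimal illustration – not taken from the OWASP report itself – of the vulnerable pattern and the safer list-of-arguments alternative, in Python:

    # Not taken from the OWASP report: a minimal illustration of the pattern
    # that keeps injection at number one, and the safer alternative.
    import subprocess

    def ping_unsafe(host: str) -> str:
        # VULNERABLE: a host value like "example.com; rm -rf ~" is handed to a shell.
        return subprocess.run(f"ping -c 1 {host}", shell=True,
                              capture_output=True, text=True).stdout

    def ping_safe(host: str) -> str:
        # Safer: arguments go straight to the ping binary; no shell ever parses them.
        return subprocess.run(["ping", "-c", "1", host],
                              capture_output=True, text=True).stdout

Passing the arguments as a list means no shell is involved at all, so there is nothing for a crafted host value to inject into.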

The Project explained the three new entries in the list since 2013:

  • XML External Entity (XXE) vulnerabilities – added to the list as a result of data from source code analysis tools. Poorly configured XML parsers created a range of vulnerabilities including disclosure of internal files or shares, internal port scanning, remote code execution (RCE) and denial of service (DoS) – a hardened-parser sketch follows this list.
  • Insecure deserialisation – OWASP said this category came out of its community survey. As well as RCE, this class of vulnerability can lead to replay attacks, injection attacks, and privilege escalation.
  • Insufficient logging and monitoring – this makes it difficult for admins to detect and respond to attacks, and the Project noted it can take as long as 200 days to detect a breach.
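As promised above, here is a minimal sketch of the XXE hardening the first new entry calls for, assuming the third-party defusedxml package, which refuses to expand entity declarations rather than resolving them:

    # A minimal sketch of the XXE hardening the first entry above calls for,
    # assuming the third-party defusedxml package (pip install defusedxml),
    # which refuses to expand entity declarations instead of resolving them.
    import defusedxml.ElementTree as ET
    from defusedxml import DefusedXmlException

    UNTRUSTED = """<?xml version="1.0"?>
    <!DOCTYPE foo [<!ENTITY xxe SYSTEM "file:///etc/passwd">]>
    <foo>&xxe;</foo>"""

    try:
        root = ET.fromstring(UNTRUSTED)
    except DefusedXmlException:
        print("rejected: document declares entities (possible XXE attempt)")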

There are some good news stories in the trends from 2013 to today. Admins are wise to cross-site request forgery (CSRF), which was reported in fewer than five per cent of all apps, while unvalidated redirects and forwards were reported in less than one per cent of the data set.

OWASP also noted architectural changes that are reflected in current risks or are likely to shape future ones.

The take-up of microservices often puts old code – code that was never intended to be exposed to the outside world – behind RESTful or other APIs. “The base assumptions behind the code, such as trusted callers, are no longer valid”, the report said.

Second, the report noted the emergence of “single page applications” written with Angular or React. While these support highly functional front-ends, moving functionality from the server side to the client “brings its own security challenges”.

Finally, by way of node.js, JavaScript has become the web’s “primary language” (which could account for the rise of deserialisation risks).

The report project’s leads were Andrew van der Stock, Brian Glas, Neil Smithline, and Torsten Gigler. The final release version will be announced at the organisation’s wiki, and on its Twitter account, when it lands. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/11/20/open_web_application_security_project_2017_report/