STE WILLIAMS

Apple iCloud Keychain easily slurped, ElcomSoft says

ElcomSoft, the Russia-based maker of forensic software, has managed to find a way to access the data stored in Apple’s iCloud Keychain, if Apple ID account credentials are available.

Apple’s iCloud Keychain is a remote copy of the password vault that’s optionally available to users of iOS and macOS devices.

If enabled, it can store copies of credentials for Safari websites, for services like Facebook, Twitter and LinkedIn, and for applications like Calendar, Contacts, and Mail, along with credit card numbers and Wi-Fi network data.

It serves to replicate the contents of the device-specific Keychain database, which is exposed as an app to macOS users and as an API to developers on iOS devices. It also assists with two-factor authentication.

ElcomSoft’s Phone Breaker 7.0 has gained the ability to access and decrypt iCloud Keychain data, under certain circumstances.

Phone Breaker 7.0 screen (source: ElcomSoft)

Users of Apple devices who have not enabled two-factor authentication and have not set up an iCloud Security code do not have an iCloud Keychain stored with Apple. Otherwise, the database exists in iCloud accounts, and can be accessed with an Apple ID, password, and – if the device is protected by two-factor authentication – a one-time security code.

In an email to The Register, CEO Vladimir Katalov said this capability is not the consequence of any vulnerability. Rather, it’s intended for forensic investigators and law enforcement, given that an Apple ID and a trusted device are necessary.

Katalov said this is not the exploitation of a vulnerability, and there’s nothing for Apple to patch. Rather, ElcomSoft is exposing functions that Apple has not made available – Apple does not provide any means of accessing iCloud Keychain.

Katalov said the technique works with beta releases of iOS 11 and macOS High Sierra, which Apple is expected to introduce in a month or two.

There’s no security risk for Apple customers yet, according to Katalov. However, ElcomSoft is planning to implement the ability to access the iCloud Keychain with the help of an authentication token pulled from a PC or Mac.

“That way, that will be able to get just a couple of files from suspect’s computer, and get all passwords and credit card numbers with no need to have anything else (credentials, trusted device etc), and with no traces left,” he said.

ElcomSoft in February found that it was able to recover deleted Safari browsing history data from iCloud. In November 2016, the data harvesting biz discovered Apple’s iCloud Drive was storing iPhone call logs without consent.

Apple’s iCloud Keychain has elicited interest from security researchers because it’s such a tempting target. In March, Apple fixed an iCloud Keychain vulnerability (CVE-2017-2448) that had been disclosed to the company by Alex Radocea, cofounder of Longterm Security, two months earlier.

Radocea elaborated on his findings in May and presented a more detailed account of his work at the Black Hat conference earlier this month.

An Apple spokesperson said the company was looking into ElcomSoft’s claim, but did not respond further. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/08/22/apple_icloud_keychain_easily_slurped/

Smart robots prove stupidly easy to hack for spying and murder

Robots are increasingly common in the 21st century, both on the factory floor and in the home. Their security systems, however, appear to be anything but modern and high-tech.

In March, IOActive released partial research showing that hacking a variety of industrial and home robotics systems wasn’t too difficult. Now, after the vendors have had time to patch, the firm is showing [PDF] how it is done, and the potentially lethal consequences.

When it comes to causing serious damage, industrial robots have the biggest potential for harm. They’re weighty beasts, with the ability to hit fleshy humans very hard if so programmed. The researchers found that with access to a factory network, these kinds of systems were trivially easy to hack.

For example, systems from Universal Robots were vulnerable in a variety of ways:

  • A simple stack-based buffer overflow allowed new code to be written onto the robot’s systems.
  • A key part of the operating system imposed no authentication control on the robot’s movements.
  • Units shipped with a static SSH host key that left them open to man-in-the-middle attacks.
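The static-host-key problem is easy to spot in the field: if the host keys harvested from different units are byte-identical, the vendor baked one key into the firmware. A minimal sketch of that check (hostnames and key material are invented for illustration):

```python
import hashlib

# Hypothetical host keys harvested from three separate robot units,
# e.g. via ssh-keyscan. Identical bytes across units means the vendor
# shipped one static key in the firmware image.
host_keys = {
    "robot-a.factory.local": b"ssh-rsa AAAAB3NzaC1yc2E...shared-key...",
    "robot-b.factory.local": b"ssh-rsa AAAAB3NzaC1yc2E...shared-key...",
    "robot-c.factory.local": b"ssh-rsa AAAAB3NzaC1yc2E...shared-key...",
}

def fingerprint(key_bytes: bytes) -> str:
    """Colon-separated MD5 fingerprint, as older SSH tools display it."""
    digest = hashlib.md5(key_bytes).hexdigest()
    return ":".join(digest[i:i + 2] for i in range(0, len(digest), 2))

fingerprints = {host: fingerprint(key) for host, key in host_keys.items()}

# One shared fingerprint across distinct units: anyone who extracts the
# private key from a single robot can impersonate all of them, which is
# what makes man-in-the-middle attacks practical.
if len(set(fingerprints.values())) == 1:
    print("WARNING: all units share a single SSH host key")
```

The same one-liner comparison works on real fleets once the keys are collected, which is why per-device key generation at first boot is the standard fix.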

Youtube Video

It’s unlikely that anyone would be bonkers enough to hack this robot and try to harm people – they’re static and you’d have to make them flail around like President Trump mocking a disabled reporter – but for the smart hacker there are a host of possibilities for these kinds of attacks.

For example, the Stuxnet malware managed to temporarily cripple Iran’s nuclear centrifuges by messing with speed controls on the machinery and hiding this from controllers. Now imagine similar code in an arms factory instructing the robots to make tiny but important changes to their tasks that could cripple the end product – underboring the size of a tank barrel, for example.

Alternatively, a cunning hacker could have used these now-patched flaws to shut down a production floor. If they had shorted the stock of the target company, the ensuing loss of facilities and knock-on effects could prove very profitable.

The spy inside your home

While the industrial side of things could prove expensive, it turns out that home robots also have major issues – particularly with the apps that control them. These let the researchers have a little fun making the UBTech Alpha 2 a bit stabby:

Youtube Video

Admittedly that’s not going to do anyone any harm unless they are immobilized and very thin-skinned – the robot’s too clumsy and slow. But the Alpha 2 and similar house robots like SoftBank Robotics’ Pepper and NAO designs are loaded with microphones and cameras that could allow a hacker full visual access to the owner’s home.

The chief problems with these types of robots were a lack of code signing and protection, and mobile apps that proved easy either to man-in-the-middle or to alter so that new remote-access code could reside in the main app.
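Code signing, the control these robots lacked, means the device refuses any update whose cryptographic tag doesn’t verify against a key the attacker doesn’t hold. Real firmware signing uses asymmetric signatures (the device stores only a public key); the shared-secret HMAC sketch below is a simplification that just illustrates the principle:

```python
import hashlib
import hmac

# Simplification: real vendors use public-key signatures (e.g. Ed25519).
# The signing key here is hypothetical and exists only for illustration.
SIGNING_KEY = b"device-vendor-secret"

def sign_firmware(blob: bytes) -> str:
    """Vendor side: produce an authentication tag for a firmware image."""
    return hmac.new(SIGNING_KEY, blob, hashlib.sha256).hexdigest()

def verify_firmware(blob: bytes, tag: str) -> bool:
    """Device side: accept the image only if the tag verifies."""
    expected = hmac.new(SIGNING_KEY, blob, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)  # constant-time compare

update = b"\x7fELF...new-robot-firmware..."
good_tag = sign_firmware(update)

assert verify_firmware(update, good_tag)                    # genuine update
assert not verify_firmware(update + b"backdoor", good_tag)  # tampered image
```

Without a check like this, a man-in-the-middled app can push arbitrary code to the robot, which is exactly what the researchers demonstrated.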

Youtube Video

These little home spies could be used to stalk their owners’ lives, tell a burglar if anyone’s home, and possibly even open the front door for them if the machine has the dexterity. Given the expensive nature of the hardware, you’d have thought the manufacturers might have put a bit more thought into basic security measures. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/08/22/smart_robots_easy_to_hack/

Fake news: Mozilla joins the fight to stop it polluting the web

The fight against fake news has a new participant: Mozilla. The organization, which wants to keep the internet a healthy public resource, has announced its Mozilla Information Trust Initiative (MITI), which is a multi-pronged effort to keep the internet credible.

We should pronounce MITI “mighty”, according to Phillip Smith, Mozilla senior fellow for media, misinformation and trust. He explains that Mozilla started this initiative because fake news is threatening the internet ecosystem which Mozilla’s manifesto has vowed to protect.

Ecosystems can withstand some pollution, he says, which is just as well because all ecosystems have some of it. Eventually, though, the pollution reaches a tipping point. For Smith, the internet is an ecosystem, and fake information is the pollutant. He says:

The question we’re asking at Mozilla is whether it’s reaching a point where it risks tripping a positive feedback loop that’s no longer sustainable.

A multi-faceted approach

MITI will tackle fake news in several ways. It will work on products that target misinformation, both on its own and with media organizations. It will also research the spread and effects of fake news (expect some reports soon), and host “creative interventions” that seek to highlight the spread of misinformation in interesting ways. Mozilla gives the example of an augmented reality app that uses data visualization to show how fake news affects internet health.

Fake news has been a problem for years, but it has surfaced far more visibly of late. That’s in large part because of the 2016 US presidential election, says Smith.

There are big questions about the role this new form of online disinformation potentially played in influencing peoples’ opinion during a very important and divisive US election.

Tackling fake news is a daunting task with different challenges. One of them is its sheer volume. “It’s an asymmetrical problem,” says Smith. “Fake information is produced in exponentially larger quantities than debunks can be produced.”

Another is speed. Fake news spreads like wildfire, making it around the world with just a few unthoughtful clicks. Research shows that it takes far longer – between 13 and 14 hours after a fake story first appears – to stamp it out.

There have been different attempts to solve the problem. Some sites try to act as “debunk hubs” – go-to sites that act as authoritative voices when debunking fake news. Snopes, the grandma of all debunking sites, has been doing this for two decades. In India, Check4Spam is trying to halt the spread of fake news via WhatsApp. Buzzfeed launched Debunk in an attempt to out-virus viral falsehoods with stories correcting them.

Automating the fake news fight

Other organizations, already acting as fact-check hubs, are aiming for more automation. A tool from UK fact-checking organization Full Fact promises to scan newspaper headlines, live TV subtitles and parliamentary broadcasts for statements that match its existing database of facts. The goal is to debunk or confirm statements in real time. Representatives have likened it to an immune system for online fakery.
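A crude version of that claim-matching idea can be sketched with nothing more than fuzzy string comparison (the fact database and threshold below are invented; Full Fact’s actual system is far more sophisticated than this):

```python
import difflib

# Toy database of previously checked claims; entries are invented for
# illustration. A real system would hold thousands of verdicts.
fact_db = {
    "crime has fallen every year for a decade": "checked: misleading",
    "the town has 40,000 residents": "checked: true",
}

def match_claim(claim: str, db: dict, threshold: float = 0.6):
    """Return (known claim, verdict) for the closest match, or None."""
    best, best_score = None, 0.0
    for known in db:
        score = difflib.SequenceMatcher(None, claim.lower(), known).ratio()
        if score > best_score:
            best, best_score = known, score
    return (best, db[best]) if best_score >= threshold else None

# A reworded headline still matches the stored claim closely enough.
result = match_claim("Crime has fallen every year for ten years", fact_db)
```

Naive similarity like this catches rewordings but not paraphrases, which is one reason real-time fact-matching is a hard research problem rather than a solved one.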

This idea of automated immunity has found traction among the hyperscale search engine and social media sites. With the enormous power they wield on the web, these players risk being infection vectors for fake news if they don’t become part of the solution.

Twitter seems behind the curve when it comes to fake news. It has reportedly been mulling the idea of a fake news tab, but has said little on the record, other than a mid-June blog post explaining that it’s working on detecting spammy bots.

Google has rolled out its own fact-checking tool for Google News internationally. Unlike Facebook, it isn’t relying on users to tag dodgy stories. Instead, its list of 115 partner organizations will check the facts and label the stories accordingly. They won’t be checking every story, though, and Google won’t be following a set rule to counter different opinions over whether something is fake news.

That highlights another problem for fake news fighters: it isn’t always easy to spot, or quantify. Smith points out that fake news isn’t always binary. Often, the falsehoods lie on a continuum.

“Is it mostly right, but with an incorrect fact? Is it completely fabricated?” he asks, articulating the subtleties of some fake news. “So there is a range, and I think it’s hard to automate the identification or categorization of content with that nuance.”

That doesn’t mean people aren’t trying. Full Fact is one organization behind the Fake News Challenge, which organizes artificial intelligence experts to detect fake news using natural language processing and machine learning algorithms.

It’s a good effort, but Smith says that it has its shortcomings. “None of the teams were able to produce a reliable model for categorizing content that has the nuance that a human would require to discern,” he says.

From technology to literacy

With that in mind, should we be using technology to pick news stories for readers, or simply to advise them? Smith says that technology has a place, but shouldn’t overstep its bounds.

We believe that Firefox users are smart people and are capable of making these decisions or discernments themselves.

Google won’t use its fact-checking information to alter search results, but Facebook wants to use its own algorithms to alter content rankings.

The social media giant has introduced a tag that enables people to report fake news stories (although the reporting option doesn’t appear to have rolled out across all countries yet). It has partnered with third-party organizations like Snopes that support Poynter’s fact-checkers’ code.

Facebook, which already collects vast amounts of data about how you interact with its site, has vowed to watch whether reading an article makes people less likely to share it. It will fold this into its rankings, it warned.

The thing is, Facebook’s anti-fake news measures aren’t working that well. Untagged copies of fake news stories are still showing up on its site. Are we really ready to entrust our news choices to its code?

“I’m not sure technology is going to be the answer,” says Richard Sambrook, deputy head of school and director of Cardiff University’s Centre for Journalism. He argues that online users are ultimately responsible for their own media literacy.

They also need to take responsibility for their own news diets – and realise that if you only consume junk, it’s not good for your health! More seriously, we all need to protect against only seeing our own views reflected back at us in filter bubbles or echo chambers.

That’s where the other part of Mozilla’s work will come in. Alongside product partnerships, “creative interventions” and research, MITI’s other weapon in the fight against the spread of online misinformation is literacy. Says Smith:

There is evidence that online knowledge and education are incredibly important to the next billion people coming online. What is lacking right now is a web or media literacy for those people, or resources for those people to use in understanding their information environment.

It isn’t just newcomers to the web who may need some help with media literacy, other studies suggest. Stanford University’s recent research into this area suggests that young people – supposedly our savvy digital natives – are just as vulnerable as others when it comes to critical thinking about what they read and see online.

Mozilla has focused on literacy for a long time, Smith points out. Under MITI, it will develop a web curriculum to help with media literacy, and continue investing in Mission:Information, an existing curriculum aimed specifically at teens.

Targeting kids will be critical, warns Sambrook. “Awareness is a big part of the answer, but we also need to take media literacy more seriously from junior school onwards,” he says. “Investment in media literacy will take a generation or more to catch up.”

Smith also cites other resources to help increase media literacy, including the University of Washington media literacy course “Calling Bullshit”, which is available for free online. OpenSources is curating a list of credible and non-credible sources, along with its reasoning, while Full Fact has a handy checklist along with a fact-checker to help verify claims.

There are many more online resources for fact-checking, but the challenge will be getting people to use them and develop their own critical faculties, rather than relying on some opaque algorithm somewhere to make their evaluations for them.

As new fake news techniques emerge, Smith doesn’t entirely rule out the use of technology to fight it. But how we apply that technology will be critical, especially as purveyors of fake news take advantage of new techniques such as the manipulation of video using AI.

“There are pushes to create tools that identify false information created through those means,” he says, adding that AI may play a part in identifying manipulated content in the future. “That will be pretty critical very soon.”

He doesn’t rule out the idea of a common standard for uniquely hashing fake content and storing the hashes in an accessible way, much as anti-malware companies use digital fingerprinting to identify malware. Other technologies could be used to accelerate literacy, such as privately notifying a person when they have shared content later found to be fake.
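The malware-style fingerprinting Smith describes might look something like this sketch (the story text and blocklist are invented; a production system would need fuzzier, locality-sensitive hashes to survive heavier rewording):

```python
import hashlib

def content_fingerprint(text: str) -> str:
    """SHA-256 over normalised text, analogous to a malware file hash."""
    # Normalise before hashing so trivial edits (case, extra spaces)
    # don't defeat the lookup.
    normalised = " ".join(text.lower().split())
    return hashlib.sha256(normalised.encode("utf-8")).hexdigest()

# A shared blocklist of debunked stories, keyed by fingerprint.
# The entry below is invented for illustration.
known_fake = {content_fingerprint("Pope endorses candidate, sources say")}

story = "Pope  endorses   candidate, sources say"
if content_fingerprint(story) in known_fake:
    print("flag: matches a known debunked story")
```

The appeal of the scheme is the same as in anti-malware: once one fact-checker fingerprints a story, every participating platform can flag copies without re-checking it.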

Unless we get this right, the future looks dark, warns Sambrook, who describes a future in which Smith’s ecosystem is overrun with fake news, hopelessly polluted with an ocean of misinformation.

“The world is also becoming more polarised politically and less tolerant. I am afraid I see no signs of that being reversed. It may be a period, like the 1960s in the USA, where division eventually recedes, or it may end in war or civil violence. Given the disruption technology is bringing in all areas of the economy and employment, I’m not optimistic, I’m afraid.”

Technology may have a place in fighting that future, but ultimately it’s going to come down to us. Marshall McLuhan voiced it best in 1964, five years before researchers flipped the switch on the internet’s first router: “Faced with information overload, we have no alternative but pattern recognition.”


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/3IbiJ1njNXU/

‘Smart’ solar power inverters raise risk of energy grid attacks

Earlier this month, a researcher gave a presentation on a clutch of software flaws in one manufacturer’s solar power inverters that he believes could, if exploited widely enough and with clever timing, disrupt the energy grid of an entire country.

Given the dearth of research on this class of device, it’s an eye-catching if sensational claim that shouldn’t come as a total surprise in the light of recent technological developments.

Every solar power system has a wall-mounted inverter to convert DC photovoltaic (PV) power generated by solar panels into AC power that can be used by the owner or exported to the grid should any be left over.

A growing number of these come with “smart” software interfaces designed to let engineers monitor the inverter remotely while giving customers the fashionable ability to analyse their energy consumption using an app.

According to researcher Willem Westerhof, it is this software layer that creates the opening for attackers. In total, his “Horus” research identified 21 vulnerabilities (14 of which have formal CVE numbers) in inverters from German manufacturer SMA, disclosed to the company in December 2016.

Westerhof doesn’t offer detail on how they might be exploited for security reasons, but studying the CVE descriptions reveals a mixture of default and weakly secured passwords, vulnerable remote authentication, dodgy firmware updating, and even the ability to induce a denial-of-service state.

These formed the basis for a proof-of-concept black box test (ie without special knowledge of the target’s design) on the SMA products, which confirmed that an attacker could use them to compromise inverters in a range of ways.

Westerhof then modelled theoretical attacks whereby large numbers of these inverters were taken offline suddenly, preventing them from feeding electricity to a national grid without a backup power generation source being available.

Given that it is difficult to know how much power is supplied in this way at any given moment, this meant using mathematical modelling to estimate the effect of removing them. Claims Westerhof:

An attacker capable of controlling the flow of power from a large number of these devices could therefore cause peaks or dips of several GigaWatts causing massive balancing issues which may lead to large scale power outages.
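Westerhof’s arithmetic can be sanity-checked with a back-of-envelope calculation. Every figure below is hypothetical, chosen for illustration rather than taken from his research:

```python
# Hypothetical figures: neither the per-unit output nor the number of
# compromised units comes from Westerhof's modelling.
inverter_kw = 5.0            # typical rooftop PV system output, in kW
compromised_units = 800_000  # inverters an attacker switches off at once

# 1 GW = 1,000,000 kW
swing_gw = inverter_kw * compromised_units / 1_000_000
print(f"Supply swing: {swing_gw:.1f} GW")  # prints: Supply swing: 4.0 GW
```

The point of the exercise is that grid-scale effects require control of hundreds of thousands of devices acting in concert, which is why the scenario hinges on one vendor’s software dominating the installed base.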

This alarming scenario rests on a number of big assumptions, not least the prevalence of smart inverters from one company. The inverter market in most countries remains fragmented, featuring several brands and many models lacking internet capability. This means that disrupting the grid by attacking the equipment of a single vendor is probably far-fetched.

SMA told journalists that the attack scenarios could be countered by users changing passwords – which doesn’t address the fact these issues exist in the first place on such an extensive scale. None of the CVEs appear to have been patched.

Observers are left with the feeling that while the doom-laden possibilities mentioned by Westerhof are pretty exaggerated, the weak software design implied by his findings is worth knowing about.

If independent researchers don’t rummage around and find these problems before they become serious, who will? With so many industries busily adding software intelligence to once passive devices, the competence of vendors is still taken on trust. That cosy assumption might be the real story here.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/Na2AhNsgELY/

US appeals court curbs police power to seize cellphones

A US court of appeals in the DC circuit at the end of last week tossed out evidence police seized under a search warrant that sought cellphones and electronic devices without showing probable cause that the suspect actually owned any.

Most of us nowadays carry a cellphone, and those devices often contain a chronicle of our daily deeds and misdeeds. But the assumption that a suspect has a phone doesn’t make it OK to search his home, Circuit Judge Sri Srinivasan, joined by Judge Nina Pillard, wrote in a decision (PDF) on Friday.

Most of us nowadays carry a cellphone. And our phones frequently contain information chronicling our daily lives – where we go, whom we see, what we say to our friends, and the like. When a person is suspected of a crime, his phone thus can serve as a fruitful source of evidence, especially if he committed the offense in concert with others with whom he might communicate about it.

Does this mean that, whenever officers have reason to suspect a person of involvement in a crime, they have probable cause to search his home for cell phones because he might own one and it might contain relevant evidence?

The case is that of Ezra Griffith. As a 23-year-old, he’d already been convicted of attempted robbery. Police were investigating a homicide when they got tipped off that Griffith might have been involved. Actually, Griffith himself tipped them off: while he was in jail for the attempted robbery, he used prison phones – as in, the kind that record conversations – to talk to a few people about the police’s interest in his vehicle, which had apparently been caught on surveillance cameras near the scene of the shooting death.

After he got out of jail, Griffith moved in with his girlfriend. Police got a warrant to search this residence as part of the ongoing homicide investigation.

In the affidavit to get the warrant, a 22-year law enforcement veteran made the following declaration:

Based upon your affiant’s professional training and experience and your affiant’s work with other veteran police officers and detectives, I know that gang/crew members involved in criminal activity maintain regular contact with each other, even when they are arrested or incarcerated, and that they often stay advised and share intelligence about their activities through cell phones and other electronic communication devices and the internet, to include Facebook, Twitter and E-mail accounts.

Based upon the aforementioned facts and circumstances, and your affiant’s experience and training, there is probable cause to believe that secreted inside of [Lewis’s apartment] is evidence relating to the homicide discussed above.

Here’s what was left out of that search warrant affidavit: any mention of Griffith owning a cellphone, and any evidence related to the homicide that might be found on the phone. In fact, the law enforcement agent who made out the affidavit can’t have had much faith that Griffith did in fact have a phone, given that he broadened it to include any electronics that might be in the girlfriend’s apartment.

Those assertions just weren’t supported by much in the warrant, the Appeals Court pointed out.

The government’s argument in support of probable cause to search the apartment rests on the prospect of finding one specific item there: a cellphone owned by Griffith. Yet the affidavit supporting the warrant application provided virtually no reason to suspect that Griffith in fact owned a cellphone, let alone that any phone belonging to him and containing incriminating information would be found in the residence. At the same time, the warrant authorized the wholesale seizure of all electronic devices discovered in the apartment, including items owned by third parties. In those circumstances, we conclude that the warrant was unsupported by probable cause and unduly broad in its reach.

It’s not that probable cause was lacking, the court said. It’s just that the probable cause was appropriate for an arrest warrant, not a search warrant:

To obtain a warrant to search for and seize a suspect’s possessions or property, the government must do more than show probable cause to arrest him. The government failed to make the requisite showing in this case.

In fact, the Appeals Court noted, if the police had obtained an arrest warrant, it would have allowed them to seize the broad array of things they were after: all electronic devices, including cellphones, computers, PDAs, tablets, CDs, DVDs, or external drives; and anything that might have mentioned the shooting death, be it handwritten notes, papers, photographs, or newspaper clippings.

When police showed up at the front door with the search warrant, one officer went around to the back of the house. That’s where he says he saw Griffith toss a gun out the bedroom window and on to the ground.

In April 2013, Griffith was convicted of unlawful possession of a firearm by a felon.

That’s the conviction that the DC Appeals Court tossed out last week, saying in a two-to-one ruling that the police found the weapon only because they had drafted an “overly broad” search warrant.

Well, get ready for one hot mess. The ruling is sending shockwaves through debates over privacy, device security, and how American law enforcement conducts investigations, and has drawn comment from well-known cybercrime law professor Orin Kerr, among others.

Writing in dissent, Judge Janice Rogers Brown fretted about what the decision would mean for law enforcement officers trying to rely on their judgement to do their jobs:

This result is directly contrary to the purpose of the exclusionary rule and Supreme Court precedent that reserves suppression only for the most serious police misconduct. If courts are going to impose a remedy as extreme as excluding evidence that is probative, reliable, and often determinative of a defendant’s guilt, we have a duty to protect officers who are doing their best to stay within the bounds of our ever-evolving jurisprudence.

We live in a society where virtually every action an officer takes is now being heavily scrutinized. Thus, the need for vindication when law enforcement officers behave in an exemplary fashion is more critical than ever. Unfortunately, the officers in this case are not going to get the vindication they deserve. Furthermore, I have no doubt this case will be used in future cases to further undermine the good faith exception until either this Court sitting en banc or the Supreme Court steps in to cure today’s grievous error.

In essence, the good-faith exemption that Judge Brown was worried about allows evidence to be collected in ways that violate Fourth Amendment rights to privacy if, in fact, police officers were acting in good faith but relying on a flawed search warrant. In other words, if police think their actions are legal, they’re generally held to be legal.

That exemption likely isn’t going away just because of this ruling. As TechDirt notes, it’s “pretty much the rule everywhere”.

It’s not like we’ve never seen the courts rule against warrantless phone searches. US v. Griffith is only the latest case in which police have been asked to provide probable cause before seizing and searching our gadgets. The Supreme Court has upheld the notion that our privacy is at stake when searching our electronics, ruling against warrantless phone search in cases such as US vs. Jones and Riley vs. California and tackling the warrantless seizure of cellphone location records more recently.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/wl0CppFxmSg/

US Navy suffers third ship collision this year

The accident-prone US Navy has suspended all of its warship operations around the world following its third collision at sea this year.

The latest incident took place between guided-missile destroyer USS John S McCain and a Liberian-flagged oil tanker, the Alnic MC, off the coast of Singapore, where the American warship was due to make a port stop.

Photos of the warship after the collision (such as those on The Guardian website) clearly show the impact of the tanker’s bulbous bow on the destroyer’s side. Five sailors were injured and ten were listed as “missing” after the collision.

At the time of writing, a body had been found by the Malaysian Navy and human remains had been located inside the damaged section of the destroyer, as US Navy Admiral Scott Swift acknowledged during a press conference today, though none had been formally identified.

News reports indicated that the McCain may have suffered a steering gear defect that left her broadside on a main shipping channel, though accounts vary as to what happened and how. There appears to be no evidence to support theories that the ships may have been hacked, though investigators are said to be considering all possibilities.

Admiral Swift confirmed that all USN Seventh Fleet ships would be “stopped by August 28”, in line with an order from the chief of the American Navy, Admiral John Richardson, to cease all operations. During the “pause” US Navy investigators would be “focused on navigation, ships’ mechanical systems and bridge resource management”, according to Admiral Swift.

Though the admiral said he had visited the McCain and denied, in response to reporters’ questions, that her crew were overworked (“I didn’t see a crew taking a knee, so to speak”), his comment about bridge resource management may reveal the initial direction of the investigation into this latest collision. A modern warship like the McCain, an Arleigh Burke-class destroyer, is packed with radar and passive sensor systems; there is no good reason why her command team should have been so unaware of other ships nearby.

The Chinese stuck their oar in, with a state-run propaganda outlet using the incident to label the US Navy “a growing risk to commercial shipping”. This is mainly because the Chinese resent USN freedom-of-navigation operations around the South China Sea, large chunks of which China wants to claim as its own territory rather than leaving it as international waters.

A couple of months ago, the US Naval Institute suggested that the USN change some of its practices. These ideas included turning on ships’ automatic identification systems (onboard beacons that broadcast a ship’s location, direction and speed), and additional training in the use of automatic radar plotting aids for USN watchkeepers.

“In the commercial world, deck watch officers complete a similar course,” noted the institute. It also recommended that the USN buys display systems that integrate all of the various sensor outputs onto one screen, rather than relying on humans to mentally integrate multiple screens, something it notes “may lead to mistakes”.

CNN noted that the USN has suffered four maritime mishaps this year: three collisions and one ship running aground. The previous collision, involving the USS Fitzgerald, another Arleigh Burke-class destroyer, cost the lives of seven sailors. Her captain, who was injured in the collision, was sacked along with his second-in-command and the ship’s most senior non-commissioned officer. ®

Sponsored:
The Joy and Pain of Buying IT – Have Your Say

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/08/22/us_navy_4th_ship_mishap_of_the_year/

Foxit backtracks after declining to fix zero-days exposed by ZDI

What’s the best way to make a company patch security flaws in its software?

Ordinarily, it should be to tell that company about them. Lots of researchers do this all the time in return for a bug bounty or, perhaps, a namecheck in the patching notes.

Sometimes, however, the company doing the “telling” is a larger company that makes its money out of collecting and reselling vulnerability intelligence, in which case things can occasionally end up being more complicated – and contentious.

This was the scenario when, earlier this year, Zero Day Initiative (ZDI) told Foxit about two zero-day (i.e. undisclosed and unpatched) security vulnerabilities in its PDF Reader and PhantomPDF software, reportedly installed more than 400m times between them.

The flaws – now designated CVE-2017-10951 and CVE-2017-10952 – are JavaScript command injection and file write vulnerabilities, serious enough to allow an attacker to take over a target PC – surely a high priority for a fix.

But Foxit did not offer fixes for the flaws and it was on that basis that ZDI last week made them public according to the company’s stringent 120-day disclosure policy.

Declining to fix the issues, Foxit said it preferred to rely on the software’s “Safe Reading” mode for protection. This is:

Enabled by default to control the running of JavaScript, which can effectively guard against potential vulnerabilities from unauthorized JavaScript actions.

Because Safe Reading mode is enabled by default, in the style of Adobe’s sandboxed Protected Mode, this claim is accurate – but it assumes the user never disables it. Typically a user would only do that on receiving a PDF from a known contact, yet that can still be a dangerously subjective judgment.

A few days on from ZDI going public, Foxit’s position has suddenly changed:

We are currently working to rapidly address the two vulnerabilities reported on the Zero Day Initiative blog and will quickly deliver software improvements.

The company had “miscommunicated” during its initial response and planned changes to avoid such a thing happening again, it reportedly told ZDI.

Given that Foxit has been busily patching its software this year, we can probably take this statement at face value, although embarrassment seems to have been a factor. Meanwhile, the rise of bug bounty programmes has made life harder for companies that lack one.

When professional researchers discover flaws, they are more likely to report them to companies that will pay them, be those third parties or the affected vendor. The catch is that bug bounties are competitive, so smaller companies are at a disadvantage whenever a larger outfit is interested.

It could be worse. Earlier this year, star researcher Tavis Ormandy shone the spotlight on password manager LastPass, which found itself fixing a succession of flaws to meet Google Project Zero’s disclosure deadline in the full glare of public attention.

Foxit users should nevertheless be on their guard over these flaws. The lack of patches represents a significant risk, and it might be wise either to mandate Safe Reading mode or to move to another reader until fixes arrive.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/yXy_CGVqpCQ/

Boffins blast beats to bury secret sonar in your ‘smart’ home

Researchers at the University of Washington have devised a way of conducting surreptitious sonar surveillance using home devices equipped with microphones and speakers.

The technique, called CovertBand, looks beyond the obvious possibility of using a microphone-equipped device for eavesdropping. It explores how devices with audio inputs and outputs can be turned into echo-location devices capable of calculating the positions and activities of people in a room.

In a paper [PDF] titled “CovertBand: Activity Information Leakage using Music,” Rajalakshmi Nandakumar, Alex Takakuwa, Tadayoshi Kohno, and Shyamnath Gollakota describe a way to transmit acoustic pulses in the 18‑20 kHz range from a device’s speaker, masked by music, and to track the sound reflected off the human body using its microphone.

“Our implementation, CovertBand, monitors minute changes to these reflections to track multiple people concurrently and to recognize different types of motion, leaking information about where people are in addition to what they may be doing,” the paper explains.

Sounds in the 18‑20 kHz range are within the range of human hearing for some people. What’s more, the speakers of home devices tend to create audible harmonics when playing sounds at these frequencies.
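The underlying echo-location principle is straightforward and can be illustrated with a toy simulation. This is not the researchers' code – the sample rate, pulse length and amplitudes below are arbitrary choices for illustration – but it shows the core idea: emit a near-ultrasonic chirp, cross-correlate the microphone signal with it, and convert the echo's round-trip delay into a range.

```python
import numpy as np

np.random.seed(0)   # deterministic noise for the demo
FS = 48_000         # sample rate (Hz), typical for consumer audio hardware
C = 343.0           # speed of sound in air (m/s)

def chirp(f0=18_000, f1=20_000, dur=0.01, fs=FS):
    """Linear chirp sweeping f0 -> f1: the near-ultrasonic band CovertBand uses."""
    t = np.arange(int(dur * fs)) / fs
    k = (f1 - f0) / dur
    return np.sin(2 * np.pi * (f0 * t + 0.5 * k * t ** 2))

def echo_delay_samples(rx, tx):
    """Estimate the echo's delay by cross-correlating the mic signal with the pulse."""
    corr = np.correlate(rx, tx, mode="valid")
    return int(np.argmax(np.abs(corr)))

# Simulate a reflector 2 m away: round trip is 4 m, so the echo arrives
# 4 / 343 seconds after the pulse is emitted.
pulse = chirp()
delay = int(round(4.0 / C * FS))
rx = np.zeros(len(pulse) + delay)
rx[delay:] += 0.2 * pulse                # attenuated echo off the body
rx += 0.01 * np.random.randn(len(rx))    # background noise floor

est = echo_delay_samples(rx, pulse)
distance = est / FS * C / 2              # halve the round-trip path
print(f"estimated range: {distance:.2f} m")  # → estimated range: 2.00 m
```

Tracking multiple moving people, and doing so through walls and while the pulse is buried under music, is of course far harder than this single-echo sketch, but the delay-to-range conversion is the same.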

To conceal the sound signals, the researchers propose a compromised media app that plays music to cover sonar pings. They suggest that a malicious advertising library would be a suitable vehicle for implementing this capability.

Not all songs work equally well to hide the attack. Songs with lots of percussion proved the most effective at masking sonar pulses, according to the paper.

The researchers tested CovertBand in five homes in the Seattle area and were able to demonstrate that they could identify the position of multiple individuals through barriers.

“These tests show CovertBand can track walking subjects with a mean tracking error of 18cm and subjects moving at a fixed position with an accuracy of 8cm at up to 6m in line-of-sight and 3m through barriers,” the paper says.

CovertBand is one of several potential mechanisms for tracking people’s location using sound, including frequency-modulated continuous-wave radar, software-based radios, Wi-Fi signals, gesture sonar, and acoustic couplers attached to walls. The authors suggest their approach has the advantage of working with off-the-shelf hardware.

There are a number of possible defenses, such as soundproofing, high-frequency jamming, and countermeasures involving smartphone apps or a Raspberry Pi with a mic. But, the researchers explain, these assume that a victim is aware of the risks and is taking steps to mitigate them.

The paper is scheduled to be published next month at UBICOMP 2017. There’s a video too. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/08/22/boffins_blast_beats_to_bury_secret_sonar/

Disbanding your security team may not be an entirely dumb idea

Disbanding your security team may not be an entirely dumb idea, because plenty of other people in your organisation already overlap with their responsibilities, or could usefully do their jobs.

That’s an idea advanced by analyst firm Gartner’s vice president and research fellow Tom Scholtz, who has raised it as a deliberately provocative gesture to get people thinking about how to best secure their organisations.

Scholtz’s hypothesis is that when organisations perceive more risk, they create a dedicated team to address it. That team, he said, grows as the scope of risk grows. With businesses quickly expanding their online activities, that means lots more risk and lots more people in the central team. Which might do the job, but also reminded Scholtz that big teams are seldom noted for efficiency.

He also says plenty of businesses see centralised security teams as roadblocks. “I met one chief security officer who said his team is known as the ‘business prevention department’,” Scholtz told Gartner’s Security and Risk Management Summit in Sydney today.

He therefore looked at how security teams might become less obstructive and hit on the idea of pushing responsibility for security into other teams. One area where this could work, he said, is endpoint security, a field in which many organisations already have dedicated and skilled teams tending desktops and/or servers. Data security is another area ripe for potential devolution: Scholtz said security teams often have responsibility for determining the value of data and how it can be used, as do the teams that use that data. Yet both teams exist in their own silos and duplicate elements of each other’s work. Giving the job to one team could therefore be useful.

He also pointed out that security teams’ natural proclivities mean they are often not the best educators inside a business, yet other teams are dedicated to the task and therefore excellent candidates for the job of explaining how to control risk.

Scholtz’s research led him to believe that organisations will still need central security teams, but that devolution is unlikely to hurt if done well. Indeed, he said he’s met CIOs who are already making the idea happen, by always looking for other organisations to take responsibility for tasks they don’t think belong in a central technology office.

Making the move will also require a culture in which people are willing to learn, fast, and take on new responsibilities. Organisations considering such devolution will also need strong cross-team co-ordination structures, plus an understanding of how to integrate security requirements into an overall solution design.

Even organisations that ultimately see such devolution as too risky, Scholtz said, can still take something away from the theory, using it to ensure that business unit or team leaders feel accountable for securing their own tools. Devolving security can also help organisations identify which security functions have been commoditised and are therefore suitable for outsourcing. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/08/22/why_not_disband_your_security_team/

Phisherfolk dangle bait at dot-fish domain

Netcraft ‘net watchers have cast a fly over the lake of generic TLDs, and turned up the first .fish domain dedicated to – wait for it – phishing.

The net-trawling service has previously landed sites on both the .fish and .fishing gTLDs, but parser.fish has earned the distinction of being the first baited with in-plaice malicious content.

If a user was hooked, they’d get reeled in, redirected, and left to sharks operating an imitation of the French banking collective BRED (but hosted in Vietnam).

Netcraft notes that fishing isn’t much of a school on the Internet: there’s a sole (their pun, not ours) .fish and .fishing site each in the company’s top million.

Although parser.fish has an anonymous Tucows registration, it cod be that the owner met their own white whale and was compromised to host the badware, Netcraft writes.

The site’s been filleted and the malicious content is gone, so hopefully not too many punters shelled out their hard-earned clams after seeing chum in the water. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/08/22/phishers_dangle_bait_at_dot_fish_domain/