
Trump Names New Head of Economic Council for Cybersecurity, Technology

Grace Koh will be special assistant to the President for technology, telecom, and cybersecurity.

The White House has appointed Grace Koh to head the National Economic Council’s cyber division, where she will be special assistant to President Donald Trump on matters of technology, telecom, and cybersecurity, reports Nextgov.

Koh comes from the House Energy and Commerce Committee, where she worked as technology counsel and was previously employed as policy counsel for cable company Cox. She is part of a 13-member team appointed to the Council.

“We have assembled a best-in-class team of policy advisers to drive President Trump’s bold plan for job creation and economic growth,” said Council director Gary Cohn.

Read more on Nextgov.

Dark Reading’s Quick Hits delivers a brief synopsis and summary of the significance of breaking news events. For more information from the original source of the news item, please follow the link provided in this article.

Article source: http://www.darkreading.com/careers-and-people/trump-names-new-head-of-economic-council-for-cybersecurity-technology/d/d-id/1328290?_mc=RSS_DR_EDT

CloudPets’ woes worsen: Webpages can turn kids’ stuffed toys into creepy audio bugs

As the world learns of its embarrassingly leaky customer database, internet-connected cuddly toy maker CloudPets is under further scrutiny – this time for not securing its gizmos against remote exploitation via the Web Bluetooth API.

Basically, it is possible for a webpage to connect to a CloudPets plushie, via Bluetooth on the computer or handheld viewing the page, without any authentication, and start controlling the gadget and recording from its built-in microphone. The page can also play sounds through the toy. Here’s an example of such a webpage that can take over a CloudPets gizmo; the browser opening the page has to be within Bluetooth range of the toy for it to work, and you must also allow the browser to pair with the cuddly electronics.
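To make that concrete, here’s a minimal Web Bluetooth sketch of the sort of connection described above. The device name prefix and the service/characteristic UUIDs are placeholders rather than CloudPets’ real identifiers, and the page still needs the user to pick the toy from the browser’s device chooser – the point is that the toy itself demands no PIN or other authentication.

```typescript
// Hypothetical identifiers: the toy's real GATT service/characteristic UUIDs
// and advertised name aren't given in this article, so these are stand-ins.
const TOY_SERVICE_UUID = "0000aaaa-0000-1000-8000-00805f9b34fb";
const TOY_CONTROL_CHAR_UUID = "0000bbbb-0000-1000-8000-00805f9b34fb";

async function connectToNearbyToy(): Promise<void> {
  // requestDevice() must be called from a user gesture (e.g. a button click);
  // that is the "allow the browser to pair" step mentioned above.
  const device = await navigator.bluetooth.requestDevice({
    filters: [{ namePrefix: "CloudPets" }],
    optionalServices: [TOY_SERVICE_UUID],
  });

  // The toy itself asks for no PIN, pairing code or app-level authentication:
  // once in Bluetooth range, the page can open a GATT connection directly.
  const gatt = device.gatt;
  if (!gatt) throw new Error("GATT not available on this device");
  const server = await gatt.connect();

  const service = await server.getPrimaryService(TOY_SERVICE_UUID);
  const control = await service.getCharacteristic(TOY_CONTROL_CHAR_UUID);

  // Writing to a characteristic is how a page could trigger recording or
  // push audio out through the speaker.
  await control.writeValue(new Uint8Array([0x01]));
}
```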

It is possible, for example, to use this API, with CloudPets’ insecure implementation, to snoop on families from outside their house, or from the other side of a wall. Just pull out your phone, open the webpage, agree to pair it with the nearby toy, and listen in.

Security analyst and W3C invited expert Lukasz Olejnik warned last year of the privacy dangers of software and hardware mishandling connections from the web to devices via Bluetooth. CloudPets seems to have put its foot right in it.

On Tuesday, infosec research outfit Context Information Security revealed it was already looking at CloudPets’ use of Web Bluetooth before news broke of the toymaker’s inability to secure more than two million voice recordings from its mic’d-up stuffed animals. Now, Context IS has brought forward the publication of its study, pouring fuel on the fire.

The team’s conclusion is that the security of the Web Bluetooth implementation in the CloudPets devices is inadequate.

“When first setting up the toy using the official CloudPets app, you have to press the paw button to ‘confirm’ the setup. I initially thought this might be some sort of security mechanism, but it turns out this isn’t required at all by the toy itself,” report author Paul Stone writes.

“Anyone can connect to the toy, as long as it is switched on and not currently connected to anything else. Bluetooth LE typically has a range of about 10 – 30 meters, so someone standing outside your house could easily connect to the toy, upload audio recordings, and receive audio from the microphone.”

Stone is also unimpressed with the toys’ firmware handling: “The CloudPets app performs a firmware update when you first set up the toy, and the firmware files are included in the APK. The firmware is not signed or encrypted – it’s only validated using a CRC16 checksum. Therefore it would be perfectly possible to remotely modify the toy’s firmware.”
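To see why that matters: a CRC is an integrity check, not a signature, so anyone who alters the firmware can simply recompute a matching checksum. A rough sketch follows; the report doesn’t specify which CRC-16 variant the toy uses, so this assumes the common CRC-16/CCITT parameters.

```typescript
// Generic CRC-16/CCITT (polynomial 0x1021, initial value 0xFFFF).
// A CRC only catches accidental corruption; unlike a cryptographic signature,
// anyone who alters the data can recompute a matching checksum.
function crc16(data: Uint8Array, initial = 0xffff): number {
  let crc = initial;
  for (const byte of data) {
    crc ^= byte << 8;
    for (let bit = 0; bit < 8; bit++) {
      crc = crc & 0x8000 ? ((crc << 1) ^ 0x1021) & 0xffff : (crc << 1) & 0xffff;
    }
  }
  return crc;
}

// Tamper with a firmware image, then recompute the checksum it ships with,
// so a CRC-only validation still passes.
const firmware = new Uint8Array([0xde, 0xad, 0xbe, 0xef /* ...rest of image... */]);
firmware[3] ^= 0xff;                // arbitrary modification
const forgedCrc = crc16(firmware);  // new checksum to ship alongside the modified image
console.log(forgedCrc.toString(16));
```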

There’s a YouTube video demo of Stone taking over a CloudPets stuffed toy from a webpage, rather than via the official app.

Olejnik seems grimly vindicated.

Stone has put the code for his bear-busting proof-of-concept on GitHub for anyone to check out. He also said he has been trying to warn CloudPets of the security blunders since October but has since given up after hitting silence. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/03/01/cloudpets_woes_worsen_mics_can_be_pwned/

Tricksy bugs in Zscaler admin portal let you ruin a coworker’s day

Cloud management software peddler Zscaler has plugged cross-site scripting holes in the admin portal it provides to customers.

People logged into the portal could have exploited the bugs to inject malicious HTML and JavaScript into the browsers of other users of the site, allowing attackers to take over those users’ accounts and perform actions as their victims.

In an advisory on the flaws published this week, the biz acknowledged the bugs while playing down the threat. It suggested its programming blunders would only put at risk users within the same company. In other words, you could only inject code into the webpages of your coworkers while they were using Zscaler’s admin portal. The Silicon Valley-based biz explained:

Zscaler has addressed persistent XSS vulnerabilities identified in admin.zscaler[X].net and mobile.zscaler[X].net portals. The post-auth vulnerabilities would have allowed authenticated admin users to inject client-side content into certain admin UI pages, which could impact other admin users of the same company.
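The standard defence against this class of bug is to encode user-supplied content before it is rendered into other admins’ pages. Here’s a minimal, generic sketch – illustrative only, and not Zscaler’s actual fix, which hasn’t been published:

```typescript
// Minimal, generic output-encoding helper: the usual first line of defence
// against stored XSS. Illustrative only, not Zscaler's actual patch.
function escapeHtml(untrusted: string): string {
  return untrusted
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;")
    .replace(/'/g, "&#x27;");
}

// Rendering a user-controlled field (say, a policy name or comment) safely:
// an injected <script> tag becomes inert text instead of executing in
// another admin's browser.
function renderCell(value: string): string {
  return `<td>${escapeHtml(value)}</td>`;
}

// In browser-side code, prefer assigning untrusted strings to textContent
// rather than innerHTML, so no HTML parsing happens at all.
```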

Zscaler credited security researcher Alex Haynes with discovering the flaws.

Haynes previously unearthed cross-site scripting vulnerabilities within services from Forcepoint, another cloud software player. These flaws were resolved last October.

Cross-site scripting flaws are one of the most common classes of web vulnerabilities. Here’s a handy cheat sheet on how to program your web app to avoid one of these security shortcomings. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/03/01/cloud_security_vulns/

Prisoners’ ‘innovative’ anti-IMSI catcher defence was… er, tinfoil

Exclusive Prisoners at a Scottish jail evaded an IMSI catcher deployed to collar them making illegal phone calls – by putting up tinfoil after bungling guards left the spy gear visible to inmates.

“As you are also aware the invisible grabber at HMP Shott [sic] was visible,” Maurice Dickie of the Scottish Prison Service wrote in an internal email of May 2014.

This referred to the trial of an IMSI catcher in North Lanarkshire prison HMP Shotts that year.

The idea was to use the IMSI catcher to find out which prisoners were making illegal calls using smuggled mobile phones from within the jail. Officially, the trial was declared a failure, having evidently not caught any lags making unlawful mobile calls, because prisoners were said to have developed “innovative countermeasures”.

The Register understands these “countermeasures” were just tinfoil used to block line of sight to the IMSI catcher after prisoners spotted the device, which appeared to have been placed on the “inside of the prison perimeter”.

Improperly redacted copies of emails seen by The Reg revealed the cockup. Ofcom, which regulates the use of IMSI catchers in the UK, declined to comment. The Scottish Prison Service had not responded by the time of publication.

IMSI catchers are known as Stingrays in the US. They are fake mobile network base stations used to fool nearby mobile phones into connecting to them, thus revealing each handset’s unique International Mobile Subscriber Identity (IMSI) number. They are used extensively in America, where law enforcement agencies must apply for a court warrant to use them.

New proposals in the Prisons and Courts Bill currently before Parliament will allow British mobile network operators to deploy them under authorisation from the Justice Secretary.

Similar authorisations for mobile network snooping, though required by law, are normally given on a blanket basis and for practical purposes do not provide any meaningful safeguard against misuse. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/03/01/imsi_catcher/

Health firm gets £200k slap after IVF patients’ records leak online

A private health firm in the UK has been fined £200,000 after fertility patients’ confidential conversations leaked online.

The £200,000 monetary penalty was levied following an investigation by Blighty’s Information Commissioner’s Office (ICO) into the way the Lister Hospital in London was transferring, transcribing and storing recordings of IVF appointments.

Problems came to light in April 2015 after a patient discovered that transcripts of interviews recorded with Lister Hospital IVF patients could be freely accessed by searching online.

A subsequent investigation by data privacy watchdogs revealed the hospital had been routinely sending unencrypted audio recordings of the interviews by email to a company in India since at least 2009, six years prior to the probe. Private conversations between doctors and various hospital patients wishing to undertake fertility treatment were transcribed in India and then sent back to the hospital.

Worse yet, the Indian firm stored audio files and transcripts on an insecure server, leaving the confidential data accessible to world+dog.

HCA International breached the Data Protection Act 1998 by failing to ensure that its sub-contractor acted responsibly, earning it a heavy fine along with a public rebuke from the ICO.

Head of ICO enforcement Steve Eckersley said: “The reputation of the medical profession is built on trust. HCA International has not only broken the law, it has betrayed the trust of its patients.

“These people were discussing intimate details about fertility and treatment options and certainly didn’t expect this information to be placed online. The hospital had a duty to keep the information secure. Once information is online it can be accessed by anyone and could have caused even more distress to people who were already going through a difficult time,” he added.

HCA International already had appropriate safeguards in place in other areas of its business. “The situation could have been avoided entirely if HCA International had taken the time to check up on the methods used by the contract company,” Eckersley concluded.

The General Data Protection Regulation (GDPR), the new data protection law coming into force in the UK in May 2018, will strengthen the ICO’s powers to fine companies. Fines of up to four per cent of a company’s global turnover could be issued where a serious breach of data protection law has occurred. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/02/28/health_firm_fined_over_data_leak/

Massive Necurs Spam Botnet Now Equipped to Launch DDoS Attacks

With more than one million bots active at any time, a Necurs-powered DDoS attack could dwarf those launched by the Mirai botnet.

In an ominous development, the world’s largest spam botnet has acquired capabilities that could allow it to be used in massive distributed denial-of-service attacks.

Security researchers at BitSight’s Anubis Labs recently observed the Necurs botnet loading a component onto infected systems that allows them to perform DDoS attacks.

The addition of the new component appears to have started at least six months ago, which is when BitSight researchers first observed some unusual communications on a Necurs-infected system.

In addition to communicating via port 80, the Necurs-infected system was also using a different port, and what appeared to be a different protocol, to communicate with a set of command-and-control addresses.

A subsequent analysis showed the communications from the infected system to be requests to download two separate modules. One was for the usual spam distribution purposes; the other was a proxy module that would cause the bot to make HTTP or UDP requests to a target system “in an endless loop,” BitSight said in a recent alert.

The bot is modular in design; the proxy and DDoS features are part of a module that was first deployed in September, says Tiago Pereira, threat intelligence researcher at BitSight’s Anubis Labs. “The SOCKS/DDOS module is being loaded in all the bots in the botnet,” he says. At the same time, the spam modules are also still being loaded on all infected systems, he says.

Pereira says BitSight’s sinkholes measured an average of over 1 million active Necurs-infected systems every single day. “Simply taking into account its size—more than double the size of Mirai—we would expect it to produce a very powerful DDoS attack,” he says.

Security researchers estimate the overall size of the Necurs botnet to be upwards of 5 million infected systems, though only about 1 million are active at any given time. In addition to being used for spam distribution, the botnet has also been used to deliver the notorious Locky ransomware and the Dridex banking Trojan.

The botnet went offline somewhat inexplicably for a few weeks last year, resulting in an almost immediate drop-off in Locky and Dridex distribution. But it came back online equally inexplicably, and with renewed vigor, in June, and since then has been used to distribute spam and malware.

With the addition of the new DDoS module, Necurs appears set to become even more versatile than it is already.

Ben Herzberg, security group research manager at Imperva, says it’s interesting that Necurs has now added a feature for DDoS attacks. But threat actors are likely to increasingly favor using IoT botnets such as Mirai because they are easier to infect and use than desktop botnets like Necurs, he said.

“Therefore, it seems likely that this is either a test module, or something to be used in a ‘doomsday scenario’ – for example when the botnet operators need it for a very good reason – not just as a normal DDoS-for-hire campaign,” he said in a statement.

Word of the Necurs botnet update comes amid reports of changes to Neutrino, another well-known malware sample that has been used to assemble a large botnet. The authors of the Neutrino bot have developed a new protective loader that makes it harder for security tools to detect the bot, Malwarebytes Labs said in an alert this week.

The new loader is designed to check whether it is being deployed in a controlled environment like a sandbox or a virtual machine and to delete itself automatically if that is indeed the case, Malwarebytes researchers Hasherezade and Jerome Segura said.

The tweak is not major. But the new protection layer is “very scrupulous in its task of fingerprinting the environment and not allowing the bot to be discovered,” they said.
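The alert quoted here doesn’t enumerate the loader’s exact checks, but anti-analysis fingerprinting of this kind generally boils down to looking for telltale signs of a sandbox or virtual machine and quietly bailing out when they appear. A generic, hypothetical sketch of the idea:

```typescript
import * as os from "os";

// Hypothetical illustration only: Malwarebytes' alert doesn't enumerate the
// loader's exact checks. These heuristics (few CPU cores, little RAM, an
// analysis-sounding hostname) are simply the typical signals this kind of
// fingerprinting looks for.
function looksLikeAnalysisEnvironment(): boolean {
  const fewCores = os.cpus().length < 2;            // many sandboxes expose a single vCPU
  const littleRam = os.totalmem() < 2 * 1024 ** 3;  // under 2 GB of RAM is suspicious
  const hostname = os.hostname().toLowerCase();
  const telltaleName = ["sandbox", "cuckoo", "analysis"].some((s) => hostname.includes(s));
  return fewCores || littleRam || telltaleName;
}

// A loader of this sort bails out before unpacking its payload when it
// suspects it is being watched, so the bot never appears in front of the
// analyst's tools.
if (looksLikeAnalysisEnvironment()) {
  process.exit(0);
}
```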


Jai Vijayan is a seasoned technology reporter with over 20 years of experience in IT trade journalism. He was most recently a Senior Editor at Computerworld, where he covered information security and data privacy issues for the publication. Over the course of his 20-year …

Article source: http://www.darkreading.com/attacks-breaches/massive-necurs-spam-botnet-now-equipped-to-launch-ddos-attacks/d/d-id/1328287?_mc=RSS_DR_EDT

Judge denies blanket right to compel fingerprint iPhone unlocking

In May 2016, government lawyers requested a warrant to enter a building in Lancaster, California, seize mobile devices, and force anyone inside – or in the immediate vicinity – to use their fingerprints to unlock them.

According to Forbes, which first reported the case, the warrant was granted and executed.

The feds wanted to do that again in Chicago, in an investigation into child abuse imagery, which we briefly reported on last week. But the judge’s reasons for turning down that request are worth a closer look.

It’s not that investigators lacked probable cause to search the home, the judge, David Weisman, wrote in a 14-page opinion. Rather, the Chicago warrant application – unlike the earlier California warrant – didn’t specify who in the building might be involved in the crime. Nor did it specify which Apple encrypted devices might have been used for criminal activity – if any.

The judge said:

The request is made without any specific facts as to who is involved in the criminal conduct linked to the subject premises, or specific facts as to what particular Apple-branded encrypted device is being employed (if any).

First, the Court finds that the warrant does not establish sufficient probable cause to compel any person who happens to be at the subject premises at the time of the search to give his fingerprint to unlock an unspecified Apple electronic device.

The court document was filed on February 16, but it was only found and circulated by lawyers last week. One of those lawyers, Riana Pfefferkorn, posted the opinion and shared the link on Twitter.

The decision fits into an ongoing tension between individuals’ right to privacy and government attempts to get past encryption. Noted battles have included the epic Apple vs FBI fights over backdooring iPhone encryption in multiple cases: that of terrorist Syed Farook’s phone, for one, as well as a Brooklyn case concerning the phone of an alleged drug dealer.

In Chicago, Judge Weisman wrote in his opinion that perhaps the “crux of the problem” is that the government is using the same overly broad search-and-seizure language as boilerplate for such requests.

He quoted the government in its warrant application:

[T]his is the language that we are making standard in all of our search warrants.

As such, the government is running up against Fourth Amendment protections against unreasonable search and seizure, as well as Fifth Amendment protection against self-incrimination.

In general, courts have tended to hold that passcodes, which are knowledge stored in our heads, are protected by the Fifth Amendment.

But as privacy and legal experts have been saying ever since Apple introduced Touch ID, biometric information such as a fingerprint is like a DNA sample or a voice imprint: it doesn’t reveal anything that we know, meaning it doesn’t count as testimony against ourselves.

Does that analogy still hold water? Some lawyers who agreed with the judge’s ruling say no.

Abraham Rein, a Philadelphia-based tech lawyer, had this to say to Ars Technica:

As I read the opinion, the government relies on old fingerprinting cases to argue that the Fourth and Fifth Amendments don’t stand in the way of what they are seeking to do here.

But (as the court points out) there is a big difference between using a fingerprint to identify a person and using one to gain access to a potentially vast trove of data about them and possibly about innocent third parties, too.

The old fingerprinting cases aren’t really good analogs for this new situation. Same is true with old cases about using keys to unlock locks – here, we’re not talking about a key but about part of a person’s body.

But other lawyers said that fingerprints can’t be construed as testimonial against oneself, given that the brain isn’t involved. Ars quoted well-known privacy and tech law expert Orin Kerr:

I just think that it’s really clear that [fingerprints are] not testimonial – because you’re not using your brain. It can’t be testimonial if you can cut their finger off.

At any rate, the judge wrote, the court opinion shouldn’t be interpreted to mean that the government can’t force fingerprinting – as long as it gets its Fourth and Fifth Amendment obligations ironed out. That includes establishing a connection between the people it wants to search and criminal conduct.

After the execution of the warrant, the government might in fact have the evidence it needs to get additional search warrants.

But at this point, “We’re simply not there yet,” he wrote.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/9dOGHgMyAJY/

MWC: Completely superfluous ‘AI’ added to consumer items

Naked Security is reporting this week from Mobile World Congress in Barcelona

All of a sudden, Olay – famous for its skincare products and cheering inducements to “Love the skin you’re in” – has an artificial intelligence (AI) capability. You heard it right: Olay, maker of your mum’s face cream, is now in the machine intelligence game.

Naked Security reported from the CES tech show in Las Vegas in January on a crop of “smart” devices that are anything but smart, but tech companies have moved on for MWC, focusing instead on adding AI and machine learning to products that would do just as well without them.

With the launch of Olay Skin Advisor, an app that claims to reveal women’s true “skin age” based on a selfie, Olay’s new skinbot delivers personalised product information based on perceived “improvement areas”.

So now Olay’s answer to HAL 9000 wants you to know that your skin might not be quite so loveable after all. That blotch on your left cheek – it could do with some work. The greasy patch on your nose – there’s a potion for that!

Olay claims to be the first company of its kind to use such deep learning technology, and has the entirely modest aim of “transforming the way that women shop for beauty products”. Indeed, so serious is it about these new opportunities that it is sending a whole team of scientists, researchers and AI experts to Mobile World Congress (MWC) in Barcelona in order to big up its credentials.

And Olay is far from being the only non-tech company at the event pushing a dubious AI message. Somewhat bizarrely, machine learning has also found its way into the dental hygiene segment, including through an AI-enabled toothbrush called Ara.

Designations such as “smart toothbrush” are clearly now entirely insufficient; this is a dental implement with actual intelligence! You can bet that Isaac Asimov never saw that one coming.

Ara apparently has AI embedded directly in the brush handle, enabling it to capture data about brushing efficiency and thereby remove more bacteria, reduce plaque and prevent gingivitis.

Kolibree, the product’s manufacturer, claims that it uses patent-pending M2M technology to provide a “personalised, interactive tooth brushing experience”; each time the user brushes, embedded algorithms in the toothbrush learn the user’s brushing pattern, meaning it can make personalised recommendations.

Of course, the toothbrush also syncs with the obligatory app, which serves as a personal and highly useful record of your brushing history. The possibilities are therefore endless. In future years, feeling nostalgic perhaps on a rainy Sunday afternoon sometime in the early 2020s, you might be tempted to access the record of that particularly vigorous session back in March ’17… Such is the transformative power of technology.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/0EWGYNzNbA4/

News in brief: moon tourists to launch ‘next year’; health provider fined after breach; drone pilot jailed

Your daily round-up of some of the other stories in the news

Fly me to the moon

Space tourism has always been a staple of science fiction, but if all goes according to plan, two unnamed private citizens will board a Dragon 2 spacecraft atop a SpaceX rocket by the end of next year to do a loop around the moon, blasting off from the iconic Cape Canaveral in Florida.

The two, who have paid “a significant deposit” for the privilege, won’t be the first space tourists by any means – seven well-off folk who have paid estimated fees of $20m each have blasted off in a Russian Soyuz to join the ISS. However, they will, if all goes well, be the first to go to and fly around the moon – which humans haven’t visited since December 1972.

SpaceX CEO Elon Musk told reporters that the trip will last about a week – though it’s worth noting that neither the rocket nor the spacecraft has flown yet. The rocket is due a test flight this summer, with an unmanned trip for the spacecraft scheduled for later this year.

Healthcare provider fined after data exposed online

Back on earth, a healthcare provider based at a London hospital has been fined £200,000 after a patient discovered that details of private consultations between patients and a doctor about fertility treatment could be freely accessed online.

The UK’s Information Commissioner’s Office said that HCA International had been routinely sending unencrypted audio files of consultations at the Lister Hospital in London to a company in India for transcription since 2009, where they were stored on an insecure server.

This isn’t the first time an Indian company has exposed sensitive patient data online: in December, we reported that the records of 43,000 people, including HIV patients, were available on the servers of Health Solutions in Mumbai.

Steve Eckersley, the ICO’s head of enforcement, said that HCA International had “not only broken the law, it has betrayed the trust of its patients”.

Drone pilot jailed for 30 days

A 38-year-old man from Washington State was sentenced to 30 days in jail on Friday after a drone he was flying knocked out a woman at a Gay Pride event in Seattle. Paul Skinner, of Oak Harbor, was the first person to be charged with mishandling a drone in a public space, Seattle prosecutors said at the time of his conviction in December.

As we reported in January, Skinner had turned himself in after the incident in 2015, and plans to appeal the sentence.

The prosecutor, Seattle City attorney Pete Holmes, had sought a sentence of 90 days, saying at the time of the trial that drones are “a serious public safety issue that will only get worse”. Skinner could have faced a maximum sentence of 364 days in prison and a $5,000 fine.

Catch up with all of today’s stories on Naked Security


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/00N3_TLDMDY/

‘Filecode’ ransomware attacks your Mac – how to recover for free

Thanks to Anna Szalay and Xinran Wu of SophosLabs for their behind-the-scenes work on this article.

Last week, SophosLabs showed us a new ransomware sample.

That might not sound particularly newsworthy, given the number of malware variants that show up every day, but this one is more interesting than usual…

…because it’s targeted at Mac users. (No smirking from the Windows tent, please!)

In fact, it was clearly written for the Mac on a Mac by a Mac user, rather than adapted (or ported, to use the jargon term, in the sense of “carried across”) from another operating system.

This ransomware, detected and blocked by Sophos as OSX/Filecode-K and OSX/Filecode-L, was written in Swift, a relatively recent programming language from Apple that is aimed primarily at the macOS and iOS platforms.

Swift was released as an open-source project in late 2015 and can now officially be used on Linux as well as on Apple platforms, and also on Windows 10 via Microsoft’s Linux subsystem. Nevertheless, malware programmers who choose Swift for their coding almost certainly have their eyes set firmly on making Mac users into their victims.

The good news is that you aren’t likely to be troubled by the Filecode ransomware, for a number of reasons:

  • Filecode apparently showed up because it was planted in various guises on software piracy sites, masquerading as cracking tools for mainstream commercial software products. So far, we’re not aware of Filecode attacks coming from any other quarter, so if you stay away from sites claiming to help you bypass the licensing checks built into commercial software, you should be OK.
  • Filecode relies on built-in macOS tools to help it find and scramble your files, but doesn’t utilise these tools reliably. As a result, in our tests, the malware sometimes got stuck after encrypting just a few files.
  • Filecode uses an encryption algorithm that can almost certainly be defeated without paying the ransom. As long as you have an original, unencrypted copy of one of the files that ended up scrambled, it’s very likely that you will be able to use one of a number of free tools to “crack” the decryption key and to recover the files for yourself.

Ironically, the fact that you can recover without paying comes as a double relief.

That’s because the crook behind this ransomware failed to keep a copy of the random encryption key chosen for each victim’s computer.

We’ve written about this sort of ransomware before, dubbing it “boneidleware”, because the crooks were sufficiently inept or lazy that they didn’t even bother to set up a payment system, scrambling (or simply deleting) your files, throwing away the key, and then asking for money in the hope that at least some victims would pay up anyway.

CryptoLocker, back in 2013, was the first widespread ransomware. The crooks behind it set up an extortion process that could reliably supply decryption keys to victims who paid the ransom. Word got around that paying up, no matter how much it might hurt to do so, would probably save your data, and that’s what many people did, creating a “seller’s market” for ransomware demands. But recent attacks, where paying up doesn’t do any good, have started to change public opinion. In a neat irony, it looks as though the latest waves of ransomware have turned out to be the strongest anti-ransomware message of all!

We’ve seen three versions of Filecode: one claims to crack Adobe Premiere, the second to crack Office 2016, and the third, called Prova, seems to be a version that wasn’t supposed to be released:

The word prova means “test” in Italian.

This version only encrypts files in a directory called /Desktop/test, and doesn’t make any effort to hide the giveaway text messages stored inside the program:

If you run one of the “Patcher” versions of this ransomware, you’ll see a popup window that makes it clear the program is about to get up to no good:

If you click [Start], the process will begin under the guise of a fictitious message, shown here still pretending everything would be OK, even after the files on the Mac desktop had been encrypted:

In fact, Filecode goes through all the files it can access in the /Users directory, using the built-in macOS program find to list your files, zip to encrypt them, and rm to delete them. (Files removed using the rm program don’t go into the Trash and so can’t easily be recovered.)

The ZIP password used is a randomly-chosen 25-character text string, so that although each infected Mac will be scrambled with a different password, all the files in one run of the malware will have the same key. (As we’ll see below, that’s a silver lining in this case.)

Note that the Filecode malware doesn’t need administrator privileges. When you run the ransomware app, you implicitly give it the right to read and write the same set of files that you could read and modify yourself using any other app, such as Word or Photos. Ransomware doesn’t need system-wide access to attack your personal files, and those are the files that are most valuable to you. Because of this, you won’t see any giveaway warnings popping up to say “This app wants to make changes – Enter your password to allow this”. Generally speaking, only apps that need to install components that can be used outside the app itself, such as kernel drivers or browser plugins, will alert you with a password popup. We regularly meet Mac users who still don’t realise this, and who therefore think that looking out for password popups is a necessary and sufficient precaution against malware attack.

Filecode leaves behind a raft of text files that tell you how to pay 0.25 bitcoins (about $300 on the date we published this) to the crook behind the attack, giving you a Bitcoin address for the money and a temporary email address to make contact.

You’re then supposed to leave your computer connected to the internet so the criminal can access it remotely – instructions on how this part works are not supplied at this stage – and he promises to let himself in and unscramble your files within 24 hours.

Apparently, for BTC 0.45 (about $530) instead of BTC 0.25, you can buy the expedited service and he’ll unlock your files within 10 minutes.

The real problem comes if you don’t have a backup and make the stressful decision to pay up in the hope of making the best of a bad job.

We couldn’t see anywhere in the code where the crook keeps a record of the encryption key that he passes to the ZIP program, either by secretly saving the password locally or uploading it to himself.

In other words, Filecode seems to be yet another example of “boneidleware”, in which the crook either neglects, forgets or isn’t able to create a reliable back-end system to keep track of keys and payments, leaving both you and him with no straightforward recovery process.

Even if you did grant the crook access to your computer to “fix” the very problem he created in the first place, and even if he were able to connect in remotely to have a go, we think that he’d have no better approach than trying to crack the ZIP encryption from scratch.

So, in the unlikely event you are hit by this ransomware, you might as well learn how to crack the ZIP encryption yourself, and avoid having to rely on a criminal for help.

Cracking your own code

In our tests, a ZIP cracking tool called PKCRACK (it’s free to download, but you have to send a postcard to the author if you use it) was able to figure out how to recover Filecode-encrypted files in just 42 seconds.

That’s because the standard encryption algorithm used in the ZIP application was created by the late Phil Katz (the PK in the original PKZIP software), who was a programmer but not a cryptographer.

The algorithm was soon deconstructed and cracked, and software tools to automate the process quickly followed.

PKCRACK doesn’t work out the actual 25-character password used by the ransomware in the ZIP command; instead, it reverse-engineers three 32-bit (4 byte) key values that can be used to configure the internals of the decryption algorithm correctly, essentially short-circuiting the need to start with the password to generate the key material.

If we assume a choice of 62 different characters (A-Z, a-z and 0-9), then there are a whopping 62^25 alternatives to try, or about one billion billion billion billion billion.
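As a quick sanity check on that figure (a back-of-the-envelope sketch, nothing more):

```typescript
// 62 possible characters in each of 25 independent positions.
const keyspace = 62n ** 25n;

console.log(keyspace.toString());        // a 45-digit number, roughly 6.5 * 10^44
console.log(keyspace.toString().length); // 45
```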

But by focusing on the three internal 32-bit key values inside PKZIP’s encryption process, and the fact that only a small subset of combinations are possible, PKCRACK and other ZIP recovery tools can do the job almost immediately.
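For the curious, the internal state in question really is tiny. Here’s a sketch of the traditional PKZIP (“ZipCrypto”) key-update and keystream-byte derivation as documented in the ZIP format’s APPNOTE; recover k0, k1 and k2 and you can regenerate the keystream without ever learning the password.

```typescript
// Standard CRC-32 table (polynomial 0xEDB88320), as used by ZipCrypto.
const CRC_TABLE = new Uint32Array(256).map((_, n) => {
  let c = n;
  for (let k = 0; k < 8; k++) c = c & 1 ? 0xedb88320 ^ (c >>> 1) : c >>> 1;
  return c >>> 0;
});

const crc32step = (prev: number, byte: number): number =>
  ((prev >>> 8) ^ CRC_TABLE[(prev ^ byte) & 0xff]) >>> 0;

// The entire cipher state is just these three 32-bit keys, initialised from
// the password one byte at a time. Recovering the keys is therefore as good
// as knowing the password.
class ZipCryptoState {
  k0 = 0x12345678;
  k1 = 0x23456789;
  k2 = 0x34567890;

  update(byte: number): void {
    this.k0 = crc32step(this.k0, byte);
    this.k1 = (Math.imul((this.k1 + (this.k0 & 0xff)) >>> 0, 134775813) + 1) >>> 0;
    this.k2 = crc32step(this.k2, this.k1 >>> 24);
  }

  keystreamByte(): number {
    const temp = (this.k2 | 2) & 0xffff;
    return ((temp * (temp ^ 1)) >>> 8) & 0xff; // XORed with each ciphertext byte
  }
}
```

Given a run of known plaintext, PKCRACK and similar tools can winnow the candidate values for these three keys down to a handful, which is why the attack finishes in seconds rather than centuries.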

The only caveat is that you need to have an original copy of any one of the files that was encrypted by Filecode, because ZIP cracking tools rely on what’s called a known plaintext attack, where comparing the input and output of the encryption algorithm for a known file greatly speeds up recovery of the key.

Once you’ve cracked the 32-bit key values for one file, you can use the same values to decrypt all the other files directly, so you’re home free.

What to do?

Watch this space for our followup article giving a step-by-step description of how we got our own files back for free from our sacrificial test Mac!



Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/qJgenobYg8M/