STE WILLIAMS

Hackers could exploit solar power equipment flaws to cripple green grids, claims researcher

A Dutch researcher says he found a way to cause mischief on power grids by exploiting software bugs in solar power systems.

Specifically, Willem Westerhof, a cybersecurity researcher at ITsec, said he uncovered worrying flaws in power inverters – the electrical gear that turns direct current from solar panels into alternating current that can be fed into national grids.

These vulnerabilities could be exploited remotely if the equipment was connected to a network accessible to an attacker, it is claimed: a hacker on the same LAN, or reaching an internet-facing inverter from the other side of the world, could get busy abusing the bugs to control the amount of juice going out onto the grid.

Westerhof said he discovered 21 vulnerabilities in inverters manufactured by German specialist SMA Solar Technology, which sells more than $1bn of kit every year. Since solar accounts for almost half of Germany’s power production at its daily generation peak, an inverter hack would have serious consequences.

“In Europe there is over 90 GW of [photovoltaic] power installed. An attacker capable of controlling the flow of power from a large number of these devices could therefore cause peaks or dips of several GigaWatts, causing massive balancing issues which may lead to large-scale power outages,” he said.

The attack scenario – which Westerhof named Horus after the Egyptian god of the sun – would involve hackers subverting a large number of inverters. He argues these could be hijacked and programmed to either:

  1. Flood power onto the grid, causing other generators to shut down to prevent the network overloading, or
  2. Underpower the grid to cause brownouts or blackouts.

Causing massive fluctuations – gigawatts-worth – in power generation in a very short time period would be rather irritating if done at peak solar panel generating time. He cited the 2015 solar eclipse over Germany, which caused a massive drop-off in power generation. Because this happened at a predictable time, the solar slump was manageable. But an attack at random moments and high speed would cause major problems.

After examining SMA’s inverters, Westerhof contacted the manufacturer in December with his findings, following responsible disclosure best practices. However, he ran into a morass of buck-passing over fixing the issue, which is why the publication of his research was delayed to this month.

“Government officials state that the energy sector should work out how to deal with these issues themselves. They can only play a role in the form of advising and consultancy to the sector,” he explained.

“Power grid regulators state that vendors are responsible for creating secure devices. Vendors then state that users are responsible for making sure the device is in a 100 per cent secure environment. Users state that they can’t all be cybersecurity experts and it should be secure out of the box. All in all, everyone was simply pointing to another one.”

In the end, SMA patched the vulnerabilities in its kit, fixes were rolled out, energy grid bosses agreed to get the matter onto the agenda at their next security conference, and governments agreed to coordinate to harden up their systems, we’re told. ®

Sponsored:
The Joy and Pain of Buying IT – Have Your Say

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/08/07/solar_power_flaw/

One-Third of Businesses Hit with Malware-less Threats

Scripting attacks, credential compromise, privilege escalation, and other malware-less threats affect IT systems and add to staff workload.

Nearly one-third of businesses have been hit by malware-less attacks, the SANS Institute found in a new survey on the threat landscape, with respondents reporting that these incidents affect IT staff and increase workloads.

These attacks are harder to find and address because they cannot be detected by signature-based technologies. SANS found scripting attacks are the most common malware-less incident, and credential compromise and privilege escalation caused the largest impact.

The most common threats seen among businesses were phishing (72%), spyware (50%), ransomware (49%), and Trojans (47%). Phishing caused the greatest damage. Few respondents face zero-day threats; 76% said less than 10% of significant threats they faced were zero-days.

“Today’s threats predominately leverage the same old vulnerabilities and techniques,” said report author and SANS analyst Lee Neely in a statement. “The time is ripe to change our protections as well as remediation processes to stem the tide of successful threat vectors.”

Read more details here.

Dark Reading’s Quick Hits delivers a brief synopsis and summary of the significance of breaking news events. For more information from the original source of the news item, please follow the link provided in this article. View Full Bio

Article source: https://www.darkreading.com/vulnerabilities---threats/one-third-of-businesses-hit-with-malware-less-threats/d/d-id/1329573?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Man Who Hacked his Former Employer Gets 18-Month Prison Sentence

A Tennessee man also must pay restitution of nearly $172,400 to his former employer after hacking into its systems to gain an edge for his new company.

A federal court sentenced a Tennessee man to 18 months in prison and ordered him to pay $172,394 in restitution after he broke into a former employer’s network and copied emails to give his new company a competitive edge, according to the US Department of Justice.

Jason Needham, 45, was sentenced for breaching the computer networks and email of his former employer, Allen Hoshall. Needham, a co-owner of HNA Engineering, admitted to breaking into Allen Hoshall’s servers over a two-year period to download copies of rendered engineering schematics and access more than 100 PDF documents with information including his rival’s project proposals and budgetary documents.

The former Allen Hoshall employee also acknowledged he accessed his former colleague’s email account to glean information about the company’s project proposals, marketing plans, fee structures, and account credentials for Allen Hoshall’s internal document-sharing system. The information that Needham accessed carried an estimated worth of $500,000.

“We believe that computer crimes are serious and that pursuing and prosecuting violators in an ethical and responsible manner are important aspects of maintaining the safety and security of private, confidential information for everyone,” an Allen Hoshall spokesperson said. Needham pleaded guilty to the charges back in April.

Read more about the DOJ case here.


Article source: https://www.darkreading.com/threat-intelligence/man-who-hacked-his-former-employer-gets-18-month-prison-sentence/d/d-id/1329574?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Good guys and bad guys race against time over disclosing vulnerabilities

When a software vulnerability is discovered, especially by a nation state or government agency, that agency might choose to sit on that discovery, secretly hanging on to their findings in case the vulnerability can be used, secret weapon-style, at a convenient time of their choosing. But a new research paper recently examined how often vulnerabilities are independently discovered by researchers and found that time is not always on the side of whoever got there first.

Released by the Cyber Security Project at Harvard’s Belfer Center for Science and International Affairs, the paper examines how often vulnerabilities in Google Chrome, Mozilla Firefox, Google Android and OpenSSL were rediscovered over a span of several years up to 2016. In the dataset of more than 4,300 vulnerabilities, between 15% and 20% were rediscovered within the same year, with rediscovery as high as 23% for Android vulnerabilities in a single year.

In a narrow subset of cases, the same vulnerabilities are rediscovered over a short span of time many times over — for example, 6% of vulnerabilities for Android were rediscovered three times or more in just one year (2015-16), and 4% of Firefox and 2% of Chrome vulnerabilities were rediscovered more than twice between 2012 and 2016.

The paper also found that rediscovery tends to occur within months of the initial find. In Android’s case, 20% of rediscovered vulnerabilities were found again in the same month as the original discovery, and another 20% within the first three months.

This kind of gap might be a boon to defenders if they act quickly to mitigate the vulnerability — assuming mitigation or patching is even possible. On the flipside, this kind of lag can also mean that if someone with malicious intent discovers a vulnerability first and sells it on the black market, it could be several months before a “good guy” catches up.

Whichever set of events happens, there’s a trend that applies to all the data in this paper: the rate of vulnerability rediscovery is going up across the board. For example, 2% of Chrome vulnerabilities were rediscovered in 2009, whereas 19.1% were rediscovered in 2016.

There are a number of possible interpretations here: Perhaps we’re getting better at finding vulns, or more eyes are on the problem, or perhaps we’re getting better at sharing information more effectively. Or, if we’re feeling a bit cynical, perhaps there are just more vulnerabilities to be found as software matures.

It’s worth noting that this paper doesn’t make suppositions about vulnerabilities in the world at large, but only about its own dataset. It is entirely possible that the vulnerability rediscovery rate is much higher in actuality, simply because we don’t have the full picture of how quickly criminals make the same discoveries, and (right now at least) they’re not going to share that data.

Why does any of this matter?

The research here now assigns numbers to a long-held principle in security: when a vulnerability is discovered by a “good guy”, chances are someone out there with criminal intent already knew about it and is actively exploiting it. The paper argues that when we better understand the likelihood of a vulnerability’s rediscovery, we can apply more pressure to the vendor who “owns” the vulnerability to pay more attention to it and prioritize a fix. (This same principle can also work in motivating more vendors to support bug bounties.)

The opposite situation also applies: if a type of vulnerability has a higher chance of being rediscovered, and that next discovery is by a criminal actor who intends to prey upon the unpatched, it’s a greater motivation for a vendor to get that patch deployed.

From the paper:

Understanding the speed of rediscovery helps inform companies, showing how quickly a disclosed but unpatched bug could be rediscovered by a malicious party and used to assault the company’s software. This information should drive patch cycles to be more responsive to vulnerabilities with short rediscovery lag, while allowing more time for those where the lag is longer.

Vulnerability rediscovery rates are also a key variable in the debate over whether government agencies that stockpile vulnerabilities in secret should disclose them more often, especially in light of the NSA-held vulnerability data leaked by the Shadow Brokers earlier this year, which led to WannaCry and Petya. The potential rate of rediscovery is one of the variables those agencies need to weigh when judging whether a vulnerability they’ve discovered is likely to stay secret for long. Is it for everyone’s greater good to always disclose vulnerabilities as soon as possible?

The reality is that a number of the questions addressed in this paper have been around for a while, and while it assigns some valuable data to certain angles of the argument, the issues at hand are still up for debate, especially after the Shadow Brokers leak:

  • How much time does a vendor really have to fix a vulnerability found by a “good guy” before a “bad guy” makes the same discovery?
  • Are government agencies doing more harm than good to themselves and their citizens by seemingly hoarding vulnerabilities?
  • Are there still too many logistical barriers in place for security researchers to responsibly and easily share their vulnerability discoveries?

The paper is an interesting read for those looking for data around the lifecycle of vulnerabilities. Let us know what you think — has this research changed your mind about how organizations should share vulnerability information?


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/sBRwIyID5Kk/

Congress looks to take the wheel on autonomous vehicles

What is the US Congress doing to enable the upsides of the coming autonomous vehicle (AV) world and to protect everybody from the downsides?

Well, they’re discussing it. In some cases they’ve proposed legislation about it. What will actually get done and when is not yet clear.

What is clear is that so far, while cybersecurity and privacy are components of both discussion and legislation, the language is a long way from airtight, which could be a problem.

Autonomous vehicles (AVs), otherwise known as self-driving cars, are automatic targets. Given the massive amount of data collection and connectivity necessary to make such a system function, how could they not be?

The gleam in the AV industry’s eye is for hundreds of millions of “devices” to be collecting and sharing data – through V2V (vehicle to vehicle) communication – that will identify drivers and perhaps their passengers, track their location, speed, driving habits and more.

Besides the obvious privacy implications, that offers opportunities for hackers to take control of critical systems like brakes, steering, accelerator, locks etc, or demand a ransom to leave them alone. In short, AVs are a “target-rich environment”.

And, of course, multiple giants of both the auto industry and the internet – Ford, GM, Toyota, Google, Apple, Tesla, Uber, Lyft and more – are racing to get their models on the road.

So what is Congress doing?

The House Energy and Commerce Committee, in a rare display of bipartisanship late last week, unanimously approved the SELF DRIVE Act, which contains sections on both cybersecurity and privacy. The bill will now move to a vote in the full House.

Among other things, it would require manufacturers of any “highly automated vehicles” to have a cybersecurity plan that includes “a process for identifying, assessing, and mitigating reasonably foreseeable vulnerabilities from cyber attacks or unauthorized intrusions, including false and spurious messages and malicious vehicle control commands.”

Its privacy provisions track pretty closely to the “Privacy Principles” issued in 2014 by the Alliance of Automobile Manufacturers and the Association of Global Automakers, which call for vehicle owners to be given “clear, meaningful notice” about the collection and use of driver data; “certain choices” about how it is collected, used and shared; along with other provisions about data security, minimization and de-identification and retention.

And Senator Edward Markey (D-Mass.) introduced a bill in March titled the Security and Privacy in Your Car (SPY Car) Act that would direct the National Highway Traffic Safety Administration (NHTSA) to “conduct a rulemaking” to protect against “unauthorized access” to a vehicle’s electronic controls or driver data.

Elsewhere in the Senate, autonomous vehicle cybersecurity got some lip service prior to a hearing titled “Paving the Way for Self-Driving Vehicles” before the Senate Committee on Commerce, Science, and Transportation about six weeks ago.

Committee chairman Senator John Thune (R-S.D.), along with ranking minority member Bill Nelson (D-Fla.) plus Gary Peters (D-Mich.), issued a list of bipartisan “principles”, the last of which declared that “cybersecurity should be a top priority for manufacturers of self-driving vehicles and it must be an integral feature of self-driving vehicles from the very beginning of their development”.

But Thune, in his opening statement at the hearing, didn’t even mention cybersecurity or privacy, and the witnesses didn’t include a cybersecurity expert or a privacy advocate.

The only witness who even brought those topics up was John M Maddox, president and CEO of the American Center for Mobility.

There is no word yet on when the committee will file legislation – Thune’s office had not responded to questions at the time of this post.

But, as numerous experts note, whatever the intent, legislative language frequently leaves a lot of, uh, wiggle room.

Who is going to define “reasonably foreseeable vulnerabilities” mentioned in the SELF DRIVE Act; or “reasonable measures to protect against hacking attacks” in the SPY Car Act?

Lee Tien, senior staff attorney at the Electronic Frontier Foundation, said “reasonable” can depend on what levels of security and privacy are “practically possible”.

“Cars have lots of parts,” he said. “They’re not all made by Ford or GM or whomever. There’s a lot of assembling of parts – hardware, software, firmware – made by other companies. Who has vetted those parts and their code? Who knows, at a deep level, what that code does? How much modeling of how the systems work together has been done?”

And regarding privacy, he noted that the Markey bill has a huge exception for “driving data stored as part of the EDR [event data recorder] system or other safety systems onboard … that are required for post-incident investigations, emissions history checks, crash avoidance or mitigation, or other regulatory compliance programs”.

“That could swallow the rule, frankly,” he said.

Some experts say Congress should stay out of it – that this is a problem for the private sector to solve.

Gary McGraw, vice-president of security technology at Synopsys, contends that “the government won’t ever figure it out. They can’t figure out health care, so why would we think they can figure this out?”

He said manufacturers are “paying a lot more attention to it. They’re working on it.” And he said he has some faith in the market as well. “This [security] could be a real differentiator for customers,” he said.

Indeed, security appears to be more of a “top priority” for the industry than it does for Congress.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/nI-c3A51GSA/

HMS Queen Liz will arrive in Portsmouth soon, says MoD

New aircraft carrier HMS Queen Elizabeth could arrive at her home port, Portsmouth, within the next fortnight, according to the Ministry of Defence.

The 65,000-tonne warship, the first true aircraft carrier in Royal Navy service for almost a decade, is currently undergoing sea trials off the coast of Scotland.

While the ship is packed with automation and semi-automation technology, meaning her crew is less than half the size of a US aircraft carrier’s (though QE is about a third smaller than US supercarriers, in fairness), the manpower demands on the already overstretched Royal Navy have caused some observers to question whether she is really worth it.

In a statement issued by the MoD over the weekend, Defence Secretary Sir Michael Fallon said: “In just two weeks’ time, the most powerful warship ever built for Britain’s famous Royal Navy is set to sail into her proud new home in Portsmouth.”

The window for entry will open next Thursday (August 17) and close on August 22, with the ship definitely arriving between those two dates. As the warship can only approach Portsmouth at high tide thanks to her 11 metre draught (depth below the sea’s surface), and as public interest in her arrival boils over, keen naval gazers can expect her to arrive in daylight.

Though El Reg doesn’t have a subscription to any tide table websites, someone with enough time and patience could probably look up the daytime high tide times during the arrival window and pick the one which would offer the best light conditions for TV news and photographers. Combine that with the weather forecast for Portsmouth (you can’t bring the ship alongside if it’s foggy, as we found out when an American ship arrived to test out the new jetty for the carriers) and you’ll then have a better idea than most, outside the MoD, of when the ship will arrive.

HMS QE has been on sea trials for the past month or so. Aside from an issue with a propeller shaft, possibly caused by a fishing net becoming snagged in the prop blades, the trials appear to have been successful.

Meanwhile, Britain’s carrier battle group staff – the officers who will command QE and her sister ship, HMS Prince of Wales, on active deployments – have been testing their skills with the USS George H W Bush, which is currently sailing around near Scotland as well. Ship spotters hope for a photo opportunity with the two carriers, though the MoD has been tight-lipped over whether this will happen.

She will carry F-35B fighter jets bought from America. These, we are told, will arrive next year. So far the UK has ten jets and around a hundred personnel in training over in the US. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/08/07/hms_queen_elizabeth_portsmouth_arrival_window/

Risky Business: Why Enterprises Can’t Abdicate Cloud Security

It’s imperative for public and private sector organizations to recognize the essential truth that governance of data entrusted to them cannot be relinquished, regardless of where the data is maintained.

The recent reports that Verizon and Dow Jones left their respective Amazon Web Services platforms exposed to unauthorized access should serve as a wakeup call to all who are engaged in cloud computing. At the outset, it is important to note that the two incidents were inadvertently caused by individuals acting on behalf of these companies, and were not the result of an AWS vulnerability.

The technology community continues to perpetuate the falsehood that migration to the cloud solves both capacity and security issues. This is a dangerously misguided perspective that subjects enterprises that subscribe to it to significant risk. Unfortunately, many executives are led to believe that by transferring core processes to a cloud environment, the responsibility for securing the data residing therein is also transferred to the hosting third party. This is simply untrue. 

It’s imperative for public and private sector organizations to recognize that governance of the data entrusted to them cannot be abdicated, regardless of where the data is maintained. While outsourcing may assist with scale and resource limitations, management must exercise ongoing and adequate oversight of information maintained within cloud environments.

Concurrent with every cloud deployment, specific controls should be implemented. The following is a list of three necessary controls, and a brief description.

1. Employee Education
Prior to providing anyone with access to an organization’s cloud platform, each individual should receive comprehensive, formal direction on acceptable use. This requirement should apply to all employees, contractors and vendors. Without clearly written guidance, and corresponding awareness training, individuals will behave arbitrarily when accessing cloud applications. An informed employee is management’s responsibility.

2. Vendor Oversight 
As with all service providers, ongoing oversight of cloud vendors should be a core element of an organization’s security and compliance strategy. Through a combination of on-site visits, third party risk assessments, compliance attestations and contractual provisions, management must continually verify that the hosted environment is protected from internal and external threats. 

3. System Monitoring
With increased frequency, organizations are electing to maintain customer information and intellectual property in the cloud. To protect against unauthorized access of this information, and to comply with related compliance mandates, it is essential that all accesses be continuously logged, monitored and analyzed.  Cloud database vendors have recently implemented robust auditing and alerting tools within their hosted applications. The functionality to perform real time monitoring of cloud data is now available; organizations must therefore devote the necessary resources to do so.

Unauthorized access of customer and proprietary information has emerged as a crisis affecting all industries. Any organization perceived to be unable to protect the sensitive data entrusted to it, regardless of where that data is maintained, will undoubtedly experience the consequences that accompany an apprehensive public.


John Moynihan, CGEIT, CRISC, is President of Minuteman Governance, a Massachusetts cybersecurity consultancy that provides services to public and private sector clients throughout the United States. Prior to founding this firm, he was CISO at the Massachusetts Department of … View Full Bio

Article source: https://www.darkreading.com/vulnerabilities---threats/risky-business-why-enterprises-cant-abdicate-cloud-security/a/d-id/1329510?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

DJI drones: ‘Cyber vulnerabilities’ prompt blanket US Army ban

The US Army has issued a global order banning its units from using drones made by Chinese firm DJI, citing “cyber vulnerabilities”.

The memorandum, issued by the US Army’s Lieutenant General Joseph Anderson, orders all US Army units with DJI products to immediately stop using them.

“Due to increased awareness of cyber vulnerabilities associated with DJI products, it is directed that the US Army halt use of all DJI products,” the memo read.

In the memo, soldiers are also ordered to remove all batteries and storage media from their DJI drones and await further instructions.

DJI told The Register: “We are surprised and disappointed to read reports of the US Army’s unprompted restriction on DJI drones as we were not consulted during their decision. We are happy to work directly with any organization, including the US Army, that has concerns about our management of cyber issues.”

The firm’s spokesman added: “We’ll be reaching out to the US Army to confirm the memo and to understand what is specifically meant by ‘cyber vulnerabilities’.”

Drone blog sUAS News posted the text of the memo earlier this morning, along with a screenshot of what it says is the original document. sUAS News’ Gary Mortimer vouched for the memo’s authenticity to El Reg but declined to say how it had found its way to him.

The US Army told us late on Friday evening: “We can confirm that guidance was issued; however, we are currently reviewing the guidance and cannot comment further at this time.”

Rumours that such a move was on the cards have been swirling for a while.

Bad news for DJI – and for governmental users around the world

Security concerns have been looming over DJI – Da-Jiang Innovation Corporation – and its products for a while. The company’s background, as its full name suggests, is 100 per cent Chinese and it is headquartered in Shenzhen, south-east China.

In April 2016 news went round the world that DJI drones were quietly beaming data back to Chinese state authorities, via DJI’s proprietary controller app. That data included aircraft telemetry and GPS location data.

All new users of DJI drones must register with the company, meaning it is trivial for it to identify users and what their likely uses of the drones are. The company appears to be co-operating with the US government already, judging by its imposition of no-fly zones in Iraq and Syria during a US-backed military offensive. Irritated hackers later modified DJI’s firmware to allow flights outside of these no-fly zones, bypassing software-imposed performance limitations.

That the US Army would ban use of all DJI products across its 1.4 million personnel is surprising. More or less all modern consumer-grade technology is insecure, to a lesser or greater extent. Nonetheless, ease of use, a relatively low price point (something DJI prides itself on, to the point that nascent US rival 3D Robotics found itself unable to compete with DJI in the drone hardware market) and availability tend to trump security concerns.

This happens in particular at cash-strapped state agencies looking for a cheap and easy way to replace expensive capabilities – such as Devon and Cornwall Police raising a drone surveillance unit as an alternative to deploying the force helicopter at a cost of thousands of pounds. ®

Update

A DJI spokesman from America also got in touch with us late on Friday evening (3 August) to dispute our assertion that DJI beams sensitive data back to China. He pointed us to a statement issued by the firm last year, which says: “Some recent news stories have claimed DJI routinely shares customer information and drone video with authorities in China, where DJI is headquartered. This is false. A junior DJI staffer misspoke during an impromptu interview with reporters.”

We understand that DJI does demand a name and email address upon registration but doesn’t verify these things. We also understand, from sources outside the company, that certain parts of DJI’s code suggest some kind of remote phoning-home capability has been written in, if not yet activated or used.


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/08/04/apparent_us_army_memo_bans_dji_drones/

Send mixed messages: Mozilla wants you to try its encrypted file sharing

Mozilla has just rolled out an experimental service called Send that allows users to make an encrypted copy of a local file, store it on a remote server, and share it with a single recipient.

And once shared, the encrypted data gets deleted from the server.

Send addresses a long-standing problem: sending a large file via email. Email services have long limited the size of attachments – Gmail, for example, caps emailed files at 25MB – and large providers like Apple and Google have begun using adjacent services such as iCloud and Drive to offload uploads.

Nonetheless, Send offers an alternative method of transit for files of 1GB or less, backed by encryption and an exceedingly simple interface.

Send is offered through Mozilla’s Test Pilot program for previewing experimental features in the company’s Firefox browser. However, it is supposed to work with any modern browser.

Send relies on Node.js code backed by a Redis database running on Amazon Web Services. Upon selecting a local file, Mozilla’s software encrypts the file client-side, uploads it to AWS, and generates a URL containing the encryption key, which can be shared with the intended recipient of the file.

“Each link created by Send will expire after one download or 24 hours, and all sent files will be automatically deleted from the Send server,” Mozilla explains in a blog post.

Send relies on the Web Cryptography JavaScript API with the AES-GCM algorithm for client side encryption and decryption.

Asked whether Mozilla would be able to unlock a stored file upon receipt of a lawful warrant, a spokesperson said the company is unable to do so.

‘Mozilla never receives the key’

“With Send, files uploaded by users cannot be accessed by Mozilla,” a spokesperson explained in an email to The Register. “A ‘fragment’ in the URL (the part after the ‘#’) contains the generated key so a user can share it with others, but these fragments are not sent to the server when requests are made, so Mozilla never receives the key.”
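The fragment trick is easy to demonstrate with a short sketch (the URL below is hypothetical – the real Send link format may differ): when a URL is split into its components, the part after the “#” is a separate piece that browsers never include in the HTTP request, so it can carry the key client-side.

```python
from urllib.parse import urlsplit

# A hypothetical Send-style share link: the decryption key lives in the fragment.
share_url = "https://send.example.org/download/abc123/#BrysYT5xGqkHSdTXvM3mSg"

parts = urlsplit(share_url)

# What the browser sends to the server: the path (plus host and scheme).
request_target = parts.path  # "/download/abc123/"

# What stays on the client: the fragment, i.e. the key.
client_side_key = parts.fragment  # "BrysYT5xGqkHSdTXvM3mSg"
```

Because the fragment never leaves the client, the server stores only ciphertext it cannot decrypt – which is the property Mozilla is describing.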

While this may be a reasonably secure arrangement, it’s far from perfect. AWS might be able to recover a deleted file, or be forced to retain files, given sufficient motivation, and the key might be recoverable from log files or from the messaging service used to send it.

Such scenarios aside, there’s still room for privacy improvements. Mozilla has acknowledged that it sends the file name in plain text, along with other data like file size that the company deems useful for evaluating its service.

But as pointed out in a GitHub Issues post about the source code, the current version of Send also transmits the shared file’s SHA256 hash in plaintext, which could be used to identify the file.

In response, Mozilla engineer Danny Coates said Send’s privacy language has been revised to reflect the file hash handling, and that a code update planned for next week will remove the hash logging.

“With the current functionality of the site it isn’t strictly necessary to send the file hash in plain text, however we want to be able to test features that require the hash of the file,” Coates said. “One specifically is to check uploads against a malware database.”

It might also be worthwhile to check for hashes associated with known unlawful images and videos.

Encrypting the file name remains an open issue. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/08/05/mozilla_tests_send_for_disappearing_file_sharing/

WannaCry crooks cash out their ransom

When the WannaCry malware came out, it had two major functions: it was a worm, so it spread from computer to computer automatically, and it was ransomware.

The fee it demanded was typically $300, converted into Bitcoin (BTC) and sent to one of several bitcoin addresses.

Bitcoin is only sort-of anonymous – pseudonymous, really: a bitcoin address doesn’t include your name, an account number, or any other PII (personally identifiable information).

But the amount of money attached to a bitcoin address is a matter of public record – indeed, the Bitcoin transaction ledger, or blockchain, has to be public to ensure that there is an unmodifiable history to stop anyone claiming bitcoins they don’t own, or spending the same bitcoin twice.

In other words, once a bitcoin address is connected to a specific event, such as a ransomware outbreak, anyone can track how much money is coming in and going out, even though the account holder is unknown.
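That tracking requires no special access, just arithmetic over the public record. A toy illustration – every value and address below is invented; real analysis reads the actual blockchain, where transactions have inputs and outputs rather than simple from/to pairs:

```javascript
// Toy public ledger: because every transaction is visible, anyone can total
// the flows touching an address of interest without knowing who owns it.
const ledger = [
  { from: 'victim-1',    to: 'ransom-addr', btc: 0.13 },
  { from: 'victim-2',    to: 'ransom-addr', btc: 0.13 },
  { from: 'ransom-addr', to: 'exchange',    btc: 0.26 },
];

// Net balance of an address: incoming minus outgoing.
function balance(addr) {
  return ledger.reduce((sum, tx) =>
    sum + (tx.to === addr ? tx.btc : 0) - (tx.from === addr ? tx.btc : 0), 0);
}
```

Watching the ransom address’s balance drain to zero is exactly how observers spotted the WannaCry withdrawals.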

To the likely surprise of the crooks, most WannaCry victims refused to pay, so the crooks’ bitcoin wallets ended up plump but not bulging, topping out at about $150,000 by the end of the malware outbreak.

After the malware died down, the crooks left those bitcoins alone, perhaps fearing the attention that withdrawals from the tainted wallets might attract.

Until… a Twitter account that was keeping an eye on the WannaCry revenue reported a series of withdrawals leaving the balance at $0.

What next?

We don’t know, and we might never find out the who or why if the withdrawals are successfully laundered.

In the case of bitcoin this is typically achieved using a so-called “tumbler” service.

For a fee, tumblers shunt bitcoins through a random sequence of accounts, rather like Tor shunts your network traffic through a random set of computers to disguise what’s really going on.

Criminals use tumblers because, if law enforcement can link a wallet known to have been involved in a crime to another online action that reveals a sliver of the owner’s PII, investigators have a chance of unmasking the crooks – tumbling is meant to break that link.

Journalist Patrick O’Neill of CyberScoop is reporting that rather than being tumbled, the ill-gotten bitcoins have been converted into another cryptocurrency, Monero, on the ShapeShift.io exchange.

Unlike Bitcoin, Monero keeps the sending address, receiving address and amount of each transaction secret.

O’Neill reports that the exchange has now blocked the addresses used by WannaCry and is “engaging and assisting law enforcement”.

Curiouser and curiouser, said Alice.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/jrQOPSOJpoU/