STE WILLIAMS

Bug that deleted $300m could have been fixed months ago

All of you unfortunate holders of frozen ether, there’s no sign of a thaw anytime soon… sorry.

It wasn’t phrased that way, of course, but that was probably the most significant takeaway for holders of the cryptocurrency that uses the Ethereum blockchain after a lengthy “postmortem” issued on Wednesday by digital wallet company Parity Technologies Ltd.

This comes a little over a week after somewhere between $160m and $300m was frozen thanks to a user exploiting a bug in the Parity Wallet library contract.

While the post said Parity “deeply regrets the situation” and is “working hard to explore all feasible solutions,” it gave no date for when any such solution might arrive:

There is no timeline for when such an improvement proposal could be implemented; we will follow the will of the community and go through the regular EIP (Ethereum Improvement Proposals) process like any other protocol improvement.

But perhaps an even more important piece of bad news was that the freeze shouldn’t have happened at all. As Parity acknowledged, a Github contributor called “3esmit” warned it about the flaw in August 2017 and recommended a code change.

The company said at the time that it considered it only as a “convenience enhancement,” and not an exploitable bug.

Because it interpreted the recommendation as an enhancement, Parity planned to deploy the changed code in a regular update at some future point in time.

Obviously that future point in time had not arrived by 6 November, when a user identified only as “devops199” discovered what he apparently thought was a multi-sig Ethereum wallet (requiring more than one owner to “sign” a transaction before it can proceed) and took ownership of it by calling a function known as initWallet.

Turns out, devops199 had actually become the owner of a code library for Parity multi-sig wallets. And then he decided to kill, or “suicide,” it. As Parity put it:

Subsequently, the user destructed this component. Since Parity multi-signature wallets depend on this component, this action blocked funds in 587 wallets holding a total amount of 513,774.16 Ether as well as additional tokens.

Which, as Jordan Pearson of Motherboard observed after reading the postmortem, “doesn’t look good.” At midweek, ether was worth $330.50.
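To make the mechanics concrete, here is a toy Python model of the flaw (the real code is a Solidity contract; the class and method names here are illustrative only). The deployed library was never initialized, so its initWallet-style function accepted any caller as the new owner, and the owner could then trigger the self-destruct the library still carried:

```python
# Toy model of the Parity library flaw -- NOT the actual Solidity code.
class WalletLibrary:
    def __init__(self):
        self.owner = None    # the deployed library was never initialized
        self.dead = False

    def init_wallet(self, caller):
        # The flaw: no check that the contract has already been initialized,
        # so any caller can claim ownership.
        self.owner = caller

    def kill(self, caller):
        # Owner-only in theory, but ownership was up for grabs.
        if caller != self.owner:
            raise PermissionError("only the owner may kill the contract")
        self.dead = True     # stands in for Solidity's selfdestruct

lib = WalletLibrary()
lib.init_wallet("devops199")   # anyone could become the owner...
lib.kill("devops199")          # ...and destroy the shared library
```

Roughly speaking, the change 3esmit recommended amounted to refusing initialization once the contract already had an owner; with that one guard missing, every wallet depending on the library broke when it died.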

At the time of the freeze, users were both angry and aghast. Pearson noted that a commenter called “1up8912” on the Ethereum subreddit wrote:

I know it is easy to be smart in hindsight, but these are huge design errors, I can’t comprehend how could this pass reviews in the architecture phase.

According to Parity, the original “Foundation” multi-sig wallet code was “created and audited by the Ethereum Foundation’s DEV team, Parity Technologies and others in the community,” but was later restructured into two library contracts.

One of those, a “smart contract, containing the majority of the wallet’s logic” contained a fatal flaw:

In an attempt to stay as close as possible to the original audited smart contract, as few changes as possible were made to derive the library contract. This, however, meant that the library contract had the same functionality as a regular wallet and required initialization. It therefore also still contained the original self-destruct function that is designed for retiring the wallet.

But after a hack on 19 July 2017, in which hackers looted $32m in ether from multi-sig wallets, the library contract was “fixed and redeployed” the next day. It is the wallets created after that day, with the flaw noted by “3esmit”, that are affected.

Supposedly the whole thing was an accident. Devops199 posted on Github under the heading, “anyone can kill your contract” that he had killed the library inadvertently, and then later in a tweet that, “I’m eth newbie .. just learning.”

That didn’t fly with Kosta Popov, founder of Cappasity, who had about $1m in a now-frozen Ethereum multi-sig wallet. The Register reported he believes it was “deliberate and fraudulent.” In a statement on his company website, Popov wrote:

Our internal investigation has demonstrated that the actions on the part of devops199 were deliberate. When you are tracking all their transactions, you realize that they were deliberate… Therefore … we suppose this was a deliberate hacking.

While Popov also wrote that “contacting law enforcement might be the right next step,” so far, there doesn’t seem to be much interest, at least in public, from Parity in tracking down devops199. The company did not respond to an email seeking comment.

In a “locking-the-door-after-the-horse-has-bolted” move, Parity said it is:

…removing the ability to deploy multi-sig wallets until we feel we have the correct security and operations procedures in place so that we can be confident this will not happen again.

Even more important than that, though, is what Parity intends to do to fix the broken processes that led to the buggy code being deployed, and the vulnerability report being misunderstood.

On that, Parity says it is:

…commissioning another full-stack external security audit of all existing sensitive code including secret management, key generation and password management, signing and auto-updating.

We will be putting significant efforts and resources into reviewing our processes and procedures internally and have a team specifically dedicated to operational security. This team will be expanded as necessary and we will have resources at its disposal. The team will be tasked with reviewing and maintaining critical parts of Parity Technologies’ offering.

Which must be some relief – to those whose currency didn’t get frozen.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/OG5iwwBxOyQ/

Lloyds’ Avios Reward credit cardholders report fraudulent activity

Thousands of Lloyds Avios Rewards American Express credit card customers have been targeted by fraudsters, the bank has admitted.

Reports first emerged on air miles site Head for Points, where readers asked if the credit card had suffered a major data breach.

One said: “About a week ago my wife’s Lloyds Avios Amex card was used fraudulently by someone over in New York for a few different things so we called Lloyds to talk about this and get the card cancelled and a replacement sent out.”

After contacting Lloyds, he said the bank informed him it was getting thousands of calls a day and was seeing a lot of fraud on Amex cards.

Another said: “Same for me – queued for 45 mins on Saturday afternoon to speak to the fraud team after my card was declined – there was an attempted US transaction on there. And spoke to a colleague this week with the Lloyds Avios Amex whose card had also stopped working. There’s clearly been a massive leak somewhere…”

A Lloyds spokeswoman said: “A very small number of Lloyds Bank Avios Rewards American Express credit card customers have been affected by recent fraudulent activity. This has affected less than one percent of customers who hold these cards and we have introduced additional controls to provide further protection.

“These controls have been successful in ensuring that fraudulent transactions are identified and declined. We apologise to customers for any inconvenience caused. Impacted customers will receive a full refund of monies that have been taken fraudulently.”

Earlier this week, customers of Lloyds Banking Group and TSB were shut out of their online banking – for the second time this month.

At the start of the year, the UK-based group fell victim to a DDoS that led to a two-day outage. Several more glitches followed throughout the year. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/11/17/lloyds_customers_affected_by_data_breach/

We’re Still Not Ready for GDPR? What is Wrong With Us?

The canary in the coalmine died 12 years ago, the law went into effect 19 months ago, but many organizations still won’t be ready for the new privacy regulations when enforcement begins in May.

If you’ve been comforting yourself with the thought “I’m sure there will be a grace period for the General Data Protection Regulation,” think again, pal, because this is the grace period, and it’s almost over. On May 25, 2018, enforcement actions for GDPR begin; many if not most of us aren’t ready, and we really have no good excuse.

Two out of every five respondents to a new survey released last week by Thales stated that they don’t believe they’ll be fully prepared for GDPR when enforcement actions kick in: specifically, 38% of respondents in the UK, 44% in Germany, and 35% in the US. Other recent surveys turn up similar results. Aside from the fact that GDPR officially went into effect in 2016, why are this privacy law and the controls it requires coming as such a surprise? We should have seen this coming from 10 miles and 12 years away.

The Canary in the Coalmine: ChoicePoint

My first warning came one month after I started covering cybersecurity: the ChoicePoint breach, which occurred in 2004 but wasn’t revealed until February 2005. 

The personally identifiable information — including name, address, and Social Security number — of 163,000 people was exposed when data broker ChoicePoint (since purchased by Lexis Nexis) sold it to phony businesses set up by an alleged crime ring. Roughly 800 people became victims of identity theft as a result of the incident. ChoicePoint first only notified affected individuals covered under California’s young data breach notification law; then later informed victims in other, yet-uncovered states. The Federal Trade Commission fined the company $10 million, plus an additional $5 million to establish a fund for victims.

ChoicePoint was big news precisely because nobody knew who ChoicePoint was. The individuals receiving breach notification letters and suffering from identity theft were not ChoicePoint customers. The company was not a household name. Most people were not aware that companies like ChoicePoint even existed.

The incident showed that, in America at least, individuals do not own their personal data; they don’t even hold a penny stock in it. It showed that the organizations that own, buy, and sell that data might do a lousy job of securing it – even when they market themselves as security service providers. 

It wasn’t a hack that caused the breach; it was bad business. But the company’s chief information security officer suffered a great deal of public criticism just the same, including from those within his own industry. CISOs everywhere agreed “let’s make sure we’re not the next ChoicePoint!” And then every company decided to become the next ChoicePoint.

Big Data Revolution  

Now, every company wants to know everything about everyone, everywhere, all the time. Suggest to a marketing or sales person that their company might succeed without that information and they break out in hives and look for the men in white coats to take you away. Nearly every organization now is a steward of some form of sensitive info.

So, developers made it as easy as possible for people to hand over their personal data: auto-checked “accept” boxes, auto-fill forms, “share on social media” buttons. New business models and job titles centered around getting customers to buy services with data, not cash.

Cybersecurity people knew this could cause trouble. And it did. What we should have known is that the trouble would eventually lead to a reckoning. Every PII breach was a warning. Malvertising was a warning. The plummeting price of credit card numbers on the black market was a warning. Every free cloud IT service, every targeted ad, every one of Facebook Messenger’s incessant requests to turn on notifications was a warning. Every time you read the news and wondered “why wasn’t that patched,” “why wasn’t that encrypted,” “why was that connected to the internet,” “why wasn’t that disposed of correctly,” “why would they even collect that,” was a warning. 

There was a disturbance in The Force. Eventually something had to give. 

GDPR: A New Hope

Bigger, badder data privacy law would have to come, surely. And not just “tattle-tale” data breach notification laws or checklist-happy “set-it-and-forget-it” style regulations coming out of the industry. Eventually the world would call for a law that might genuinely be inspired by the idea that people have a right to privacy.

And of course, if such an idea were to arise, it was going to come from Europe. The Swedes passed their first data protection law in 1973. The European Union issued the GDPR’s predecessor, the EU Data Protection Directive, in 1995. Discussions for a replacement started in 2009; GDPR was proposed in 2012, approved in 2014, and officially adopted in 2016.

Dark Reading has been writing about GDPR for years, but readers didn’t take much notice until the Equifax breach.

So we’ve had at least five years to prepare for a law that went into effect 19 months ago, and we’re still not ready. Worse yet, we don’t particularly want to be. 

According to the Thales report: “Interestingly, while around one in five (22% in the UK, 24% in Germany, 20% in the US) believe the GDPR will lead to fewer data breaches, a significantly higher proportion (32% in the UK, 31% in Germany, 49% in the US) are concerned that its implementation will actually result in an increased number of breaches.”

Half of the Americans surveyed think GDPR will increase the number of breaches.

What a cop-out. 

True, cybersecurity professionals do have legitimate gripes when it comes to privacy/security regulations: Bean-counters who approve budgets may only grant enough beans to achieve compliance, not leaving enough to achieve real security. Regulations may be overly prescriptive with the controls they require, preventing organizations from deploying newer, better security solutions. Compliance efforts prevent organizations from using a risk-based approach to security.

Legitimate. But they don’t hold water when it comes to a couple of realities about GDPR.

For one thing, what organizations that use a risk-based approach to data privacy don’t understand is that it’s not their privacy they’re putting at risk. An organization can manage the financial fallout of a data breach with effective incident response, good PR and cyber insurance. An individual, however, can’t get a job because a background check ordered from a ChoicePoint-lookalike turned up an awful credit score that makes management think “irresponsible” and infosecurity think “insider threat.” An individual can’t sleep because their crazy ex-partner now knows where their kids go to school. An individual encounters any number of other complications because the religion they practice, the medicines they take, and the mistakes they made are all publicly available to be manipulated. The industry-created standards, oopsy-daisy notification laws, and risk-based enterprise security management strategies we already have don’t take any of that into account.

GDPR isn’t all that prescriptive on cybersecurity controls, either. Many of the things it does ask for are reasonable and are the same kinds of things that security people have been asking for all along: better data inventory, better data destruction, better monitoring, regular vulnerability testing, the principle of least privilege, encryption where necessary, and applications that aren’t full of holes.

We groan and mock developers for writing insecure applications that you have to fix later. Well, GDPR says that applications need to be “secure by design.” Now developers have to listen to you. The inventory and destruction mandates should help wipe out your “shadow IT” problem. The security of data processing section of the legislation, where it mentions encryption and pseudonymization, even includes nice flexible risk-friendly language like “implement appropriate technical and organisational measures to ensure a level of security appropriate to the risk.” Sure, the 72-hour breach notification requirement is a bit scary. “The right to be forgotten” is a bit scary. But that’s only because so many of us have been doing a terrible job to this point.
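On the pseudonymization point: the regulation names the technique without mandating any particular mechanism. One common approach is a keyed hash, sketched here in Python (the helper name, token length, and key handling are illustrative assumptions, not anything GDPR prescribes):

```python
import hashlib
import hmac

def pseudonymize(value: str, key: bytes) -> str:
    # Keyed hash: the same input always maps to the same token, but the
    # mapping cannot be reversed without the key, which is stored
    # separately from the pseudonymized data.
    return hmac.new(key, value.encode("utf-8"), hashlib.sha256).hexdigest()[:16]

token = pseudonymize("alice@example.com", b"secret-key-kept-elsewhere")
print(token)  # a stable 16-hex-character token standing in for the email address
```

Records keyed by such tokens can still be joined and analyzed, but a breach of the data alone no longer directly exposes the identifiers.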

Businesses are not teenagers living in messy bedrooms, driving recklessly. A business should not hide it from Dad when we lose his credit card or get a parking ticket with his car, lose the ticket, neglect to pay the ticket, and not bother to address it until Dad’s gotten nicked for driving with a suspended license. (Sorry, Dad.) When other people give us their stuff, we should know we have it, know where we put it, be able to return it when they want it back and at the very least tell them when we lost it, broke it, gave it to someone else or let it get stolen.

GDPR simply codifies the fact that “personally identifiable information” is someone else’s stuff, and should be treated accordingly. If legislation like this actually increases the number of breaches that we have, then we’re doing something wrong. If potential penalties of 20 million euros or 4% of our global annual revenue, whichever is higher, don’t help us obtain better budgets, then we’re doing something wrong. And if, after Equifax and all the other data breaches that don’t get covered because there aren’t enough employed reporters in the world, we still don’t think GDPR is necessary, then we’re doing something wrong. 

Related Content:

Join Dark Reading LIVE for two days of practical cyber defense discussions. Learn from the industry’s most knowledgeable IT security experts. Check out the INsecurity agenda here.

Sara Peters is Senior Editor at Dark Reading and formerly the editor-in-chief of Enterprise Efficiency. Prior to that she was senior editor for the Computer Security Institute, writing and speaking about virtualization, identity management, cybersecurity law, and a myriad … View Full Bio

Article source: https://www.darkreading.com/risk/were-still-not-ready-for-gdpr-what-is-wrong-with-us-/a/d-id/1330422?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

IBM, Nonprofits Team Up in New Free DNS Service

Quad9 blocks malicious sites used in phishing, other nefarious activity.

A new nonprofit has launched a free Domain Name System (DNS) service that filters malicious domains linked to botnets, phishing campaigns, and other malicious activity.

The new Quad9 DNS service, built by IBM Security, Packet Clearing House, and the Global Cyber Alliance, is aimed at consumers and small- to midsized businesses, and doesn’t share or resell user DNS lookup information to advertisers.

“Ninety to 95% of threats and major intrusions come by way of DNS,” says Philip Reitinger, president and CEO of the Global Cyber Alliance and former deputy undersecretary for the National Protection and Programs Directorate at the US Department of Homeland Security. Quad9 blocks phishing sites that are flagged as malicious, he notes.

DNS attacks can be insidious for consumers as well as businesses. Three out of 10 companies say they’ve been hit with cyberattacks on their DNS infrastructure, and 93% of those suffered downtime due to the attack, according to a recent study by Dimensional Research on behalf of Infoblox. And that’s just the organizations that actually detected their DNS was hit; experts believe the actual number of DNS attacks is higher because many organizations never notice.

Quad9 isn’t the first free DNS service, however. OpenDNS, now part of Cisco Systems as OpenDNS Home, was one of the first such services to filter malicious DNS traffic, and Google offers its 8.8.8.8 resolver.

DNS pioneer Paul Vixie, CEO and founder of DNS security firm FarSight Security, notes that there are actually hundreds of freebie DNS services, and not all are created equal. “There are hundreds of DNS service providers offering free service, but the only ones I’m sure I would trust to see my DNS lookups are Google’s 8.8.8.8 and Cisco Umbrella’s OpenDNS. And now, add DCA/PCH/IBM’s Quad9 to that list, because their credentials are also quite strong.”

While Google’s 8.8.8.8 does not filter DNS responses, Cisco Umbrella’s OpenDNS and Quad9 filter “any known-dangerous DNS data that could otherwise lead to a malware infection or worse,” he notes.

Vixie says networks, including home networks, should opt for a DNS service with DNS filtering as a defense from network threats. While many ISPs offer this service, if they also mine customer DNS queries, it’s a privacy tradeoff, he notes.

The key is to vet a DNS service’s privacy policy. “Even if they publish a ‘privacy policy,’ they might not be following it. You need solid reason to trust a DNS service’s credentials,” he says.

How Quad9 Works

Setting up the Quad9 service entails reconfiguring the DNS setting on networked devices to 9.9.9.9. When a user types a URL into his or her browser, or clicks on a website, the service checks it against IBM X-Force’s threat intelligence database, as well as nearly 20 other threat intelligence feeds, including Abuse.ch, the Anti-Phishing Working Group, F-Secure, Proofpoint, and RiskIQ.
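Pointing a device at Quad9 is just a resolver-address change; under the hood, every lookup is an ordinary DNS query sent to 9.9.9.9. As a rough sketch of what goes over the wire (the hostname and transaction ID below are arbitrary examples, not anything Quad9-specific), here is the A-record query packet a stub resolver would send to 9.9.9.9 on UDP port 53:

```python
import struct

def build_dns_query(hostname, qtype=1, txid=0x1234):
    # Header: ID, flags (recursion desired), QDCOUNT=1,
    # zero answer/authority/additional records
    header = struct.pack(">HHHHHH", txid, 0x0100, 1, 0, 0, 0)
    # QNAME: length-prefixed labels terminated by a zero byte
    qname = b"".join(bytes([len(label)]) + label.encode("ascii")
                     for label in hostname.split(".")) + b"\x00"
    question = qname + struct.pack(">HH", qtype, 1)  # QTYPE=A, QCLASS=IN
    return header + question

packet = build_dns_query("example.com")
# Sending it is one line:
#   socket.socket(socket.AF_INET, socket.SOCK_DGRAM).sendto(packet, ("9.9.9.9", 53))
```

A filtering resolver like Quad9 answers this query normally for clean domains and refuses to resolve domains on its blocklists, which is what cuts the malicious connection off before it starts.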

John Todd, executive director of Quad9 and a former senior technologist at Packet Clearing House, says consumers are the initial target for the new service, especially for their Internet of Things devices.

The DNS filtering service would block an IoT device from becoming a bot in a botnet such as Mirai, for example, notes Paul Griswold, director of strategy and product management for IBM X-Force. “The best way to protect them [IoT devices] is through the network layer and through the DNS. So if an IoT device gets infected like they did with Mirai, it would cut off those [DNS] requests then they try to join a botnet.”


Kelly Jackson Higgins is Executive Editor at DarkReading.com. She is an award-winning veteran technology and business journalist with more than two decades of experience in reporting and editing for various publications, including Network Computing, Secure Enterprise … View Full Bio

Article source: https://www.darkreading.com/analytics/ibm-nonprofits-team-up-in-new-free-dns-service/d/d-id/1330454?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Fake news ‘as a service’ booming among cybercrooks

Criminals are exploiting “fake news” for commercial gain, according to new research.

Fake news is widely assumed to be political or ideological propaganda published to sway public opinion, but new research conducted by threat intel firm Digital Shadows and released on Thursday suggested fake news generation services are now aimed at causing financial and reputational damage for companies through disinformation campaigns.

The firm’s research stated that these services are often associated with “pump and dump” scams: schemes that aggressively promote penny stocks to inflate their prices before the inevitable crash and burn. Scammers buy low, hope that their promotions let them sell high, then flee with their loot and little regard for other investors.

A cryptocurrency variant of the same scheme has evolved. It involves gradually purchasing major shares in altcoins (cryptocurrencies other than Bitcoin) and drumming up interest in the coin through posts on social media. The scammers then trade these coins between multiple accounts, driving the price up, before selling to unsuspecting traders on currency exchanges looking to buy while prices are still rising.

An analysis of the Bitcoin wallet of one such popular “Pump and Dump” service found that it had received the equivalent of $326,000 from aspiring criminals in less than two months, Digital Shadows’ research stated.

Fake news techniques

Disinformation campaign taxonomy [source: Digital Shadows blog post]

Digital Shadows also identified more than ten services that allow users to download software that controls the activities of social media bots. One such service offers users a trial for just US$7.

Other tools claim to promote content across hundreds of thousands of platforms, including forums, blogs and bulletin boards. They supposedly work by controlling large numbers of bots to post on specific types of forums on different topics.

Mentions of these sites and services across criminal forums have increased fourfold in just two years, from 418 in 2015 to 1,381 so far in 2017, Digital Shadows reported. The company opined that things are only likely to get worse:

The battle against fake news could be getting even more difficult with advertisements for toolkits increasingly claiming to include built in features that bypass captcha methods, which were initially brought in to prevent bots and automated scripts from posting advertisements indiscriminately across these platforms.

Unsurprisingly, media organisations are a frequent target for purveyors of fake news. Digital Shadows analysed the top 40 global news websites and checked over 85,000 possible variations on their domain. In doing so, it discovered some 2,858 live spoof domains.

Simply by altering characters in a domain (e.g. an “m” may be changed to an “rn”) and by using cloning services, it is possible to create a convincing fake of a legitimate news site. Miscreants then link to and otherwise promote fake stories at these bogus sites for their own nefarious ends.
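A minimal Python sketch of that character-substitution trick (the substitution table here is a tiny, made-up sample; real typosquat generators cover far more confusable pairs and also vary TLDs):

```python
# Visually confusable substitutions, e.g. the "m" -> "rn" trick the
# article describes. Illustrative sample only.
SUBSTITUTIONS = {"m": ["rn"], "l": ["1"], "o": ["0"], "w": ["vv"]}

def spoof_variants(domain):
    """Return all one-substitution lookalikes of `domain`, sorted for stable output."""
    variants = set()
    for i, ch in enumerate(domain):
        for sub in SUBSTITUTIONS.get(ch, []):
            variants.add(domain[:i] + sub + domain[i + 1:])
    return sorted(variants)

print(spoof_variants("somemedia.com"))
```

Running the same generator over your own domains is one cheap way to get a watchlist of names worth monitoring for hostile registration.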

Retailers have also been targeted. One managed service offers “Amazon ranking, reviews, votes, listing optimisation and selling promotions,” with pricing ranging from $5 for an unverified review to $10 for a verified one, and up to $500 for a monthly retainer.


“The sheer availability of tools means that barriers to entry are lower than ever. It means this now extends beyond geopolitical to financial interests that affect businesses and consumers”, said Rick Holland, VP Strategy, Digital Shadows. “Of course, rumours, misinformation and fake news have always been part of human society. But what has changed in the digital world is the speed such techniques spread around the world.”

Digital Shadows issued guidance for firms looking to combat disinformation. Useful protection steps include keeping an eye on trending activity on social media and forums as it relates to an organisation’s digital footprint to potentially identify disinformation activity. Organisations should also proactively monitor for the registration of malicious domains and have a defined process for dealing with infringements.

Lastly, organisations should monitor social media for brand mentions and seek to detect bots, using clues such as the age of the account, the content being posted, and the number of friends and followers, Digital Shadows advised. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/11/17/fake_news_as_a_service/

Mr. Robot eps3.5_kill-process.inc – the security review

It feels like the entire series was leading up to this episode, and I’m hesitant to mention anything more in case someone gets accidentally and majorly spoiled…

WARNING: SPOILERS AHEAD – SCROLL DOWN TO READ ON

We can pull out two small threads from this week’s episode to examine. Both deal with unauthorized access.

When we’re talking about information security, and not just cybersecurity, physical security is also part of the picture. We saw Elliot take advantage of small loopholes in the E-Corp NYC building security to infiltrate it rather easily. While employees were filing back into the building after an evacuation, Elliot took advantage of the minor chaos and the crowds to steal a badge from a security guard, and then used it to get access to several different areas of the building – including rooms where he could connect directly to the (presumably secure) corporate network via Ethernet.

At many larger corporate sites, the security staff is often subcontracted so there may not be an element of control over processes there, especially if they are hired by the building landlord. Sure, we can say that the guards should be more careful with their badges, as they are not unlike admin-level credentials with their higher levels of access, but enforcing this would be tough, to say the least.

But giving credit where it’s due: we saw a failsafe kick in when the badge’s owner (presumably) realized it was gone, as the card stopped working right before Elliot tried to get into the server room. Though it took a bit of time for the guard to realize his badge had been taken, the card’s access was revoked quickly once he did, which is the right thing to do.

The second thread is when we see Elliot try to roll back the battery server firmware updates, and Mr. Robot does everything he can to stop this from happening.

In some blink-and-you’ll-miss-it terminal action while Elliot and Mr. Robot are duking it out, Elliot manages to SSH into the battery servers with full root access, which we assume would give him what he’d need to change the firmware…

…and moments later we see connection refused as Mr. Robot kills SSH.

Prevention vs Cure

In both these threads, Elliot managed to get access he wasn’t supposed to have, but was thwarted nevertheless by timely (or perhaps just-in-time) intervention.

For those of us in the real world, this is a reminder that although prevention is better than cure, giving up shouldn’t be an option—with a prompt reaction to a security violation, there may still be time to head the attacker off at the pass.

By the way, in both these examples, 2FA (two-factor authentication) would have made things much harder for Elliot in the first place.

If the physical access system required a badge (something you have) and a PIN code (something you know), Elliot wouldn’t have been able to swipe someone else’s badge and immediately get in.

And if the SSH server had insisted upon both a password and a token code, Elliot would only have had half of what he needed to access the system.

Not that it would have prevented the carnage from the Dark Army’s stage 2 in the end, alas.
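The token code in that second factor is typically a TOTP value. A minimal Python implementation of RFC 6238 (HMAC-SHA1 over the current 30-second time step; the example at the bottom uses the RFC's published test secret, not any real credential):

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, for_time=None, step=30, digits=6) -> str:
    """RFC 6238 time-based one-time password (SHA-1 variant)."""
    t = time.time() if for_time is None else for_time
    counter = struct.pack(">Q", int(t) // step)
    digest = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: this secret at t=59 yields the 8-digit code 94287082,
# whose last six digits are the usual 6-digit display.
print(totp(b"12345678901234567890", for_time=59))   # -> 287082
```

Because the code changes every 30 seconds and derives from a shared secret the attacker doesn't have, a stolen password alone is useless against an SSH server that also demands the token.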


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/_MolyK3F8B8/

Parity: The bug that put $169m of Ethereum on ice? Yeah, it was on the todo list for months

Alt-coin wallet software maker Parity has published a postmortem of the bug that put millions of dollars of people’s Ethereum on ice – and has admitted it knew about the flaw for months. It just hadn’t got round to fixing it.

Last week, netizens using Parity’s multi-signature wallets – which each require more than one person to authorize transactions – that were created after July 20 found themselves locked out of their funds, due to an anonymous miscreant triggering a bug in the code and freezing the crypto-currency collections.
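The m-of-n idea behind such wallets can be sketched in a few lines of Python (a toy model, not Parity's actual contract logic; all names are illustrative): a transaction only proceeds once enough distinct owners have signed off.

```python
# Toy m-of-n approval scheme, the core idea of a multi-signature wallet.
class MultiSig:
    def __init__(self, owners, required):
        self.owners = set(owners)
        self.required = required
        self.approvals = {}          # tx_id -> set of approving owners

    def approve(self, tx_id, owner):
        if owner not in self.owners:
            raise PermissionError(f"{owner} is not an owner")
        self.approvals.setdefault(tx_id, set()).add(owner)

    def can_execute(self, tx_id):
        # Sets make duplicate approvals from one owner count only once.
        return len(self.approvals.get(tx_id, set())) >= self.required

wallet = MultiSig(["alice", "bob", "carol"], required=2)
wallet.approve("tx1", "alice")
print(wallet.can_execute("tx1"))     # -> False: one signature is not enough
wallet.approve("tx1", "bob")
print(wallet.can_execute("tx1"))     # -> True: second owner meets the threshold
```

The irony of the Parity incident is that this per-wallet threshold logic was fine; it was the single shared library underneath every wallet that had no such protection.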

It was thought as much as $280m of Ethereum had been permanently trapped in the affected wallets, but this was later amended to $169m, or 513,774.16 ETH.

It was also thought that a user going by the handle of devops199 accidentally created a corrupted wallet, which then had a cascading effect across Parity’s user base, locking people out of recently created multi-sig collections. Subsequent analysis of the cockup alleged the lockdown was no accident, but a deliberate attempt to bork Parity wallets. The software maker has not commented on the claims.

In this latest report, however, Parity said it was warned of the programming flaw by a user in August, months before the wallet freeze was triggered. After examining the issue, the developers determined it really was a potential problem, and resolved to issue a fix “at some point in the future.”

That future didn’t come soon enough for the owners of the at least 70-odd Ethereum wallets knackered by the bug.

“Parity Technologies regularly employs external auditors for formal audits of smart contracts that we write,” the outfit said.

“However, rather than just having more audits, we strongly believe that more extensive and formal procedures and tooling around the deployment, monitoring and testing of contracts will be needed to achieve security. We believe that the entire ecosystem as a whole is in urgent need of such procedures and tooling to prevent similar issues from happening again, in particular if and when the number and complexity of live contracts grows.”

Parity said that it deeply regrets making the coding error that led to the wallet freeze and the loss of access to the millions of dollars the wallets contain. As a precaution, it has stopped issuing multi-signature wallets, and insists its standard Ethereum holding software is fine.

As for actually unlocking the trapped funds, Parity said it has no immediate solutions. It said it is considering several Ethereum improvement proposals to put to the community, is carrying out a full-stack external security audit of its existing sensitive code, and promises to expand its security team.

Ethereum prices are currently $330 per coin, and have risen slightly since the Parity snafu. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/11/16/parity_flaw_not_fixed/

Kaspersky: Clumsy NSA leak snoop’s PC was packed with malware

Kaspersky Lab, the US government’s least favorite computer security outfit, has published its full technical report into claims Russian intelligence used its antivirus tools to steal NSA secrets.

Last month, anonymous sources alleged that in 2015, an NSA engineer took home a big bunch of the agency’s cyber-weapons to work on them on his home Windows PC, which was running the Russian biz’s antimalware software – kind of a compliment when you think about it. The classified exploit code and associated documents on the personal system were then slurped by Kremlin spies via his copy of Kaspersky antivirus, it was claimed.

Kaspersky denied any direct involvement. It was unfortunate timing considering US officials had banned the Russian software from all federal government systems the month before. The biz offered to hand over its source code to investigators, to prove it wasn’t up to anything dodgy, and began a full internal inquiry.

The report, published on Thursday, said Kaspersky has no record of the described 2015 snafu, but the case looked like an incident that kicked off the year before. A user with a Verizon FiOS IP address in the Baltimore area, near the NSA headquarters, fired up the Kaspersky software, which found powerful cyber-attack code on the PC that appeared to be part of a collection codenamed the Equation Group files. We now know that these files belonged to the NSA, but at the time, Kaspersky was still figuring out where they came from.

Kaspersky had been researching the Equation Group’s spyware tools for months after it encountered the data elsewhere. The files showed all the hallmarks of being a highly sophisticated state-sponsored creation – such as the NSA’s handiwork. Assigning names like Equation, Grayfish, Fanny, DoubleFantasy and Equestre to the tools it found in the surveillance set, Kaspersky updated its antivirus signatures in June 2014 to look for instances on its customers’ computers. That would mean people running Kaspersky’s tools would be protected from the mysterious malware.

Signatures aren’t an exact science, and these digital fingerprints for the Equation Group files triggered hundreds of thousands of detections, most of which turned out to be false positives. But towards the end of the year, the software appeared to hit pay dirt, Kaspersky said, and it found 17 instances of Equestre, two more for Grayfish and a 7zip archive that also appeared to be holding the spyware code – all on a single computer. The NSA engineer’s home computer.
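At their simplest, antivirus signatures are just fingerprints checked against file contents. A toy sketch of the idea, using full-file hashes and byte substrings (the signature names and values below are made up for illustration; real engines like Kaspersky's are far more sophisticated):

```python
import hashlib

# A "signature" is either a full-file SHA-256 hash or a byte pattern
# observed in known samples (all values here are illustrative).
HASH_SIGNATURES = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855": "Empty.Test",
}
PATTERN_SIGNATURES = {
    b"EQUESTRE_MARKER": "Equestre.Toy",
}

def scan(data: bytes) -> list[str]:
    """Return the names of all signatures the blob matches."""
    hits = []
    digest = hashlib.sha256(data).hexdigest()
    if digest in HASH_SIGNATURES:
        hits.append(HASH_SIGNATURES[digest])
    for pattern, name in PATTERN_SIGNATURES.items():
        if pattern in data:
            hits.append(name)
    return hits

print(scan(b"xxEQUESTRE_MARKERxx"))  # → ['Equestre.Toy']
```

Substring matches fire on any file that happens to contain the pattern, which is exactly why loose signatures generate the flood of false positives described above.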

“An archive file firing on these signatures was an anomaly, so we decided to dig further into the alerts on this system to see what might be going on,” the report stated. “After analyzing the alerts, it was quickly realized that this system contained not only this archive, but many files both common and unknown that indicated this was probably a person related to the malware development.”

Over a three-month period, Kaspersky found 37 unique Equation Group files on the computer, with indications that the machine belonged to a developer of the sophisticated malware. The security shop said it was withholding further details until it receives permission from the user to do so – so don’t hold your breath.

Poor opsec and a third-act twist

It appeared that many of these files had been stored on removable drives. Kaspersky said it assumes the mix-up over dates came from a misinformed reporter.

The archive was sent back to Kaspersky’s servers for further analysis by a lab staffer. It contained a collection of executable modules, four Word documents marked as classified, and other files related to the Equation Group project. It was shown to the CEO Eugene Kaspersky, who ordered it deleted immediately – a claim some in the infosec community are skeptical about.

An examination of the computer the files came from showed an interesting snippet – it was already infected with malware. In October of that year, the user had downloaded a pirated Microsoft Office 2013 ISO containing the Mokes backdoor, and the Office software had been activated using a pirated key generator.

Kaspersky’s software blocks Mokes and wouldn’t have allowed the installation to proceed, so the biz theorized that its software had been turned off to let the individual load the dodgy copy. When the antivirus was fired up again, it detected the threat, but this wasn’t too unusual – over a two-month period Kaspersky’s code found 128 separate malware samples on the machine that weren’t related to the Equation Group.

In an interesting third-act twist, the Mokes software appears to have been run out of China. Kaspersky found that the malware’s command-and-control servers were apparently being run by one Zhou Lou, from Hunan, using the e-mail address “[email protected].”

“Given that system owner’s potential clearance level, the user could have been a prime target of nation states,” Kaspersky said. “Adding the user’s apparent need for cracked versions of Windows and Office, poor security practices, and improper handling of what appeared to be classified materials, it is possible that the user could have leaked information to many hands.” ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/11/16/kaspersky_nsa_staffers_pc_was_riddled_with_malware_from_pirated_code/

Ransomware via RDP – how to stay safe! [VIDEO]

Microsoft’s RDP (Remote Desktop Protocol) is a great way to look after your network remotely.

But a cottage industry of cybercriminals has sprung up to “look after” your network too, by infecting you with ransomware if you give them half a chance.

We wrote a detailed article about this issue yesterday, and followed it up today with a Facebook Live video to take another look at this sometimes very painful problem:

(Can’t see the video directly above this line? Watch on Facebook instead.)

Note. With most browsers, you don’t need a Facebook account to watch the video, and if you do have an account you don’t need to be logged in. If you can’t hear the sound, try clicking on the speaker icon in the bottom right corner of the video player to unmute.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/0NPdm2Xc9jU/

Drone maker DJI left its private SSL, firmware keys open to world+dog on GitHub FOR YEARS

Chinese drone maker DJI left the private key for its dot-com’s HTTPS certificate exposed on GitHub for up to four years, according to a researcher who gave up with the biz’s bug bounty process.

By leaking the wildcard cert key, which covers *.dji.com, DJI gave miscreants the information needed to create spoof instances of the manufacturer’s website with the correct HTTPS certificate, and silently redirect victims to the malicious forgeries and downloads via standard man-in-the-middle attacks. Hackers could also use the key to decrypt and tamper with intercepted network traffic to and from its web servers.

It’s rather embarrassing. DJI is one of the world’s largest manufacturers of small and medium-sized aerial drones.

The private SSL key was found sitting in a public DJI-owned GitHub repo by Kevin Finisterre, a researcher who focuses on DJI products. AWS account credentials and firmware AES encryption keys were also left exposed, we’re told, along with highly sensitive personal information in poorly configured public-facing AWS S3 buckets, which he summarized as a “full infrastructure compromise.” DJI has since marked the affected HTTPS certificate as revoked, and acquired a new one in September.
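Leaked PEM private keys are easy to spot mechanically, because they carry a distinctive header line. A minimal sketch of the kind of scan that would have flagged the leak before publication (a simplified stand-in for purpose-built tools, not what Finisterre or DJI actually ran):

```python
import re
from pathlib import Path

# PEM-encoded private keys announce themselves with a BEGIN header.
KEY_HEADER = re.compile(
    rb"-----BEGIN (RSA |EC |DSA |ENCRYPTED )?PRIVATE KEY-----"
)

def find_leaked_keys(repo_root: str) -> list[str]:
    """Return paths of files under repo_root containing a PEM private key."""
    hits = []
    for path in Path(repo_root).rglob("*"):
        if path.is_file():
            try:
                if KEY_HEADER.search(path.read_bytes()):
                    hits.append(str(path))
            except OSError:
                continue  # unreadable file: skip it
    return hits
```

Running a check like this in pre-commit hooks or CI is a cheap way to keep key material from ever reaching a public repo.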

“I had seen unencrypted flight logs, passports, drivers licenses, and identification cards,” Finisterre said, adding: “It should be noted that newer logs and PII [personally identifiable information] seemed to be encrypted with a static OpenSSL password, so theoretically some of the data was at least loosely protected from prying eyes.”

Earlier this year the US Army issued a blanket ban on the use of DJI products by its personnel. It gave no reason other than unspecified “cyber vulnerabilities,” and the Australian military quickly followed suit. Several British police forces also use DJI drones for operations, in place of helicopters.

Speaking to El Reg, Finisterre added that the SSL private key “sat on GitHub for two to four years as I recall… no clue who wound up with it,” continuing:

This breach seemingly confirms many of the concerns of the summer regarding the US Army ban, and other concerned parties discussing DJI’s data security posture. It is unfortunate that I have had to share it in this fashion; I had hoped for a “responsible” collaboration on a mutual message with the vendor.

Earlier today Finisterre posted an 18-page PDF on Twitter setting out his findings and frustrations over what he describes as several months of working with DJI’s US representatives in trying to report the security blunders. Having disclosed the cockups privately to DJI, he applied for a reward from its bug bounty scheme.

Though DJI agreed in principle that he would be paid their “top reward” of $30,000, the two sides disagreed vehemently over the terms of a non-disclosure agreement that the company wanted all bounty recipients to sign, which eventually led to Finisterre losing patience and going public with all the details, effectively throwing away thirty grand.


DJI acknowledged the security failures, and told us it had “hired a third-party research firm to help us assess the issue and manage next steps.”

Computer security expert Professor Alan Woodward, of the University of Surrey in England, told El Reg: “This wouldn’t be the first time someone has posted their private key inadvertently on GitHub. When people write code that requires a hard-coded private key it’s always something that should be treated like the Crown Jewels. To post it in public view on the web is a real gotcha.”

Security researcher Scott Helme added: “The basic problem is that with access to the key, an attacker can use DJI’s certificate.” He also highlighted the fact that the now-revoked certificate was issued for *.dji.com, covering all DJI subdomains – including security.dji.com, which is where their Security Reporting Centre can be found.
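The reason one certificate covers every subdomain is the wildcard matching rule: per RFC 6125, a `*` stands in for exactly one leftmost DNS label. A minimal sketch of that check (a simplification; real TLS stacks handle more cases):

```python
def hostname_matches(pattern: str, hostname: str) -> bool:
    """Simplified RFC 6125 check: '*' may stand for one leftmost label."""
    p = pattern.lower().split(".")
    h = hostname.lower().split(".")
    if len(p) != len(h):
        return False  # a wildcard never spans multiple labels
    head_ok = p[0] == "*" or p[0] == h[0]
    return head_ok and p[1:] == h[1:]

print(hostname_matches("*.dji.com", "security.dji.com"))  # → True
print(hostname_matches("*.dji.com", "dji.com"))           # → False
print(hostname_matches("*.dji.com", "a.b.dji.com"))       # → False
```

So the one leaked key endangered every single-level subdomain, the bug-reporting portal included.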

Helme added that, in his view, the canceled certificate could be used to decrypt intercepted web traffic to and from DJI’s website until its expiry date of 10:00 UTC on 5 June 2018. Helme has previously blogged that there are flaws in how common web browsers handle cert revocation via the Online Certificate Status Protocol, allowing recalled certs to still be trusted by browsers. He added: “If someone is in a position to use the certificate they are also in a position to stop the revocation check happening, so the browser would accept the certificate despite it being revoked.” ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/11/16/dji_private_keys_left_github/