
Cops catch $15m crypto-crook

A 36-year-old man has been arrested a year after stealing €10m ($15m) of the Internet of Things (IoT)-focused cryptocurrency IOTA using bogus software.

European law enforcement agency Europol announced this week that the UK’s South East Regional Organised Crime Unit (SEROCU) arrested the unnamed man in Oxford on suspicion of fraud, theft and money laundering.

Going by the name Norbertvdberg, the man created a software program to generate the random 81-character seeds used to secure IOTA cryptocurrency wallets. IOTA holders use their seed to generate public addresses that they can then use to send and receive IOTA. Anyone who knows a wallet’s seed can transfer tokens, explains the IOTA site, and once sent they cannot be recalled.

Warning: Your wallet seed must be securely stored to safeguard your funds. There are no possible retrieval methods if you lose, insecurely generate or compromise your seed. Knowing the ‘seed’ is equivalent to ‘owning’ the tokens.

Iotaseed.io, the site created by the alleged thief, offered to generate seeds from users’ mouse movements, purportedly producing a unique sequence of mnemonic words and a receiving address.

In reality, it tricked users.

Instead of generating random seeds, it created sequential ones that incremented each time and secretly stored those seeds for later use, as this analysis shows.
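
The linked analysis isn’t reproduced here, but the danger of any deterministic scheme is easy to illustrate: if seeds are derived from a counter rather than from a cryptographically secure random source, whoever controls the counter can regenerate every seed ever handed out. A toy Python sketch of that flaw (purely illustrative, not Iotaseed.io’s actual code):

```python
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ9"  # the 27 characters valid in an IOTA seed

def seed_from_counter(counter):
    """Deterministically encode an integer counter as an 81-character 'seed'."""
    chars = []
    for _ in range(81):
        counter, index = divmod(counter, len(ALPHABET))
        chars.append(ALPHABET[index])
    return "".join(chars)

# Consecutive counters produce seeds that differ in only a character or two,
# so whoever runs the generator can enumerate every wallet it ever produced.
print(seed_from_counter(1000))
print(seed_from_counter(1001))
```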

Norbertvdberg continued to reel victims in with the site, even posting code on a GitHub account to give the operation an air of legitimacy.

In January 2018, when he had gathered enough seeds, he allegedly used them to send IOTA from his victims’ IOTA wallets to his own addresses. He stole from at least 85 victims, and possibly many more, said Europol.

He also used a DDoS attack on IOTA servers to distract administrators and hide transaction surges that might have alerted them to the theft.

Iotaseed.io was still operating as late as November 2017 and wasn’t taken down until January 2018. Ironically, at one point the site displayed a message:

Check that the URL above is https://iotaseed.io There are scammers out there!

Part of the problem may have been that the official IOTA wallet contains no tool to generate seeds. Instead, the Foundation asks users to:

Randomly write down uppercase letters (A-Z) and the number 9 on a piece of paper until you have 81 characters written.

… or to execute terminal commands, or to use a password generator to create an 81-character password and then manually change some of the characters to 9. The Foundation admits that this method is a “somewhat more complicated routine”.
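
If you do go the do-it-yourself route, a cryptographically secure random generator is a far better starting point than paper, a password generator or a third-party website. A minimal sketch using Python’s standard library (one possible interpretation of the Foundation’s advice, not an official tool):

```python
import secrets

ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ9"  # characters permitted in an IOTA seed

# secrets draws from the operating system's CSPRNG, unlike the random module
seed = "".join(secrets.choice(ALPHABET) for _ in range(81))
print(seed)
```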

Norbertvdberg took advantage of user confusion, posting several times on the official IOTA Foundation forum recommending https://iotaseed.io to new users asking how to create random keys.

The whole thing infuriated users. In a thread on the IOTA forum discussing the theft, one user said:

This is a terrible situation and we can’t believe that user bad practices are entirely at fault here. It is very bad practice that a wallet would not generate its own secure seed instead of requiring an external program to do this. Furthermore, to make users log in with their seed puts them at constant risk of key loggers and mistakenly pasting the seed into websites and chat windows. These things definitely need to be solved ASAP.

Please stop pointing fingers at users as this is preventing critical modifications being added to the wallet and improving security and user experience.

Users noticing that their funds had been stolen contacted Germany’s Hessen State Police, who discovered the UK suspect in July 2018. They notified the Joint Cybercrime Action Taskforce (J-CAT), which is part of Europol’s European Cybercrime Centre (EC3). This eventually got the case to the UK National Crime Agency, which led to the arrest. Officers also confiscated several computers from the suspect’s home for forensic analysis.

Based on a distributed ledger technology known as Tangle, IOTA offers free transactions that require almost no computing power. Designed primarily for transactions between IoT devices, its creators hope the system will enable new business models involving connected devices.

The takeaway from this whole sorry affair is that if you’re dealing with a technically complex asset like cryptocurrency, it pays to invest the time in understanding how it works, what the dangers are, and how you can protect yourself against them.

Cryptocurrency developers and administrators must also accept that some users will take the path of least resistance, without realizing that this path isn’t secure. Admins can protect their community – and therefore their ventures – by generating secure tools that assist users through all steps of the setup and management process, rather than assuming they will choose security over convenience.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/axOjYjUYH9g/

#DeleteFacebook? #DeleteTwitter? #FatLotOfGood that will do you

#DeleteFacebook? #GoodLuck!

A study published on Monday found that when it comes to protecting our privacy, it isn’t quite that simple. In fact, we don’t have to actually have a social media profile in order to be known, thanks to the fact that all our predictable friends and colleagues can fill in the blanks.

A study from researchers at the University of Vermont and the University of Adelaide found that access to as few as eight of our contacts is enough to enable predictive or machine learning technologies to achieve up to 95% accuracy in guessing what a person will post.

From an abstract of the study, titled “Information flow reveals prediction limits in online social activity” and published in the journal Nature Human Behaviour on Monday:

Information is so strongly embedded in a social network that, in principle, one can profile an individual from their available social ties even when the individual forgoes the platform completely.

Jim Bagrow, a mathematician at the University of Vermont who led the research, said in a statement that he and his team used statistical models to analyze data from more than 30 million publicly available Twitter posts from 13,905 accounts. From that data, they applied machine learning to accurately predict what a person would post based on what their contacts had posted.

What’s true for Twitter goes for Facebook, too, the researchers say: Even if you’ve never posted to either platform, it just takes between eight and nine of your friends to build a profile of your likes, interests and personality on social media.

A statement from Bagrow:

You alone don’t control your privacy on social media platforms. Your friends have a say too.

Friends like these

This is just the latest bullet in the list of how poorly our privacy fares online, which has included Facebook shoveling our data into outfits such as Cambridge Analytica. That data debacle was also centered around our social media friends… and how they’re useful when it comes to dragging our privacy through the mud.

Tens of millions of users’ data were grabbed not because those users had necessarily played a Facebook personality test called thisisyourdigitallife, but because a friend did. The app just reached past the personality test users themselves and grabbed their contacts’ data, as well.

The past year has also brought increasing insight into Facebook’s shadow profiles: profiles filled in with data collected from non-members that include, among other things, email addresses, names, telephone numbers, addresses and work information… a practice that European courts have told Facebook to knock off.

In April, when Facebook CEO Mark Zuckerberg was getting grilled by lawmakers on Capitol Hill, he told them that his company only collected data on non-users for “security purposes.” Facebook explained that it also included people’s contact lists when they use Facebook’s mobile app, which the platform uses to suggest friend recommendations.

Twitter hadn’t responded to media requests for a comment as of Thursday afternoon. For its part, in response to the study, a Facebook spokeswoman told news outlets that even if the company collects data on non-users, it doesn’t build profiles on them. From its statement:

If you aren’t a Facebook user, we can’t identify you based on this information, or use it to learn who you are.

A statement from co-author Lewis Mitchell, a senior lecturer in applied mathematics at the University of Adelaide:

There’s no place to hide in a social network.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/yX095lY9Xtc/

US gov issues emergency directive after wave of domain hijacking attacks

The US Department of Homeland Security (DHS) has issued an emergency directive tightening DNS security after a recent wave of domain hijacking attacks targeting government websites.

Under the directive, which appeared a week after a US-CERT warning on the same topic, admins looking after US .gov domains have until 5 February to do all of the following or explain why they can’t:

  • Verify that all important domains are resolving to the correct IP address and haven’t been tampered with (a minimal check is sketched after this list).
  • Change passwords on all accounts used to manage domain records.
  • Turn on multi-factor authentication to protect admin accounts.
  • Monitor Certificate Transparency (CT) logs for newly issued TLS certificates that might have been issued by a malicious actor.
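
As a minimal sketch of the first item on that list, the check below compares what each domain currently resolves to against a hand-maintained allow-list. The hostname and address are placeholders, and a real audit should also cover NS, MX and CNAME records:

```python
import socket

# Placeholder domains and their expected A records; in practice these would
# come from change-controlled zone data, not a hard-coded dictionary.
EXPECTED = {
    "www.example.gov": {"93.184.216.34"},
}

for host, expected_ips in EXPECTED.items():
    try:
        _, _, resolved = socket.gethostbyname_ex(host)
    except socket.gaierror as err:
        print(f"{host}: lookup failed ({err})")
        continue
    unexpected = set(resolved) - expected_ips
    if unexpected:
        print(f"{host}: UNEXPECTED ADDRESSES {sorted(unexpected)}")
    else:
        print(f"{host}: OK")
```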

The warning mentions domain hijacking campaigns publicised by security companies in November and January, only one of which alluded to targets that might include US government sites.

The DHS warning is more specific:

CISA is aware of multiple executive branch agency domains that were impacted by the tampering campaign and has notified the agencies that maintain them.

Separately, the CyberScoop website quoted unnamed sources as telling it that at least six US civilian agencies had been “affected by the recent malicious DNS activity”.

Six agencies is a lot, which underlines why the directive is billed as an emergency.

What is domain hijacking?

Domain hijacking has been a persistent issue in the commercial world for years, a prime example of which would be the attack that disrupted parts of Craigslist in November 2014.

In that incident, as in every successful domain hijacking attack, the attackers took over the account used to manage the domains at the registrar, in this case, Network Solutions.

The objective is to change the records so that, instead of pointing to the IP address of the correct website, they send visitors to one controlled by the attackers.

This change could have been made by impersonating the domain owner to persuade the registrar to change the settings, or by stealing the admin credentials used to manage them remotely.

It’s a potent attack – web users think they’re visiting the correct website because they’ve typed the correct domain in their address bar and have no reason to doubt where they end up.

For attackers, it’s the perfect crime that avoids the much harder job of having to take over the real website.

DNS hijacking and cache poisoning

DNS can be manipulated in other ways too. In DNS hijacking, someone’s browser, computer or home router is compromised so that it resolves domains via a malicious DNS server. In cache poisoning, the same end is achieved by manipulating address data cached locally on the computer or home router, or at a higher level in the DNS infrastructure itself.

Because the US Government manages thousands of domains through a sprawl of devolved agencies, securing them was never going to be easy.

An added complication is that some agencies are short-staffed thanks to the partial government shutdown, an issue Chris Krebs of the DHS Cybersecurity and Infrastructure Security Agency (CISA) acknowledged on Twitter.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/jXoVS7LHL-c/

Fighting Emotet: lessons from the front line

Thanks to Sophos expert Peter Mackenzie for the research in this article.

Emotet is malware that’s designed to evade detection, dig in hard and multiply.

Thanks to a restless update schedule, a modular, polymorphic design, and its ability to deploy a host of different techniques for worming through networks, the software is a moving, shape-shifting target for admins and their security software.

Over its five-year life, Emotet has evolved from a Trojan that silently stole victims’ banking credentials into a highly sophisticated and widely deployed platform for distributing other kinds of malware, most notably other banking Trojans.

Emotet arrives on the back of malicious spam campaigns and serves up whatever malware pays. So far this year that’s meant TrickBot and QBot banking trojans, although it’s also been linked with BitPaymer – a strain of sophisticated ransomware that extorts six-figure payouts.

In July 2018, the US-CERT (United States Computer Emergency Readiness Team) issued an alert that described Emotet as:

…among the most costly and destructive malware affecting SLTT [state, local, tribal, and territorial] governments. Its worm-like features result in rapidly spreading network-wide infection, which are difficult to combat. Emotet infections have cost SLTT governments up to $1 million per incident to remediate.

Emotet remains an extremely potent in-the-wild threat, and dealing with it is one of the most difficult challenges facing system administrators and threat hunters.

With that in mind, I sat down with Sophos Global Malware Specialist Peter Mackenzie, to find out what he’s learned from dealing with Emotet outbreaks.

1. Secure all of your machines

Prevention is better than cure, and one of the best preventative steps you can take is to make sure you don’t have any unsecured machines on your network. According to Peter:

Invariably when organizations are hit by Emotet, the source of the infection is an unprotected machine on the network. Customers are often unaware of these devices, let alone any malware that’s on them.

You can use a free network scanning tool to get a list of every active device on your network, and compare this with the ones in your security management console. If you find any unknown devices, get them patched and running up-to-date endpoint protection as quickly as possible.
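
Peter doesn’t name a particular scanner (nmap’s ping scan, for example, is a popular free choice). As a rough stand-in, the sketch below probes a few common Windows ports across an assumed /24 and prints the hosts that answer, ready to be reconciled against your management console:

```python
import socket
from concurrent.futures import ThreadPoolExecutor
from ipaddress import ip_network

SUBNET = "192.168.1.0/24"      # assumption: substitute your own network range
PORTS = (135, 139, 445, 3389)  # common Windows service ports

def is_alive(ip):
    """Return True if any probed port accepts a TCP connection."""
    for port in PORTS:
        try:
            with socket.create_connection((ip, port), timeout=0.5):
                return True
        except OSError:
            continue
    return False

hosts = [str(ip) for ip in ip_network(SUBNET).hosts()]
with ThreadPoolExecutor(max_workers=64) as pool:
    for ip, alive in zip(hosts, pool.map(is_alive, hosts)):
        if alive:
            print(ip)
```

Hosts that sit behind a host firewall or only expose other ports won’t show up in a crude probe like this, which is exactly why a proper scanning tool is worth using.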

Unknown, unsecured machines also give Emotet a place to hide and adapt, making a bad situation much worse.

Although it may be confined to the unsecured machine by the security software on your other machines, it will be trying to break free all the time. And because it’s polymorphic, updates itself so frequently (sometimes multiple times a day), and its payloads can switch on a dime, it’s continuously presenting new challenges.

The longer it’s allowed to run through those machinations, the more the risk increases that an update to Emotet, or a change of payload, will find a gap in your armour that allows it to break out and spread through your network.

It’s impossible to predict what will find the gap – perhaps it’ll be a new exploit, or a mutation that temporarily hides Emotet from signature-based anti-virus – so defence in depth is crucial, and advanced anti-malware features like deep learning, exploit prevention and EDR give you a significant advantage in containing the outbreak and finding the source.

2. Patch early, patch often

Emotet is a gateway for other malware, so containing an Emotet outbreak doesn’t just mean stopping Emotet, it means stopping whatever it brings with it. Since you don’t know what that will be you have to take the best bang-for-buck precautions you can. Top of the list (it’s a long list, admittedly) should be patching known vulnerabilities.

It might feel like the oldest security advice under the sun but it’s on this list on merit. In the real world, unpatched software is making Emotet outbreaks worse, and harder to contain.

For an example of how that works, just look at EternalBlue, the SMB exploit made famous in 2017 by WannaCry and NotPetya. Almost unbelievably, despite all the headlines, and almost two years after Microsoft issued security bulletin MS17-010 announcing patches that protected against it, malware is still making profitable use of the exploit. One of those pieces of malware is TrickBot, the payload most commonly delivered by Emotet.
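
One quick spot check on a Windows machine is to list its installed hotfixes and look for the update that delivers MS17-010. The sketch below (Python wrapping PowerShell’s Get-HotFix) assumes KB4013389, the knowledge base article associated with the MS17-010 bulletin; the package that actually lands on a given machine carries an OS-specific KB number, and later cumulative rollups also include the fix, so the absence of this one identifier is a prompt to investigate rather than proof you’re unpatched:

```python
import subprocess

# Assumed KB; verify the correct identifier(s) for your OS against the MS17-010 bulletin.
MS17_010_KBS = {"KB4013389"}

result = subprocess.run(
    ["powershell", "-NoProfile", "-Command",
     "Get-HotFix | Select-Object -ExpandProperty HotFixID"],
    capture_output=True, text=True, check=True,
)
installed = set(result.stdout.split())
missing = sorted(MS17_010_KBS - installed)
print("MS17-010 KBs not found:", missing if missing else "none")
```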

Somebody reading this isn’t on top of their patching – don’t let it be you.

3. Block PowerShell by default

Emotet typically arrives in malicious email attachments, and an outbreak often starts like this:

  1. A user receives an email with a Word document attached.
  2. The user opens the Word document and is fooled into running a macro.
  3. The macro triggers PowerShell that downloads Emotet.
  4. The Emotet infection begins.

Clearly, a user has to get a few things wrong for this to succeed, so the final piece of advice could easily have been “train your staff not to open dodgy emails or run macros”. It isn’t, because although that’s a great idea, it’s a never-ending journey and it only has to fail once.

Something that has a similarly blunting effect on email-borne Emotet, but is easier for admins to implement successfully, is to block their users’ access to PowerShell by default.

We don’t mean block it for everyone – some people need PowerShell – we just mean begin with the assumption that nobody needs it (including admins) and then unblock it for the people that really, provably, do.

And when we say block, we mean block rather than setting a policy to disable it. Policies can be bypassed, so PowerShell should be blocklisted (the Sophos functionality that does this is called Application Control).

You can read more about how Sophos products stop Emotet on our sister site, Sophos News. Sophos has also prepared a Knowledge Base article for its customers: Resolving outbreaks of Emotet and TrickBot malware.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/8srAMBfQIMg/

Just keep slurping: HMRC adds two million taxpayers’ voices to biometric database

HMRC’s database of Brits’ voiceprints has grown by 2 million since June – but campaign group Big Brother Watch has claimed success as 160,000 people turned the taxman’s requests down.

The Voice ID scheme, which requires taxpayers to say a key phrase that is recorded to create a digital signature, was introduced in January 2017. In the 18 months that followed, HMRC scooped up some 5.1 million people’s voiceprints this way.

Since then, another 2 million records have been collected, according to a Freedom of Information request from Big Brother Watch.

That is despite the group having challenged the lawfulness of the system in June 2018, arguing that users hadn’t been given enough information on the scheme, how to opt in or out, or details on when or how their data would be deleted.

Under the GDPR, there are certain demands on organisations that process biometric data. These require a person to give “explicit consent” that is “freely given, specific, informed and unambiguous”.

Off the back of the complaint, the Information Commissioner’s Office launched an investigation, and Big Brother Watch said the body would soon announce what action it will take.

Meanwhile, HMRC has rejigged the recording so it offers callers a clear way to opt out of the scheme – previously, as perm sec Jon Thompson admitted in September, it was not clear how users could do this.

Big Brother Watch said that this, and the publicity around the VoiceID scheme, has led to a “backlash” as people call on HMRC to delete their Voice IDs. FoI responses show 162,185 people have done so to date.

“It is a great success for us that HMRC has finally allowed taxpayers to delete their voiceprints and that so many thousands of people are reclaiming their rights by getting their Voice IDs deleted,” said the group’s director, Silkie Carlo.

“Now it is down to the ICO to take robust action and show that the government isn’t above the law. HMRC took millions of Voice IDs without taxpayers’ legal consent – the only satisfactory outcome is for those millions of Voice IDs to be deleted.”

An HMRC spokesman told The Register: “Our Voice ID system is very popular with millions of customers as it gives a quick route to access accounts by phone.

“All our data is stored securely and customers can opt out of Voice ID or delete their records any time they want. Seven million customers are using this system and only a very small percentage of customers have chosen to opt out.” ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/01/25/hmrc_voice_id_big_brother_watch/

UK-EU infosec data sharing may not be KO’d by Brexit, reckons ENISA bod

Interview A senior EU cybersecurity official has said he is “optimistic” about information sharing between the UK and the political bloc continuing after Brexit.

In an interview with The Register, Steve Purser of the EU agency for Network and Information Security (ENISA) said that while it is “obvious” that the information-sharing relationship “will be changed… if the Brexit goes about”, he is keeping an open mind.

This could be seen as a contrast to the decidedly gloomy view being promoted today by a slack handful of retired defence and security bigwigs.

“ENISA and the [EU] Commission doesn’t just do things within our boundaries,” he said at France’s FIC2019 infosec shindig earlier this week. “My guess would be that [being] within the Union would give the best information sharing relationship. Having said that, we are looking for global approaches and we will make the best deal out of a bad situation.”

ENISA is a relatively small agency based on the Greek island of Crete. It employs 83 people at present, with an ambition to grow to “around 140” heads – and, presumably, to expand its annual budget of just 11m euros. Following new EU cybersecurity directives in 2017, Purser told us, ENISA secured “a permanent mandate and [many] more responsibilities.”

“The most interesting thing about it from my point of view is that we get a very big new responsibility which is to set up an EU cybersecurity certification framework. This is a framework that will allow certifying authorities to certify everything from lightbulbs, toasters to atomic stations and submarines. And processes and services,” he added. This “voluntary” scheme would, in his vision, ultimately allow the general public to “understand and interpret” the security capabilities of consumer goods and “influence their purchasing decisions.”

A shop-of-shops

ENISA functions by “leveraging the expertise of the [EU] member states”, said Purser. Rather than doing all its own work in-house, it aims to bring together “a community of experts” to work on problems and exercises. Though this seems like a very limited role, when we put this to Purser he emphasised how “scalable” ENISA’s setup is:

“By working like this, the ownership of the solution is with the community. That’s a powerful model because it’s then their solution and not ENISA’s solution. We do not pretend to be those that can save the world but by working together [with the EU] member states, together we can do an extremely good job.”

Surely, El Reg asked, it’s not all as smooth as that, and the EU’s traditionally top-down approach doesn’t really gel with the realities of frontline infosec? Purser, bespectacled and slightly flush from the powerful heating in Lille’s Grand Palais conference centre, nodded. “When we started out, we did terribly.”

Referring to a 2010 exercise that was a “failure”, Purser said that “was the best possible thing that could have happened. Ever since then we’ve been doing exercises every two years and now have a very sophisticated setup. We have SOPs defined across borders… incidentally, they were used for Wannacry and Notpetya, so we had a better response than we would have done otherwise.”

In light of that failure, he said, ENISA has three main objectives in terms of incident response: identifying who to call; understanding that person’s “decision making powers and capabilities”; and exchanging, in a secure way, the right information to solve the problem.

“A lot of what ENISA does is bottom up,” he emphasised. “We get the experience on why some things worked well and others don’t, and we feed back into the policy loop.”

Don’t worry, be happy … oh, er, about that

In terms of what badness he sees coming our way this year, Purser was explicit: ENISA thinks black hats are getting more sophisticated and more geared towards hacking for profit rather than notoriety.

“We see, to a certain extent, a move towards hardware,” he said. “Spectre and Meltdown, the Rocker vuln, some evidence that things may be moving lower down in the stack to some extent.”

“Monetisation, for sure,” he continued. “People used to hack for reputation, now it’s about money. There are industrial level processes and quality systems supporting some attacks. We see people understanding the weaknesses of new technologies – but, of course, when new things come out like AI or robotics we can be ready for a whole new wave of attacks.”

Moreover, things are not going to get safer any time soon. Even new technologies bring their own unique set of threats with them, Purser agreed.

“Fundamental concepts are being threatened. For many years we assumed that safety and security were pretty much the same thing, whereas some things taught us that’s not necessarily true. The example I give is the Eurowings crash where the pilot used a security feature to crash the plane.* As we move into the world of cyberphysical systems, we can’t assume they’re the same: we have to look very carefully at the two.” ®

Bootnote

* Andreas Lubitz, the Germanwings pilot who in 2015 murdered the 149 other people on board in cold blood, killing himself in the process, waited until the aircraft captain briefly left the flight deck before locking the cockpit door shut and setting the autopilot to fly into the Alps. After the Twin Towers terrorist murders of 2001, all airliner cockpit doors were reinforced and made unopenable from the outside if locked.

Shortly after the Alps mass murder, Germanwings rebranded as Eurowings.

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/01/25/enisa_steve_purser_interview/

Data hackers are like toilet ninjas. This is not a clean crime, you know

Something for the Weekend, Sir? This place is a mess. No, worse than that: it’s a disaster area.

I hesitate to use the analogy “it looks like a bomb hit it” in this lively era of mischievous politics and religious fascism. Besides it’s not an appropriate description of the sight before my eyes. No, not bombs. Wild horses, perhaps. Possibly a tornado. Or most likely of all, one really determined dickhead.

I am witnessing the aftermath left behind by a previous visitor to the shared office kitchenette facilities.

When I arrived this morning, everything was pretty much as the cleaners had left it last night: mugs and glasses sparkling in the cupboard, steel sink and chrome taps twinkling with glinty health, Formica surfaces wiped and reflective. At 11:00am, I return for a hot water top-up only to discover a scene of utter bedlam.

About 100 mugs – stained on the outside as well as inside and mostly still half full of miserable brown liquid – are piled up around the work surfaces and in the sink. Pools of milk have been liberally poured around the kitchen, including on top of the microwave and dribbling down the sides of the green metal fire extinguisher. Something orange has been proficiently exploded inside the microwave, entirely obscuring the little window; even with the door closed, it smells of fish.

Mostly dry and unused teabags are littered everywhere in an arrangement which, while not unartistic, suggests no obvious purpose, including in the sink and even a couple inside the fridge. Twisted shreds of dry kitchen paper are heaped on the floor around the waste bin, and a couple more are deftly balanced on top around its edge; the bin itself is empty. As I walk with trepidation towards the kettle, granulated sugar crunches unpleasantly under foot.

The fact that I am a temporary contract visitor to client premises does not hide the fact that I am sharing this kitchenette with a mere dozen employees. It is impossible, as I crunch back out of the disaster zone, not to gaze across the few rows of desks and play a mental game of Find The Dickhead.

Golly, it makes me so cross. Grr. See? If you are feeling sympathetic anger, feel free to manage it with the help of a little light music as you read on.

[YouTube video]

Then it hits me: how the heck did they create such wild disorder without being seen or heard in the act? Slip in, cause utter mayhem, slip out quietly. That’s actually quite a skill. Even hackers don’t escape unnoticed… or do they?

At the time, I was trying to detail the circumstances around the appearance of Collection #1, last week’s massive data breach in which 773 million email users exposed themselves on a hacking site. As its name suggests, it’s not a new security break-in but a big bunch of previously stolen lists gathered into one place for unrestricted use by the cyberspacernet’s official Naughty Community.

If you haven’t been following this story because it’s about data security and therefore the dullest thing in human existence, here’s the tl;dr. Go to Have I Been Pwned and type in your email addresses to see which of them are in the collection.
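
If you’d rather not type passwords into a web page (a healthy instinct), the same service exposes a free Pwned Passwords range API that only ever sees the first five characters of your password’s SHA-1 hash. A minimal sketch, assuming the requests library is installed:

```python
import hashlib
import requests

def pwned_count(password):
    """Return how many times a password appears in the Pwned Passwords corpus."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = digest[:5], digest[5:]
    # Only the 5-character prefix leaves your machine (k-anonymity).
    resp = requests.get(f"https://api.pwnedpasswords.com/range/{prefix}", timeout=10)
    resp.raise_for_status()
    for line in resp.text.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0

print(pwned_count("passw0rd"))  # spoiler: a depressingly large number
```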

Reg readers almost certainly won’t need to panic when they inevitably see at least one of their addresses returned with “Oh no – pwned!” You will already have changed your ID credentials long ago. Most of this stuff comes from classic ID list hacks over the last eight years, including famous ones such as Adobe’s amateurish 2013 loss of 153 million poorly encrypted passwords, Dropbox’s 2012 breach of tens of millions of IDs, and LinkedIn’s loss of 164 million addresses and passwords which were apparently nabbed in 2012 but not exploited until 2016 and of course yet again last week.

For me, the amusing detail was in the deduplication work of Troy Hunt, the now-celebrity security researcher, 1Password evangelist and founder of the Have I Been Pwned site. From a total data dump of 2.6 billion email addresses, he whittled the collection down to 772,904,991 unique ones. OK, that’s not exactly hilarious until Troy Hunt notes that these correspond with just 21,222,975 unique passwords.

Er… 21 million passwords for 773 million email addresses? Strictly speaking (if statistically idiotic, I realise) this could mean on average that your unique password is being shared by 35 other people. More likely, your devilishly cunning password is probably shared only by a handful of people who bothered to use uppercase, lowercase, numbers and special characters while also having coincidentally named their pet identically to yours. The other 772.9 million email addresses are all using the same password as each other.

In theory again, if there are on average 36 email addresses for every unique password in the exposed IDs, this suggests to me that hackers could find it 36 times easier to guess your password than guess your email address. Yes yes, I know, but even if you factor in the usual uppercase, lowercase, numbers and special characters, I reckon it works out even in the end.

And even so, a common theme across a lot of the lists in Collection #1 is not how bad our passwords are, but how ineffectively and incompetently big companies bother to secure them. If I choose to use the password passw0rd for a dozen logins, that’s a risk I take upon myself. But if my crap password turns up unsalted alongside my email address on a hacking site, that’s down to the original list manager failing to protect it, plunging me into a world of shit as a result of their cavalier attitude to data security, not mine.
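
For the curious, “salting” simply means mixing a unique random value into each password before hashing it, so two users with the same crap password don’t end up with the same entry in a leaked database. A minimal sketch using Python’s standard library (the scrypt parameters are illustrative, not a recommendation):

```python
import hashlib
import hmac
import os

def hash_password(password):
    """Store (salt, digest) rather than the password or an unsalted hash."""
    salt = os.urandom(16)  # unique per user
    digest = hashlib.scrypt(password.encode("utf-8"), salt=salt, n=2**14, r=8, p=1)
    return salt, digest

def verify_password(password, salt, digest):
    candidate = hashlib.scrypt(password.encode("utf-8"), salt=salt, n=2**14, r=8, p=1)
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("passw0rd")
print(verify_password("passw0rd", salt, digest))  # True
print(verify_password("password", salt, digest))  # False
```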

Talking of a world of shit, I have just visited the Gentlemen’s toilet facilities at my aforementioned client’s offices.

Oh. My. God.

Since my initial visit upon arriving at work, it appears that the office toilets have played venue to a mass brawl between an American football team (both offence and defence), a monastery of whirling dervishes, a band of anarchistic freerunners and a drug-crazed troupe of uncharacteristically violent Morris dancers. While shitting.

I stagger back into the open-plan office in horror, gasping for breath, as someone somewhere leans onto the keys of a church organ. Gaping at my dozen temporary colleagues in horror, I am filled with marvel and revulsion in equal measure at such stealth. Which of you filthy bastards…? And how…? When…?

Grr. See? I’m getting angry again. Put the music back on and we’ll get through this.

[YouTube video]

Alistair Dabbs
Alistair Dabbs is a freelance technology tart, juggling tech journalism, training and digital publishing. He claims not to suffer from OCD but insists it is arguably unnecessary to distribute 37 torn shreds of toilet paper artlessly across the cubicle before urinating directly onto the seat, the lid, the back wall and across the floor. As for the rest, he can only assume they were voiding their bowels with the aid of a power hose turned to the “fan” setting. @alidabbs

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/01/25/data_hackers_are_like_toilet_ninjas_this_is_not_a_clean_crime_you_know/

A picture tells a 1,000 words. Pixels pwn up to 5 million nerds: Crims use steganography to stash bad code in ads

A strain of malware has been clocked using steganography to run malicious JavaScript on Macs via images in online banner ads, it was claimed this week.

A joint report from security shops Confiant and Malwarebytes drilled into the techniques used by VeryMal, a malvertising operation that spreads through poisoned ad images. What they found was that miscreants were avoiding security filters by hiding code in images using steganography.

That code, when executed in the browser, redirects the visitor to dodgy sites that try to trick people into installing Adobe Flash updates and similar fare that are actually adware, which secretly clicks on ads in the background to generate revenue for the campaign’s masterminds. We’ve seen this kind of thing before, against Windows PCs.

We’re told VeryMal was active between January 11 and 13, on two top-tier ad exchanges used by a quarter of the top 100 publisher websites, targeting macOS and iOS users in the US. It was also doing the rounds in December, according to Confiant.

The upshot: as many as five million netizens a day were shown maliciously crafted adverts, costing the industry $1.2m in just one day, it is alleged. That dollar value is the guesstimated impact of running these dodgy ads – from an increase in blockers and loss of trust in publishers to money paid out by advertisers and networks for fraudulent clicks.

“As malvertising detection continues to mature, sophisticated attackers are starting to learn that obvious methods of obfuscation are no longer getting the job done,” explained Confiant security engineer Eliya Stein.

“The output of common JavaScript obfuscators is a very particular type of gibberish that can easily be recognized by the naked eye. Techniques like steganography are useful for smuggling payloads without relying on hex encoded strings or bulky lookup tables.”


Stein said that in the case of VeryMal, the banner ad image containing the code is just a benign image: it is not dangerous on its own, and doesn’t exploit any vulnerabilities in the browser. However, encoded in the pixels of the image is malicious JavaScript code. Crucially, the ad is served to the browser along with a small piece of seemingly harmless JS that reads through the image’s pixels, extracts the malicious script from these bytes into a string, and then executes the string.

(Don’t forget that in this day and age, ads are fetched as a package of images and code, the latter of which lets the ad network know if the former was seen by an actual human. For instance, if a reader scrolls past an ad too fast, it isn’t reported as a successful impression by the bundled watchdog code. If the ad was fetched but not visibly rendered due to a blocker, the watchdog again snitches to the network.)

Security software scanning for malicious JS will not spot it smuggled into the ad image, and will likely let through the side-loaded extraction code. One solution to all of this, of course, is to block ad images and sidekick code from being fetched. (Just make sure you white-list the nice ads, like ours.)

Interestingly, the code checks to see if Apple fonts are present, and if so, it figures it’s running on a Mac and continues on. Non-Macs stop at this point. Here are the full extraction steps, according to the report:

  • Create a Canvas object (this enables the use of the HTML5 Canvas API in order to interact with images and their underlying data).
  • Grab the image located at: hxxp://s.ad-pixel.com/sscc.jpg
  • Define a function that checks if a specific font family is supported in the browser.
  • Check if Apple fonts are supported. If not, then do nothing.
  • If so, then loop through the underlying data in the image file. Each loop reads a pixel value and translates it into an alphanumeric character.
  • Add the newly extracted character to a string.
  • Execute the code in the string.
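
The report describes the decoder as browser-side JavaScript working through the Canvas API. The Python sketch below mirrors the same idea against a hypothetical payload.png, assuming one character has been packed into the red channel of each pixel; VeryMal’s exact encoding isn’t spelled out, so treat this as an illustration of the general technique rather than a reconstruction:

```python
from PIL import Image  # pip install pillow

def extract_payload(path, length):
    """Read one hidden character from the red channel of each pixel, left to right."""
    img = Image.open(path).convert("RGB")
    pixels = img.load()
    width, _ = img.size
    chars = []
    for i in range(length):
        x, y = i % width, i // width
        red, _, _ = pixels[x, y]
        chars.append(chr(red))  # pixel value -> character code
    return "".join(chars)

hidden = extract_payload("payload.png", length=120)  # hypothetical file and length
print(hidden)  # in the real attack, the extracted string is executed as JavaScript
```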

“Much of the buzz around these types of attacks will have you believe that the image file alone is the threat and that we now have to fear the images that our browsers load during day to day web surfing, but this is a departure from the truth,” said Stein.

“Validating the integrity of individual image files served in ads makes little sense within the broader context of the execution of these payloads.

“Estimated all together, Confiant benchmarks the cost impact for just that Jan 11th peak alone to have been over $1.2 million. When you consider that this was just one of multiple hundreds of attacks Confiant has caught and blocked over the past month alone, the scale of the issues facing the digital ad industry becomes clearer.” ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/01/24/mac_steganography_malware/

You’re an admin! You’re an admin! You’re all admins, thanks to this Microsoft Exchange zero-day and exploit

Microsoft Exchange appears to be currently vulnerable to a privilege escalation attack that allows any user with a mailbox to become a Domain Admin.

On Thursday, Dirk-jan Mollema, a security researcher with Fox-IT in the Netherlands, published proof-of-concept code and an explanation of the attack, which involves the interplay of three separate issues.

According to Mollema, the primary problem is that Exchange has high privileges by default in the Active Directory domain.

“The Exchange Windows Permissions group has WriteDacl access on the Domain object in Active Directory, which enables any member of this group to modify the domain privileges, among which is the privilege to perform DCSync operations,” he explains in his post.

This allows an attacker to synchronize the hashed passwords of the Active Directory users through a Domain Controller operation. Access to these hashed passwords allows the attacker to impersonate users and authenticate to any service using NTLM (a Microsoft authentication protocol) or Kerberos authentication within that domain.

Mollema wasn’t immediately available to discuss his work due to time zone differences and the need to involve a media handler.

The attack relies on two Python-based tools: privexchange.py and ntlmrelayx.py. It has been tested on Exchange 2013 (CU21) on Windows Server 2012 R2, relayed to (fully patched) Windows Server 2016 DC and Exchange 2016 (CU11) on Windows Server 2016, and relayed to a Server 2019 DC, again fully patched.

Using NTLM, Mollema says, it’s possible to relay the automatic Windows authentication that occurs upon connection to the attacker’s machine on to other machines on the network.


How then to get Exchange to authenticate to the attacker? Mollema points to a ZDI researcher who found a way to make Exchange authenticate to an arbitrary URL over HTTP through the Exchange PushSubscription API, using a reflection attack.

If this technique is instead used to perform a relay attack against LDAP, taking advantage of Exchange’s high default privileges, it’s possible for the attacker to obtain DCSync rights.

Mollema describes several potential mitigations for the attack in his post. These include: reducing Exchange privileges on the Domain object; enabling LDAP signing and channel binding; blocking Exchange servers from connecting to arbitrary ports; enabling Extended Protection for Authentication on Exchange endpoints in IIS; removing the registry key that allows relaying; and enforcing SMB signing.

In a statement emailed to The Register, Microsoft avoided commenting on the specific vulnerability described by Mollema, but the wording of its coy, content-free reply suggests the company may issue a fix in February.

“Microsoft has a strong commitment to security and a demonstrated track record of investigating and proactively updating impacted devices as soon as possible,” a Microsoft spokesperson said. “Our standard policy is to release security updates on Update Tuesday, the second Tuesday of each month.” ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/01/25/microsoft_exchange_hashed_passwords/

Database of 24 Million Mortgage, Loan Records Left Exposed Online

Breach latest example of how misconfigurations, human errors undermine security in a big way, experts say.

Data on tens of thousands of loans and mortgages issued by Wells Fargo, CapitalOne, and several other financial institutions — some now defunct — was recently found leaked online in what appears to be a particularly egregious example of human error.

Independent security researcher Volodymyr “Bob” Diachenko discovered the data on January 10 in an Elasticsearch database that was openly accessible via the Internet and not protected even by a password. The database has since been removed.

Diachenko, in collaboration with TechCrunch, investigated the breach and identified the entity responsible for putting the database online as Ascension Data Analytics, a firm that provides a range of custom analytics services around loan data. Ascension is a vendor to companies that purchase mortgages and loans from banks.

TechCrunch, the first to report the breach along with Diachenko’s Security Discovery blog, quoted representatives from Citi and Wells Fargo as saying the banks had no direct connection with Ascension.

In total, the database that Diachenko found contained more than 24 million records amounting to some 51GB of data. Many of the documents contained the full names, addresses, social security numbers, credit history, loan amounts, repayment schedules, and other details typically associated with a home mortgage or other bank loan. Such information can be especially useful to criminals seeking to commit identity theft and other types of financial fraud.

The records were copies of handwritten notes and printed documents related to loans and mortgages that Ascension had converted into a computer-readable form via Optical Character Recognition (OCR).

Diachenko, who regularly uses public search engines like Shodan and Censys to search for exposed databases and report them back to the responsible organizations, says he found the Ascension data the same way.

“Despite my pretty busy pipeline of reports, it is not often when I find such a large amount of sensitive data collected in one place, without any password or login,” Diachenko says.

Most of the time, the exposure is the result of a human error rather than a technical or factory-level misconfiguration, he says. “On top of that, insecure databases have always been a common target for attackers, especially Elasticsearch instances.”
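
Finding out whether an Elasticsearch node will talk to strangers takes nothing more than an unauthenticated HTTP request to its REST API, which listens on port 9200 by default. A minimal sketch with a placeholder host, to be pointed only at systems you own or are authorised to test:

```python
import requests

HOST = "http://db.example.internal:9200"  # placeholder; 9200 is Elasticsearch's default REST port

try:
    resp = requests.get(f"{HOST}/_cat/indices?v", timeout=5)
except requests.RequestException as err:
    print(f"No response: {err}")
else:
    if resp.ok:
        print("Instance answered without credentials; indices visible:")
        print(resp.text)
    else:
        print(f"Instance responded with HTTP {resp.status_code}; authentication appears to be enforced")
```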

The TechCrunch report suggests that a New York-based company called OpticsML that works with Ascension was involved in the breach as well. But it is not entirely clear what the company’s role was in the incident. Neither Ascension nor Rocktop Partners — a firm that TechCrunch identified as the parent of Ascension — responded to a Dark Reading request for comment on the data breach. But in comments to TechCrunch, a Rocktop representative confirmed the breach and said it will notify all individuals impacted by the exposure.

Unintended Disclosures

Despite all the concern over cybercriminals and nation-state actors, a startlingly high proportion of data breaches, especially in the cloud, result from basic misconfigurations and human error. “The big takeaway is that this data exposure was totally avoidable,” says Praveen Kothari, CEO of CipherCloud.

Based on available data, the breach appears to have been the result of an internal administrative error. “In the final analysis, the cloud providers secure their infrastructure, but it is totally up to you to secure your data,” Kothari says.

Data maintained by the Privacy Rights Clearinghouse shows that 20% of all breaches in 2018 resulted from causes not involving hacking or criminal intent. In many cases, the breaches were the result of sensitive information accidentally being posted publicly, sent to the wrong party, mistakenly distributed via email or physical mail, or otherwise mishandled. More data records – 54.7% of the total – were exposed last year via such incidents than by any other cause.

In a report last year, Symantec identified the three most common configuration mistakes that organizations make in the cloud as leaky storage buckets, vulnerabilities caused by open-source components, and accidentally leaving tokens, passwords and other secrets in publicly accessible repositories. In a 2018 study, Digital Shadows estimated some 1.5 billion files are exposed online as the result of misconfigured Amazon S3 buckets, FTP servers, network attached storage, and SMB file-sharing.

“Common configuration errors include failing to password-protect databases with sensitive data, failing to configure servers to limit access, and failing to encrypt sensitive data in general,” says Jonathan Deveaux, head of enterprise data protection at comforte AG.

Mistakes often result from a lack of knowledge about where sensitive data resides, he says. Negligent and careless insiders and a failure to recognize that security is everyone’s concern are other issues, he notes.

Colin Bastable, CEO of Lucy Security, says breaches such as the one involving Ascension are the manifestation of a training and attitude problem. “Who leaves such data without password protection? Someone who does not care,” he says.

Plenty of tools are available to protect data at all stages of the lifecycle, so technology is not an issue. “Fundamentally … management needs to fear the consequences of exposing personal data more than they crave the money from trading data,” Bastable says.

Partner Problems

The Ascension data breach is also another example of the exposure that companies can face from the security mistakes of partners, suppliers, vendors, and other third parties with whom they interact either directly, or even indirectly. In this case, even though Citi and Wells Fargo did not appear to have direct contact with Ascension for years, their customer data still got exposed.

Such incidents show why it is important for organizations to prioritize vendors based on business exposure and potential impact, and then apply the right level of due diligence to those vendors, says Fred Kneip, CEO at CyberGRX.

“A lot of organizations don’t have a handle on which vendors pose them the greatest risk,” he says. 

Jai Vijayan is a seasoned technology reporter with over 20 years of experience in IT trade journalism. He was most recently a Senior Editor at Computerworld, where he covered information security and data privacy issues for the publication.

Article source: https://www.darkreading.com/attacks-breaches/database-of-24-million-mortgage-loan-records-left-exposed-online/d/d-id/1333730?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple