
What’s under the hood of the new Brave browser?

Which browser – or browsers – rule the internet? Usage stats tell us it’s Chrome, followed by Internet Explorer, Firefox, Safari, Opera and the un-edgy Edge.

And yet, beyond this mainstream exist a surprising number of alternatives. Browsers aren’t done and dusted yet, it seems; if anything, new ones are on the uptick. The question is why.

Privacy and performance are plausible hooks but, with the exception of Tor, it’s difficult to pin down how neo-browsers offer much improvement. The suspicion is that some – including a plethora of ad-blocking add-ons – have latched on to these themes to front opaque business models with hidden downsides.

Then there’s Brave, a new Chromium-based browser for the ad-blocking age that would doubtless have fallen into obscurity like all the others if it weren’t for the fame of the man behind it, Brendan Eich.

It was he who invented JavaScript while working for Netscape in the 1990s before leaving to co-found Mozilla, a company he left in difficult circumstances in 2014 after only a week as CEO.

If anyone knows about browsers, it’s Eich, surely. Why another browser then?

Eich stated in an early blog post that Brave wants to rebuild the crumbling relationship between users, advertisers and publishers by re-balancing everyone’s interests more evenly.

Today’s big browsers are like windows through which advertising pours, along with ad-tracking systems that boost their commercial prowess. Inevitably, this messes up privacy because ads must watch people’s behaviour. Add Google’s search dominance and the problem deepens.

What companies like Google say, basically, is trust us – yes, we watch you but at least we’re the devil you know.  Earnest rivals such as Mozilla lack Google’s conflicts of interest but struggle to stem programmatic advertising because that means retrofitting privacy to the skeleton of an ageing browser model.

We’re left with weak anonymity modes, or better ones that come with compromises – Mozilla’s recent Focus app, for instance, is more a stripped-down search utility than a full-service browser. Browsing can also be scandalously slow, which drives people to disingenuous ad-blocking tech.

Brave’s alternative looks standard-issue at first: there’s ad-blocking (fingerprinting protection and script blocking), support for password managers (LastPass, Dashlane and 1Password) and HTTPS Everywhere integration. It also mentions anti-phishing protection. However, under the hood:

Brave currently runs an experimental automated and anonymous micro-donation system for publishers called Brave Payments.

Originally based on Bitcoin, this, it transpires, is about to be replaced by an Ethereum-based payments system called Basic Attention Tokens (BAT). When launched to fund the startup behind Brave earlier this year, BATs were seized upon by speculators who think they’ll increase in value.

Brave, then, is less a browser than a demo client for what is claimed to be a fairer ad platform in which advertisers, publishers and users are rewarded for taking part in what Eich calls a blockchain-based “game” (Brave gets a cut too).

Brave’s BAT platform preserves users’ anonymity while verifying in detail that their ad views are genuine. Only genuine ads are served, so there’s no malvertising.

Eich is vague about whether users get a share of BATs – it’s a “possibility” in some cases. If users don’t, what incentive is there to use it? What it offers – ad-blocking, less malvertising, and perhaps some performance gains – can be found in other browsers without BAT.

It’s a fascinating concept but, ironically, its advantages are hidden from people in the same way that the surveillance of today’s ad-tracking systems is.

It’s often said that with Google and Facebook, the user becomes the product. Brave’s alternative of turning users into tokens sounds like a modest advance at best.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/RDQu1jbWD4g/

Thought you’d blocked a Twitter user? Here’s how they can still dogpile you

Have you blocked abusive Twitter users?

Well, good luck with that. It turns out that neither you nor President Trump – who’s known for being quick to hit the block button – can stop blocked accounts from replying to your tweets, thanks to a bug that’s making the rounds.

As Motherboard reports, all a blocked person has to do is create an additional, dummy account, toggle over to it to view the messages of whoever blocked them, compose a reply, toggle back to their main account, and then hit reply to engage with that person’s tweets anew.

You won’t see the replies, but the followers of the blocked account will. Which means that if you’ve blocked someone who was encouraging abusive behavior from their followers, they can still egg on dogpiles by replying to you from their main account.

Well, this is curious. The whack-a-mole problem was one of the things Twitter mentioned that it was tackling in July. In February, it had announced that it was giving people more ways to report targeted harassment, including taking steps to identify the whack-a-moles who get suspended only to go off and open new accounts.

In July, Ed Ho, general manager of Twitter’s consumer product and engineering department, said that thanks to these changes, Twitter’s new systems had removed twice the number of repeat offender accounts – the whack-a-moles – over the preceding four months.

And yet here we are. A commenter on our coverage of that announcement said they couldn’t see any improvement:

They recently suspended my account 2 days after I reported another user who was breaking their TOS by using 2 accounts to gang up and harass people (and used both accounts to mass DM me 70+ times in an hour).

Twitter’s internal numbers painted a far rosier picture than many of its users reported. That point was strongly underscored by a report from BuzzFeed, also posted in July, about how Twitter is still slow to respond to incidents of abuse unless they go viral or involve reporters or celebrities.

Basically, when it comes to getting Twitter to pay attention to its own rules against abuse, it pays to know somebody. Otherwise, far too often, troll targets are going to be staring at streams of sewage in their Twitter feeds as the company blithely sends form emails that clearly show that somebody’s asleep at the wheel.

As far as this new bug goes, Motherboard points to an account that Trump blocked late last month: the Party of Reason and Progress (PORP), a nonprofit dedicated to promoting reason and empirically sound decision-making in modern politics.

Blocked? No problem. PORP is still replying to the president’s tweets under the same old blocked account – @TheOfficialPORP – and, Motherboard reports, is receiving more engagement on those tweets than ever.

PORP told Motherboard’s Louise Matsakis that it’s simple: Twitter’s mobile app lets users toggle between multiple accounts, but it doesn’t account for whether a user has blocked one of those accounts. All you have to do to reply to someone’s tweets, in spite of them having blocked you, is to just pull up their tweet on another account.

The PORP spokesperson:

I literally just respond to [Trump] (reply) from any other account. Then when I’m writing the reply Twitter has a switch accounts function right at the top. Once I switch, the tweet is still there and I press send.

Thus, the blocked account of @TheOfficialPORP can still reply to Trump, just like anybody you or I might block can keep replying to our tweets, spreading their own take on our messages to all their followers – for better or worse.

PORP told me:

Hard to believe it works but it does

Motherboard’s Matsakis said she couldn’t confirm when the bug had started to occur but that she had to update her mobile Twitter app to the latest 7.6 version in order to experience and confirm it. She reached out to Twitter for comment and to ask when the bug will be patched, but it hadn’t replied to her by the time the story was posted on Friday.

What’s the point of a block button that can be evaded so easily? The point of blocking is to spare users from abusive accounts. But if blocked accounts are still out there, still replying to Tweets of those who blocked them, and their followers are still able to interact with the target account… well, it just seems that the block button isn’t doing anything to stop gasoline from being thrown on to the torches of trolls.

Let’s hope Twitter fixes this hole soon.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/hwZnnKqrugs/

Unsecured databases are (still) the low-hanging fruit of the internet

A ransom attack that wiped more than 27,000 poorly configured MongoDB databases in January sounded like it would be a pretty loud wake-up call for better open-source NoSQL security.

Apparently, not so much.

As was widely reported at the time, penetration tester Victor Gevers, of the Netherlands-based GDI Foundation, and Niall Merrigan, a Norway-based developer, noticed a huge spike in attacks on MongoDB installations, jumping from about 2,000 on January 3 to 8,542 on January 5 to about 27,000 a couple of days later.

The two warned that the installations were seriously vulnerable: old MongoDB instances deployed via cloud hosting services, mostly on the Amazon Web Services (AWS) platform, with a default configuration – and without password-protected admin accounts.

That’s what has long been known in the industry as “low-hanging fruit”. And eight months later, the fruit is still hanging low. So low that in many cases it doesn’t even require hacking to get at the data, since it is publicly available. It’s the online version of a wide-open door.

Not that MongoDB is the only instance of leaky database security, or that this is anything new. As Naked Security’s Kate Bevan noted this week, more than 4m Time Warner Cable customers in the US had their data exposed, apparently due to a third party – TWC’s technology partner BroadSoft – failing to secure an AWS S3 bucket.

Earlier this summer, Upguard reported that information on the resumes of thousands of applicants to the private security firm TigerSwan – almost all of them military veterans – had been exposed due to an AWS S3 bucket being configured for public access by a third party – a recruiting vendor for TigerSwan. Upguard said it took more than a month to secure the bucket: it had notified the firm on July 21, and the problem wasn’t fixed until August 24.

More than four years ago, a report on Amazon S3 storage buckets found that nearly one in six – 1,951 of the 12,328 identified at the time – were open to the public.

The list goes on – and on. Wired noted just a few months ago that MacKeeper data on 13m customers leaked in 2015. In April 2016, MacKeeper security researcher Chris Vickery found a database that had been exposed for seven months, containing personal and voting data on all 93.4m Mexican voters. And last October, hackers gained access to personal information on 58m customers of the data storage firm Modern Business Solutions.

All of which would cause one to think that if those involved were going to wake up to the threat, they would have done so long ago – especially since, as noted on the BigStep blog more than a year ago:

… while NoSQL is a powerful solution and a real redesign for the way databases have always been structured, it was not developed with security in mind. Security was more of an afterthought, and even then, an afterthought by the organizations that adopt NoSQL databases and applications, not the developers themselves.

Another problem is that, since the technology is open-source, MongoDB and other makers of NoSQL databases are not necessarily to blame for the exposures, since they don’t control how users set them up – and whether they do or don’t apply the security controls that come with the product.

It’s enough to make a brief rant from a commenter with the handle “FreedomISaMYTH” – posted on the Naked Security blog after the January MongoDB attacks – sound like the most likely explanation. He (or she) wrote:

dbleaks.com has informed thousands of folks of this issue, less than 1% reply and out of that 1%… its maybe 1 in 100 that actually secure their databases… Hospitals, schools, doctor offices, applications (iOS/Andriod), etc… Security is the last thing these folks care about. You can tell them, they don’t care until it’s too late.

And perhaps the only way to make them care is to inflict some pain, in the form of sanctions or litigation, neither of which seems to be on the rise in the wake of the ongoing data exposures.

But if people do care, there is plenty of advice, starting with making sure that you read the security manual of whatever NoSQL database service you’re using, and implement all the available security controls.
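If you want a quick sanity check, the most basic failure described above – a server that answers anyone, with no credentials at all – is easy to test for against your own installation. Here is a minimal sketch in Python, assuming the pymongo driver and a hypothetical host address; it simply asks the server to list its databases without logging in:

    from pymongo import MongoClient
    from pymongo.errors import OperationFailure, ServerSelectionTimeoutError

    def is_open_to_the_world(host="127.0.0.1", port=27017):
        """Return True if the server hands out its database list with no credentials."""
        client = MongoClient(host, port, serverSelectionTimeoutMS=3000)
        try:
            # Rejected with OperationFailure when access control is enforced
            names = client.list_database_names()
            print("Unauthenticated access allowed; databases:", names)
            return True
        except OperationFailure:
            print("Good: the server refused the unauthenticated request.")
            return False
        except ServerSelectionTimeoutError:
            print("No MongoDB server reachable at that address.")
            return False

    if __name__ == "__main__":
        is_open_to_the_world()

If that call succeeds without credentials on a machine reachable from the internet, it will succeed for anyone else too.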

Beyond that are best practices that amount to common-sense security hygiene:

  • Make sure all of your data is backed up.
  • Keep your database patched and up to date.
  • Encrypt sensitive data.
  • Sandbox unencrypted data.
  • Establish and enforce strict user authentication policies.

In short, make sure your online database isn’t exposed to the rest of the internet – we don’t want to be writing about your security failure on Naked Security.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/7C7bEH1-sJU/

Heading off to university? Watch out for phishing scams

Several times a year, without fail, scammers start bombarding university students with barely convincing phishing emails in the weeks before the beginning of every term.

Timing is key: UK students expect to receive important emails from official bodies such as the Student Loans Company (SLC) and university finance departments in August, December and March, and are therefore seen as more vulnerable to being duped at those times.

Sure enough, Action Fraud this week issued one of its periodic warnings that the con-merchants are at it again, this time with a campaign that tries to trick students into believing their SLC account has been suspended due to “incomplete information”.

It hasn’t, of course, but anyone taken in by it will reach a phishing page designed to harvest their bank account details.

This isn’t a work of great phishing craft, but that doesn’t seem to matter:  a few recipients will read the important-sounding subject line, register the bright blue hyperlinks embedded in the email, and click themselves headlong into a dangerous situation.

At least this time new and returning students are getting a heads-up about this campaign before it does damage. This hasn’t always been the case.

In 2011, a similar campaign around the SLC grabbed the bank logins for 1,300 students, running up £1.5m in losses for victims. By 2012, the official victim count for this type of student fraud had dropped to 831, followed by 162 in 2014.

So things are getting better, but clearly there are still enough victims out there to make continued attacks worthwhile.

There’s plenty of advice worth handing out here, most of which sounds obvious: never log on to anything at the behest of an email, least of all one connected to finance, and double-check suspect requests through an institution’s official support channels.

Anyone who spots a phishing email should forward it to [email protected] or report it to Action Fraud, where it will be added to attack intelligence. The #StudentLoan Twitter hashtag is another good warning source.

Beyond that, turning on multi-factor authentication is a must because, in addition to being a good thing in itself, the lack of it is a warning sign when visiting important websites.

Is that it? Not quite.

We mentioned at the beginning of this piece that student phishing attacks depend on good timing but they also burn through one other fuel: email addresses. This sounds obvious but email addresses are central to targeted attacks.

It’s not clear where the phishing scammers got the cache of addresses used in the latest campaign (interestingly, some recipients appear not to be students at all), but it could well be an amalgam of sources: addresses guessed from university email domains, taken during breaches at academic institutions, or scraped or compromised from online services.

It follows, then, that guarding email addresses – or at least being careful about which ones are used for which type of communication – is an important defence. This includes the addresses universities hand to every new student.

Starting a university course is a happy moment for anyone lucky enough to experience it. But nobody should be under any illusion: it also puts students squarely in the criminals’ line of fire, and new students deserve to be reminded of that.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/pYKDP945Ayo/

Secure microkernel in a KVM switch offers spy-grade app virtualization

Researchers at Australian think tank Data61 and the nation’s Defence Science and Technology Group have cooked up application publishing for the paranoid, by baking an ARM CPU and secure microkernel into a KVM switch.

As explained to El Reg by Toby Murray, on behalf of his fellow Data61 Trustworthy Systems Team researcher Kevin Elphinstone, workers in secure environments will sometimes have multiple PCs on their desks. Each is connected to its own network and runs apps that live on discrete infrastructure. Those air gaps provide hygiene, so organisations feel satisfied that data can’t move between applications. To make things a little less cluttered on the physical desktop, keyboard, video and mouse (KVM) switches let users share one set of human interface peripherals among multiple PCs.

While KVM switches save clutter, users only see one app at a time. Which isn’t great given that sharing data from diverse sources can help the kind of people who need these rigs to do their jobs.

Hence Data61’s newly-revealed “Cross-Domain Desktop Compositor” (CDDC), a small piece of hardware that offers the same peripheral-aggregating functionality as a KVM switch but can also publish applications from different machines onto one screen and even allow cut and paste between windows.

The CDDC uses the seL4 microkernel, code that has been mathematically proven free of error and is therefore deployed in environments where reliability and resilience are at a premium.

The CDDC’s field-programmable gate array contains seL4 and code to scrape apps from different PCs and publish them into a single screen.

The device’s output is just video, so even though users see apps from up to four air-gapped PCs or thin clients on one screen, those machines’ isolation is preserved.

Policies can be applied so that only the permitted pixels make it off a PC and into the monitor the CDDC drives. Users are also reminded what level of security applies to the app they’re working with. Interaction between windows also follows policy – no naughty cutting and pasting from Top Secret to mere For Official Use Only apps allowed!
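To make the idea concrete, here is a purely illustrative toy model in Python – not Data61’s code, with made-up classification labels and a software framebuffer standing in for the real FPGA – showing the policy gate: a window’s pixels are copied into the shared display only if its domain is allowed out at all:

    import numpy as np

    # Hypothetical policy: which security domains may have pixels shown on the shared monitor
    POLICY = {"TOP_SECRET": True, "OFFICIAL": True, "UNTRUSTED_GUEST": False}

    screen = np.zeros((768, 1024, 3), dtype=np.uint8)   # the single shared display

    def compose(screen, window_pixels, origin, domain):
        """Copy one window into the shared framebuffer, if policy permits."""
        if not POLICY.get(domain, False):
            return screen                       # pixels from this domain never leave its PC
        y, x = origin
        h, w = window_pixels.shape[:2]
        screen[y:y+h, x:x+w] = window_pixels    # only raw pixels cross the boundary
        return screen

    # Two air-gapped sources, each contributing a window as nothing but pixels
    secret_app = np.full((200, 300, 3), 255, dtype=np.uint8)
    guest_app = np.full((200, 300, 3), 128, dtype=np.uint8)

    screen = compose(screen, secret_app, (50, 50), "TOP_SECRET")
    screen = compose(screen, guest_app, (300, 400), "UNTRUSTED_GUEST")  # blocked by policy

In the real device that gatekeeping happens in hardware, with seL4 in the loop, which is why the source machines stay isolated even while their windows share a monitor.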

The Cross-Domain Desktop Compositor’s output – several apps on one screen, all rendered as mere pixels

Murray said Data61 built the CDDC because while commercial products can publish apps securely, there are known problems with general-purpose hypervisors. He mentioned Xen’s recent woes as one reason sensitive users aren’t keen on commercial products.

For now, Murray hopes Australian defence types and government types will be interested in the CDDC. Over time he hopes financial services, medical and energy sector users will appreciate the chance to publish applications in a splendidly paranoid fashion too. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/09/07/cross_domain_desktop_compositor_vdi_for_the_paranoid/

Achtung! German election tabulation software ‘insecure’

Software used in Germany for vote counting is insecure, according to research by the Chaos Computer Club (CCC).

The white-hat hackers found multiple vulnerabilities and security holes in German national voting software. The findings were released by the group on Thursday, just weeks before the upcoming vote on September 24 to elect the members of the Bundestag (parliament).

Software used to tabulate election results might be hacked, the CCC warns. Analysis of the PC-Wahl software package used in many states to capture, aggregate and tabulate the votes during elections threw up a number of vulnerabilities.

The broken software update mechanism of PC-Wahl allows for one-click compromise. Together with the lacking security of the update server, this makes complete takeover quite feasible. Given the trivial nature of the attacks, it would be prudent to assume that not only the CCC is aware of these vulnerabilities.

“Elementary principles of IT security were not heeded,” said Linus Neumann, a CCC spokesman who was involved in the study. “The amount of vulnerabilities and their severity exceeded our worst expectations.”
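For a sense of what “elementary” means here: a sane update mechanism refuses to install anything whose signature does not verify against a vendor key already held by the client. A minimal sketch of that check – in Python with the third-party cryptography package, assuming an RSA-signed update file, and emphatically not a description of PC-Wahl’s actual code – looks like this:

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import padding

    def update_is_authentic(update_path, signature_path, pinned_pubkey_path):
        """Return True only if the update carries a valid signature from the
        vendor key shipped with the client (all file names are hypothetical)."""
        with open(pinned_pubkey_path, "rb") as f:
            vendor_key = serialization.load_pem_public_key(f.read())
        with open(update_path, "rb") as f:
            update_bytes = f.read()
        with open(signature_path, "rb") as f:
            signature = f.read()
        try:
            vendor_key.verify(signature, update_bytes,
                              padding.PKCS1v15(), hashes.SHA256())
            return True
        except InvalidSignature:
            return False

    # Only install if the check passes; an unauthenticated download with no
    # such check is what makes "one-click compromise" possible.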

PC-Wahl’s technology has been used in national, state and municipal elections in Germany for many years. The state of Hesse is verifying every transmission made using PC-Wahl through independent channels, according to CCC.

CCC has published proof-of-concept attack tools as a means to validate its warnings. The famous security outfit released its analysis in order to encourage the authorities to make necessary security improvements. “A brute manipulation of election results should be harder now because of the raised awareness and changed procedures,” CCC argues.

The security of electronic voting systems has been under serious scrutiny for years, well before allegations that the Russians interfered with last year’s US presidential elections. Russian intelligence has also been accused of hacking at least one maker of software used in the 2016 vote.

During a US Senate Intelligence Committee hearing in January, a senior Department of Homeland Security official claimed that the electoral systems of 21 as-yet unnamed states were probed by Russian government hackers last October. The intruders used a variety of exploits and software vulnerabilities in attempts to crack into election registration and management systems, but not the vote tallying equipment itself, according to testimony from the DHS’s acting director of the cyber division, Dr Samuel Liles, as previously reported.

El Reg contacted PC-Wahl requesting comment but has yet to hear back. We’ll update this story as and when new information emerges. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/09/07/german_election_software_insecure/

Dolphins inspire ultrasonic attacks that pwn smartphones, cars and digital assistants

Voice control is all the rage these days, but a team of Chinese researchers has come up with a way to subvert such systems by taking a trick from the natural world.

Apps like Google Assistant and Siri are set to always be listening and ready for action, but shouting into someone else’s phone is hardly subtle. So the team from Zhejiang University decided to take a standard voice command, convert it into the ultrasonic range so humans can’t hear it, and see if the device could.

The method [PDF], dubbed DolphinAttack, takes advantage of the fact that puny human ears can’t hear sounds well above 20kHz. So the team added an amplifier, ultrasonic transducer and battery to a regular smartphone (total cost in parts around $3) and used it to send ultrasonic commands to voice-activated systems.

The DolphinAttack rig – very little kit can have a big effect

“By leveraging the nonlinearity of the microphone circuits, the modulated low-frequency audio commands can be successfully demodulated, recovered, and more importantly interpreted by the speech recognition systems,” they said.

“We validate DolphinAttack on popular speech recognition systems, including Siri, Google Now, Samsung S Voice, Huawei HiVoice, Cortana and Alexa.” The in-car navigation system in Audi cars was also vulnerable in this way.

Because voice control has lots of possible functions, the team was able to order an iPhone to dial a specific number – a neat demonstration, but not much use as an attack. They could also instruct a device to visit a specific website – which could be loaded up with malware – dim the screen and lower the volume to hide the assault, or simply take the device offline by putting it in airplane mode.

The biggest brake on the attack isn’t down to the voice command software itself, but the audio capabilities of the device. Many smartphones now have multiple microphones, which makes an assault much more effective.

As for range, the furthest distance the team managed to make the attack work at was 170cm (5.5ft), which is certainly practical. Typically the signal was sent out at between 25 and 39kHz.
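The underlying trick is plain amplitude modulation: shift the recorded command up onto an ultrasonic carrier and let the microphone’s non-linearity demodulate it back into the audible band. A minimal sketch of generating such a signal – in Python with numpy and scipy, using a hypothetical mono recording and a carrier chosen from the range quoted above – might look like this:

    import numpy as np
    from scipy.io import wavfile

    FS_OUT = 192_000   # output rate high enough to represent a ~30kHz carrier
    FC = 30_000        # carrier frequency, inside the 25-39kHz range quoted above

    # Hypothetical mono recording of the command to be hidden
    rate, cmd = wavfile.read("voice_command.wav")
    cmd = cmd.astype(np.float64)
    cmd /= np.abs(cmd).max()                     # normalise to [-1, 1]

    # Resample the command to the output rate by linear interpolation
    t_in = np.arange(len(cmd)) / rate
    t_out = np.arange(int(len(cmd) * FS_OUT / rate)) / FS_OUT
    baseband = np.interp(t_out, t_in, cmd)

    # Classic AM: carrier plus modulated carrier. Squaring in a non-linear
    # microphone front end recreates an audible copy of the baseband command.
    carrier = np.cos(2 * np.pi * FC * t_out)
    ultrasonic = (1.0 + 0.8 * baseband) * carrier

    wavfile.write("ultrasonic_command.wav", FS_OUT,
                  np.int16(ultrasonic / np.abs(ultrasonic).max() * 32767))

Playing the result back, of course, needs an amplifier and transducer that can reproduce 30kHz cleanly – the roughly $3 of extra hardware the researchers bolted onto a phone.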

The full research will be presented at the ACM Conference on Computer and Communications Security next month in Dallas, Texas. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/09/07/dolphins_help_pwn_electronics/

Keep your data safe from lockup-malware-flinging thieves

Promo Ransomware has become one of the most damaging threats on the internet. In recent years ransomware strains have proliferated, spreading through spam emails and off-the-shelf malware kits that even criminals with minimal IT expertise can use to hijack and encrypt data, then demand a ransom to unlock it.

The sums of money have grown – and payment does not always guarantee delivery of the encryption key you need. The bad guys are constantly innovating and they operate in a highly professional manner, skilfully exploiting the security holes that exist in many organisations, small and large.

If you want to stay ahead, it could be time to attend a Sophos webinar entitled Don’t Tear your Hair out over Ransomware, which will examine the alarming new strains of ransomware that have emerged so far in 2017. Beware especially of the following:

  • Spora – launched in January and updated in June, it spreads through spam emails and scrambles data files, offering victims a “try before you buy” feature
  • Petya – a computer worm whose variants have been responsible since June for massive ransomware attacks across Russia, Ukraine and beyond
  • Philadelphia – an example of RaaS (ransomware-as-a-service) that once lurked only on the dark web but can now be openly purchased by anyone tempted by its sleek marketing and illicit rewards
  • Locky – after a spell in obscurity, this familiar strain has recently re-emerged to ensnare the unwary through .zip files in email attachments
  • WannaCry – the infamous worm for which Microsoft had issued a patch in March, but which still raged through 150 countries in May and almost brought the UK’s health service to its knees

Sign up to watch the webinar on the 21st of September to learn the tips and techniques you need to protect yourself against fresh ransomware attacks and discover what anti-virus tools are available to block the latest mutations. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/09/07/protect_your_data_from_the_ransomware_thieves/

.UK domains left at risk of theft in Enom blunder

For more than four months, thousands of UK companies were at risk of having their .uk domain names stolen because of a critical security failure at domain registrar Enom.

The security lapse allowed .uk domains to be transferred between Enom accounts with no verification, authorisation or logs.

Any domains hijacked would have been “extremely hard or impossible” to recover, according to The M Group, the security firm that discovered the flaw.

The M Group said it reported the issue to Enom on 2 May, but the problem was only addressed on 1 September. The practical upshot of the problem was that anyone with an Enom account would have been able to transfer another Enom customer’s domain to their control without consent or authorisation.

Enom sent a customer advisory saying it had fixed the issue by disabling all inter-account .uk domain transfers, mitigating the security oversight.

Enom’s breach notification email (source: M Group)

The issue was announced on the Full Disclosure mailing list, and The M Group has published an advisory.

Enom is owned by Canada-based domain name services company Tucows. El Reg asked both Tucows (by email) and Enom (via Twitter) for comment on the security issue. We’ll update this story as more information comes to hand. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/09/07/enom_security_snafu/