
US-CERT Warns of ERP Application Hacking

ERP applications such as Oracle’s and SAP’s are open to exploitation and under attack, according to a new report referenced in a US-CERT warning.

Enterprise resource planning (ERP) applications from vendors such as Oracle and SAP are under attack and the critical data living inside them is vulnerable to both criminal and nation-state hackers. That’s the warning by US-CERT today, referencing a new report by Digital Shadows and Onapsis.

The Digital Shadows/Onapsis report is a detailed look at how the software that forms the central pillar of many organizations’ application infrastructure has been targeted by cybercriminals in patterns that go back years.

“The key findings fold into three things,” says Michael Marriott, research analyst at Digital Shadows. “First of all there’s still a worrying number of Internet-facing applications. Second, there’s an increasing amount of exploits for these applications. And finally, threat actors know this,” he explains.

Years of development have gone into many of the exploits referenced in the report. The US-CERT bulletin references both the new Onapsis and Digital Shadows report, ERP Applications Under Fire, and a previous bulletin from 2016.

“Attackers do not need to go and really use one of their zero days or advanced techniques,” says Juan Pablo Perez-Etchegoyen, CTO of Onapsis. “A weak user password exploited by a well-known vulnerability that has been out there for five, 10, or even more years can lead to a successful breach.”

And those older exploits are finding new success against ERP. “They can leverage the current state of ERP applications because they are harder to maintain, hard to patch, and harder to keep up,” Perez-Etchegoyen says.

ERP’s complex and critical status within the enterprise makes it uniquely subject to attack. “Anyone who analyzes enterprise critical software will surely discover that cyber criminals are targeting them and that they will find vulnerabilities or existing, ongoing cyber campaigns,” says Joseph Carson, chief security scientist at Thycotic.

“Access to such ERP systems typically means security has been weak in other parts of the business, for example, securing systems and privileged access to critical business applications,” he says.

Architectural Vulnerability

Complexity has always been one of the characteristics of ERP software, and modern versions of the applications that can reach into every corner of a company’s operations are no exception. “The key part of this is that the footprint of course is so big. If you analyze an ERP application, for example, it has millions of lines of code — way more than any modern operating system,” says Perez-Etchegoyen.

The legacy ERP providers covered in the report are juicy targets, according to Joseph Kucic, chief security officer at Cavirin. That’s because they traditionally were internal applications only and later acquired “bolt-on components.”

“Since these firms are growing by bolt-on acquisition of strategic components, there are extensive publicly exposed elements, and those vendors lacked the focus that cloud-born applications have had in place since day one,” he says.

Perez-Etchegoyen notes that both SAP and Oracle are pushing customers toward cloud deployments from their traditional on-premise architectures. And for some, that shift could bring benefits.

“In some cases it’s even more secure to be in the cloud. For some specific use cases, organizations will enjoy the benefits of more secure systems just by going to a cloud,” he says. “But that doesn’t apply to all of the cases, especially where you have impressive service implementations or with multiple different products interfacing.”

The complexity that comes with integrating the many different layers of cloud applications brings particular security concerns, according to Kucic.

“Another major enterprise application weakness is middleware and it could provide a richer target area that could cross multiple applications and be more difficult to detect,” he says. “In some cases, I have found separation of Dev, UAT, and production on middleware to be the greatest weak link in enterprises and the least understood.”

One of the most concerning aspects of ERP deployments is the number of user interfaces that face the Internet, says Perez-Etchegoyen. Simply moving an interface off the Internet is not a panacea for security woes, though: “We hear a lot from ERP customers that they believe that because their SAP applications are not Internet-facing they are fine,” but that is not enough, he says.

The three key steps an organization can take to shrink its attack surface are to carefully review configurations for known vulnerabilities; change default passwords and require strong passwords for administrators and users; and reduce the exposure of ERP applications to the Internet.
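As a rough aid for that third step, the Python sketch below probes a host for a few TCP ports commonly associated with SAP NetWeaver and Oracle application services. The port list and hostname are illustrative assumptions, not drawn from the report, and any scanning should only be run against systems you are authorised to test.

```python
import socket

# Illustrative only: ports commonly associated with default SAP NetWeaver
# and Oracle application services. Confirm the right list for your landscape.
ERP_PORTS = {
    3200: "SAP dispatcher (DIAG)",
    3300: "SAP gateway (RFC)",
    8000: "SAP ICM / Oracle EBS HTTP",
    50000: "SAP Java AS HTTP",
}

def check_exposure(host: str, timeout: float = 2.0) -> None:
    """Report which ERP-related TCP ports answer on a given host."""
    for port, label in ERP_PORTS.items():
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            is_open = s.connect_ex((host, port)) == 0
        print(f"{host}:{port:<6} {label:<28} {'OPEN' if is_open else 'closed/filtered'}")

if __name__ == "__main__":
    # Hypothetical host name; replace with systems you own.
    check_exposure("erp.example.internal")
```

A port answering from the public Internet is not proof of a vulnerability, but it is exactly the kind of exposure the report says attackers are already hunting for.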

“This [report] is kind of a warning that threat actors are interested in various different things that are held by our applications, and there is stuff we can do about it to reduce our attack surface,” Digital Shadows’ Marriott says.




Article source: https://www.darkreading.com/application-security/us-cert-warns-of-erp-application-hacking/d/d-id/1332390?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Russian hackers are ready to disrupt US energy utilities, says DHS

Remember the stark warning from the US Department of Homeland Security (DHS) earlier this year that the Russians were trying to hack the country’s energy grid?

According to comments made on the record by the Department’s chief of industrial-control-system analysis, Jonathan Homer, this campaign was a lot more serious than anyone had previously admitted.

Historically, warnings about alleged Russian attacks on US infrastructure have tended to be general in nature – aspiration as much as achievement – but Homer’s comments add details that seem designed to raise the anxiety level a couple of notches.

From 2016 into this year, Homer says, Russian hackers snared “hundreds of victims” in the utilities and equipment sectors and “got to the point where they could have thrown switches” in a way that could have caused power blackouts. Predictably, these compromises started with phishing attacks, he said, before adding that the attackers had been sophisticated enough to jump across air-gapped networks.

Some victim companies might even today be unaware that they were targeted, a curious admission by the official given that one of the DHS’s jobs would surely be to tell them.

None of this will surprise anyone who took the time to read through the DHS alert from March, which offered a blow-by-blow account of how these attackers have been targeting the energy sector right down to which password cracking tools they prefer.

The groups alleged to be behind all this are nicknamed Dragonfly and Energetic Bear – the activity of both of these groups has been well documented over the last four years.

It might look a little strange that someone from the DHS would want to draw public attention to a successful attack by one of these groups in a way that serves to advertise their capabilities. It could be that officials want to underscore private warnings that have been handed out to the energy sector and perhaps pave the way politically for even more investment in US cyber defence.

There are now regular stories about Russian attacks against all sorts of online systems, including a separate indictment of 12 Russians for the alleged leaking of DNC emails during the 2016 election.

More recently came an unusually technical warning about how a group called Grizzly Steppe was attempting to compromise home routers.

On the other hand, perhaps these warnings are a way of sending these groups – and their alleged nation state paymasters – the message that what they are trying to do is being looked at under the microscope, and generating the forensics to point the blame squarely at them won’t be as hard as they think.

Energy and utility infrastructure is vulnerable in every country, and that includes Russia of course. That some nations might want to understand how to exploit it is a certainty, but under what conditions they would try to disrupt utilities in a country such as the US usually ends up being an exercise in pointless speculation.

As the disruptive 2015 and 2016 attacks on Ukraine demonstrated, that’s already happened on a small scale. The question is if, when and how the attackers will try something much bigger.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/4oX-yXAw-Z4/

Scammers pwn verified Fox Twitter account to scam cryptocurrency

Scammers have long been exploiting Twitter to steal digital currencies from naïve users, but this month one attacker pulled off a rare coup by compromising a verified Twitter account.

Those at risk of impersonation, like celebrities and other public figures, can get their accounts verified by Twitter to show that they are really the people in control of the account. In July, someone managed to gain access to a verified Twitter account for a now-defunct Fox show called Almost Human and use it to impersonate cryptocurrency entrepreneur Justin Sun, founder of the TRON decentralized blockchain application platform.

Almost Human was a science fiction drama that ran for just three and a half months between mid-November 2013 and March 2014. Fox cancelled it after the first season. However, the network appears to have lost control of the Twitter account that had been used for the show. Scammers appear to have compromised the account and updated its display name to Sun’s, impersonating his real Twitter account.

The impersonators have retweeted the real Justin Sun account several times, and most recently posted a giveaway invitation, asking followers to go and get free coins. This post now seems to have been taken down.

Cryptocurrency giveaway scams have become a popular activity among fraudsters. The scams, which typically target users of Ethereum and Bitcoin, two of the most popular cryptocurrencies, work by offering free coins online. The catch is that victims must first send a small amount of the cryptocurrency to the address before they receive a larger payout. The scammers keep the money they receive without returning anything.

The technique is a variant of the 419 scams that have plagued email users for so long, in which scammers claim to be high-ranking officials needing to get money overseas. They ask victims to send them a small amount of money in exchange for millions, which predictably never arrive.

Cryptocurrency giveaways have exploded on Twitter, and fraudsters have frequently impersonated celebrities and influencers to spread their silicon snake oil. The method is depressingly simple: all a user has to do is change their display name. Twitter usernames are unique handles that appear in a profile’s URL, but display names are free-form identifiers shown on the profile page and on posts. Users can set them to anything.

After impersonating a popular influencer, scammers post links (either as shortened URLs or as images) that take victims to landing pages, which often display large numbers of fake transactions to manufacture social proof.

In the past, fraudsters have used this trick to impersonate cryptocurrency entities ranging from popular exchange BitStamp through to Litecoin founder Charlie Lee. Most memorably, they targeted Vitalik Buterin, co-founder of Ethereum, who responded by changing his display name to “Vitalik ‘Not giving away ETH’ Buterin” and asking Twitter to intervene.

This isn’t the first time that someone has impersonated Justin Sun or his Tron cryptocurrency venture. BuzzFeed reporter Ryan Mac found others doing it in February, using the same trick.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/qaDnGHUgASg/

Crimson Hexagon banned by Facebook over user data concern

It sounds like the new headquarters for Superman archenemy Lex Luthor, but “Crimson Hexagon” is actually the name of the most recent data analysis firm to have been suspended for harvesting Facebook’s user data.

The Wall Street Journal last week reported that Facebook is investigating whether the firm’s contracts with the US government and a Russian nonprofit tied to the Kremlin violated its policies.

According to the WSJ, Crimson Hexagon has signed at least 22 government contracts worth more than $800,000 since 2014, including with the State Department, the Federal Emergency Management Agency (FEMA), and the Secret Service, as well as a separate contract with a Russian nonprofit called the Civil Society Development Foundation.

The newspaper reported that a current deal with FEMA involves monitoring online discussion for various disaster-related purposes. Another deal, with the Department of Homeland Security (DHS) Immigration and Customs Enforcement (ICE), fell through after Twitter resisted the firm’s use of its firehose data: a premium version of Twitter’s streaming API that guarantees access to all tweets matching specific criteria.

It’s not surprising that the Twitter deal fell through: Twitter’s got a history of extending, but then deciding to close down, special access to allow surveillance outfits to mine data.

In 2016, for example, in the wake of a report from the American Civil Liberties Union (ACLU) about police monitoring of activists and protesters via social media data, Twitter, Facebook and Instagram cut off the data streams they’d been sending to the Geofeedia app: an app that used the companies’ APIs to create real-time maps of social media activity in protest areas. Those maps have been used to identify, and in some cases arrest, protesters shortly after their posts became public.

Facebook likewise banned use of user data for government surveillance in March 2017, following pressure from civil liberties groups concerned about the targeting of dissidents and protesters, according to the BBC. The publication quoted a statement issued by a Facebook spokesperson on Friday:

We don’t allow developers to build surveillance tools using information from Facebook or Instagram. We take these allegations seriously, and we have suspended these apps while we investigate.

Crimson Hexagon had been paying for access to Twitter’s firehose – in fact, the firm gets more useful data from Twitter than from Facebook, the WSJ reports – but the deal reportedly fell apart over concerns about how data might be used in a potential contract with ICE.

Also on Friday, Crimson Hexagon pointed out in a post that it’s no Cambridge Analytica. It’s never collected anything but publicly available information, said CTO Chris Bingham. That’s in contrast to CA, whose data slurping was “explicitly illegal,” Bingham said:

To be abundantly clear: What Cambridge Analytica did was explicitly illegal, while the collection of public data is completely legal and sanctioned by the data providers that Crimson engages with, including Twitter and Facebook, among others.

As Tech Crunch points out, Crimson Hexagon, unlike Cambridge Analytica, isn’t “a quasi-independent arm of a big, shady network of companies working actively to obscure their connections and deals.” Rather, it’s …

…more above the board, with ordinary venture investment and partnerships. Its work is in a way similar to CA, in that it is gleaning insights of a perhaps troublingly specific nature from billions of public posts, but it’s at least doing it in full view.

Crimson Hexagon has spent years using the public APIs of apps such as Facebook, Instagram, Twitter, and other sources that include newsfeeds and blogs, aggregating public posts so it can measure public opinion about candidates, brands or issues. It’s got clients around the world, including in Russia, Turkey, the UK and the US.

The firm claims to have pulled together a trillion-post archive. Among Crimson Hexagon’s projects: the firm unsuccessfully tried to procure a Defense Department contract monitoring the Islamic State (IS) online; it had a contract to measure Russian President Vladimir Putin’s popularity; and, according to sources familiar with the company, it had a deal in Turkey that led the Recep Tayyip Erdogan-led government to decide, in 2014, to “briefly shut down Twitter amid public dissent,” as the WSJ reports.

In response to questions from the WSJ about its oversight of Crimson Hexagon’s government contracts and its storing of user data, Facebook said on Friday that it wasn’t aware of some of the contracts. The platform said it was suspending Crimson Hexagon’s apps from Facebook and its Instagram unit as it launched a broad inquiry into how Crimson Hexagon collects, shares and stores user data.

Also on Friday, Facebook VP for product partnerships Ime Archibong said that the company planned to meet with Crimson Hexagon’s team over the next few days to look into the matter:

Facebook has a responsibility to help protect people’s information, which is one of the reasons why we have tightened [access to user data].

Archibong added that Facebook allows outside parties to produce “anonymized insights for business purposes.”

Crimson Hexagon, a Boston firm, was founded in 2007 by political scientist Gary King, director of the Institute for Quantitative Social Science at Harvard University. While the “Crimson” part of its name appears to be a hat-tip to Harvard, the company’s site says the name is actually based on the “Crimson Hexagon” featured in Jorge Luis Borges’ short story “The Library of Babel”: a “library of astronomical size, comprised of almost infinite hexagonal-shaped rooms that collectively contain every possible combination of just 23 letters, a space, a period, and a comma. Though most of the books are gibberish, the library also contains every valuable book ever written and that might ever be written.”

That’s pretty much what our public posts are to a data analytics firm: continuously churned out mountains of what at first blush seems like gibberish but which, when you figure out how to analyze it, “helps brands find valuable meaning in a seemingly infinite volume of unstructured text and images,” as Crimson Hexagon says.

In other words, it’s figured out how to find great value in gibberish. Now, it’s time for Facebook to work out if, in the hands of Crimson Hexagon, that great value translates into the type of government surveillance the platform has already banned.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/CULYqOztXZs/

Hidden camera Uber driver fired after live streaming passenger journeys

Have you recently climbed into the back seat of an Uber or Lyft in St. Louis?

If so, did you happen to find yourself bathed in purple light? …Chauffeured by a friendly, bearded driver named Jason?

If any of that rings a bell, it might be time to say welcome to candid camera, given that you may have been live-streamed to Twitch without your knowledge or say-so, and that strangers may have been discussing your behavior online during your unwitting performance.

Or rating your body. Or capturing upskirt video of your crotch. Or watching you throw up. Or spying on you while you kissed, or wept, or dissed your relatives/friends/boss.

That’s what happened to hundreds of passengers who happened to wind up in a truck being driven by Jason Gargac, a (now former) driver for Lyft and Uber who decided to start livestreaming his passengers, and himself as a narrator when they weren’t there, as he drove around St. Louis.

Starting in March and on up to this past weekend, when both companies terminated him, Gargac gave about 700 rides through Uber and more with Lyft, according to the St. Louis Post-Dispatch.

Most of those rides were streamed to Gargac’s channel on Twitch: a live-video website that’s popular with video gamers. And most of those live-streamed riders had no idea he was filming them, the newspaper reports. Gargac goes by the username “JustSmurf” on his channel, which blinked off of Twitch.tv on Saturday.

Here are the type of comments that Gargac’s Twitch subscribers amused themselves with: when a viewer rated a blonde passenger a “7” and a brunette a “5,” another viewer with the username “DrunkenEric” replied that…

She doesn’t sit like a lady though.

The purple lights? They were used to illuminate passengers for the cameras. Two cameras, “about the size and shape of a deck of cards,” were mounted on the windshield, one facing outside, one facing inside the car. Gargac spent about $3,000 to outfit his car as a mobile recording studio, including what the St. Louis Post-Dispatch reports as a 12-button control panel that allowed him to toggle between camera views as he drove. He used what the newspaper called a “data setup” to keep his livestreams connected.

In an interview with the newspaper, Gargac said that at one point, he added a 4″ sticker on his back passenger window. It read:

Notice: For security this vehicle is equipped with audio and visual recording devices. Consent given by entering vehicle.

The Post-Dispatch managed to get in touch with a number of recorded people. None said that they’d noticed the sticker. They did say that they felt “dehumanized,” though, and asked that their names not be used in media coverage, given that they felt humiliated by the Twitch comments.

However, their first names, and even their full names, were sometimes revealed on the live stream. The same goes for their homes: Gargac at one point told his listeners that he intended to shut off the street-facing camera before he got close to a pickup address, to “protect people’s privacy and all that jazz.”

…if he remembers, that is. “I’m going to try to remember,” he says. “I’ll probably forget half the time.”

Gargac also told the St. Louis Post-Dispatch that he muted addresses if he caught them before people uttered them, and that he’d muted one conversation about drug addiction and another about personal finances. He also created a “block” graphic he could trigger from his control panel to paste over things like that upskirt shot, though only after one of his followers had already clipped the upskirt footage for later viewing.

Part of his motivation was to avoid being banned from Twitch, given that its terms of service prohibit sexual material, he said.

So it’s partly the [terms of service] and partly to respect the people. You know, I wouldn’t want my junk out there on camera, so I’m going to try to respect everyone else as much as I can.

The Post-Dispatch published its report on Gargac’s livestreaming on Friday. In the week leading up to publishing, Lyft and Uber responded to the newspaper’s queries by issuing prepared responses that pointed to the fact that the practice is legal in Missouri, where only one party to a conversation needs to consent to it being recorded. Gargac was that one person.

Uber’s initial statement:

Driver partners are responsible for complying with the law when providing trips, including privacy laws. Recording passengers without their consent is illegal in some states, but not Missouri.

That tune changed by Saturday, however. That’s when Uber sent an updated statement about having suspended Gargac from the app. It’s also when Gargac’s channel disappeared from Twitch. On Sunday, Lyft also said that the driver had been deactivated on its app. As of Monday, Uber told the newspaper that it had entirely ended its relationship with Gargac.

After the story was posted, Twitch said in a statement that it would remove content if it received a complaint from someone whose privacy was violated.

Chip Stewart, a professor at Texas Christian University who’s researched the privacy implications of livestreaming, told the Post-Dispatch that surreptitiously recorded passengers might have legal recourse, but the success of such a case might well depend on showing that they had a “reasonable expectation of privacy” when they climbed into Gargac’s truck.

Gargac, who told the newspaper that he considers his truck to be a public space – one in which he wouldn’t, say, have sex – could be facing not only civil litigation but possibly even criminal charges, given that he drove some fares into Illinois: a neighboring state that requires all parties to give consent to having conversations recorded.

His story has been picked up by the likes of the New York Times, “NBC Nightly News,” the BBC and “Good Morning America,” among others, and is being used to illustrate the haziness of current privacy laws.

In response to the media blowup, Uber on Monday confirmed that it was re-evaluating its policies.



Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/CTWmL5nMRwQ/

How one hacker could have changed automotive history

What’s worse than a world-readable copy of your customers’ data that any hackers with time on their hands (or even just a bit of luck) might stumble across?

How about a world-readable set of customer data that’s also world writable?

That way, once crooks have downloaded the entire trove, they can snoop through the stolen data, make a bunch of subtle (or even not-so-subtle) changes…

…and upload the alterations back to your server.

That way, they get a chance to make history, not merely to snoop on it.

According to a recent report from cybersecurity company UpGuard, that’s just what its researchers found earlier this month at a Canadian robotics company.

The robotics company’s ill-secured server not only exposed 150GB of private data from automotive companies including Ford, GM, Tesla and Toyota, but also leaked personal information about its own staff, including scans of passports and driving licences.

The unsecured data even included a non-disclosure agreement (NDA) form from trendy US automotive firm Tesla.

There’s no suggestion that the NDA had been signed – and many NDAs are, in any case, a matter of public record, because it’s not the existence of the NDA itself that’s secret – but it’s a wry irony in any case.

How were the backups organised?

Apparently, the afflicted company, Level One Robotics, is a user of the astonishingly useful open source software rsync.

Simply put, rsync is a remote file and directory copying tool that makes it both easy and efficient to keep two copies of your data synchronised, even if they’re on opposite sides of the world.

Traditional copying tools typically duplicate files blindly from A to B, even if B already has the needed files – this can waste enormous amounts of time and network bandwidth.

Traditional backup tools add a bit more intelligence by only copying files that have changed since last time, but usually copy an entire file even if only a few bytes in it have changed.

But rsync goes much further than that, and automatically.

When synchronising, client A and server B work in tandem through the files at each end of the link and efficiently:

  • Agree which new files need to be copied from A to B.
  • Agree which files on B are no longer needed and should be deleted.
  • Figure out how to update files on B to match the files on A by modifying only the parts of the files that have changed.

For example, in a text file that has had extra paragraphs added at the end, rsync will copy across only the new lines, thus avoiding copying the bulk of the file that already exists at the other side of the link.

Likewise, if the middle few slides in a presentation have been deleted but the rest of the document is the same, rsync will figure that out and transmit the instruction “delete the middle 20% of the target file” instead of re-transmitting the 80% of the file that hasn’t changed.
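To make the delta idea concrete, here is a deliberately simplified Python sketch of block-based synchronisation. Real rsync pairs a cheap rolling checksum with a stronger hash so it can match blocks at any byte offset; this toy version only matches whole, block-aligned chunks, so treat it as an illustration of the concept rather than the actual algorithm.

```python
import hashlib

BLOCK = 4096  # illustrative block size

def block_signatures(old_data: bytes) -> dict:
    """Hash each fixed-size block of the copy the receiver already holds."""
    return {
        hashlib.md5(old_data[i:i + BLOCK]).hexdigest(): i
        for i in range(0, len(old_data), BLOCK)
    }

def delta(new_data: bytes, signatures: dict) -> list:
    """Describe the new file as 'copy' ops (reuse a block the receiver has)
    and 'literal' ops (bytes that must actually cross the network)."""
    ops = []
    for i in range(0, len(new_data), BLOCK):
        chunk = new_data[i:i + BLOCK]
        digest = hashlib.md5(chunk).hexdigest()
        if digest in signatures:
            ops.append(("copy", signatures[digest]))   # nothing transmitted
        else:
            ops.append(("literal", chunk))             # data transmitted
    return ops
```

Only the “literal” chunks get sent over the wire, which is why appending a few paragraphs to a large file costs almost nothing to synchronise.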

The rsync tool is part of the Ph.D. dissertation that made Australian computer scientist Andrew Tridgell, better known as “tridge”, into Dr. Andrew Tridgell. Tridge is held in iconic regard in the open source community for creating Samba, a free reimplementation of Microsoft’s SMB networking protocols (now standardised as CIFS). Samba made it easy to introduce Linux servers into networks that were dominated by Microsoft products. You should read tridge’s Ph.D. – it’s not oppressively long, unlike some doctoral papers out there, and Andrew writes really well.

What went wrong?

The problem, if that is the right word, with rsync is that it’s enormously powerful – and that makes it prone to disastrous misconfiguration, by accident or design.

You need to be really careful about who’s allowed to connect to your rsync servers, and what sort of commands they’re allowed to send.

For example, to leech your entire backup, all I need to do is start with an empty directory and tell your server, “Please get me synchronised” – rsync will quickly figure out that I need a copy of every file, and that’s what I’ll get, without needing to know anything about your directory structure to start with.

Of course, if you run the same command in the other direction, you’re telling the server to “sync” itself with an empty directory, so it will immediately delete everything at the other end in order to make the source and target consist of an identical collection of zero files.

Alternatively, to make a small and subtle series of unauthorised hacks to your files, I can rsync your entire data set to my computer, find and edit just the files I want to mess with, and then run rsync once again to push back my alterations – automatically and super-efficiently.

Unfortunately, Level One Robotics hadn’t been cybervigilant enough at controlling who could get at its data.

Its rsync server, with access to more than 150GB of private and personal data, ought to have been accessible only to carefully selected servers on the company’s own network.

But the rsync server was accessible directly from the internet, findable by anyone prepared to set out and look.

The good news here is that UpGuard reported this potentially disastrous misconfiguration to Level One, and Level One fixed the problem pretty quickly – before the story was made public.

What to do?

If you’ve got servers that are only supposed to be accessible to selected people from selected computers on selected networks…

…then for goodness’ sake check to make sure that your servers don’t show up where they aren’t supposed to.

If you’re new to network scanning, which is where you deliberately go looking – with permission, of course – for what’s visible where on your own network, we recommend starting off with Nmap.

Nmap is free, powerful, has 20 years of history behind it, and you might as well run it against your own network to see what’s what – you might not be scanning your servers, but you can be sure the crooks are!
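Alongside a general Nmap sweep, a quick spot check for the specific problem described here is to see whether anything answers on TCP port 873 with the rsync daemon’s “@RSYNCD:” greeting. A minimal Python sketch, with hypothetical hostnames, might look like this:

```python
import socket

def rsync_daemon_exposed(host: str, port: int = 873, timeout: float = 3.0) -> bool:
    """Return True if an rsync daemon answers with its @RSYNCD greeting."""
    try:
        with socket.create_connection((host, port), timeout=timeout) as s:
            banner = s.recv(64)
    except OSError:
        return False
    return banner.startswith(b"@RSYNCD:")

# Hypothetical hosts; only probe systems you own or are authorised to test.
for h in ["backup01.example.com", "203.0.113.10"]:
    print(h, "rsync daemon reachable" if rsync_daemon_exposed(h) else "no rsync daemon")
```

If that check succeeds from the open Internet, you have the same exposure Level One had, whether or not anyone has leeched the data yet.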


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/IIOt0L0idqg/

Criminal mastermind injects malicious script into Ethereum tracker. Their message? ‘1337’

Ethereum-tracking website Etherscan has resolved a cross-site scripting issue on its domain.

Though among the world’s top-2,000 websites (1,379th per Alexa), Etherscan fell foul of one of the net’s most common security slip-ups.

Cross-site scripting (XSS) refers to when a hacker is able to inject a script into a vulnerable site which is viewable by visitors. It is especially useful for running phishing scams or, worse, pushing malicious scripts at site surfers.

Security researcher Scott Helme discovered that the flaw resided in an insecure custom implementation of the Disqus comment system, which generated a pop-up alert box on the Etherscan site. It read: “etherscan.io says 1337.”

The Etherscan developers informed users via Reddit. The site temporarily disabled the comment section while it worked to resolve the issue.

When the comments section reappeared, tests by Helme determined that the vulnerability was still uncorrected. “It seems that the fix was specifically to ‘handle un-escaped javascript exploits’ via their comment system,” he said, adding that this did not address the problem.
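Etherscan’s comment widget runs in JavaScript and its exact fix isn’t published, but “handling un-escaped exploits” boils down to output encoding: HTML-escaping untrusted comment text before it is rendered. A generic server-side sketch in Python, with illustrative names and markup rather than Etherscan’s actual stack:

```python
from html import escape

def render_comment(author: str, body: str) -> str:
    """Escape untrusted fields so an injected <script> tag renders as text."""
    return f"<div class='comment'><b>{escape(author)}</b>: {escape(body)}</div>"

# An injected payload becomes harmless markup-as-text:
print(render_comment("mallory", "<script>alert('1337')</script>"))
# <div class='comment'><b>mallory</b>: &lt;script&gt;alert(&#x27;1337&#x27;)&lt;/script&gt;</div>
```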

Helme told us that by late Tuesday afternoon the bug had been stamped, freeing him to discuss it in a blog post published on Wednesday morning. Helme began his inquiry into Etherscan’s XSS woes in response to a tip-off from journalist Jordan Pearson.

Etherscan is yet to respond to a request by El Reg to comment on the problem.

“This is exactly the kind of thing that CSP [Content Security Policy] was built to stop and it would have made a great defence here even though traditional mechanisms like output encoding were missed/forgotten,” Helme said. “A properly defined CSP would have neutralised the inline script here because inline script can be controlled on a site that defines a proper CSP.

“If the injected script tag was loaded from a third-party origin then the script would have been blocked because the origin wouldn’t have been found in the CSP whitelist. Either way, the attack would have been neutralised and again, this is exactly what CSP set out to do.”

CSP reporting could have alerted site admins about the problem. “When the browser blocked the hostile script it could send a report out to a service like Report URI1 and provide immediate information that there is script on the page that shouldn’t be there,” Helme added.
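As an illustration of the kind of policy Helme describes, the sketch below attaches a Content-Security-Policy header, with reporting, to every response of a small Flask app. Flask, the allowed CDN origin, and the report endpoint are all assumptions for the example, not Etherscan’s actual stack or policy.

```python
from flask import Flask, Response

app = Flask(__name__)

# Illustrative policy: no inline script, scripts only from the site itself
# and one named third party, with violation reports sent to a collector.
CSP = (
    "default-src 'self'; "
    "script-src 'self' https://trusted-cdn.example.com; "
    "report-uri https://example.report-uri.com/r/d/csp/enforce"
)

@app.after_request
def add_csp(resp: Response) -> Response:
    resp.headers["Content-Security-Policy"] = CSP
    return resp
```

Under a policy like this, the injected inline script would have been refused by the browser and a violation report sent to the configured endpoint.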

Lucky escape

The Etherscan incident could have been far worse. Rather than a cheeky pop-up, a more mendacious mind might just as easily have used the same flaw to run a crypto-mining scam.

“The script payload here was not stealthy in the least bit, popping a JS [JavaScript] alert on the page is a dead giveaway that there is a script there doing bad things,” Helme said. “Just think if it hadn’t popped that alert, though. What if it had injected malware, a malicious redirect, modified or tampered with the page or installed a keylogger? There are countless ways this could have gone very, very wrong but yet again, this was a lucky escape.

“It was only a few months ago when I was talking about how 4,000+ government sites got hit with crypto-jacking after a piece of rogue JS installed a crypto miner on their site. Back then I detailed how CSP and SRI could have protected all of those government sites and to this day only a small handful of them have gone and deployed either of those protections.” ®

Bootnote

1Helme is the security researcher behind both securityheaders.com and report-uri.com, free tools to help websites to deploy better security.

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/07/25/etherscan_xss/

2FA? We’ve heard of it: White hats weirded out by lack of account security in enterprise

Few companies bother to secure employee accounts with simple protections like two-factor authentication (2FA) and lockouts, an analysis by security company Rapid 7 has found.

These were only the most glaring weaknesses that emerged from 268 real-world penetration tests carried out by its security staff since 2017 for the report “Under The Hoodie” (PDF).

Of these networks, only 15 per cent had enabled 2FA; on 34 per cent its presence could not be determined, and on the remaining 50 per cent it was not present at all.

The even more basic practice of setting an account lockout – restricting incorrect password attempts to deter or slow brute-force attacks – was missing on almost one in five networks tested by Rapid 7.

In a further 16 per cent of cases, lockout only added time to the tester’s attempted compromise. Testers were completely locked out and detected on only 7 per cent of occasions.
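For reference, the lockout control the testers so rarely ran into is conceptually simple: count recent failed logins per account and refuse further attempts once a threshold is crossed. A minimal sketch, with illustrative thresholds rather than recommended values:

```python
import time
from collections import defaultdict

MAX_ATTEMPTS = 5            # illustrative: failures tolerated before lockout
LOCKOUT_SECONDS = 15 * 60   # illustrative: cooling-off window

_failures = defaultdict(list)   # username -> timestamps of recent failures

def record_failure(username):
    _failures[username].append(time.time())

def is_locked_out(username, now=None):
    """True if the account has exceeded the failure threshold recently."""
    now = now or time.time()
    recent = [t for t in _failures[username] if now - t < LOCKOUT_SECONDS]
    _failures[username] = recent
    return len(recent) >= MAX_ATTEMPTS
```

A real deployment would persist this state, notify the user, and alert the security team; that last step is what turns a slowdown into the detection the report says happened in only 7 per cent of engagements.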

Pen-testers are, of course, experts at avoiding being locked out, but so are a lot of cybercriminals. This is the whole point of pen-testing – to simulate a compromise from the attacker’s point of view.

The strangest omission, however, was still the failure to implement 2FA. “While 2FA continues to grow in popularity, it is still rare to find it in the field,” the authors noted.

One company that does use multi-factor authentication internally is Google, which this week told security blogger Brian Krebs that there had been “no reported or confirmed account takeovers since implementing security keys at Google”.

Unless a weakness is found in the way the technology has been implemented, an attacker needs to have physical access to keys as well as password and username.

And network credentials are not well protected, it seems, as testers were able to get their hands on these more than half of the time. During internal network tests, this rose to 86 per cent.

The number of ways testers were able to do this was dizzying, ranging from guessing default passwords and scraping compromised ones from the internet to social engineering.

The insiders

Other findings included that 84 per cent of networks were affected by software and hardware vulnerabilities to some extent, with 96 per cent affected by at least one vulnerability on internal tests.

A small encouragement here was that attackers wouldn’t have been able to make much of these without manual skills – automated tools and canned exploits only got the pen-testers so far. Nevertheless, in two-thirds of pen-tests, Rapid 7’s mavens gained complete admin access to the target networks.

The growth in internal pen-testing is a noticeable theme. Most tests were still traditional external tests but 32 per cent were purely internal, a significant rise on the previous analysis in 2016.

“This uptick in internal assessments is an indicator that organizations are, in general, taking a more holistic approach to their network security and are more likely to assess both their internal and external attack surfaces.”

This makes sense: once an attacker has breached the network’s perimeter, they see the network in the same way someone on the inside would. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/07/25/companies_fail_to_secure_employee_accounts/

New Free Chrome Plug-in Blocks Cryptojacking Browser Attacks

Qualys also plans Firefox, Safari, IE versions.

Qualys has developed a free extension for Google Chrome to protect browsers from cryptojacking attacks, Dark Reading has learned.

The new BrowserCheck CoinBlocker Extension uses both domain blacklists for cryptocurrency mining sites as well as heuristics features to detect unknown cryptojacking attack types. Qualys will officially roll out the plug-in on Wed., July 25, but it’s already available on the Google Chrome Web Store.

Cryptojacking attacks often occur when an attacker infects a website with JavaScript, and an unsuspecting visitor to the site unknowingly downloads that malicious code via a browser. The victim’s machine is then used to mine cryptocurrency, which the attacker pockets. The process can eat up more than 70% of a machine’s CPU, according to Qualys researchers.

Ankur Tyagi, senior malware research engineer at Qualys and one of the creators of the tool, says while there are other existing Chrome extensions for cryptojacking protection, most rely solely on a blacklist of IP addresses and not heuristics. Qualys’ BrowserCheck CoinBlocker Extension also was built to detect the popular CryptoNight family of cryptomining software, Tyagi says, the most pervasive of which is Monero.

Among the other coin types under the CryptoNight umbrella are ByteCoin, Digital Note, AEON, Loki, and BitTube. Tyagi says the heuristics feature in the plug-in can spot patterns that indicate cryptomining algorithm activity.

“Attackers are trying to create JavaScript-based attacks that can be launched on clients that visit” crypto malware-infected sites, he says.

BrowserCheck CoinBlocker works like this: When a user browses a website, the plug-in checks for the telltale malicious JavaScript. If it detects it, it stops the browser from downloading the JavaScript and also blocks the mining site. Qualys also plans to later roll out versions of the plug-in for the Firefox, Safari, and Internet Explorer browsers.
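The extension itself is written in JavaScript, but the two detection approaches it combines, a blacklist plus heuristics, are easy to sketch. In the Python illustration below, both the domain list and the patterns are invented examples, not Qualys’ actual lists:

```python
import re

# Illustrative blacklist of mining-service domains (not Qualys' list).
MINER_DOMAINS = {"coinhive.com", "coin-hive.com", "cryptoloot.pro"}

# Illustrative heuristics: strings that often appear in browser-based
# CryptoNight-style miners.
MINER_PATTERNS = [
    re.compile(r"cryptonight", re.I),
    re.compile(r"\bCoinHive\b"),
    re.compile(r"WebAssembly\.instantiate", re.I),
]

def looks_like_miner(script_url: str, script_body: str) -> bool:
    """Flag a script if it comes from a known mining domain or matches
    heuristic patterns associated with in-browser mining."""
    if any(domain in script_url for domain in MINER_DOMAINS):
        return True
    return any(p.search(script_body) for p in MINER_PATTERNS)
```

The blacklist catches known services cheaply; the heuristics are what give a tool a chance against mining scripts hosted on domains nobody has flagged yet.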

Google has been well aware of cryptocurrency mining abuse. In April, Google removed and banned cryptocurrency mining extensions in the Chrome Web Store after 90% of these apps violated its policy of properly informing users of the apps’ purpose.

The worldwide cryptocurrency market capitalization hit $270 billion this month, according to Qualys, demonstrating just how lucrative a target it presents for abuse. Meantime, malicious coin-mining samples increased by 629% in the first quarter of this year, according to McAfee, from around 400,000 samples in Q4 2017 to 2.9 million in Q1 2018.




Article source: https://www.darkreading.com/new-free-chrome-plug-in-blocks-cryptojacking-browser-attacks/d/d-id/1332381?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Securing Our Interconnected Infrastructure

A little over a year ago, the world witnessed NotPetya, the most destructive cyberattack to date. What have we learned?

In late June, the House of Representatives passed legislation specifically aimed at securing the industrial control systems that run our nation’s most critical infrastructure, from oil pipelines to water treatment facilities to the grid. These systems also run infrastructure that might not rise to the level of “critical” but is certainly important. The automated machines powering America’s manufacturing industry, for example, are all powered by software and hardware that is increasingly subject to a growing threat landscape.

This legislation is no doubt a reaction to the events of a little over a year ago, when the NotPetya malware metastasized from its original targets in Ukraine to over a dozen countries, including the United States. The US, UK, and other western powers later blamed and sanctioned Russia for the self-propagating worm, which has been dubbed the most destructive and costly cyberattack to date with damages exceeding $10 billion globally.

NotPetya and its predecessor WannaCry, both of which utilized an exploit that was allegedly developed by and later stolen from the National Security Agency, are glaring examples of how threats that have traditionally only affected IT systems are now creeping into operational technology (OT) systems, like those that open and close breakers, rotate turbines, and shut down plant operations when conditions reach dangerous levels. Indeed, the IT and OT worlds are converging, meaning that the victims of cyberattacks are no longer always the primary targets.

The reason for this phenomenon can be summed up in one word: interconnectivity. Our technological worlds are converging because the “things” that were heretofore disconnected are gaining a network connection, and more connected devices are being introduced into the global digital commons. By some estimates, the Internet of Things will more than triple in size between now and 2025 to over 75 billion devices. Most of these devices are consumer-facing — like smart thermostats and home assistants — but they are also found in our industrial facilities in the form of sensors, actuators, and portable interfaces like tablets and smart displays.

These industrial devices pose the greatest potential cyber-risk to our critical infrastructure. As stated by Congressman Don Bacon (R-Neb.), the primary sponsor of the DHS Industrial Control Systems Capabilities Enhancement Act of 2018, they are “the critical interface between the digital controls in an operational process.” Unlike most IT environments, where hackers are forced to overcome authentication hurdles, usually by stealing credentials or cracking weak passwords, many industrial control systems have no authentication at all. To make matters worse, the traffic is almost always unencrypted.
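The article doesn’t name a specific protocol, but Modbus/TCP is a commonly cited example of the problem: a read request is just a few unauthenticated bytes, so anyone who can reach port 502 can query (or, with other function codes, write to) a device. A sketch in Python, with a hypothetical device address:

```python
import socket
import struct

def read_holding_registers(host, unit_id=1, start=0, count=4):
    """Send a raw Modbus/TCP 'Read Holding Registers' request.
    Note that there is no login step anywhere in this exchange."""
    # MBAP header: transaction id, protocol id (0), length, unit id,
    # then PDU: function code 3, starting address, register count.
    request = struct.pack(">HHHBBHH", 1, 0, 6, unit_id, 3, start, count)
    with socket.create_connection((host, 502), timeout=3) as s:
        s.sendall(request)
        return s.recv(256)   # raw response; parsing omitted for brevity

# Hypothetical address; never probe control systems you don't own.
# print(read_holding_registers("192.0.2.50"))
```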

While it’s encouraging that the House is leaning forward on industrial cybersecurity and committed to authorizing and equipping the Department of Homeland Security to protect our critical infrastructure, this still remains largely a private sector problem. After all, over 80% of America’s critical infrastructure is privately owned and the owners and operators of these assets are best positioned to address their risks.

In doing so, one of the questions companies are asking themselves is how to reconcile the risks and rewards of the interconnected world. Should we simply retreat into technological isolationism and eschew the benefits of connectivity in the interest of security, or is there a better way to manage the risk?

The former is gaining a growing chorus, especially among security researchers. The latest call comes from Andy Bochman of the Department of Energy’s Idaho National Labs. Bochman argued this past May in Harvard Business Review that the best way to address the cyber-risk to critical infrastructure is “to reduce, if not eliminate, the dependency of critical functions on digital technologies and their connections to the Internet.” Said differently, when it comes to our most critical infrastructure assets, we should replace digital with analog and machines with humans.

Maybe I’m influenced by my millennial bias as a networked and digital creature, but such an approach seems tantamount to surrender in the face of a rising cyber threat that is still a long way from its apex. If the goal is to achieve maximum security of our critical infrastructure at all costs, even if it means depriving asset owners and operators of real-time performance analytics and the ability to conduct remote maintenance under routine and exigent circumstances, then so be it. However, this strategy is unlikely to receive much support outside of security circles and could prove to be cost prohibitive for most organizations.

By contrast, we must accept and embrace connectivity while, at the same time, improving security. This means balancing the risks of interconnectivity to our industrial control systems with gaining greater visibility into who and what are on these networks. Interconnectivity alone is not the problem; rather, it is this interconnectivity paired with opacity that produces the greatest risk to the country’s critical infrastructure.

When it comes to securing the industrial Internet of Things, we are still in very early days. Let’s not raise the white flag just yet by retreating into technological isolationism. Instead, let’s learn from the events of a year ago and bring together government, industry, and the critical infrastructure community to raise what are currently far too low barriers to entry for hackers.



Article source: https://www.darkreading.com/endpoint/securing-our-interconnected-infrastructure/a/d-id/1332375?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple