
NSA leaker Reality Winner gets 63 months in jail

Reality Leigh Winner, the NSA contractor who leaked sensitive information to the Intercept last year, was sentenced to 63 months in prison last week along with three years of supervised release.

Winner began working as a linguist for contractor Pluribus International at the National Security Agency’s office in Fort Gordon, Georgia, in February 2017, carrying a Top Secret clearance. In May that year, she leaked a five-page document classified as Top-Secret/Special Intelligence to the Intercept, an online news outlet created by eBay founder Pierre Omidyar’s First Look Media project. The document outlined efforts by Russian military intelligence to hack voting software providers and local election officials in advance of the 2016 election.

The Intercept, launched in part to report on the documents leaked by whistleblower Edward Snowden, contacted the government at the end of May to authenticate the document. On reviewing what the Intercept sent them, government investigators noticed that the copy had a crease, indicating it was a printout.

Officials conducted an internal audit and found that six individuals had printed the document. Auditing their desktops showed that Winner had emailed the news outlet to ask for podcast transcripts and had subscribed to its feed.

At the time, some mused that officials may also have been tipped off by a unique dot pattern printed as part of the document, which can be used to tie material to a specific printer.

Winner was prosecuted under the Espionage Act. Her sentence was expected, as it was outlined in a plea deal reached in June. The alternative penalty could have been far higher: the prosecution accused her of violating Title 18, United States Code, Section 793(e), which deals with ‘gathering, transmitting, or losing defense information’. Violating that law carries a hefty fine of up to $250,000 and a prison sentence of up to ten years.

In a prepared statement, Winner said that she had no intention of harming national security. She added:

I’d like to apologize profusely for my actions and apologize especially to my family: My actions were a betrayal of my country.

Winner’s case has provoked heated debate on both sides, with some praising her for revealing information of great value to the public, and others drawing a distinction between those who expose wrongdoing and those who simply divulge classified information.

President Donald Trump tweeted that the sentence was unfair, using it to attack both attorney general Jeff Sessions and his rival in the 2016 presidential race, Hillary Clinton.

Winner’s mother, Billie J Winner-Davis, called on Trump to help with her release.

Presidents have been known to commute leakers’ sentences before. Chelsea Manning, who was Bradley Manning at the time of her arrest for giving information to Wikileaks, originally received a 35-year sentence in 2013. In January 2017, then-President Obama commuted her sentence, requiring her to serve just four more months.

Winner wasn’t the first person to leak information from Fort Gordon. In 2008, military intercept officers Adrienne Kinne and David Murfee Faulk revealed the systemic surveillance of US citizens’ phone calls.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/oyaty0Ioca4/

Boffins bork motion control gear with the power of applied sound

A group of university researchers have developed a way to remotely control motion-sensing devices using only sound waves.

The study [PDF], authored by Yazhou Tu and Xiali Hei of the University of Louisiana at Lafayette, Zhiqiang Lin of Ohio State University, and Insup Lee of the University of Pennsylvania, found that the sensors and gyroscopes embedded in things like VR controllers, drones, and even hoverboards can be manipulated with resonant sound.

The idea, say the researchers, is to use acoustic waves that vibrate at the resonant frequency of a MEMS gyroscope, tricking the capacitive sensors on the device into accepting the injected signal as if it were genuine motion data.

Among the tested systems that were found to be susceptible were the Oculus Rift, both iOS and Android VR controllers, and gyroscopic screwdrivers from two different manufacturers.

“Under resonance, the sensing mass is forced into vibrations at the same frequency as the external sinusoidal driving force (sound pressure waves),” the group writes.

“Therefore, the mass-spring structure of inertial sensors could serve as a receiving system for resonant acoustic signals and allow attackers to inject analog signals at specific frequencies.”

In short, the researchers used a sound played through a speaker to send analog signals to the gyro sensors and create the illusion that the signal was the result of movement.
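The mass-spring behaviour the researchers describe is the classic driven, damped harmonic oscillator: drive it at its resonant frequency and the amplitude of the sensing mass spikes. As a rough illustration only (the frequencies, damping, and force values below are invented, not taken from the paper), the steady-state amplitude can be sketched like this:

```python
import math

def steady_state_amplitude(f_drive, f_res, damping=50.0, force_per_mass=1.0):
    """Steady-state amplitude of a damped mass-spring system driven
    sinusoidally at f_drive (Hz), with resonant frequency f_res (Hz).
    Parameters are illustrative, not those of a real MEMS gyroscope."""
    w = 2 * math.pi * f_drive
    w0 = 2 * math.pi * f_res
    return force_per_mass / math.sqrt((w0**2 - w**2)**2 + (damping * w)**2)

# A tone at the sensing mass's resonant frequency is amplified far more
# than one only a few hundred hertz away -- which is why a speaker can
# inject a measurable signal into the sensor at the right pitch.
on_res = steady_state_amplitude(19_000, 19_000)
off_res = steady_state_amplitude(18_500, 19_000)
print(on_res / off_res)  # amplification ratio, much greater than 1
```

This is why the attack needs the frequency matched to the specific sensor: detune the tone slightly and the injected signal falls back into the noise.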

While previous studies have found exposed accelerometers and gyros to be open to this type of interference, this is the first to show that embedded sensors can also be affected. This means that sensors installed inside finished devices can be targeted and used to manipulate those devices.

Some examples the researchers provided included using sound to manipulate an iPhone’s VR controller:

YouTube video

Controlling gyroscopic tools:

YouTube video

And even making a ‘hoverboard’ transporter turn on command:

YouTube video

There are limitations to the attacks. The researchers noted that the techniques only seemed to work on stationary devices; moving targets would require an attacker not only to follow the sensor but also to adjust the frequency on the fly. Timing was also an issue, as in the experiments the researchers had to adjust the signal manually.

As for possible mitigations, the researchers suggest that manufacturers tune the analog low-pass filters to a lower cut-off threshold. Additionally, damping the signals with acoustic materials or adjusting the sampling could help thwart the attacks.

“Employing both acoustic damping and filtering approaches in the designs of future sensors and systems can address these weaknesses,” the researchers explain.

“Additionally, acoustic damping can also be used to mitigate the susceptibility of currently deployed sensors and systems to acoustic attacks.” ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/08/28/hacking_motion_control/

Event management kit can take a hammering these days: Use it well and it’ll save your ass

Analysis Who’d have thought it? Diagnostic event streams and log files are fashionable at last.

But, despite many advances, they’re still as big a pain in the backside as they were 30 years ago – both as a tool for observing and reporting security issues thanks to their sheer volume and, increasingly, the numbers of data types we’re dealing with.

Logging has always been a paradox. Increase the log volume and you’ll naturally raise the number of alarms, buried in millions (literally) of lines of data. Limit your log input to something manageable and you risk missing potentially important messages as well as losing the underlying data.

Given the recent rise in machine learning (ML) and artificial intelligence (AI), it’s no surprise that AI types are trying to make sense of log files and come up with new ways of analysing them. The trend seems to be to talk about “behaviours”.

Now, oldies like me are all used to “events”. Speaking simplistically, an event is a single line in a log file or log data stream such as a password failure for an administrator account, or a new device being seen on the LAN. An event might mean something, but a sysadmin would have to look at what else is going on, either by checking other logs or systems, to see whether it’s important. Sometimes you’d amalgamate events to take automated actions, but this would be pretty noddy – for example, by locking a user’s account after three failed login attempts.
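That "noddy" style of event amalgamation is a few lines of stream processing. A minimal sketch of the three-failed-logins lockout, with the event shape and field names invented for illustration:

```python
from collections import defaultdict

LOCK_THRESHOLD = 3  # consecutive failed attempts before lockout

def accounts_to_lock(events):
    """Scan a stream of login events (dicts with hypothetical 'user'
    and 'result' fields) and return users who failed LOCK_THRESHOLD
    times in a row without an intervening success."""
    streak = defaultdict(int)
    locked = set()
    for ev in events:
        if ev["result"] == "failure":
            streak[ev["user"]] += 1
            if streak[ev["user"]] >= LOCK_THRESHOLD:
                locked.add(ev["user"])
        else:
            streak[ev["user"]] = 0  # a successful login resets the counter
    return locked

events = [
    {"user": "admin", "result": "failure"},
    {"user": "admin", "result": "failure"},
    {"user": "bob", "result": "success"},
    {"user": "admin", "result": "failure"},
]
print(accounts_to_lock(events))  # {'admin'}
```

Note how context-free this is: the rule sees only one stream and one counter, which is exactly the limitation behaviour analytics sets out to fix.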

“Behaviour” suggests a more long-term view, and that’s exactly the point of today’s algorithms. They take multiple streams of event data over time to build up the context in which systems are operating, and to make far more considered decisions than legacy monitoring tools. Think of it this way: a young baby crying is an event, which may be associated with accepted (and standard) behaviour if it’s around feeding time. However, if the baby were crying immediately following feeding, that behaviour may be alarming.

The concept of User Behaviour Analytics (UBA) has been around for a few years, and it uses precisely this concept. It combines event streams to analyse what users do on systems, and identifies anomalies: the more unusual and potentially damaging the behaviour, the more urgent the alert. So when a 9-5 accountant suddenly logs in at 2am for the first time ever, that’s more suspicious than a member of the 24/7 call centre doing the same thing.

UBA has now expanded into UEBA – User and Entity Behaviour Analytics – which is all about doing the same thing but also for devices (“entities”) on the network, not just user activity. Then of course you get into analysing the network traffic itself, not just the network devices, and lo and behold, we have the concept of NTBA – Network Traffic Behaviour Analytics.

Helping the machines

The more data the machine learning software sees, the better it’ll be at getting the right answer – but a helping hand is always welcome. Take, for example, one of my pet hates: staff off-boarding processes that aren’t being followed properly. I’d like it to tell me if I see a login to Active Directory for an individual who was previously the subject of an “employment ceased” event in the HR system. No AI system’s going to guess how to do this when you first fire it up, though, so you’ll have to give it some clues.
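That off-boarding clue is really just a join between two event streams. A hedged sketch of the correlation – the record shapes, field names, and integer timestamps here are all made up for illustration:

```python
def offboarding_violations(hr_events, login_events):
    """Flag directory logins by accounts that the (hypothetical) HR
    feed says were already terminated at the time of the login."""
    terminated = {
        ev["employee"]: ev["time"]
        for ev in hr_events
        if ev["type"] == "employment_ceased"
    }
    return [
        ev for ev in login_events
        if ev["user"] in terminated and ev["time"] > terminated[ev["user"]]
    ]

hr = [{"type": "employment_ceased", "employee": "jdoe", "time": 100}]
logins = [
    {"user": "jdoe", "time": 250},    # after termination: suspicious
    {"user": "asmith", "time": 300},  # no HR record: fine
]
print(offboarding_violations(hr, logins))
```

Neither stream means much on its own; the alert only exists in the combination, which is the whole point of behaviour-style analytics.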

Thinking at a higher level

You’ll also have to train yourself to think less technically. As IT specialists we’re used to thinking about traffic streams, ports, user IDs, login events, and the like. Because the technology’s now doing more of the low-level grunt work and – in real time – all the log correlation and deduction that we used to spend hours figuring out after the event, we can step back from that. We’ll think more how to configure and educate the software than how it’s working under the hood. And where we let the software go a step further and take action to, say, disable a user account, or kill the switch port of a virus-riddled PC, you’ll need to get to grips with finding your systems configured differently from how you left them.

Picking your alerts

In the old days you’d be strapped for space on most of your technology, so you’d have to be frugal with the log data you allowed the kit to generate. These days the management software vendors – particularly in Security Information and Event Management (SIEM) – are saying: “Just throw everything at us, we’ll take it.” So while you used to restrict the input to your management platform, you’re now turning up the input to 11 and having to enforce limitations (or at least prioritisation) on the output instead.

It’s a never-ending cycle of set-observe-repeat, but one you must nonetheless stick to: examine the alerts that are generated regularly, re-prioritise them rigorously, and ensure that stuff you’ve not seen yet is set to alert you. Security alerting thresholds should be more sensitive than general system management alerting levels. If you don’t get an alert to a system failure from the management platform then you’ll probably get one by phone from the users who can’t work, but in many security incidents the users will be blissfully unaware there’s something wrong, so you need to be sure the kit will tell you.

What about monitoring for a lack of alerts?

If you can’t hear your children playing, they’re probably up to no good. The same can apply in information security: alerting to something that appears in a log file is great, and putting it in context using other event streams is better – but what about alerting to something that hasn’t happened?

Say a user ID is compromised and the attacker is able to throttle the logging level from a crucial system. Will the system react to this? Yes, it’ll probably smell a rat because the entity (the server, switch, or whatever was compromised) will be behaving differently from normal.

But what about when an administrator logs in? It might be a perfectly normal activity for that user ID at that time of day. Hang about, though: that user’s swipe card hasn’t been used since he left yesterday, and there’s no activity on the remote access server for him – because another user has compromised his password and is using it for their own means. Can you configure systems to protect against this? Of course, though it usually won’t be trivial. Will an AI-based monitoring system that’s running out-of-the-box in its untrained state clock it? Probably not. Give some thought, then, to how the clever tech can correlate based on information that’s not there: it’s an area where I think we’ll see more and more work taking place in the AI space.

And on a simpler level, you can monitor the files that users access – but where in the log does it tell you the files that they don’t go anywhere near? (Answer: it doesn’t.) Your AI monitoring package is dead clever, and is getting more clever the more data it consumes and the more you tune it. So, there’s no reason why it can’t decide that it can turn off access to a particular folder for user X because he has rights to it but hasn’t used it for six months. Again, this is something you could script, but won’t it be great if the AI figures it out for you and saves you the trouble? Again, expect to see your AI monitoring tool piping up and saying: “Hey, have you noticed this isn’t happening?”
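Spotting those non-events – rights held but never exercised – boils down to a set difference between granted permissions and observed access events. A toy sketch of the six-month rule, with the grant and log shapes invented for illustration (days are plain integers):

```python
STALE_DAYS = 180  # roughly the "six months" in the text

def stale_grants(grants, access_log, today):
    """Return (user, folder) permissions never exercised in the last
    STALE_DAYS days -- candidates for automatic revocation."""
    last_used = {}
    for ev in access_log:
        key = (ev["user"], ev["folder"])
        last_used[key] = max(last_used.get(key, 0), ev["day"])
    return [
        g for g in grants
        if today - last_used.get((g["user"], g["folder"]), -10**9) > STALE_DAYS
    ]

grants = [
    {"user": "x", "folder": "/finance"},
    {"user": "x", "folder": "/projects"},
]
log = [{"user": "x", "folder": "/projects", "day": 700}]
print(stale_grants(grants, log, today=730))  # only the /finance grant is stale
```

As the article says, this is scriptable today; the interesting step is the tooling proposing it unprompted.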

A final reminder

I mentioned earlier that modern SIEM products have the capacity to consume pretty much all the data you can throw at them. So, while they’re chewing on the millions of event messages per day you’re bombarding them with, make sure you’re also sending them the non-existent event streams. By this I mean make sure you’re forwarding the event streams that should be empty. If your firewall’s configured not to permit inbound traffic, make sure you send the inbound traffic monitoring log to the SIEM platform. If all’s well it won’t cost you any bandwidth or storage (it should be empty, after all) but you’ll be damned grateful to have it when someone internal misconfigures your firewall or someone outside hacks it.
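The should-be-empty stream rule is perhaps the cheapest detection you can write: any event at all is an alert. A minimal sketch, with the stream names and log line invented for illustration:

```python
def empty_stream_alerts(streams):
    """streams maps a stream name to its list of events. For streams
    that policy says must stay empty (e.g. permitted inbound traffic
    on a deny-all firewall), any event at all means trouble."""
    return {name: events for name, events in streams.items() if events}

streams = {
    "fw-inbound-permitted": ["2018-08-28 10:02 ALLOW tcp 203.0.113.9:445"],
    "legacy-ftp-logins": [],
}
print(empty_stream_alerts(streams))
```

The storage cost of wiring such a stream into the SIEM is, by definition, nil until the day it saves you.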

Why? Well, take the true story of the sysadmin who’d been asked to run a report of all email accounts that were being auto-forwarded to external email addresses. The indignant cry came: “There can’t be any, we disabled it when we installed the server.” He ran it anyway, just to prove that all was well, and found that someone’s entire inbound email stream was being forwarded to a hacker’s Gmail account. Which would have been obvious if they’d been monitoring for stuff that shouldn’t happen.

Maybe the moral of the story, then, is that monitoring what’s not there is what’ll save you in the long run. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/08/28/what_systems_and_which_behaviour_to_monitor/

None too chuffed with your A levels? Hey, why not bludgeon the exam boards with GDPR?

Schools across the UK may have thought results fever was over for another year – but, thanks to the nation’s privacy watchdog, they might not get to relax just yet.

The Information Commissioner’s Office has published a how-to guide for students on demanding more information about their exam results.

“If you’ve just received your exam results, you may be interested to find out more about how you’ve been marked, and the comments made about you and your exam paper,” the ICO’s explainer stated. “You may even want to make an appeal against a mark you’ve been given.”

Under the General Data Protection Regulation, data subjects can ask for information held on them, which in this case will include exam marks, examiner’s comments and the minutes of any examination appeals panel.

The schools, colleges or universities will then have one month to respond to any such requests – but the ICO noted that students wanting to appeal marks would need to use a different procedure.

The body also pointed students to the Freedom of Information Act if they want to obtain more general official information about their schools.

Of course, informing the younger generation of their data rights is welcome – we’re just not sure the overstretched sector will be quite as thrilled about the promotion as they prep for the start of the new academic year.

It’s also possible that critics and observers would prefer the ICO to put as much emphasis on enforcing the laws as promoting their use. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/08/28/ico_exams_advice/

Windows 0-day pops up out of nowhere, er, Twitter

It’s not serious enough to force Microsoft into an out-of-cycle patch, but CERT/CC has just put out a warning about a new privilege escalation bug in Windows.

According to the tweet that set the hounds running, it’s a zero-day with a proof-of-concept published on GitHub.

CERT/CC vulnerability analyst Will Dormann quickly verified the bug, tweeting: “I’ve confirmed that this works well in a fully-patched 64-bit Windows 10 system. LPE right to SYSTEM!” (LPE – local privilege escalation – El Reg).

CERT/CC has finished its more formal investigation, and has just posted a vulnerability note.

“Microsoft Windows task scheduler contains a vulnerability in the handling of ALPC, which can allow a local user to gain SYSTEM privileges”, the advisory stated.

Because ALPC (Advanced Local Procedure Call) is a local interface, the impact is restricted somewhat: an attacker needs to be able to run code on the target machine already.

However, it opens an all-too-familiar attack vector: if an attacker can get a target to download and run an app, local privilege escalation gets the malware out of the user context up to (in this case) system privilege. Ouch.

The vulnerability note says: “The CERT/CC is currently unaware of a practical solution to this problem.”

Responding to The Register’s email inquiry, a Microsoft spokesperson said it will “proactively update impacted devices as soon as possible”, and pointed to its Update Tuesday schedule. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/08/28/windows_0day_pops_up_out_of_span_classstrikenowherespan_twitter/

Ah, um, let’s see. Yup… Fortnite CEO is still mad at Google for revealing security hole early

Updated The CEO of Epic Games, maker of smash-hit shoot-em-up Fortnite, continues to savage Google for disclosing a security hole in his software.

Calling the ad giant “irresponsible” for publicly disclosing the vulnerability on Friday, Tim Sweeney posted a string of angry tweets over the weekend and into Monday accusing the search king of hypocrisy – and implied that the release was payback for Epic deciding to offer its game to Android users outside of Google’s official Play app store.

The issue tracker webpage for the bug reveals that Google ran a security check against Epic’s Fortnite installer as soon as it was made publicly available on August 15. The uncovered flaw could be exploited by another malicious app on a phone to hijack the installation process of the game, and install spyware and other dodgy code in its place.

Google reported the issue to Epic which immediately started working on a patch that it put out one day later. Here’s where things get contentious. Two hours after informing Google it had published a fix, Epic’s security team asked Google to delay publication of the issue report for 90 days.

The 90-day delay is common practice and is the standard for bug disclosure under Google’s own guidelines. But Google’s guidelines also state: “After 90 days elapse or a patch has been made broadly available, the bug report – including any comments and attachments – will become visible to the public.”

Google’s security team turned down Epic’s 90-day request and published the information one week after the patch. It’s not clear when Google informed Epic it was going to publish the details; the issue tracker page refers to an email sent direct to Epic.

Unrestricted

“As mentioned via email, now the patched version of Fortnite Installer has been available for 7 days we will proceed to unrestrict this issue in line with Google’s standard disclosure practices,” says a comment posted shortly before the exchange was made public.

We have asked Google when it sent its email to Epic, and why it turned down the 90-day delay request, and we will update this article if we hear back. It is safe to assume from the response of Epic’s CEO that the move was unexpected.

“Android is an open platform. We released software for it. When Google identified a security flaw, we worked around the clock (literally) to fix it and release an update,” Sweeney tweeted, adding: “The only irresponsible thing here is Google’s rapid public release of technical details.”

Faced with Google fanbois pointing out that the information was only published a week after the patch was made available, he then pointed out that the Fortnite installer “only updates when you run it or run the game.”

Which is the reason Epic asked for a 90-day delay: so the likelihood of users opening the app – and so fixing the security issue – was far, far higher over the course of three months rather than one week.

But he wasn’t finished yet. Another tweet implied that there was a more nefarious reason behind Google’s disclosure: “Wouldn’t it be safer to disclose the technical details of vulnerabilities based on adoption rate of updates rather than mere availability?” he hypothesized, adding: “Of course the PR about the existence of a vulnerability and importance of updating could go ahead without disclosing the technical details.”

Google drive

In essence, he is suggesting that Google was driven to disclose a security hole in the Fortnite installer. Why? Because Epic decided that it would not put its app in the official Google Play app store. Sweeney was quite clear as to why: he didn’t want to pay Google a 30 per cent cut of the app’s revenue.

“The developer pays all the costs of developing the game, operating it, marketing it, acquiring users and everything else,” he explained earlier this month. “We’re trying to make our software available to users in as economically efficient a way as possible. That means distributing the software directly to them, taking payment through Mastercard, Visa, Paypal, and other options, and not having a store take 30 percent.”

He also noted that smartphone platforms “actually do very little” and even make money from selling ads for other apps using the keywords for the most popular apps.


It was a very public black eye for Google. Not only will the company lose millions of dollars in potential revenue but the ensuing publicity over the decision put a spotlight on Google’s price-gouging app store approach, one pioneered by Apple.

The counter-argument against Epic for its decision was that it could create a security risk because it would encourage users to download software from outside the (mostly safe) Play app store.

And so, when Google discovered that this security risk angle was actually real thanks to a hole in Epic’s installer, it had good commercial reasons to let people know about it.

Google of course claims that there was no such malice aforethought in its decision to release the details one week after the patch. It was simply following its policy: with a patch made available, it did what it has always done and made the information public.

Disclosure

Google, to its credit, has a long history of open disclosure as a default. Software manufacturers always have another reason why a security bug shouldn’t be disclosed, but too often that approach leads to holes being hidden for too long, with potentially serious implications. Google sticks to its guns on disclosure, despite regular criticism from companies that wish it had kept quiet.

So is this a case of Google seeking to embarrass Epic for refusing to go through its Play app store? Or is it Epic that is lashing out because of its own acute embarrassment?

We’d have to see the internal emails within Google and understand how far up the decision chain things went to ascertain that. But Epic’s Sweeney did make one further point about how he views it as Google being self-serving.

“This sort of policy would be disastrous if Google applied it to security flaws they discovered in their own software, given the Google/IHV/carrier bottlenecks in pushing Android OS updates,” he jabbed. Although, it would be fair to say that despite Fortnite’s huge popularity it is still a single app. And a game, rather than an operating system. ®

Updated to add

“User security is our top priority, and as part of our proactive monitoring for malware we identified a vulnerability in the Fortnite installer. We immediately notified Epic Games and they fixed the issue,” a Google spokesperson told The Reg in an emailed statement.

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/08/27/fortnite_ceo_slams_google_for_revealing_security_hole_early/

Lawyers sued for impersonating rival firm online to steal clients

An Illinois law firm is suing a rival it says was impersonating it online in a bid to steal clients.

Motta Motta LLC said in a filing [PDF] to the Northern Illinois US District Court that rival legal firm Dolci and Weiland had set up both a website and a phone line designed to redirect Motta’s criminal and family law clients to Dolci.

According to the complaint, Motta believes Dolci crafted a lookalike site designed to mirror its own website, then SEO-optimized that page to show up ahead of Motta’s own website in order to trick people in the Chicago area who were looking for an attorney.

The complaint alleges that, in 2016, Dolci was looking to expand its business from misdemeanors and DUIs into the family law and more serious criminal cases that Motta specialized in. To do this, Motta believes the rival firm tried to effectively steal its online identity.

Motta says the Dolci pages recreated not only the look and feel of Motta’s site, but also wholesale copied articles Motta’s attorneys had written for law journals years prior. It accuses Dolci of, at one point, compromising the Motta website and redirecting incoming traffic to its own lookalike page.

fbi

Ignore that FBI. We’re the real FBI, says the FBI that’s totally the FBI

READ MORE

“On or before May 11, 2016, Dolci executed the final step in its scheme and caused Motta’s website to be compromised and tags placed thereupon in order to hijack Motta’s web traffic and direct same to Dolci’s website while simultaneously assuming Motta’s online reputation,” the complaint reads.

“Dolci specifically set out to accomplish same in a manner least vulnerable to detection i.e. utilized tags as opposed to 301 redirects.”

As a result, Motta claims, Dolci saw its own traffic surge while Motta’s slowed to a trickle and the firm saw an “unmistakable and shocking decrease” in calls from potential new clients. What’s more, Motta alleges that its phone lines were also compromised by an employee, who turned traitor and referred potential clients to Dolci’s phone lines instead.

As a result, Motta alleges it lost about $2m worth of potential business to Dolci between 2016 and 2018.

The complaint alleges one count each of civil conspiracy, violation of the DMCA, unfair competition, and tortious interference. Motta is seeking an injunction to give it control of the Dolci website in question and a jury trial to determine damages. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/08/27/lawyers_impersonating_rivals/

Why CISOs Should Make Friends With Their CMOs

A partnership between IT security and marketing could offer many benefits to each group – and to the entire enterprise.

Image Source: Adobe Stock (the_lightwriter)


It might not seem like CISOs and CMOs have much in common, but both executives stand to gain by becoming allies.

Every day cybersecurity factors, such as bad breach publicity and phishing impersonators, erode enterprise brands — thereby diminishing the effectiveness of a CMO’s daily efforts. Brand value goes down, email marketing ROI gets trashed, and customer churn increases, all of which reflect poorly on the chief marketer. CMOs need help from CISOs to lock down risk factors. On the flip side, CISOs grapple with a number of challenges that CMOs could help them with, including insecure marketing technology and communication processes, breach response communication, and inadequate budget for preserving brand value.

While CISOs and CMOs might never become corporate besties, there’s clearly a lot of room for some mutual back-scratching. Here are some proof points to show why a partnership between this pair of executives can benefit both parties, as well as their companies.

 

Ericka Chickowski specializes in coverage of information technology and business innovation. She has focused on information security for the better part of a decade and regularly writes about the security industry as a contributor to Dark Reading.

Article source: https://www.darkreading.com/careers-and-people/why-cisos-should-make-friends-with-their-cmos/d/d-id/1332665?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

6.4 Billion Fake Emails Sent Each Day

The US is the leading source of phony messages worldwide.

New data on email threats in the first half of 2018 shows that some 6.4 billion emails sent each day worldwide are fake.

According to email security firm Valimail, the US is the No. 1 source of fake email, sending some 120 million phony messages in the second quarter of 2018. Valimail, an email authentication vendor, gathered data from emails that spoof the domain of the email sender. The data is based on both Valimail’s own analysis of billions of email authentication requests to its DMARC service, as well as analysis of more than 3 million DMARC and SPF records.

DMARC (Domain-based Message Authentication, Reporting and Conformance) and SPF (Sender Policy Framework) are email authentication standards.
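A domain's published DMARC policy is simply a DNS TXT record at `_dmarc.<domain>` containing semicolon-separated `tag=value` pairs. As a rough illustration of what receiving gateways parse (a tokeniser only, not a full validator, and the domain in the sample record is a placeholder):

```python
def parse_dmarc(record):
    """Split a DMARC TXT record into its tag=value pairs. Real
    validators also enforce tag ordering, defaults, and syntax;
    this sketch only tokenises the record."""
    tags = {}
    for part in record.split(";"):
        part = part.strip()
        if "=" in part:
            key, _, value = part.partition("=")
            tags[key.strip()] = value.strip()
    return tags

record = "v=DMARC1; p=reject; rua=mailto:dmarc-reports@example.com; pct=100"
policy = parse_dmarc(record)
print(policy["p"])  # reject
```

The `p` tag is the part that matters for stopping fake mail: `none` merely reports, while `quarantine` and `reject` actually act on failing messages.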

According to Valimail, 96.2% of the emails its DMARC-based email gateways inspected in the first half of 2018 were legitimate, 2.2% were not, and 1.5% came from legitimate sources but failed DMARC checks.

Read more here.

Dark Reading’s Quick Hits delivers a brief synopsis and summary of the significance of breaking news events. For more information from the original source of the news item, please follow the link provided in this article.

Article source: https://www.darkreading.com/endpoint/64-billion-fake-emails-sent-each-day/d/d-id/1332677?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

North Korean Hacking Group Steals $13.5 Million From Indian Bank

Tactics that Lazarus Group used to siphon money from India’s Cosmos Bank were highly sophisticated, Securonix says.

The North Korea-linked Lazarus Group is believed to be responsible for stealing $13.5 million from India’s Cosmos Bank in a brazen attack that has exposed limitations in the measures banks use to defend against targeted cyber threats.

The theft occurred between August 10 and August 13, 2018, and was enabled via thousands of fraudulent ATM transactions across 28 countries and by at least three unauthorized money transfers using the bank’s access to the SWIFT international financial network.

It is still unclear how the threat actors managed to initially infiltrate the bank’s network. But based on how Lazarus Group actors have typically operated in the past, the attackers broke in via a spear-phishing email and then moved laterally within the bank’s network, according to researchers at Securonix.

“This attack is a good example of the fact that, while ATM and SWIFT transaction monitoring is important, it often is not enough, and may only give you 10%-20% of the required detection coverage,” the security vendor noted in its report.

The Cosmos Co-operative Bank is a 111-year-old co-operative bank in India with branches in 7 states and 39 major cities. Between August 10 and August 11, Lazarus Group operators managed to compromise an end-user system at the bank and used that to access and compromise the institution’s ATM infrastructure.

Publicly available information and Securonix’s own analysis suggest that the attackers used multiple targeted malware exploits to set up a malicious ATM/POS proxy switch in parallel with Cosmos Bank’s own central switch.

They then broke or redirected the connection between the bank’s ATM/POS central switch and its back-end Core Banking System. Securonix described the banking switch as a component that is primarily used to perform routing and transaction-processing decisions.

“Based on the publicly available details, most likely there was no additional hardware installed,” says Oleg Kolesnikov, a member of the Securonix threat research team. “The malicious payment switch typically comes in the form of software, so this is likely what was installed and/or cloned/modified by the attackers to proxy the requests from the ATM terminals instead of the existing switch.”
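The mechanics described above can be illustrated with a conceptual sketch (this is not the actual malware, and the card numbers and response fields are invented): a rogue software proxy sits in front of the real payment switch, fabricating approvals for attacker-controlled cards while passing all other traffic through, so the core banking system never sees the fraudulent requests.

```python
# Hypothetical card numbers controlled by the attackers.
ATTACKER_CARDS = {"4000111122223333", "4000444455556666"}

def real_switch(request: dict) -> dict:
    # The legitimate switch would check card number, card status, and PIN
    # against the core banking system before authorizing a withdrawal.
    return {"approved": False, "reason": "declined by core banking"}

def rogue_proxy_switch(request: dict) -> dict:
    if request["card"] in ATTACKER_CARDS:
        # Fraudulent request: never forwarded to the real switch, so the
        # core system sees nothing; the proxy forges an approval itself.
        return {"approved": True, "reason": "forged authorization"}
    # Legitimate traffic passes through untouched, which keeps the link
    # looking alive and hides the fraud from back-end monitoring.
    return real_switch(request)

print(rogue_proxy_switch({"card": "4000111122223333", "amount": 500}))
```

Selective redirection of this kind would also explain why the bank received no alerts: from the core system's perspective, the connection to the switch was never interrupted.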

ATM Withdrawals

The attackers are believed to have increased the withdrawal limits on hundreds of targeted accounts at the bank and set them up so cash withdrawals could be made from the accounts from abroad. In total, operators working on behalf of Lazarus Group used 450 cloned non-EMV debit cards linked to accounts at Cosmos Bank to make some 12,000 international ATM withdrawals and 2,849 domestic transactions totaling $11.5 million.

Because the attackers had previously tampered with the link between the bank’s ATM switch and the core banking system, the required messages and codes for authorizing the debit card withdrawals were never forwarded to the core banking system. So typical checks on card number, card status, and PIN were never conducted. Instead, the attackers used the rogue ATM/POS switch that they had installed to send fake instructions for authorizing the fraudulent transactions.

About two days after the initial break-in, the attackers gained access to Cosmos Bank’s SWIFT environment and used it to illegally transfer $2 million to an account belonging to a trading company at Hang Seng Bank in Hong Kong.

The attack on Cosmos Bank’s ATM network was different from typical jackpotting and black box attacks, in which attackers physically tamper with ATMs to get them to spit out large amounts of cash. In this case, the attack targeted the bank’s core infrastructure and effectively bypassed all measures recommended by Interpol for protecting a bank’s ATM infrastructure against logical attacks, Securonix said.

What remains unclear is why Cosmos Bank did not receive any alerts when the connection between its ATM switch and core banking system was cut, or when thousands of clearly anomalous ATM transactions were being made.

“We do not know for certain, but it is likely that the connection was redirected such that the connection remained active, and only the malicious requests in question were selectively redirected by the malicious component,” Kolesnikov says. This would ensure that the malicious requests never made it to the legitimate payment switch, and therefore were never visible at the core backend system, he says.

The attack also likely involved a lot of malicious and suspicious attack behaviors that the bank should have spotted.

Based on the publicly available details, the attackers had to stand up a proxy switch capable of responding to malicious transaction requests from the terminals, Kolesnikov says.

They also likely had to install some targeted malware components needed to monitor the card management process and the payment infrastructure, to gain access to the SWIFT terminals and to understand the standard operating procedures.

Black Hat Europe returns to London Dec 3-6 2018  with hands-on technical Trainings, cutting-edge Briefings, Arsenal open-source tool demonstrations, top-tier security solutions and service providers in the Business Hall. Click for information on the conference and to register.

Jai Vijayan is a seasoned technology reporter with over 20 years of experience in IT trade journalism. He was most recently a Senior Editor at Computerworld, where he covered information security and data privacy issues for the publication. Over the course of his 20-year …

Article source: https://www.darkreading.com/attacks-breaches/north-korean-hacking-group-steals-$135-million-from-indian-bank-/d/d-id/1332678?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple