STE WILLIAMS

Overestimating WebAssembly’s Security Benefits Is Risky for Developers

NCC Group technical director Justin Engler and security consultant Tyler Lukasiewicz explain that while WebAssembly technology can promise both better performance and better security to developers, it also creates a new risk for native exploits in the browser. Filmed at the Dark Reading News Desk at Black Hat USA 2018.

Article source: https://www.darkreading.com/application-security/overestimating-webassemblys-security-benefits-is-risky-for-developers-/v/d-id/1332697?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Windows Zero-Day Flaw Disclosed Via Twitter

Security experts confirm the privilege escalation vulnerability in Microsoft Windows still works.

A previously undisclosed zero-day vulnerability in Microsoft’s Windows 10 operating system was published via Twitter this week.

SandboxEscaper, the user behind the Twitter account that exposed the vulnerability, first posted about the bug on Monday, Aug. 27, and linked to a proof-of-concept on GitHub. This is a local privilege escalation vulnerability, which exists in the Advanced Local Procedure Call (ALPC) interface within the Windows Task Scheduler, reports CERT vulnerability analyst Will Dormann.

An API in the Windows task scheduler contains a vulnerability in the handling of ALPC, and the bug could allow a local user to gain system privileges, Dormann explains in a CERT writeup on the vulnerability.

“We have confirmed that the public exploit code works on 64-bit Windows 10 and Windows Server 2016 systems,” Dormann writes. “Compatibility with other Windows versions may be possible with modification of the publicly-available exploit source code.”

Dormann later posted his own tweet confirming the exploit works even if the Windows 10 64-bit system is fully patched; with minor tweaks, it works on 32-bit systems as well. He is currently unaware of any workaround.
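If, as public analyses of the proof-of-concept suggested, the exploit plants a hard link in the Task Scheduler's C:\Windows\Tasks folder before abusing the vulnerable ALPC call, one crude and purely illustrative check is to look for files in that folder with more than one hard link. The Python sketch below is an assumption-laden heuristic of our own, not an official workaround (CERT listed none at the time).

```python
# Illustrative only: flag unexpected hard links in the Task Scheduler's
# legacy jobs folder. Public analysis of the PoC reported that it planted
# a hard link there before abusing the ALPC call, so extra links are a
# rough, hypothetical indicator -- not an official detection or mitigation.
import os

TASKS_DIR = r"C:\Windows\Tasks"  # assumed default path for legacy job files

def suspicious_hard_links(path=TASKS_DIR):
    findings = []
    for entry in os.scandir(path):
        if not entry.is_file(follow_symlinks=False):
            continue
        # os.stat (rather than DirEntry.stat) fills in st_nlink on Windows.
        st = os.stat(entry.path)
        if st.st_nlink > 1:  # a normal job file has exactly one link
            findings.append((entry.path, st.st_nlink))
    return findings

if __name__ == "__main__":
    for path, links in suspicious_hard_links():
        print(f"{path}: {links} hard links")
```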

The way this vulnerability was disclosed – with social media posts and a proof-of-concept published on GitHub – has captured the attention of the security community. However, as Synopsys principal scientist Sammy Migues points out, the discovery of a local privilege escalation flaw is “fairly common.”

Average users running Windows machines with this vulnerability can exploit it to gain elevated privileges despite not being granted that level of access by IT admins, Migues explains. If they do, anyone who gains access to their device will have the same privileges, putting the device and its data at risk.

He also points out that remote attackers could exploit this vulnerability, which would normally require local access, by getting a local user to execute the attacker’s code, for example through a phishing email or a malicious download.

“Having a working exploit out in the world makes this easier for everyone,” he continues. “A remote attacker would have to get someone to run their attack code,” via a phishing attack, for example, he says.

Microsoft has not issued an emergency patch for the bug. The company, which neither confirmed nor denied the existence of this vulnerability, will release its next wave of monthly fixes in its September Patch Tuesday update on Sept. 11.

“Windows has a customer commitment to investigate reported security issues, and proactively update impacted devices as soon as possible. Our standard policy is to provide solutions via our current Update Tuesday schedule,” said a Microsoft spokesperson.



Article source: https://www.darkreading.com/endpoint/windows-zero-day-flaw-disclosed-via-twitter/d/d-id/1332698?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

IT Professionals Think They’re Better Than Their Security

More than half of professionals think they have a good shot at a successful insider attack.

Computer professionals may think their enterprise security is good, but they think their skills are better. In fact, almost half think they could pull off a successful insider attack, according to a new report by Imperva.

Indeed, 43% of the 179 IT professionals surveyed said they could successfully attack their own organizations, while another 22% said they would have at least a 50/50 chance at success.

As for how they would carry out such an attack, only 23% said they would use their company-owned laptops to steal information, while nearly 40% said their personal equipment would be the chosen avenue of attack.

This information is most worrisome, according to the report, in light of Verizon’s “2018 Data Breach Investigations Report,” which found nearly 60% of all attacks take months to detect and days more to begin mitigation efforts after discovery.

Read here for more.


Article source: https://www.darkreading.com/application-security/it-professionals-think-theyre-better-than-their-security/d/d-id/1332699?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

‘Security Fatigue’ Could Put Business at Risk

The relentless march of security breaches may cause some individuals to drop their guard, but there’s more to the story than that.

When the news is filled with stories of one disaster after another, responders talk about “compassion fatigue” to explain why people seem to care less about the loss with each succeeding event. So, with stories of data breaches affecting millions of records in the news, is it possible for consumers and employees to suffer “security fatigue” in ways that have an impact on their behavior?

Gary Davis, McAfee’s chief consumer security evangelist, says security fatigue may be responsible for some behavior, but it’s not a complete explanation. “I read a report that talked about ‘optimism bias.’ People always tend to believe it’s not going to happen to them — it will happen to their neighbor, so they don’t have to be very proactive,” Davis says. “It’s the case of ‘it’s not going to happen to me’ versus ‘there’s too much going on.'”

One of the factors contributing to a lack of urgency is ignorance about just how much pain is involved when an identity is stolen, Davis suggests. “I don’t think people truly grasp just how painful it is to unwind something that’s pretty far gone down a path. That’s what people need to think about when they’re thinking about protecting their identities,” he explains.

Given the individual lack of action, organizations may have to step up their efforts to protect their customers and employees, Davis says. “With things like GDPR, there will be a more concerted effort for businesses to be more mindful,” he says.

But the impetus to protect individual personal data is not simply regulation-driven. As the traditional network perimeter has dissolved, it’s become more important for organizations to extend their technology and expertise to employee, partner, and customer devices in order to protect corporate assets. “We need a much stronger sense of collaboration and education for what you need to be doing to make sure you don’t put yourself or your company at risk,” Davis says.

What suggestions should the enterprise be making to customers and employees to help them keep both their own and enterprise data safe? Davis has a list of “bare minimum” steps he thinks every organization should suggest:

  • Apply patches and updates to the router, PC, and connected devices. If individuals do that much, they’re already doing well.
  • Stay informed and educated. Phishing is a good example. There are simple things an individual can do to see whether a message or website is phishing.
  • Have active antivirus on the smartphone and PC.
  • Use freely available website reputation tools. They’ll block access to a known-bad website.
  • Use a password manager. This could lead to both stronger passwords and the end of credential cascades in which a threat actor gets one password and gains access to dozens of websites (a minimal sketch of this idea follows the list).
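To make the password-manager point concrete, here is a minimal sketch of generating a strong, unique password per site using only Python's standard secrets module. It illustrates the principle rather than replacing a real password manager, which also has to encrypt, store, and sync its vault; the site names are placeholders.

```python
# A minimal sketch of the "unique, strong password per site" idea behind
# password managers, using only the Python standard library. A real
# password manager also encrypts and syncs its vault; this only shows
# generation. Site names below are placeholders.
import secrets
import string

ALPHABET = string.ascii_letters + string.digits + string.punctuation

def generate_password(length=20):
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

# One independent password per site, so a leak at one site cannot cascade.
vault = {site: generate_password() for site in ("example.com", "mail.example.org")}
for site, pw in vault.items():
    print(site, pw)
```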



Article source: https://www.darkreading.com/attacks-breaches/security-fatigue-could-put-business-at-risk/d/d-id/1332700?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Facebook: It’s too tough to find personal data in our huge warehouse

On 25 May, the EU’s General Data Protection Regulation (GDPR) came into force.

Mind you, the law itself had actually been in place for more than two years. The game changer: as of May, people could now demand that organizations hand over the data they hold on them – via subject access requests (SARs) – for free.

…which is how technology policy researcher Michael Veale, of University College London, wound up banging on the door of Facebook’s data warehouse.

As The Register reports, Veale submitted an SAR to the platform on 25 May, asking for whatever data it had collected on his browsing behavior and activities away from Facebook.

Facebook’s response: to slam the door in his face. Sorry, it told Veale: it’s too tough to find your information in our ginormous data warehouse.

That’s not going to fly, Veale has argued, given that the information Facebook picks up can be used to suss out highly personal information about somebody, including their religion, medical history or sexuality… and that goes for both Facebook users and non-Facebookers alike.

In particular, we’re talking about data scooped up by Facebook Pixel: a tiny but powerful snippet of code embedded on many third-party sites that Facebook has lauded as a clever way to serve targeted ads to people, including non-members.

Veale is taking the matter up with the Irish Data Protection Commissioner (DPC), given that Facebook’s European headquarters are in Ireland.

The Irish DPC has launched an inquiry into the matter, telling Veale that the case will likely be referred to the European Data Protection Board, given that it involves cross-border processing.

Veale shared his complaint with The Register. In his complaint, Veale seeks to find out whether Facebook has web history on him that pertains to medical domains and sexuality: the areas where Facebook is known to be doing highly targeted marketing, as he told The Register:

Both of these concerns have been triggered and exacerbated by the way in which the Facebook platform targets adverts in highly granular ways, and I wish to understand fair processing.

Veale says that he’s used the tools Facebook offers the public to find out what it knows about us. Such tools include Download Your Information and Ads Preferences, for example. But whichever specific tools Veale availed himself of proved “insufficient,” he said.

As Mark Zuckerberg repeatedly said over the course of two days of testimony in front of the US Congress in April, and as Facebook reiterated yet again in a “Hard Questions” blog post in the aftermath of that question-fest, Facebook uses data collected – even when users aren’t on Facebook – in order to improve safety and security, and to improve its own and its partners’ products and services.

But unlike Google, which offers a tool to see what it knows about us, Facebook earlier this year revealed to activist Paul-Olivier Dehaye that it can’t share users’ data with them.

We’re all stuck in the Hive

As Facebook said in an emailed response that Dehaye shared with the UK House of Commons digital committee, he had asked for data regarding what ads he saw as a result of advertisers’ use of Facebook’s Custom Audiences product. He also asked what data Facebook got on him via Facebook Pixel on third-party sites: data that’s not available through its self-service tools because it’s tucked away in a Hive data warehouse.

The Hive data is kept separate from the relational databases that power the Facebook site, Facebook told him, and is primarily organized by hour, in log format. That warehouse is vast, and it’s stuffed with people’s personal data, but it’s way too hard to get at it, Facebook said, and if everybody lines up to ask for their data, we’ll blow a gasket.

The data isn’t indexed by user, Facebook explained. In order to extract one user’s data from Hive, each partition would need to be searched for all possible dates in order to find any entries relating to a particular user’s ID.
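Facebook's description amounts to a familiar trade-off between write-optimised, time-partitioned logs and per-user lookups. The Python sketch below illustrates why an hourly-partitioned log store with no user index forces a full scan of every partition to answer a single subject access request; the directory layout and field names are hypothetical, not Facebook's actual Hive schema.

```python
# Illustrative sketch of the indexing problem described above: hourly log
# partitions with no per-user index mean every partition for every date
# must be read to find one person's records. The directory layout and
# field names are hypothetical, not Facebook's actual Hive schema.
import json
from pathlib import Path

LOG_ROOT = Path("/warehouse/pixel_logs")  # assumed layout: <root>/<YYYY-MM-DD>/<HH>.jsonl

def records_for_user(user_id):
    # No index by user: scan every hourly partition for every date.
    for partition in sorted(LOG_ROOT.glob("*/*.jsonl")):
        with partition.open() as fh:
            for line in fh:
                record = json.loads(line)
                if record.get("user_id") == user_id:
                    yield partition, record

# A per-user index (user_id -> partitions containing that user) would turn
# this full scan into a handful of targeted reads -- which is exactly the
# architectural choice at issue.
```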

From the company’s response to Dehaye:

Facebook simply does not have the infrastructure capacity to store log data in Hive in a form that is indexed by user in the way that it can for production data used for the main Facebook site.

As Dehaye points out, Facebook’s claims mean that as its user base grows, its data protection obligation “effectively decreases, as a result of deliberate architecture choices.”

Likewise, Veale isn’t buying Facebook’s argument. He pointed out that those who research Big Data have already clearly established that even if such data isn’t stored alongside a user ID, web browsing histories can be linked to individuals using only publicly available data. Toss machine learning into the mix, and even more patterns begin to emerge, he told The Register, including information on sexuality, purchasing habits, health information or political leanings:

Web browsing history is staggeringly sensitive.

Any balancing test, such as legitimate interests, must recognize that this data is among the most intrusive data that can be collected on individuals in the 21st century.

He told The Register that he wants to debunk the notion that it’s beyond the technical wherewithal of Facebook – or of any other online platform, for that matter – to handle requests like his:

I hope to refute emerging arguments that the data processing operations of big platforms relating to tracking are too big or complex to regulate.

By choosing to give user-friendly information (like ad interests) instead of the raw tracking data, it has the effect of disguising some of its creepiest practices. It’s also hard to tell how well ad or tracker blockers work without this kind of data.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/0mzwU8TN1lo/

And you thought you were safe behind your laptop screen…

There’s a useful sense of privacy from sitting in such a way that other people can’t see your laptop from behind.

When you’re working on your laptop facing other people, it follows that they’re looking at the back of your screen, so they can’t see exactly what you’re up to.

Whether you’re in a cafe, the library or a meeting room at work, why make it easy for everyone else to figure out your digital lifestyle?

Simply put, “Not their business.”

But what if your screen were giving away telltale signs of what you were up to anyway?

A foursome of cybersecurity researchers decided to take a look, and recently published a fascinating paper describing what they found out, and how – Synesthesia: Detecting Screen Content via Remote Acoustic Side Channels.

In fact, they didn’t so much take a look as have a listen.

Stray emissions

Stray electromagnetic emissions from electrical and electronic equipment have been an eavesdropper’s friend for years, especially when display screens were made using so-called CRTs, short for cathode ray tubes.

CRTs were quite literally glass “tubes” (though they were more spherical than cylindrical) covered inside with photoluminescent paint that would light up briefly when struck by a beam of electrons generated by a high-voltage electrical “gun” and aimed by means of magnets.

The tube itself was sucked empty of air – as far as possible – during manufacture in order to let the electrons fly unimpeded to the screen.

That’s why the American slang word for a TV set is “the tube”; it’s where the word Tube in YouTube comes from; and it’s why old-school TVs were so jolly heavy – all that reinforced glass!

As you can imagine, firing a steady beam of electrons at a phosphor-coated glass surface and sweeping the beam left-to-right, top-to-bottom 50 or 60 times a second, produced a cocoon of ever-changing stray electromagnetic radiation that could be detected from a distance.

Back in the 1980s, a Dutch engineer called Wim van Eck showed that this stray radiation could be detected, received and decoded using inexpensive hardware, producing an eerie but legible echo of what was on display on the other side of the room, or even on the other side of a wall.

Suddenly, thanks to what became known as the van Eck effect, covert video eavesdropping wasn’t just the preserve of well-heeled nation-state adversaries with giant-sized detector vans.

Enter the LCD

Fortunately for our collective concerns about covert CRT surveillance, tube displays started to die out, replaced by screens using LCDs (liquid crystal displays), and latterly LEDs (light-emitting diodes), technologies that are especially handy for laptops.

Modern screens are flat, so they’re much more compact; don’t require high-voltage electrical coils, so they use much less power; don’t require a vacuum-proof reinforced glass tube and a bunch of permanent magnets to operate, so they’re much lighter…

…and they don’t work by flinging electrons around in ever-varying magnetic fields.

As a result, there’s a lot less stray radiation for crooks in your vicinity to collect.

The van Eck effect doesn’t work with today’s screens – or, if it does, there’s so little to go on that you can’t do the detection and decoding of stray emissions with commodity equipment that would fit in a handbag or a jacket pocket.

What about other emissions?

So, in this story, our intrepid researchers – Daniel Genkin, Mihir Pattani, Roei Schuster and Eran Tromer – decided to try sniffing out video signals in a different way – using sound.

Recent research has shown that modern microphones, even the ones in mobile phones, can pick up sounds outside the range of human hearing.

What if modern screens produce inaudibly high-pitched sound waves as they refresh the pixels on the screen?

After all, the researchers reasoned, today’s screens are still refreshed a line-at-a-time, like old CRTs, and even though they use a tiny fraction of the electron-flinging power of their tube-based counterparts, the amount of electrical energy they consume still varies depending on what’s displayed on each line.

What if those nanoscopic power fluctuations cause micrometric fluctuations in the electronic components providing the power?

And what if those tiny, rapid fluctuations produce minuscule vibrations sufficient to generate faint pressure waves – sound! – that humans can’t perceive, because it’s too high-pitched to hear, and too low-powered to register anyway?
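How would you even look for such a signal? One simple, hedged approach is to record audio near a screen and examine the spectrum for strong, narrow peaks above the audible range, where a display's line-refresh harmonics would be expected to sit. The sketch below uses NumPy and SciPy; the recording filename, the frequency cut-off and the analysis parameters are illustrative assumptions, not the Synesthesia paper's actual pipeline.

```python
# A rough sketch of how one might hunt for screen-related tones in a
# recording: estimate the power spectrum and report strong peaks above
# ordinary hearing. Filename, thresholds and parameters are illustrative;
# this is not the paper's processing pipeline.
import numpy as np
from scipy.io import wavfile
from scipy.signal import welch

rate, audio = wavfile.read("near_screen.wav")   # hypothetical recording
if audio.ndim > 1:                               # mix stereo down to mono
    audio = audio.mean(axis=1)

freqs, power = welch(audio, fs=rate, nperseg=8192)

# A display's line-refresh rate (rows per frame x frames per second) sits
# far above normal hearing, so look for peaks beyond ~17 kHz, as far as
# the recording's sample rate allows.
band = freqs > 17_000
if band.any():
    top = np.argsort(power[band])[-5:]
    for f, p in zip(freqs[band][top], power[band][top]):
        print(f"peak at {f:.0f} Hz, power {p:.3e}")
else:
    print("Sample rate too low to see anything above the audible range.")
```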

Reading by listening

Could the researchers “read” your screen just by listening to it?

Yes! (Sort of.)

The researchers started out with images they called “zebras”, consisting of giant-sized white-and-black stripes on the screen, chosen to give them the best chance of spotting something and convincing themselves it was worth going further.
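Generating that kind of calibration image is trivial, which is part of why it makes a good starting point. Here is a minimal sketch using the Pillow imaging library; the screen dimensions and stripe height are assumptions, not the paper's exact values.

```python
# A minimal sketch of a "zebra" calibration image: alternating full-width
# white and black bands. Screen size and band height are assumptions,
# not the values used in the paper.
from PIL import Image

WIDTH, HEIGHT, STRIPE = 1920, 1080, 40  # assumed screen size and band height

img = Image.new("L", (WIDTH, HEIGHT), 0)          # start all black
for y0 in range(0, HEIGHT, 2 * STRIPE):           # paint every other band white
    img.paste(255, (0, y0, WIDTH, min(y0 + STRIPE, HEIGHT)))
img.save("zebra.png")
```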

Those results were promising, so they got a bit bolder: could someone across the table from you, for example, use a mobile phone to “record” your password off the screen as you typed it?

(Let’s assume that you’ve clicked the icon that reveals the actual password, not merely a string of **** characters – an option you might indeed choose if everyone else in the room can only see the back of your screen.)

Could our researchers sniff out the patterns on your screen using only audio emissions?

In two words, “Definitely maybe!”

What to do?

At the moment, this is an academic attack with little immediate practical value, so there’s not really anything you can or need to do.

The researchers tried “reading” individual words, consisting of no more than six letters at a time on the screen, rendered in a plain, fixed-width typeface with characters 175 pixels high – not the typical font, size or layout you’d experience when reading a document or looking at a website.

Even then, their letter-by-letter success rates were as low as 75%.

But their hit rate was way better than random, so this is still worthwhile research – and it’s a fun paper to read with some cool images.

It’s also an excellent reminder about a truism in cybersecurity: attacks only ever get better.

And that’s why cybersecurity is a journey, not a destination.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/cn9IQyHbefE/

ABBYY woes: Doc-reading software firm leaves thousands of scans blowing in wind

Document-reading software flinger ABBYY exposed more than 203,000 customer documents as the result of a MongoDB server misconfiguration.

The AWS-hosted MongoDB server was accidentally left publicly accessible and contained 142GB of scanned documents including over 200,000 scanned contracts, memos, letters and other sensitive files dating back to 2012. No username or password would have been needed to access this sensitive info before the hole was plugged.

Independent security researcher Bob Diachenko discovered the breach and alerted the software vendor. The data dump was discovered through Shodan, the machine data search engine, while Diachenko was investigating whether measures had been taken to avert MongoDB ransomware attacks, a particular problem last year.

ABBYY responded by blocking public access to the insecure system, allowing Diachenko to go public about his findings.

“Questions still remain as [to] how long it has been left without password/login, who else got access to it and would they notify their customers of the incident,” Diachenko wrote in a LinkedIn post related to the breach.

The name of the particular ABBYY client whose data was exposed has not been disclosed. ABBYY admitted the breach, which it described as a “one-off”, but said it had been resolved and had no impact on its various cloud-based services. The affected client was informed about the breach, which did not result in the disclosure of data to hackers, ABBYY said.

Last week, we were notified of a vulnerability affecting one of our MongoDB servers. MongoDB database software is widely used by enterprises. As soon as we got the email, we locked external access to the database, notified the impacted party, and took a full corrective security review of our infrastructure, processes, and procedures.

Our detailed investigation has shown that:

  • Only one client was affected. Said client has been notified, and all the necessary corrective measures have been taken.
  • No data was lost to an unknown party during the exposure.
  • The system is in a fully secure state.

Most importantly, this is a one-off incident and doesn’t compromise any other services, products or clients of the company. There is no relationship with or impact to CloudOCRSDK.com, FlexiCapture.com or any of our global cloud offerings. Additionally, no impact to any FlexiCapture or FineReader solution sold or promoted by ABBYY (cloud or on-premise).

We thank the research community for pointing out the vulnerability. The issue has been addressed and corrected. We are and will be taking all and any steps necessary to make sure it does not happen again.

MongoDB comes with security features as well as advice for administrators on how to secure systems. The default configuration of older versions of the database works without password access. Misconfigured MongoDB servers remain a common cause of security problems, and infosec watchers are unimpressed that ABBYY failed to heed the lessons of similar breaches.
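As a sanity check against exactly this kind of misconfiguration, an administrator can verify from an outside network that a MongoDB instance refuses unauthenticated requests. The sketch below uses the pymongo driver; the hostname is a placeholder, and the check is a minimal illustration rather than a full security audit.

```python
# A quick, hedged check that a MongoDB instance does not accept
# unauthenticated connections -- the misconfiguration behind this leak.
# The hostname is a placeholder; run the check from outside your network.
from pymongo import MongoClient
from pymongo.errors import OperationFailure, ServerSelectionTimeoutError

HOST = "db.example.com"  # placeholder

try:
    client = MongoClient(HOST, 27017, serverSelectionTimeoutMS=5000)
    names = client.list_database_names()  # should fail on a locked-down server
    print("EXPOSED: databases listed without credentials:", names)
except OperationFailure:
    print("OK: server refused the unauthenticated request")
except ServerSelectionTimeoutError:
    print("OK: server not reachable from here")
```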

“Victims of hacks associated with MongoDB have included the likes of Verizon, ‘elite’ dating website BeautifulPeople, and 31 million users of an Android keyboard app,” said industry veteran Graham Cluley in a post on the TripWire security blog.

“In this day and age, connecting a naked, unsecured MongoDB instance directly onto the internet can only be described as reckless and inexcusable. The security issue is well known, and the means to protect against it is well-documented.” ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/08/29/abbyy_aws_database_open_snafu/

Hackers faked Cosmos backend to hoodwink bank out of $13.5m

Security researchers have taken a deep dive into the cyber attack on the SWIFT/ATM infrastructure of Cosmos Bank, the recent victim of a $13.5m cyber-heist.

Experts at Securonix have outlined the most likely progression of the attack against the bank, the latest financial institution to face hacks blamed on state-backed North Korean hackers.

Cybercrims/spies most likely from the infamous Lazarus Group stole $13.5m from Cosmos Bank between August 10 and 13, as previously reported.

The breach involved an ATM switch and related SWIFT environment compromise that created two routes through which hackers cashed out, according to Securonix.

Targeted spear phishing and/or a hack against a remote administration/third-party interface allowed hackers to gain an initial foothold in the Indian bank’s network. Following subsequent lateral movement, the bank’s internal and ATM infrastructure was compromised.

After the initial break-in, attackers most likely either leveraged the vendor ATM test software or made changes to the deployed ATM payment switch software to create a malicious proxy switch.

Hackers were then in a position to establish a malicious ATM/POS switch in parallel with the existing (legit) system before breaking the connection to the backend/Core Banking System (CBS) and substituting their own counterfeit system in its place.

Common banking ATM switch architecture (image source: Securonix blog post)

Details sent from a payment switch to authorise transactions were never forwarded to backend systems so the checks on card number, card status, PIN, and more were never performed. Requests were handled by the shadow systems deployed by the attackers sending fake responses authorising transactions.

This bogus system was used to authorise ATM withdrawals for over $11.5m through more than 2,800 domestic (Rupay) and 12,000 international (Visa) transactions using 450 cloned (non-Europay, MasterCard or Visa) debit cards in 28 countries, Securonix said.

Using MC [a malicious ATM/POS switch], attackers were likely able to send fake Transaction Reply (TRE) messages in response to Transaction Request (TRQ) messages from cardholders and terminals. As a result, the required ISO 8583 messages (an international standard for systems that exchange electronic transactions initiated by cardholders using payment cards) were never forwarded to the backend/CBS from the ATM/POS switching solution that was compromised, which enabled the malicious withdrawals and impacted the fraud detection capabilities on the banking backend.
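The core failure was that approvals were issued by a switch that never consulted the core banking system, so the CBS had no record of the withdrawals it was supposedly authorising. One out-of-band control that follows from this is routine reconciliation of switch-approved transactions against CBS postings. The Python sketch below illustrates the idea; the record fields (card, STAN, amount) and sample values are invented for illustration and are far simpler than a real ISO 8583 feed.

```python
# Illustrative sketch of an out-of-band control against a "shadow switch":
# reconcile transactions the ATM switch approved against what the core
# banking system (CBS) actually recorded. Field names and record shapes
# are invented; real ISO 8583 feeds carry far more detail.

def unreconciled(switch_approvals, cbs_postings):
    """Return approvals the CBS never saw -- each one is a red flag."""
    posted = {(p["card"], p["stan"], p["amount"]) for p in cbs_postings}
    return [a for a in switch_approvals
            if (a["card"], a["stan"], a["amount"]) not in posted]

switch_approvals = [
    {"card": "cloned-card-01", "stan": "000123", "amount": 20000},
    {"card": "cloned-card-02", "stan": "000124", "amount": 15000},
]
cbs_postings = [
    {"card": "cloned-card-01", "stan": "000123", "amount": 20000},
]

for alert in unreconciled(switch_approvals, cbs_postings):
    print("Approved at switch but never posted to CBS:", alert)
```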

Securonix rates the attack as far more sophisticated than bank ATM heists mounted by criminal gangs in Mexico, Russia and elsewhere that have focused on planting malware on targeted cash machines.

“The attack was a more advanced, well-planned, and highly coordinated operation that focused on the bank’s infrastructure, effectively bypassing the three main layers of defence (PDF),” the firm said.

The crooks didn’t stop there and further hacked Cosmos Bank’s compromised network in order to authorise a $2m fraudulent transfer through the SWIFT inter-banking messaging network.

“On August 13, 2018, the malicious threat actor continued the attack against Cosmos Bank likely by moving laterally and using the Cosmos bank’s SWIFT SAA environment LSO/RSO compromise/authentication to send three malicious MT103 to ALM Trading Limited at Hang Seng Bank in Hong Kong amounting to around US$2 million.”

The attack bears hallmarks of the Lazarus Group including the use of Windows Admin Shares for lateral movement, using custom Command and Control (C2) that mimics TLS, adding new services on targets for persistence, Windows Firewall changes and a number of other techniques. A fuller run-down of Lazarus Group’s techniques in general can be found in MITRE’s ATT&CK wiki entry on the group.

Securonix’s research is designed to help banks increase their chances of detecting similar future attacks. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/08/29/cosmo_bank_cyberheist/

7 Steps to Start Searching with Shodan

The right know-how can turn the search engine for Internet-connected devices into a powerful tool for security professionals.

In the toolkit carried by hackers under any shade of hat, a search engine has become an essential component. Shodan, a search engine built to crawl and search Internet-connected devices, has become a go-to for researchers who want to quickly find the Internet-facing devices on an organization’s network.

With skilled use, Shodan can present a researcher with the devices in an address range, the number of devices in a network, or any of a number of different results based on the criteria of the search. 

There are many ways to approach Shodan, but the following seven steps will get you started in the right direction. Have you already begun with Shodan? Are you a Shodan ninja? What tips do you have for beginners? Share your thoughts in the comments.
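If you want to get hands-on straight away, Shodan also publishes an official Python library (installable with pip install shodan) alongside its web interface. The sketch below runs one simple search; the API key and the query string are placeholders you would replace with your own.

```python
# A minimal first search with the official "shodan" Python library
# (pip install shodan). The API key and query below are placeholders.
import shodan

api = shodan.Shodan("YOUR_API_KEY")  # placeholder key from your Shodan account

try:
    results = api.search('org:"Example Corp" port:22')
    print("Total results:", results["total"])
    for match in results["matches"][:10]:
        print(match["ip_str"], match.get("port"), match.get("org"))
except shodan.APIError as exc:
    print("Shodan error:", exc)
```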


Article source: https://www.darkreading.com/iot/7-steps-to-start-searching-with-shodan/d/d-id/1332684?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

How One Company’s Cybersecurity Problem Becomes Another’s Fraud Problem

The solution: When security teams see something in cyberspace, they need to say something.

Fraud isn’t something new or something that only happens on the Internet. Identity theft has been around for decades. What has changed is how fraud is executed; not only are individuals targeted, but now entire companies can become targets for fraud. For example, what are phishing sites masquerading as legit websites if not attempts at counterfeiting the identity of that company?

Cloud service providers and blue-chip software companies are especially desirable targets for fraud. Bad actors infiltrate corporate networks not to hack the corporations themselves but to co-opt their infrastructure. Hackers use stolen credentials to hide behind IP addresses, servers, and domain addresses to wage covert cyberattacks, misleading investigators and compromising corporate infrastructure in the process.

In my research, I’ve uncovered the three most common scenarios of what my team calls “cyber-enabled fraud,” which we define as fraud that is facilitated through the use of malware exploits, social engineering, and/or lateral movement through a compromised website, network, or account. Note that all three of these can be, and many times are, used in conjunction with one another.

Phishing: Bad actors send a phishing email to steal your credentials, usually by having you click on a masked hyperlink directing you to a well-done spoof of a legitimate website. There you are asked to enter information such as usernames, passwords, Social Security numbers, birthdates, or financial information. These phishing emails can also be designed to install ransomware when you follow their directives.

Social Engineering: Spoofing an email from the company’s CEO to the CFO or someone else in finance to see if he or she will wire money to an account controlled by the bad guys. Social engineering can also accomplish some of the goals of phishing, such as gaining sensitive information or getting credentials, over the phone or, on occasion, in person. You aren’t being asked to do something, like click on a link, but you are asked directly to provide sensitive information.

Lateral Movement/Resource Sabotage: Once bad actors have gained access through phishing or a vulnerability exploit, there is further fraud that can be committed: They can use that access to compromise other machines or servers in a company, often with the help of any fraudulent credentials they’ve managed to obtain, and they can use these compromised systems to send out malware and malicious spam, or use bandwidth and resources for crypto mining.

All of these actions result in infrastructure becoming compromised in some way. But the larger end result is that my cyber problem has just become everybody else’s fraud problem because my infected system is now set up to attack other systems.

Here’s an example of cyber-enabled fraud in action. There are two cloud service providers, Cloud A and Cloud B. Bad guys use prepaid or stolen credit cards to purchase a virtual server account with Cloud A and, through that server, send out malware that is using the server for fraudulent purposes.

When they are finally caught — which can take months — and the account is shut down, the bad guys immediately open up an account using the same credentials with Cloud B. If Cloud A and Cloud B are willing to work together and exchange threat intelligence information, with Cloud A flagging that account as fraudulent, they can stop the cyber-enabled fraud much faster. This drastically changes the economics for the fraudster.
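What would that exchange of threat intelligence look like in practice? One lightweight, hedged possibility is for Cloud A to publish hashed indicators from the accounts it terminates (for example, the sign-up email address and the payment card used), and for Cloud B to screen new sign-ups against that list. The Python sketch below illustrates the idea; the choice of indicators, the hashing scheme, and the sample values are assumptions for illustration, not a specific sharing standard.

```python
# A sketch of the Cloud A / Cloud B idea: provider A shares hashed
# indicators from accounts it shut down for abuse, and provider B screens
# new sign-ups against them. Indicator choice, hashing scheme and sample
# values are illustrative assumptions, not a specific sharing standard.
import hashlib

def fingerprint(value: str) -> str:
    return hashlib.sha256(value.strip().lower().encode()).hexdigest()

# Cloud A publishes indicators from the account it just terminated.
shared_indicators = {
    fingerprint("mallory@example.net"),   # sign-up email address
    fingerprint("4111111111111111"),      # prepaid card used at sign-up
}

# Cloud B checks a new sign-up against the shared set before provisioning.
def is_flagged(email: str, card: str) -> bool:
    return bool({fingerprint(email), fingerprint(card)} & shared_indicators)

print(is_flagged("Mallory@example.net", "4111111111111111"))  # True
```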

Cyber-enabled fraud is part of a vicious virtual cycle. The good news is we can break this cycle by using best practices in cybersecurity that protect our own identities and assets as well as the larger cyber ecosystem. It’s taking the concept of “when you see something, say something” into cyberspace. Communicating about the cyber incidents you experience to others will help them better detect potential acts of cyber-enabled fraud. When you take care to protect yourself, you are helping your virtual community fight off cyberattacks.

This research was provided by the TruSTAR Data Science Unit. Click here to download a curated list of IOCs that have been tied to both cyber and fraud campaigns.



Article source: https://www.darkreading.com/endpoint/how-one-companys-cybersecurity-problem-becomes-anothers-fraud-problem-/a/d-id/1332669?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple