
Mobile Banking Malware Up 50% in First Half of 2019

A new report from Check Point recaps the cybercrime trends, statistics, and vulnerabilities that defined the security landscape in 2019.

In the last year, 28% of organizations were hit with a botnet infection. Roughly one-third of cyberattacks were perpetrated by insiders, and 27% of all global businesses were affected by threats involving mobile devices. Mobile banking malware jumped 50% in the first half of 2019.

These numbers come from Check Point Research’s “2020 Cyber Security Report,” which contains attack trends, malware statistics, prominent vulnerabilities, and other factors that shaped the security landscape throughout 2019. Businesses saw malware types migrating into mobile and were hit with more informed and targeted ransomware campaigns. Magecart became an epidemic, and a series of major vulnerabilities were found in Microsoft Windows and Oracle.

Magecart attacks, which first became public knowledge in 2018, ramped up in 2019 as multiple threat groups sought to compromise e-commerce websites and steal customers’ financial data. Hundreds of shopping websites, hotel sites, and businesses large and small were affected by the threat: Macy’s, Volusion, First Aid Beauty, and OXO are among those hit with Magecart.

We saw the rise of targeted ransomware in 2019 as attackers sought to buy or find their way into specific organizations. Most of these threats were driven by increasing cooperation among threat actors: As an example, researchers point to the distribution of Emotet, which landed in many global organizations and opened the door to any attackers who were willing to pay for access to them. One Emotet attack could lead to a full-blown infection of Ryuk or Bitpaymer.

“Rather than immediately deploy a ransomware, offenders often spend weeks exploring the compromised network to locate high-value assets as well as backups, thus maximizing their damage,” researchers explain in the report. “Ironically, companies that try to protect their data by using cloud services occasionally find that their service provider itself has been targeted.”

While misconfiguration and mismanagement of cloud resources are still the top cause for cloud attacks, the past year brought a growing number of attacks directly aimed at cloud services providers. More than 90% of businesses use some type of cloud service, but 67% of security teams complained about poor visibility into cloud infrastructure, security, and compliance, demonstrating how the cloud will continue to be an area of concern in the years to come.

High-Profile Global Vulnerabilities
To create a list of prominent bugs, researchers used data pulled from Check Point’s intrusion prevention system. Top of their list were Microsoft Remote Desktop Protocol flaws BlueKeep (CVE-2019-0708) and DejaBlue (CVE-2019-1182), both of which allow remote code execution. Shortly after BlueKeep was published, attackers began scanning the Web for exposed devices.

Also of note were Oracle WebLogic Server vulnerabilities CVE-2017-10271 and CVE-2019-2725, both of which let unauthorized attackers remotely execute arbitrary code and affect several applications and Web enterprise portals that rely on the servers. Attackers have exploited both of these bugs to deliver Sodinokibi ransomware, Satan ransomware, and the Monero cryptominer.

Researchers also highlighted CVE-2019-10149, a remote code execution flaw in the Exim mail server. The vulnerability can be exploited by attackers who send a specially crafted email to the victim’s server; if successful, they can execute arbitrary commands. Last year brought “a significant amount” of exploitation attempts in the wild, they report, as some new strains of malware exploit this bug to install cryptominers on targeted servers.

Looking Ahead: What’s Next for 2020?
Researchers also shared predictions for how cybercrime will continue to evolve this year. Targeted ransomware is top of mind. After major attacks hit healthcare organizations, as well as state and local governments in 2019, researchers predict attackers will continue to spend more time gathering intelligence on victims to achieve more disruption and demand larger ransoms.

Phishing tactics are expected to continue expanding beyond traditional email campaigns to include more SMS-based attacks and fraudulent messaging on social media and gaming platforms. Mobile malware attacks are expected to increase overall, they predict, after mobile banking malware jumped 50% in the first half of 2019 compared with 2018.

“Surprisingly, mobile banking malware requires little technical knowledge to develop, and even less to operate,” wrote Maya Horowitz, director of threat intelligence and research. The malware searches for a banking app on the targeted device and creates a fake overlay page once it’s opened. The user enters credentials, which are sent to the attacker’s server.

Researchers anticipate the use of Internet of Things devices will continue to grow rapidly, fueled by the bandwidth of 5G, making networks vulnerable to large-scale, multivector cyberattacks. They also predict a greater reliance on public cloud infrastructure will increase businesses’ exposure to outages, a risk that could drive organizations to consider hybrid cloud environments.

Related Content:

Check out The Edge, Dark Reading’s new section for features, threat data, and in-depth perspectives. Today’s top story: “With International Tensions Flaring, Cyber Risk is Heating Up for All Businesses.”

Kelly Sheridan is the Staff Editor at Dark Reading, where she focuses on cybersecurity news and analysis. She is a business technology journalist who previously reported for InformationWeek, where she covered Microsoft, and Insurance Technology, where she covered financial …

Article source: https://www.darkreading.com/cloud/mobile-banking-malware-up-50--in-first-half-of-2019/d/d-id/1336834?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

EDRi’s guidelines call for more ethical websites

Most of us want to be good online citizens. That includes developing websites that have their visitors’ best interests at heart. Yet there are so many ways to get that wrong. Even a slight misstep could put visitors’ privacy or security at risk, or exclude people who might be less able than others. How can you know if you’re doing it right?

Enter European Digital Rights (EDRi), a collection of human rights groups across Europe, which has published a set of guidelines for ethical website development. It explains:

The goal of the project, which started more than a year ago, was to provide guidance to developers on how to move away from third-party infected, data-leaking, unethical and unsafe practices.

The document lists recommendations covering areas including security and privacy while listing alternatives to free online services that slurp up users’ data.

One recommendation is to host your own resources as much as possible. That means avoiding call-outs for things like third-party cookies, and avoiding frames with third-party content. It also means avoiding call-outs for CSS files, images, font files, and JavaScript libraries.
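If you want to see how far your own site has to go, a quick audit of the third-party hosts a page calls out to is a good start. Here’s a rough Python sketch of such a check, using only the standard library – our own illustration, not something from the EDRi document, and it only spots resources declared in the HTML rather than ones injected later by scripts:

```python
# List the third-party hosts a page pulls resources from,
# using only the Python standard library.
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse
from urllib.request import urlopen

class ResourceFinder(HTMLParser):
    """Collects external hosts referenced by src/href attributes."""
    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.external_hosts = set()

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        # scripts, images, frames etc. use src; stylesheets use link href
        url = attrs.get("src") or (attrs.get("href") if tag == "link" else None)
        if url:
            host = urlparse(urljoin(self.base_url, url)).hostname
            if host and host != urlparse(self.base_url).hostname:
                self.external_hosts.add(host)

page = "https://example.com/"
finder = ResourceFinder(page)
finder.feed(urlopen(page).read().decode("utf-8", errors="replace"))
print("Third-party hosts this page calls out to:")
for host in sorted(finder.external_hosts):
    print(" -", host)
```

Every host that list prints is a place your visitors’ browsers are sent without their say-so – exactly what the guidelines want you to minimise.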

The document adds:

If downloading a resource, such as a JavaScript or font file, is not allowed by the terms of its provider, then they may not be privacy-friendly and should therefore be avoided.

It calls out large tech firms as companies offering services that ethical web developers should avoid, and provides a list of alternatives in areas including analytics, video players, and online maps. It points readers to Prism Break, a list of alternative online services that don’t track their users.

When it comes to security, a site can use DNSSEC to authenticate DNS queries, says the doc, also recommending HTTPS. It also asks website owners to provide a Tor-compatible version of their site using the Tor publishing tool Onionshare.

EDRi also includes website accessibility as a key ethical principle. It points to accessibility guidelines for developers, and also advises against the use of CAPTCHAs, arguing that they often make it more difficult for people with disabilities to access a site. Some CAPTCHAs also collect personally identifiable information about visitors, the organisation warns. If you’re going to use them, then at least use a simple version that doesn’t load external JavaScript, it says.
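What would a CAPTCHA that meets those constraints look like? Here’s a minimal sketch – our own invention, not an example from the EDRi document, with made-up route names – of a self-hosted, text-only challenge built on Flask that loads no external JavaScript and shares nothing with third parties. A plain-text question also plays far better with screen readers than a distorted image:

```python
# A self-hosted arithmetic CAPTCHA: no external JavaScript,
# no third-party service, and readable by screen readers.
import secrets
from flask import Flask, request, session

app = Flask(__name__)
app.secret_key = secrets.token_bytes(32)  # signs the session cookie

@app.route("/form")
def form():
    a, b = secrets.randbelow(10) + 1, secrets.randbelow(10) + 1
    session["captcha_answer"] = str(a + b)  # kept in our own signed cookie
    return (
        f"<form method='post' action='/submit'>"
        f"<label>What is {a} + {b}? <input name='captcha'></label>"
        f"<button>Send</button></form>"
    )

@app.route("/submit", methods=["POST"])
def submit():
    expected = session.pop("captcha_answer", None)
    if expected is None or request.form.get("captcha", "").strip() != expected:
        return "CAPTCHA failed - please try again.", 400
    return "Thanks! Your submission was accepted."
```

It won’t stop a determined, targeted bot – no CAPTCHA really does – but it filters drive-by form spam without handing your visitors’ data to anyone else.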

In general, the document seems to take a dim view of JavaScript. It stops short of advising against its use entirely but warns developers to think carefully about accessibility. Ideally, you’d build a specific non-JavaScript version of a site and then add JavaScript-based features on top of it. This enables you to respect noscript tags, it adds.

Following a lot of these guidelines would make it challenging to support some advertising business models on a site. But then, the document doesn’t want its readers to support tracker-based models, which some say are out of control. Instead of condemning advertising altogether, it points to alternatives, specifically ReadTheDocs’ ethical advertising model (which is a low-tech approach that eschews trackers).

There are some other aspects of this ethical web development guideline that developers might find difficult to follow to the letter. If your website pulls a jQuery library from a third-party server so that you’re always using the latest version, that would seem to be a fail under these rules. One way around it could be to use Subresource Integrity (SRI), says the document. SRI lets a page specify a cryptographic hash of the file it expects, so the browser can verify the integrity of whatever it downloads.
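For the curious, an SRI integrity value is nothing more than a base64-encoded cryptographic digest of the exact file you expect the browser to fetch. This little Python sketch (ours; the filename is just illustrative) computes one:

```python
# Compute a Subresource Integrity (SRI) value for a vetted local copy
# of a library, ready to paste into the script tag's integrity attribute.
import base64
import hashlib

def sri_hash(path, algorithm="sha384"):
    """Return an SRI value like 'sha384-...' for the given file."""
    digest = hashlib.new(algorithm, open(path, "rb").read()).digest()
    return f"{algorithm}-{base64.b64encode(digest).decode()}"

print(sri_hash("jquery.min.js"))
# Goes into the page as, e.g.:
# <script src="https://example-cdn.com/jquery.min.js"
#         integrity="sha384-..." crossorigin="anonymous"></script>
```

If the server ever delivers a different file – a newer version or a tampered one – the hash won’t match and the browser will refuse to use it, which is also why SRI and “always fetch the latest version” don’t mix.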

One notable omission from EDRi is the matter of dark patterns, which are user interface and language constructs that force users down a certain path. Lawmakers have called for tech firms to ban these tricks, which in the wrong hands can persuade website visitors to give up privacy rights, make purchases, or avoid cancelling subscriptions. While they make an appearance on websites, they’re especially common on mobile apps, which is a category that could also benefit from a set of ethical guidelines like this one.

This set of guidelines does its best to provide alternatives to services that contravene its ethical rules. For example, it points people to different services than Google Fonts, which EDRi explains requires web developers to buy into Google’s privacy policy. It will take some work for many developers to reconfigure their sites to fit these guidelines, but EDRi has laid out the steps and explained why it’s important. It’s a project that developers may choose to implement over a longer period.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/kpjTYPyRRG8/

Facial recognition is real-life ‘Black Mirror’ stuff, Ocasio-Cortez says

During a House hearing on Wednesday, Rep. Alexandria Ocasio-Cortez said that the spread of surveillance via ubiquitous facial recognition is like something out of the tech dystopia TV show “Black Mirror.”

This is some real-life “Black Mirror” stuff that we’re seeing here.

Call this episode “Surveil Them While They’re Obliviously Playing With Puppy Dog Filters.”

Wednesday’s was the third hearing on the topic for the House Oversight and Reform Committee, which is working on legislation to address concerns about the increasingly pervasive technology. In Wednesday’s hearing, Ocasio-Cortez called out the technology’s hidden dangers – one of which is that people don’t really understand how widespread it is.

At one point, Ocasio-Cortez asked Meredith Whittaker – co-founder and co-director of New York University’s AI Now Institute, who had noted in the hearing that facial recognition is a potential tool of authoritarian regimes – to remind the committee of some of the common ways that companies collect our facial recognition data.

Whittaker responded with a laundry list: she said that companies scrape our biometric data from sites like Flickr, from Wikipedia, and from “massive networked market reach” such as that of Facebook.

Ocasio-Cortez: So if you’ve ever posted a photo of yourself to Facebook, then that could be used in a facial recognition database?

Whittaker: Absolutely – by Facebook and potentially others.

Ocasio-Cortez: Could using a Snapchat or Instagram filter help hone an algorithm for facial recognition?

Whittaker: Absolutely.

Ocasio-Cortez: Can surveillance camera footage that you don’t even know is being taken of you be used for facial recognition?

Whittaker: Yes, and cameras are being designed for that purpose now.

This is a problem, the New York representative suggested:

People think they’re going to put on a cute filter and have puppy dog ears, and not realize that that data’s being collected by a corporation or the state, depending on what country you’re in, in order to … surveil you, potentially for the rest of your life.

Whittaker’s response: Yes. And no, average consumers aren’t aware of how companies are collecting and storing their facial recognition data.

It’s “Black and brown Americans” who suffer the most from the ubiquity of this error-prone technology, Ocasio-Cortez said, bringing up a point from a previous hearing in May 2019: that the technology has the highest error rates for non-Caucasians.

Problems in facial recognition technology

At the May 2019 hearing, Joy Buolamwini, founder of the Algorithmic Justice League (AJL) – a nonprofit that works to illuminate the social implications and harms of artificial intelligence (AI) – had testified about how failures of facial analysis technologies have had “real and dire consequences” for people’s lives, including in critical areas such as law enforcement, housing, employment, and access to government services.

Buolamwini founded the AJL after experiencing such failure firsthand, when facial analysis software failed to detect her dark-skinned face until she put on a white mask. Such failures have been attributed to the lack of diversity within the population of engineers who create facial analysis algorithms. In other words, facial recognition achieves its highest accuracy rate when used with white male faces.

Here’s Buolamwini in the May hearing:

If you have a case where we’re thinking about putting, let’s say, facial recognition technology on police body cams, in a situation where you already have racial bias, that can be used to confirm [such bias].

In Wednesday’s hearing, Ocasio-Cortez said that the worst implications are that a computer algorithm will suggest that a Black person has likely committed a crime when they are, in fact, innocent.

Because facial recognition is being used without our consent or knowledge, she suggested, we may be mistakenly accused of a crime and have no idea that the technology has been used as the basis for the accusation.

That’s right, the AI Now Institute’s Whittaker said, and there’s evidence that the use of facial recognition is often not disclosed. That lack of disclosure is compounded by our “broken criminal justice system,” Ocasio-Cortez said, where people often aren’t allowed to access the evidence used against them.

Case in point: the Willie Lynch case in Florida. A year ago, Lynch, from Jacksonville, Florida, asked to see photos of other potential suspects after being arrested for allegedly selling $50 worth of crack to undercover cops. The police search had relied on facial recognition: the cops had taken poor-quality photos of the drug dealer with a smartphone camera and then sent them to a facial recognition technology expert who matched them to Lynch.

In spite of it being his constitutional right to see the evidence, a state appellate court decided that Lynch had no legal right to see other matches returned by the facial recognition software that helped put him behind bars. This, in spite of the algorithm having expressed only one star of confidence that it had generated the correct match.

From the American Civil Liberties Union’s (ACLU’s) writeup of the case:

Because the officers only identified him based on the results of the face recognition program, looking into how the face recognition algorithm functioned and whether errors or irregularities in the procedure tainted the officers’ ID was critical to his case. But when Lynch asked for the other booking photos the program returned as possible matches, both the government and the court refused. Their refusal violated the Constitution, which requires prosecutors to disclose information favorable to a defendant.

Ocasio-Cortez, in Wednesday’s hearing:

These technologies are almost automating injustices, both in our criminal justice system but also automating biases that compound on the lack of diversity in Silicon Valley, as well.

C-SPAN has full coverage of the three hours of testimony given in Wednesday’s hearing.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/WEGyDh5kmBU/

Google will now accept your iPhone as an authentication key

On Monday, Google pushed out an update for the iOS version of Smart Lock, its built-in, on-by-default password manager.

Smart Lock – which has been available for Google’s Chrome browser since 2017 – now also lets iOS users set up their device as the second factor in two-factor authentication (2FA), meaning that you no longer have to carry around a separate security key dongle.

Smart Lock for iOS uses the iPhone’s Secure Enclave Processor (SEP), which is built into every iOS device with Touch ID or Face ID. That’s the processor that handles data encryption on the device – a processor that oh, so many law enforcement and hacker types spend so much time complaining about… or, as the case may be, cracking for fun, fame and profit.

After you set it up, you’ll just need your iPhone or iPad, and your usual password, to use in 2FA when you sign in to Google on a desktop using Chrome.

A big plus: it uses a Bluetooth connection, rather than sending a code via SMS that could be intercepted in a SIM swap attack. In a SIM-swap fraud attack, a hijacker gets their hands on a phone number – typically by sweet-talking/social-engineering it away from its rightful owner – after which they can intercept the codes sent for 2FA that the phone number’s rightful owner set up to protect their accounts.

SIM swap fraud is one of the simplest, and therefore the most popular, ways for crooks to skirt the protection of 2FA, according to a warning that the FBI sent to US companies in October 2019.

Given that Apple introduced SEP – which stores encrypted security keys on an iOS device – with the iPhone 5S, it won’t work on earlier models. You’ll need to be running iOS 10 or later to run the Smart Lock app.

How to use your iPhone for 2FA when signing into Google

Here’s how to get started with Smart Lock for iOS:

  1. Download the free Google Smart Lock app from the iTunes App Store.
  2. Follow the setup steps that ask for Bluetooth access.
  3. Log into your Google account and confirm that you want to use your iPhone for verification.

After that, whenever you want to log in to your Google account, you’ll need to enter your password. Then you’ll need to confirm – in a popup on your iPhone – that yes, it’s really you trying to sign in.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/E-1ND1RZhgg/

Oracle’s January 2020 update patches 334 security flaws

As the world’s second-largest software company, Oracle has become an organisation built on big numbers.

This includes the number of security patches it issues – which with the January 2020 update reached 334, tying the record set in July 2018.

Unlike rivals such as Microsoft, Oracle only releases security patches every three months, so that’s part of the explanation for the size of its updates, which now routinely head towards 300.

Another factor is simply the volume of software in the company’s stable – with around a hundred products and product components in January’s update alone.

Something that jumps out is that 60 individuals and companies are credited with reporting January’s batch of flaws to Oracle, including one, Alexander Kornbrust, credited with 41 CVEs on his own.

Oracle, then, has lots of flaws to fix because, as with rival Microsoft, it has lots of people looking for them. This can only be a good thing.

Database Server
A modest 12 CVEs in total, three of which are stated as being remotely exploitable. Five are ranked ‘High’ severity, which in Oracle’s nomenclature is the top severity level, factoring in how easy it would be to exploit.

Oracle communications applications
A relatively small application category but still able to offer patches for 23 flaws which could be remotely exploited without authentication, six of which have ‘Critical’ CVSS scores above 9.

Oracle Enterprise Manager
A total of 50 patches in all, 10 of which can be exploited remotely without authentication, including four rated with CVSS scores over 9. These depend on the version of Oracle Database and Fusion Middleware being used.

Oracle Fusion Middleware
A total of 30 vulnerabilities which could be exploited remotely without authentication, including three Criticals rated over 9 on CVSS.

Oracle Virtualization
A total of 22 flaws, three of which could be remotely exploited without authentication. This doesn’t include the two highest-rated flaws, CVE-2020-2674 and CVE-2020-2682, affecting VM VirtualBox, which both require local access. That sounds reassuring but it isn’t – attackers would exploit this class of flaw having gained access via other means.

The sheer number of vulnerabilities and the complex dependencies between them can make understanding Oracle’s update page a chore.

However, there are some standout CVEs, for example CVE-2019-2904, rated a ‘Critical’ 9.8 on CVSS and affecting multiple products in the stable.

This dates to October 2019, but Oracle has presumably expanded the products affected by it, hence its reappearance. That’s another facet of Oracle patching – flaws can kick around for a while before they work their way out of the system as they are patched.

Oracle offers the same patching advice it does every quarter:

Oracle continues to periodically receive reports of attempts to maliciously exploit vulnerabilities for which Oracle has already released security patches. In some instances, it has been reported that attackers have been successful because targeted customers had failed to apply available Oracle patches.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/uQefUNviL2Q/

Unlocking news: We decrypt those cryptic headlines about Scottish cops bypassing smartphone encryption

Vid “Police Scotland to roll out encryption bypass technology,” as one publication reported this week, causing some Register readers to silently mouth: what the hell?

With all the brouhaha over the FBI, like a broken record, once again demanding Apple backdoor its iPhone security, and tech companies under pressure to weaken their cryptography, how has the Scottish plod sidestepped all this and bypassed encryption?

What magic do they possess that world powers do not, some of you asked us.

It’s pretty simple: the force is using bog-standard Cellebrite gear that, typically, plugs into smartphones via USB and attempts to forcibly unlock the handsets, allowing their encrypted contents to be decrypted and examined by investigators.

This is widely used kit – sold to cops, businesses and spies around the world – and it will be set up in various police stations across Scotland. We’re told selected officers will use the gear, when possible, to leaf through physically seized devices to see if the phones’ data is relevant to specific investigations, and whether it’s worth sending them off to a proper lab to extract the contents.

It’s a controversial move here in the UK, in that politicians, worried about the legality of it all, previously pumped the brakes on the tech deployment – which was scheduled for mid-2018 and is only now actually happening.

What’s going on?

Police Scotland is set to install 41 of what it refers to as “Cyber Kiosks” in stations around the country. The computers, reportedly costing £370,000 in total, will be used to attempt to view data from locked iOS and Android handsets in the course of criminal investigations.

“The technology allows specially trained officers to triage mobile devices to determine if they contain information which may be of value to a police investigation or incident,” the Scottish cops say of the program.

“This will allow lines of enquiry to be progressed at a much earlier stage and devices that are not relevant to an investigation to be returned quicker.”

The kiosks are built by Cellebrite, an Israeli vendor that specializes in providing law enforcement agencies with gear to bypass passcode locks on handsets. You can see one in action in this promo video from Police Scotland:

Youtube Video

Unlike the more secretive phone-unlocking-hardware maker GrayShift, Cellebrite is somewhat more upfront and straightforward about its products, openly boasting about its ability to bypass lock screens on iPhone and Android handsets.

The technology works in various ways: Cellebrite says for some phone models, its equipment copies a custom bootloader to the device’s RAM and runs that to bypass security mechanisms [PDF]. In some other cases, such as with Android devices, it tries to temporarily root the handset. The equipment can also attempt to exploit vulnerabilities in phone firmware, including iOS, to ultimately extract data.

It really depends on the hardware and operating system combination. Apple and Google tend to patch vulnerabilities exploited by this sort of unlocking gear, in a security arms race of sorts.

Cellebrite claims its top-end gear can “bypass or determine locks and perform a full file system extraction on any iOS device, or a physical extraction or full file system (File-Based Encryption) extraction on many high-end Android devices.” Privacy International has an analysis of Cellebrite’s advertised – stress, advertised – capabilities here.

According to Police Scotland, the kiosks will not store any copies of handsets’ storage memory, and instead will be used to observe data on the device so that officers can decide whether to return the handsets to their owners or send the phones off for further investigation by a forensics lab.

Additionally, the police say, officers are not gaining any additional powers; rather, the equipment just speeds up a triage process that would previously have required a lab. Any searches using the kiosks will be carried out on the same legal basis [PDF] as any other search: officers are allowed to look through seized items that are suspected to be evidence of a crime.


“The common law of Scotland operates no differently in relation to the seizure of a digital device by a police officer in the course of an investigation to any other item which is reasonably suspected to be evidence in a police investigation or incident,” according to the force.

“Therefore, if a police officer in the execution of a lawful power seizes a digital device, the law allows for the examination of that device for information held within.”

An FAQ [PDF] adds that in special cases, including those involving child abuse images, internal or disciplinary cases, and devices already known to have evidence, the kiosks will be bypassed and the phones sent directly to the forensics lab.

The roll-out of these terminals is set to begin on January 20 and be completed by the end of May.

And breathe out

Unfortunately, none of this should be a surprise to you. Depending on your phone model, there are various ways for the police to potentially delve into your device.

As Forbes pointed out earlier this week, cops in the US last year tried to use a GrayShift product to read the contents of a locked and encrypted iPhone 11 Pro Max, according to a search warrant. It’s not clear whether the extraction was actually successful; the police paperwork merely declares a “USB drive containing GrayKey-derived forensic analysis” of the iPhone as evidence.

Still, if all this unlocking kit is out there, one wonders why the FBI and others are demanding law-enforcement backdoors in gadgets. Is it because it doesn’t always work? Or are the Feds tired of forking out wads of cash for gear made by Cellebrite, GrayShift et al, and want a cheap and easy built-in solution instead? Or both? ®


Article source: https://go.theregister.co.uk/feed/www.theregister.co.uk/2020/01/17/scottish_cops_cellebrite_kiosks/

NSA and Github ‘rickrolled’ using Windows CryptoAPI bug

On Monday this week, the big cybersecurity news was speculative.

Was there a big, bad security bug in Microsoft Windows waiting to be announced the next day?

On Tuesday, the big news was the announcement that everyone had been guessing about.

Yes, there was a big bad bug, and it was in the Windows CryptoAPI.

It wasn’t a wormable remote code execution hole, so it wasn’t quite a WannaCry virus waiting to break out…

…but it was the first Patch Tuesday bug ever credited to the NSA.

That’s the US National Security Agency, ironically the very same organisation that originally came up with the ETERNALBLUE exploit that ended up in the WannaCry virus after somehow escaping from the NSA’s control.

This time, the NSA gave the bug to Microsoft to patch the hole proactively, and here we are!

The vulnerability, denoted CVE-2020-0601, is a way by which crooks can mint themselves cryptographic certificates with other people’s names on them.

The simplest way of thinking about this bug is that it’s like a magic machine that lets you crank out fake IDs that not only look good when you show them to a cop, but also stand up to scrutiny even when the cop runs them through the ID scanner that checks back with headquarters.

Back on Tuesday, when the vulnerability was officially announced, we said:

We don’t yet know how hard it is to produce rogue certificates that will pass muster, and Microsoft understandably isn’t offering any instructions on how to do it.

All we know is that Microsoft has said it can be done, and that’s why the patch for CVE-2020-0601 has been issued.

So you should assume that someone will find out how to do it pretty soon, and will probably tell the world how to do it, too.

We don’t know whether to be happy or sad that we were correct.

The first proof-of-concept “fake ID generators” are out – we’ve already seen a Python program of 53 lines, and a Ruby script of just 21 – and they really are sitting there for anyone to use for free.

What we didn’t predict, though we probably should have, is exactly what the first widely-publicised “live attack” would do to prove its point.

(We say “live attack” – but, just to be clear, the researcher who did the work and tweeted about it didn’t actually attack anyone else’s server, or tell anyone else how to do so, so we don’t mean that in a negative or critical sense.)

Rickroll!

UK cybersecurity researcher Saleem Rashid filmed himself browsing with Edge to a rickroll page that not only claims to be Microsoft’s github.com but also shows up with a nice little checkmark saying “valid certificate”:

In a later photo in the same Twitter thread, he shows Chrome visiting the rickroll on a webpage that identifies itself as nsa.gov, with a popup saying “Connection is secure” and “Certificate (Valid)”:

Rickrolling, in case you’ve never heard of it, is a sort-of humorous tradition beloved amongst techies and internet witticists where you unexpectedly take someone to a video of Rick Astley singing his 1987 hit Never gonna give you up.

Why Rick Astley, and why that song, we simply cannot tell you, but the rickrolling craze started in 2007.

Perhaps its most infamous appearance in the cybersecurity scene was in 2009, when an Australian youngster set loose the world’s first-ever Apple iPhone virus

…which let you know you’d become a victim by changing your phone’s wallpaper to a photo of the aforementioned Rick Astley.

Rashid’s tweet is great fun, but with a serious side, because it shows how the CryptoAPI bug could, indeed, be used to lull you into a dangerously false sense of security:

Never gonna git your hub
Never gonna let you down
Never gonna hack your site and fake-cert you.

It’s not just about you

An important thing to remember about this bug is that exploiting it isn’t just about what you might see if you browsed to a site with a fake certificate, or how you might be deceived by a program you downloaded in good faith.

The reason you might be deceived by this bug is because the program you were using at that moment was deceived by it, because it used the buggy part of the Windows CryptoAPI.

(You will also hear this vulnerability called “the crypt32 bug” because programs that make use of the CryptoAPI generally do so via a file called crypt32.dll.)

In other words, a rogue certificate doesn’t need to be visible to be deceptive – and, ironically, the obvious example of software that does digital certificate validation behind the scenes for safety’s sake…

…is auto-updating code that’s there to fetch security fixes for you automatically in the background so you don’t have to keep your eye on the process yourself.

What to do?

As we pointed out in this week’s Naked Security Live video:

If you patch this hole, then it instantly becomes useless [against you] to the crooks.

So getting this month’s patches – the 2020-01 Cumulative Update for Windows 10 if you’re patching a laptop rather than a server – is your primary defence, which also, as it happens, fixes some 49 other holes.

By the way, those other 49 holes closed in this month’s Patch Tuesday include several remote code execution vulnerabilities in Microsoft’s remote access tools.

Those vulnerabilities haven’t had the media attention that CVE-2020-0601 has received, yet could let attackers log right into your network or your computer without needing a password.

And if crooks can log straight into your network, they reduce the Windows CryptoAPI Spoofing Vulnerability to a minor worry, because they no longer need to trick anyone into running malware with bogus certificates – they can just launch the malware for themselves.

So, if the CryptoAPI bug gets you to embrace our advice to “patch early, patch often”…

…then perhaps we can write it up as a silver lining, not a dark cloud on the horizon.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/XhJpjHyVCqc/

Bad news: Windows security cert SNAFU exploits are all over the web now. Also bad: Citrix gateway hole mitigations don’t work for older kit

Vid Easy-to-use exploits have emerged online for two high-profile security vulnerabilities, namely the Windows certificate spoofing bug and the Citrix VPN gateway hole. If you haven’t taken mitigation steps by now, you’re about to have a bad time.

While IT admins can use the proof-of-concept exploit code to check their own systems are secure, miscreants can use them to, in the case of Citrix, hijack remote systems, or in the case of Windows, masquerade malware as legit apps or potentially intercept encrypted web traffic. Patches are available from Microsoft for the Windows vulnerability and should be deployed as soon as possible.

The Citrix hole will not be fully patched until January 20, and in the meantime, in certain cases, the official mitigations are not sufficient to thwart all methods of exploitation. There are an estimated 120,000 or more potentially vulnerable boxen on the open internet.

Windows smashed

Within hours of the NSA going public with details about its prized bug find, exploit writers posted working code demonstrating how the flaw can be abused to trick unpatched Windows computers into accepting fake digital certificates – which are used to verify the legitimacy of software, and encrypt web connections.

The vulnerability, CVE-2020-0601, lies within the crypt32.dll library in Windows 10 as well as Server 2016 and 2019. For what it’s worth, the bug occurs when matching an attacker-supplied certificate to a cached trusted cert held in an internal data structure. It’s a logic flaw – the attacker’s cert is matched without fully validating it – rather than a mathematical weakness.


One proof-of-concept code sample available to all is a tiny package of just 50-or-so lines of Python. Despite the ease with which the exploit is able to do its work, the author, Yolan Romailler at Swiss security shop Kudelski, said people shouldn’t panic over the network traffic eavesdropping aspect of CVE-2020-0601: a snoop has to be able to intercept your connections.

“In the end, please keep in mind that such a vulnerability is not at risk of being exploited by script kiddies or ransomware,” notes Romailler in his detailed write-up of the bug.

“While it is still a big problem because it could have allowed a man-in-the-middle attack against any website, you would need to face an adversary that owns the network on which you operate, which is possible for nation-state adversaries, but less so for a script kiddie.

“This is also probably why the NSA decided not to weaponize their finding, but to rather disclose it: for them it is best to have the USA patched rather than to keep it and take the risk of it being used against the USA, as the attack surface is so vast.”

As for the nitty-gritty of the bug, Romailler summarized it thus:

Specifically, it is possible to craft a private key for an existing public key, as soon as you are not using the standard generator, but instead can choose any generator. And you can choose your own generator in X.509 certificates by using an “explicit parameters” option to set it.

And because then the CryptoAPI seems to match the certificate with the one it has in cache without checking that the provided generator actually matches the standardized one, it will actually trust the certificate as if it had been correctly signed. (Although not entirely, as the system still detects that the root certificate is not the same as the one in the root CA store. That is: you won’t get these nice green locks you all wanted in your URL bar, but you’ll still get a lock without any warning, unlike when using a self-signed certificate, even if you just crafted that certificate yourself.)
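Romailler’s point is easier to see with small numbers. Here’s a Python illustration of ours that transplants the generator trick into ordinary modular arithmetic rather than an elliptic curve – the group is different and no certificates are involved, so this is emphatically not one of the published PoCs, but the logic flaw being abused is the same: if the verifier lets you pick the generator, you can manufacture a “private key” for anyone’s public key without ever learning the real one.

```python
# The generator-substitution trick behind CVE-2020-0601, shown in the
# multiplicative group mod p for clarity (elliptic curves work analogously).
import math
import secrets

p = 2**127 - 1                    # a Mersenne prime, so the group order is p - 1
g = 3                             # the "standard" generator everyone agrees on
x = secrets.randbelow(p - 2) + 1  # victim's private key (attacker never learns it)
y = pow(g, x, p)                  # victim's public key: y = g^x mod p

# Attacker: pick any exponent d coprime to the group order...
while True:
    d = secrets.randbelow(p - 2) + 1
    if math.gcd(d, p - 1) == 1:
        break

# ...then declare a custom generator g2 = y^(1/d), so that g2^d = y.
e = pow(d, -1, p - 1)             # modular inverse (Python 3.8+)
g2 = pow(y, e, p)

# A verifier that accepts the attacker-supplied "explicit parameters" (g2)
# without checking them against the standard generator (g) is satisfied:
assert pow(g2, d, p) == y
print("Forged (generator, private key) pair verifies against the real public key")
```

Crypt32’s mistake was, in effect, skipping the check that the supplied generator actually matches the standardised one.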

Meanwhile, infosec outfit Trail of Bits has dubbed the flaw Whose Curve Is It Anyway? along with a logo and website, which features a proof-of-concept attack, as is customary these days. The biz succinctly summed up the bug thus:

At a high level, this vulnerability takes advantage of the fact that Crypt32.dll fails to properly check that the elliptic curve parameters specified in a provided root certificate match those known to Microsoft.

There’s more technical info here. And you can find another proof-of-concept exploit here with a signed and unsigned 7z.exe file as an example.

Cit-tricked

Things are less straightforward when it comes to the other major security bug dominating the news in the past week. The Citrix VPN gateway bug CVE-2019-19781, dubbed Shitrix by the infosec community, is under active exploit in the wild. Worse yet, Citrix has admitted that, for some installations running older firmware, its recommended mitigation techniques are not holding up against exploits. If you’re using Citrix ADC Release 12.1 builds before 51.16/51.19 and 50.31, you should try to upgrade your version.

Better yet, you should configure your network monitoring to catch attempts to exploit the software. A SANS ISC video describing the security snafu is below.

Youtube Video

An alert from the Dutch National Cyber Security Centre advises organizations that run Citrix ADC and Gateway boxes to consider turning off the machines entirely until the full-scale patch from Citrix is released on January 20.

“If the impact of switching off the Citrix ADC and Gateway servers is not acceptable, the advice is to closely monitor for possible abuse,” a translation of the alert reads. “As a last risk-limiting measure you can still look at whitelisting of specific IP addresses or IP blocks.”
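If you do go the monitoring route, the exploit attempts reported in the wild share an easy-to-spot signature: HTTP requests that use directory traversal to reach the appliance’s /vpns/ scripts. Here’s a rough Python sketch for flagging such lines in a web-style access log – our own quick filter based on publicly reported indicators, not an official Citrix or SANS detection rule:

```python
# Flag access-log lines matching publicly reported CVE-2019-19781
# probe patterns (path traversal into the /vpns/ script directory).
import re
import sys

PATTERN = re.compile(r"/vpn/\.\./vpns/|/vpns/portal/scripts/", re.IGNORECASE)

with open(sys.argv[1], errors="replace") as log:
    for line in log:
        if PATTERN.search(line):
            print("possible CVE-2019-19781 probe:", line.rstrip())
```

Run it over your gateway or proxy logs (python scan.py access.log); legitimate clients have no business requesting those paths via traversal, so any hit warrants a closer look.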

As for exploits, you can find one proof-of-concept sample here. ®


Article source: https://go.theregister.co.uk/feed/www.theregister.co.uk/2020/01/16/windows_citrix_patch_update/

CISO Resigns From Pete Buttigieg Presidential Campaign

The only Democratic campaign known to have a CISO loses Mick Baccio due to a “fundamental philosophical difference with campaign management.”

Only one Democratic presidential campaign was known to have a CISO. And now there are none. Mick Baccio, CISO for candidate Pete Buttigieg’s campaign, has resigned from the post, citing a “fundamental philosophical difference with campaign management regarding the architecture and scope of the information security program.”

Baccio had been in the position since July. The campaign says it has contracted with a third-party cybersecurity firm to direct its efforts, in addition to having an existing contract with Carbon Black.

According to the US intelligence community, the cybersecurity threats faced by campaigns in 2020 are more sophisticated and more numerous than those reported on in the 2016 election.

Read more here and here.

Check out The Edge, Dark Reading’s new section for features, threat data, and in-depth perspectives. Today’s top story: “How to Comprehend the Buzz About Honeypots.”

Dark Reading’s Quick Hits delivers a brief synopsis and summary of the significance of breaking news events. For more information from the original source of the news item, please follow the link provided in this article.

Article source: https://www.darkreading.com/careers-and-people/ciso-resigns-from-pete-buttigieg-presidential-campaign/d/d-id/1336821?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Phishing Today, Deepfakes Tomorrow: Training Employees to Spot This Emerging Threat

Cybercriminals are evolving their tactics, and the security community anticipates voice and video fraud to play a role in one of the next big data breaches — so start protecting your business now.

Deepfake fraud is a new, potentially devastating issue for businesses. Last year, a top executive at an unidentified energy company was revealed to have been conned into paying £200,000 by scammers who used artificial intelligence to replicate his boss’s voice: he answered a telephone call he believed was from his German parent company, was asked to transfer the funds, and dutifully sent them. In the end, the money went to sophisticated criminals at the forefront of what I believe is a frightening new age of deepfake fraud. Although this was the first reported case of this kind of fraud in the UK, it certainly won’t be the last.

Recently, a journalist paid just over $550 to develop his own deepfake, placing the face of Lieutenant Commander Data from Star Trek: The Next Generation over Mark Zuckerberg’s. It took only two weeks to develop the video.

When the Enterprise Evolves, the Enemy Adapts
We’re no strangers to phishing emails in our work inboxes. In fact, many of us have received mandatory training and warnings about how to detect them — the tell-tale signs of spelling errors, urgency, unfamiliar requests from “colleagues,” or the slightly unusual sender addresses. But fraudsters know that established phishing techniques won’t survive for much longer. They also understand the large potential gains from gathering intelligence from corporations using deepfake technology — a mixture of video, audio, and email messaging — to extract confidential employee information under the guise of the CEO or CFO.
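Some of those tell-tale signs can even be checked mechanically. As a toy illustration (not a scheme from this article, and with a hypothetical company domain), here’s a Python snippet that flags one classic trick: a display name that claims to be a colleague while the underlying address sits on a lookalike domain:

```python
# Toy check for one classic phishing tell: a familiar display name
# fronting an address on an unfamiliar (often lookalike) domain.
from email.utils import parseaddr

KNOWN_DOMAINS = {"example-corp.com"}  # hypothetical legitimate domains

def suspicious_sender(from_header: str) -> bool:
    """True if the sender's real domain isn't one we recognise."""
    _name, addr = parseaddr(from_header)
    domain = addr.rsplit("@", 1)[-1].lower() if "@" in addr else ""
    return domain not in KNOWN_DOMAINS

# 'examp1e' uses a digit 1 - visually close, but not our domain:
print(suspicious_sender('"Jane Doe (CEO)" <jane.doe@examp1e-corp.com>'))  # True
print(suspicious_sender('"Jane Doe (CEO)" <jane.doe@example-corp.com>'))  # False
```

No such filter will catch a deepfaked voice on a phone line, of course — which is precisely why the training discussed below has to go beyond email hygiene.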

Deepfake technology is still in its early days, but even in 2013, it was powerful enough to make an impact. While serving at the National Crime Agency (NCA) in the UK, I saw how a Dutch NGO pioneered the technology to create the deepfake of a 10-year-old girl, identifying thousands of child sex offenders around the globe. In this case, the AI video deepfake technology was implemented by a humanitarian-focused organization with the purpose of fighting crime.

But as the technology evolves, we’re seeing how much of the research into deepfakes surrounds its unlawful and criminal applications — many of which present seriously detrimental financial and reputational consequences. As more businesses educate their employees to detect and thwart traditional phishing and spearphishing attacks, it’s not difficult to see how the fraudsters may instead turn their efforts to fruitful deepfake technology to execute their schemes.  

How Deepfakes Will Thrive in the Modern Workplace
With the sheer number of jobs requiring employees to be online, it’s critical that workforces are educated and provided with the tools to detect, refute, and protect against deepfake attacks and fraudulent activity taking place in the workplace. It’s not difficult to see why corporate deepfake detection in particular is so crucial: Employees by nature are often eager to satisfy the requests of their seniors, and to do so with as little friction as possible.

The stakes are raised even further when considering how large teams, remote workers, and complex hierarchies make it even more difficult for employees to distinguish between a colleague’s “status quo” and an unusual request or attitude. Add into that equation the fast-tempo demands to deliver through agile working methodologies, and it is easy to see how a convincingly realistic video request from a known boss to transfer funds could attract less scrutiny from an employee than a video from someone they know less well.

A New Era of Employee Security Training
Companies must empower employees to question and challenge requests that are deemed to be unusual, either because of the atypical action demanded or the out-of-character manner or style of the person making the request. This can be particularly challenging for organizations with very hierarchical and autocratic leadership that does not encourage or respect what it perceives as challenges to its authority. Fortunately, some business owners and academics are already looking into ways to solve the issue of detecting deepfakes.

Facebook, for instance, announced the launch of the Deepfake Detection Challenge in partnership with Microsoft and leading academics in September last year, and lawmakers in the US House of Representatives recently passed legislation to combat deepfakes. But there is much to be done quickly if we are to stay ahead of the fraudsters.

If organizations can no longer assume the identity of the email sender or individual at the other end of the phone, they must develop programs and protocols for training employees to override their natural inclination to assume that any voice caller or video subject is real, and instead consider that there may be a fraudster leveraging AI and deepfake technology to spoof the identities of their colleagues.

Cybercriminals are constantly evolving their tactics and broadening their channels, and the security community anticipates voice and video fraud to play a role in one of the next big data breaches. So start protecting your business sooner rather than later.

Related Content:

Check out The Edge, Dark Reading’s new section for features, threat data, and in-depth perspectives. Today’s top story: “How to Comprehend the Buzz About Honeypots.”

After nearly 35 years in law enforcement, Ian Cruxton joined the private sector as CSO of Callsign, an identity fraud, authorization, and authentication company. While at the National Crime Agency (NCA), he led 7 of the 12 organized crime threats and regularly briefed the …

Article source: https://www.darkreading.com/risk/phishing-today-deepfakes-tomorrow-training-employees-to-spot-this-emerging-threat/a/d-id/1336778?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple