
Mozilla expands bug bounty program and triples payouts for flaw finders for hire

Mozilla has decided to celebrate the 15th anniversary of its Firefox browser by expanding its bug bounty program to cover a range of new sites and services and – get this – tripling its maximum payout.

So if you manage to find a remote code execution bug in Firefox or one of Mozilla’s lesser-known services – such as its payment subscription service, VPN, localization, code management tools, and speech recognition – you could walk away with $15,000. Subject to all the usual caveats.

The decision brings Mozilla to the bottom-end of the rest of the industry when it comes to rewarding security researchers for finding security holes. For example, Yahoo! – remember them? – offers up to $15,000 if you find any holes in… whatever Yahoo! does these days. Snapchat likewise.

But Mozilla’s rewards are still some way from the other tech giants. Intel for example offers anywhere between $500 and $100,000 depending on severity (side channel, anyone?). That is beaten by Microsoft which offers up to $300,000 – and a minimum of $15,000.

Dropbox will go up to $33,000; Twitter maxes out at $20,000; Facebook doesn’t give a maximum because it’s Facebook and it never does anything wrong anyway. Google, meanwhile, will give you $150,000 if you can crack ChromeOS in guest mode.

But wait! What’s this? Huawei has also jumped into the bug bounty game and has conspicuously offered more than Google for holes found in its mobile phones.

In a not-so-subtle poke at the US government, which continues to declare that the Chinese manufacturer is a national security risk, Huawei has said it will pay $220,000 for a critical vulnerability in one of its Android devices (Mate, P, Nova, Y9 and Honor) and up to $110,000 for a high-severity flaw. Google offers $200,000 and $100,000 respectively.

Top of the bug bounty heap however is Apple which earlier this year upped its maximum $200,000 payout to a tasty $1m if you can figure out how to hack an iPhone without requiring someone to click – or tap – something. If you stumble on a network attack that doesn’t require user interaction, you could be looking at a healthy $500,000 with a 50 per cent bonus if the bug is spotted in beta software.


But, of course, the average payout is what really counts for people who decide to spend some of their time using their technical know-how to probe companies’ software. And it ain’t great: across all the tech companies the average is fairly low.

Regardless, the decision by Mozilla, a non-profit, to bring its bug bounty into line with the rest of the market is a sign of two things: first, that bug hunting is in a relatively healthy state, where it is worth a company’s while to follow the market; and second, that Mozilla appears to be making a big push to get more users onto its services.

Last month, the latest version of Firefox was released and the company has started pushing its privacy-friendly features – called Enhanced Tracking Protection (ETP) – as a key differentiator between it and, well, Chrome.

If it is going to make that privacy distinction stick, Mozilla had better make sure that its new technology isn’t riddled with bugs that enable attackers to do the opposite and grab people’s personal data. Hence the beefed-up bug bounty.

It is also worth reflecting on the fact that it has been 15 years since Firefox 1.0 came out – and on how much Internet Explorer sucked at the time, in large part because it was full of security holes. ®


Article source: https://go.theregister.co.uk/feed/www.theregister.co.uk/2019/11/19/mozilla_huawei_bug_bounty/

Americans Fed Up with Lack of Data Privacy

Eight out of every 10 US adults are worried over their inability to control how data about them is used, a new Pew Research survey shows.

The majority of American citizens believe that they are pervasively monitored and that their data is regularly collected and used in concerning ways that they cannot control and don’t fully understand, according to a new Pew Research Center study.

The report, based on a nationally representative panel of randomly selected US adults, shows that 62% of Americans feel they cannot prevent companies from collecting data on their activities, while 63% feel the same about government data collection.

Roughly eight out of every 10 Americans say they have very little or no control over how companies use their data, but are very concerned about how companies are using it. The vast majority conclude that the risks of data collection outweigh the benefits, the study found. 

“Clearly this survey adds up to a portrait of distress and a willingness to hear about policy options,” says Lee Rainie, director of Internet and technology research at the Pew Research Center. “The panoramic picture it paints is a society that is not happy … they are concerned. They don’t feel that they have control. They don’t think the benefits outweigh the risks anymore.”

The survey comes a year and a half after the discovery that Cambridge Analytica used data from Facebook to create profiles on Americans to help the Trump campaign target ads at susceptible groups, and six years after Edward Snowden, a former contractor for the National Security Agency, leaked documents on the surveillance efforts of US intelligence agencies.

Americans feel that they have not benefited from the data economy, and they don’t trust the companies that collect their data, according to the Pew report.

“[L]arge shares are worried about the amount of information that entities, like social media companies or advertisers, have about them,” the report said. “At the same time, Americans feel as if they have little to no control over what information is being gathered and are not sold on the benefits that this type of data collection brings to their life.”

Different segments of Americans have differing thresholds for gauging what is acceptable data use. Almost half — 49% — of Americans find it acceptable that the government collects data on people to determine whether they pose a terrorist threat, while only 25% think it’s okay for a smart-speaker manufacturer to share users’ recordings with law enforcement.

Overall, however, Americans appear to think that companies have not delivered on the trust given to them. 

Consumers “don’t know how to intervene in the system to make it work better,” says Pew’s Rainie. “They don’t think that the companies who collect the data are good stewards of the data.”

Who Reads Those Privacy Notices?

The current system of turning every data relationship between a consumer and a company into a contractual exchange where the customer purportedly reads a notice of how the company intends to use the data and consents to those terms has largely failed, according to the Pew data. While more than half of respondents (57%) encounter a privacy notice at least every week, only one in five (22%) claim they read the notices all the way through before agreeing.

Pew’s Rainie believes that people are likely exaggerating their diligence. “We don’t fact check, so the way we read that (the 22% data point) is that is a high-water mark,” he says. “The overview answer is: A lot of people admit that they don’t read the policies. A third do not read them at all.”

Perhaps unsurprisingly, Americans are open to new approaches to privacy and data-protection laws. Some 63% of those surveyed say they do not understand current privacy laws, but three-quarters (75%) say that companies should be more regulated than they are now.

However, in potentially good news for companies, more people are in favor of better tools to manage data collection (55%) than are in favor of legislation.

But because citizens do not seem to have the same opinions over where the privacy lines should be drawn, policies continue to be difficult to form, Rainie says. 

“The policymakers would love to know where are the right lines — what seems legitimate to some people is not legitimate to others … The fact that Americans’ view of privacy ends up as a conditional set of judgements makes it hard to say, for every circumstance, this is where the line is. These data do not give that kind of clarity.”

Related Content

Check out The Edge, Dark Reading’s new section for features, threat data, and in-depth perspectives. Today’s top story: “How Medical Device Vendors Hold Healthcare Security for Ransom.”

Veteran technology journalist of more than 20 years. Former research engineer. Written for more than two dozen publications, including CNET News.com, Dark Reading, MIT’s Technology Review, Popular Science, and Wired News. Five awards for journalism, including Best Deadline … View Full Bio

Article source: https://www.darkreading.com/endpoint/privacy/americans-fed-up-with-lack-of-data-privacy/d/d-id/1336397?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

I ‘Hacked’ My Accounts Using My Mobile Number: Here’s What I Learned

A feature that’s supposed to make your account more secure — adding a cellphone number — has become a vector of attack in SIM-swapping incidents. Here’s how it’s done and how you can protect yourself.

As a director at a cyber-risk investigations company and a former FBI cyber analyst, I’m very familiar with SIM-swapping threats. For many people, the term SIM swapping conjures up an image of a hacker tapping into a phone company, or foreign fighters swapping out SIM cards to avoid government surveillance. In reality, SIM swaps are a legitimate function that happens daily at phone companies around the world. At the most basic level, a SIM swap is used by a telephone service provider to transfer an individual’s existing mobile phone number to a new SIM card and phone.

Unfortunately, criminals have learned to use SIM swapping to turn a profit. Criminals trick or bribe phone company employees into transferring a victim’s mobile phone number to a new SIM card and phone controlled by the criminal. But why would a criminal want to gain control of someone’s mobile phone number?

Enter the modern concept of mobile phone authentication. This is the practice employed by online service providers to verify a user’s identity by sending a one-time password to a mobile phone number that previously was linked to that account using two-factor authentication (2FA). While this is an easy way of resetting forgotten passwords, it also allows anyone in control of that mobile number to gain access to email, social media, and financial accounts tied to that number. If the Greek warrior Achilles is representative of 2FA in all its glory, then SMS-based mobile phone authentication is Achilles’ heel.
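
A minimal sketch makes the single-factor collapse concrete. Everything here is illustrative (an in-memory store and invented function names, not any provider’s real implementation): once the reset flow trusts the SMS code alone, whoever can read messages sent to the number can take the account, no password required.

```python
import hmac
import secrets

# Illustrative in-memory store: phone number -> outstanding one-time code.
otp_store = {}

def send_reset_code(phone_number: str) -> str:
    """Generate a 6-digit code and hand it to an SMS gateway (simulated)."""
    code = f"{secrets.randbelow(1_000_000):06d}"
    otp_store[phone_number] = code
    return code  # in reality this goes out over SMS, not back to the caller

def reset_password(phone_number: str, submitted_code: str) -> bool:
    """Possession of the code alone authorizes a full password reset."""
    expected = otp_store.pop(phone_number, None)
    return expected is not None and hmac.compare_digest(expected, submitted_code)

# Anyone who can see the SMS (SIM swap, lock-screen preview) passes:
code = send_reset_code("+15551230000")
assert reset_password("+15551230000", code)  # no password, no second factor
```

Nothing in the flow involves anything the account owner knows; the phone number is the whole credential, which is exactly the weakness described here.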

Hacking Three Accounts with One Phone Number
The idea of hacking someone with their phone number was so intriguing, I decided to simulate the hacking of my own accounts using just my mobile phone. I started with my Twitter account, where I selected “Forgot password?” and received an “Enter phone number” option. At this point, I didn’t remember ever connecting my Twitter account to my mobile number but figured I’d try.

I immediately received a one-time passcode from Twitter and was able to read the code via a notification on the locked screen of my cellphone. Upon entering the code into Twitter’s website, I was prompted to enter a new password and gained full control of the account. Since SMS notifications appear on my phone’s locked screen, anyone with physical access to my phone and my phone number could have taken over my Twitter account.

The most disturbing thing about my Twitter experiment is the knowledge that any family member, friend, or co-worker who had my phone number could enter it in Twitter’s “Forgot password?” field, pick up my locked phone to view the one-time password, and gain full control of my account. A SIM swap wasn’t even necessary.

The privacy implications of this scenario are unsettling, but this also highlights the potential for an individual to have offensive content sent out from their social media accounts, or worse, become implicated in a crime committed by someone who gained control of their accounts. The intruder (for example, estranged spouse or vindictive co-worker) would only need access to the victim’s phone number and locked phone. I did receive an email alert from Twitter that my password had been reset, but an attacker could gain access to my email account using the same technique and delete any notifications.

Bolstered by the hack of my Twitter account, I used the same technique against my dated Hotmail account, and achieved the same result. The steps for Hotmail included clicking “Forgot password,” entering my (very guessable) email address, and following a prompt to enter my mobile number. A one-time password was sent to my cellphone, allowing me to reset my password and gain access to years’ worth of email correspondence, all while bypassing the complex password I had set up for the account. I was starting to see how easily a SIM swapper or nosy individual could gain access to a variety of accounts by controlling a phone number.

At this point, I was in “think like an attacker” mode and searched my Hotmail inbox for financial statements. I found an email from a financial institution and clicked on “View statement.” Hacking the financial account required a bit more effort than just entering a mobile number, but the only additional hurdle was entering a Social Security number, which can often be purchased on Dark Web marketplaces. At this point in my experiment, I had gained access to a social media account, an email account filled with financial statements, and a financial account from which I could transfer funds.

Lessons Learned
What did I learn from hacking my accounts with my mobile phone? Mainly, if my accounts hadn’t been linked to my mobile phone and were solely protected by the complex passwords I use, they would have been more secure.

Many online providers suggest adding a mobile phone number as a way to implement 2FA — that is, 1) something you know and 2) something you have. Indeed, 2FA is used to initially link a user’s phone number to an online account; however, after that initial confirmation, the authentication process often reverts to a single factor (the phone number) for authenticating accounts.

The false sense of security encouraged by the SMS-based authentication scenario leaves users vulnerable to SIM-swapping attacks and privacy vulnerabilities. Unless you have disabled certain notification features on your phone, someone with access to your locked phone could gain access to your social media, email, and potentially financial accounts with only a publicly available phone number and email address.

The Takeaway
This experiment has spurred me to make some immediate changes, which I suggest you consider doing as well: 

  • I will be deleting my phone number from my online accounts and will authenticate to accounts with complex passphrases and more-robust 2FA options, like Google Authenticator, Microsoft Authenticator, Duo, or a USB hardware authentication device such as YubiKey. (I obviously won’t be linking my mobile phone number to these 2FA applications.)
  • I will protect sensitive email contents by archiving and backing up email so it’s not accessible to an intruder if I’m hacked.
  • To protect against SIM swapping, I will add a PIN to my mobile account and plan on requesting that SIM transfers only take place in person for my account.
  • To deter mobile phone authentication attacks from opportunistic snoopers, I have disabled notifications on my phone’s lock screen.

Bottom line: A key feature advertised to make your account more secure — adding a mobile phone number — has actually proved to be a vector of attack in a growing number of SIM-swapping incidents. The security and privacy implications of this are serious, and the industry needs to move toward more secure authentication mechanisms in lieu of SMS-based mobile phone authentication.


Nicole Sette is a Director in the Cyber Risk practice of Kroll, a division of Duff Phelps. Nicole is a Certified Information Systems Security Professional (CISSP) with 15 years of experience conducting cyber intelligence investigations and technical analysis. Nicole served … View Full Bio

Article source: https://www.darkreading.com/endpoint/i-hacked-my-accounts-using-my-mobile-number-heres-what-i-learned/a/d-id/1336315?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Most Companies Lag Behind ‘1-10-60’ Benchmark for Breach Response

Average company needs 162 hours to detect, triage, and contain a breach, according to a new CrowdStrike survey.

The vast majority of companies cannot respond in time to prevent attackers from infecting other systems on their networks, according to a new CrowdStrike report released today.

The report, based on a global survey of 1,900 senior IT managers and security professionals, found only 33% of respondents thought their companies could contain a breach within an hour, with 31 hours as the average time to close a breach once it is discovered. In total, the average company would need 162 hours to detect, triage, and contain a breach, according to the CrowdStrike survey. 

The reality of businesses’ cybersecurity response falls far short of what the cybersecurity firm considers the best practice: 1 minute to detect, 10 minutes to triage, and 60 minutes to contain. 

“Clearly there is a lot of room for improvement to get to the benchmark,” says Thomas Etheridge, vice president of services at CrowdStrike. “The faster an organization has visibility into the initial stages of an attack, the better organizations are prepared to stop breaches.”

The report gives a view into the maturity level of companies’ security incident response groups and how effective they are against sophisticated threats. Only 5% of respondents believed they could regularly hit the 1-10-60 benchmark, and only 11% thought they could regularly detect a threat in one minute.

“If [the] one-minute detection time could be achieved, IT leaders and security professionals alike can see the positive impact,” the report stated. “Not only would it give the intruder less time to try to access their targeted data, but it also gives the organization a head start when it comes to investigating the incident and ultimately containing [the attack].”

Meanwhile, 83% of respondents said they believe nation-state attacks to be a clear danger to their organizations. Companies in India were most concerned, with 97% of respondents indicating that attacks from nation-states were a danger, while organizations in Singapore worried the second most (92%), followed by US companies in third (84%).

“The faster you detect a nation-state attack before it spreads throughout the organization, the less damage it will do,” Etheridge says. “In many cases, e-crime actors are adopting many of the same tactics — attacking in stages and spreading through the organizations before demanding a ransom.” 

The 1-10-60 rule is based on CrowdStrike data showing that most nation-state and criminal adversaries break out from their initial beachhead in a network and move laterally to other systems within hours. In 2017, adversaries whose operations were investigated by CrowdStrike had an average breakout time of two hours. In 2018, when the company analyzed the data by nation, Russian nation-state actors executed most efficiently, with a breakout time of 19 minutes, while North Korean actors came in second at 2 hours, 20 minutes, and China-linked actors took third at approximately four hours.

CrowdStrike maintains that companies that detect intrusions, fully investigate the incident, and respond to the compromise within an hour are much more likely to limit damage from attacks.

“Organizations that meet this 1-10-60 benchmark are much more likely to eradicate the adversary before the attack spreads out from its initial entry point, minimizing impact and further escalation,” the company said in its “2019 Global Threat Report.”

In reality, the average organization takes 120 hours to detect an attack, five hours to triage, six hours to investigate, and 31 hours to contain, according to respondents to CrowdStrike’s survey.
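
As a quick sanity check, those stage-by-stage averages do sum to the 162-hour total quoted above, and putting them next to the 1-10-60 benchmark shows the size of the gap:

```python
# Stage-by-stage survey averages, in hours (figures from the article).
average_hours = {"detect": 120, "triage": 5, "investigate": 6, "contain": 31}
total_hours = sum(average_hours.values())
assert total_hours == 162  # matches the survey's headline figure

# CrowdStrike's 1-10-60 benchmark (minutes), expressed in hours:
benchmark_hours = (1 + 10 + 60) / 60
print(f"survey average: {total_hours} h, benchmark: {benchmark_hours:.2f} h")
# prints: survey average: 162 h, benchmark: 1.18 h
```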

To some degree, security teams have accepted the status quo. The largest portion of respondents — 33% — said they feel attackers are always one step ahead, making them more difficult to detect, according to the survey. About the same number — 32% — blame legacy infrastructure for making security more difficult to achieve. Other major reasons for the slow detection of threats include a lack of resources, shadow IT, and difficulty in being able to hire the right people.


Veteran technology journalist of more than 20 years. Former research engineer. Written for more than two dozen publications, including CNET News.com, Dark Reading, MIT’s Technology Review, Popular Science, and Wired News. Five awards for journalism, including Best Deadline … View Full Bio

Article source: https://www.darkreading.com/threat-intelligence/most-companies-lag-behind-1-10-60-benchmark-for-breach-response/d/d-id/1336401?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

TPM-Fail: What It Means & What to Do About It

Trusted Platform Modules are well-suited to a wide range of applications, but for the strongest security, architect them into “defense-in-depth” designs.

On November 12, researchers, led by a team at Worcester Polytechnic Institute, disclosed details of two new potentially serious security vulnerabilities — dubbed TPM-Fail — that could allow attackers to steal cryptographic keys protected inside two types of Trusted Platform Modules (TPMs): chips made by STMicroelectronics, and firmware-based Intel TPMs.

TPMs are deployed in billions of desktops, laptops, servers, smartphones, and Internet-of-Things (IoT) devices to protect encryption keys, identity keys, passwords, and system code. An attacker could use stolen cryptographic keys to forge digital signatures, steal or alter encrypted information, and bypass operating system security features or compromise applications that rely on the integrity of the keys.

Because millions of deployed systems probably have the TPM-Fail vulnerability, the scope of exposure is wide. It’s especially troubling if companies and individuals don’t update their firmware using patches now available from Intel and STMicroelectronics. Similar vulnerabilities may well exist in TPMs from other manufacturers, as well.

However, when this type of issue occurred in the past, the impact was largely contained. The fact that the patches were made available at the same time as the vulnerability was announced helps to minimize damage, particularly for security-conscious system owners. The challenge is that not everyone is ready to perform these patches whenever an exploit such as this becomes known.

This vulnerability demonstrates just how hard it is to make really strong security. Absolute security does not exist. Intel and ST are two very reputable companies that do things the right way: They get certifications, work with third parties, and promptly patch vulnerabilities when found. TPM-Fail reminds us that cybersecurity is always changing, and effective security against attacks today may not work against more sophisticated attacks tomorrow. Creating and implementing a very good security design is the start of a security process; the task of protecting devices and data is continually evolving.

Nation-States versus a 75-Cent Component
This can look like an unfair fight, tasking a component that is often priced under a dollar with protecting the most critical infrastructure and pitting it against the world’s most sophisticated hacking experts. From politicians’ smartphones to systems controlling military aircraft and servers holding your family’s private data, much faith has been placed in TPMs.

Yet we have known for years that sophisticated attacks can be successful against devices such as these. Nobody should assume that a TPM or any other single security component can provide the “magic bullet” that protects a system against any and all compromises. We also need to recognize that some attacks, like this one, can leave the door open to less-sophisticated attackers taking advantage of vulnerabilities that someone else discovered to cause much more widespread damage.

Is somebody to blame? Probably not. A subtle point about these types of attacks, also called side-channel attacks, is that the implementations can be totally correct in terms of inputs and outputs, but information may still be leaked by observing the interaction of the device with the “real world.” An example: measuring the time the device takes to perform specific crypto operations.
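
The class of flaw is easy to show in miniature. TPM-Fail itself concerned timing leakage in elliptic-curve signing, but the same principle appears in something as simple as a secret comparison that exits early: the time it takes reveals how many leading bytes of a guess are correct. A hedged, illustrative sketch:

```python
import hmac

def naive_equals(secret: bytes, guess: bytes) -> bool:
    # Functionally correct, yet it returns as soon as a byte differs,
    # so its running time leaks the length of the matching prefix.
    if len(secret) != len(guess):
        return False
    for s, g in zip(secret, guess):
        if s != g:
            return False
    return True

def constant_time_equals(secret: bytes, guess: bytes) -> bool:
    # hmac.compare_digest examines every byte regardless of mismatches,
    # removing the timing signal (the standard mitigation in Python).
    return hmac.compare_digest(secret, guess)
```

Both functions compute the same answer; only the side channel differs, which is why input/output testing alone never catches this class of bug.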

Commercial companies don’t have unlimited budgets to make these components perfect, so they do their best with what they have, then take responsibility for addressing issues when they arise. In this case, STMicroelectronics and Intel designed their TPMs to be upgradeable and able to accept patches if an attack like this were to be found.

Encryption Key TPM Safety
In most cases, encryption keys are safe. It’s a fact, however, that any hardware device or firmware used for security purposes (e.g., TPMs and hardware security modules, or HSMs) may have design or firmware flaws that leave them susceptible. The attacks may expose critical cryptographic materials, including keys used to encrypt sensitive data. A recent example is the ROCA vulnerability. Active monitoring of the latest vulnerabilities and patches should be used to make sure that your keys remain safe.

TPM-Fail should not cause everyone to stop using TPMs, but TPMs should not be seen as a “magical force field” for IoT and mobile devices. TPMs are one of the ways to provide roots of trust, and TPM-Fail does not mean their time is over. TPMs should be part of an overall security solution design and part of a defense-in-depth approach. In addition, because security requires constant vigilance and monitoring, your solution security design will need periodic scrutiny. The more cybersecurity experts you get to closely examine your security design, the better off you are.

Global Exposure to TPM-Fail Right Now?
Both Intel and STMicroelectronics provided prompt patches before the independent researchers broke the news. The patches should close those particular gaps and reduce (but not eliminate) the potential impact in the field. Practically speaking, however, many who own affected devices will be unaware of the TPM-Fail issue and it’s likely that millions of devices will not be patched.

It is also worth noting how difficult and expensive it is to patch devices out in the field. Going forward, it would be wise for organizations to design anything that needs security with the ability to receive remote updates of device firmware — meaning secure updates where the remote device can only accept legitimate updates from a source it can verify as legitimate — and block everything else.
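
That “verify, then accept; block everything else” rule can be sketched as follows. This is illustrative only: a real secure-update chain checks an asymmetric signature (e.g., ECDSA or RSA) against a vendor public key anchored in ROM, while here a keyed MAC with a hypothetical device-provisioned key stands in to keep the sketch self-contained.

```python
import hashlib
import hmac

VENDOR_KEY = b"device-provisioned-secret"  # hypothetical placeholder

def sign_firmware(image: bytes) -> bytes:
    """Stand-in for the vendor's signing step (a real vendor signs asymmetrically)."""
    return hmac.new(VENDOR_KEY, image, hashlib.sha256).digest()

def apply_update(image: bytes, signature: bytes) -> bool:
    """Install the image only if it verifies; reject everything else."""
    if not hmac.compare_digest(sign_firmware(image), signature):
        return False  # unverifiable update: blocked
    # ... write image to the inactive firmware slot and reboot into it ...
    return True

firmware = b"example firmware image v2"
assert apply_update(firmware, sign_firmware(firmware))
assert not apply_update(firmware, b"\x00" * 32)  # tampered/unsigned: rejected
```

Keeping the previous image in an inactive slot also gives the device a rollback path if the new image fails to boot, which is part of the cyber-resilience argument below.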

Think Cyber-Resilience and “Defense in Depth”
Most important, for the future, we need to design in the ability to recover when a device in the field is compromised. This capability is called cyber-resilience, defined as the ability of a CPU — and device — to continue functioning, or resume functioning, after an attack is detected. Take driverless cars, for example. It’s not acceptable if a component in the steering system is attacked and goes offline while the car is flying down a highway.

TPM-Fail also makes a strong argument for “defense in depth” because even the better commercial offerings can be broken. So, put layers of protection in place, including new chip and firmware offerings that can add defense in depth to single-layer security solutions. That way, attackers must find their way through multiple security mechanisms.

A final recommendation: Have a process in place to monitor and react to newly announced security vulnerabilities, including the ability to securely field-upgrade the firmware on devices, and install patches without delay.


Ari Singer, CTO at TrustiPhi and long-time security architect with over 20 years in the trusted computing space, is former chair of the IEEE P1363 working group and acted as security editor/author of IEEE 802.15.3, IEEE 802.15.4, and EESS #1. He chaired the Trusted Computing … View Full Bio

Article source: https://www.darkreading.com/application-security/tpm-fail-what-it-means-and-what-to-do-about-it/a/d-id/1336392?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

DDoS Attacks Up Sharply in Third Quarter of 2019

DDoS attacks of all sorts were up by triple-digit percentages, with smaller volume attacks growing most rapidly.

DDoS attacks large and small are rapidly growing in frequency, with 241% more attacks launched in the third quarter of 2019 than were seen in the third quarter of 2018. The growth was especially large in smaller attacks — those under 5 gigabits per second — as these lower-level attacks were 303% higher in 2019 than in the same period in 2018.

The new data from Neustar also shows smaller attacks include many at the application layer. These made up 81% of the attacks in Q3 2019, up from 75% of all attacks in Q2 2019, and 69% of the attacks in Q3 2018.

Larger attacks haven’t disappeared, either: they grew nearly 200% year over year. Even so, the average attack size is down from 10.5 Gbps in 2018 to 7.6 Gbps in 2019. Smaller attacks can escape detection for days while degrading web application performance and damaging customer experience and economic results, according to Neustar’s report, and are especially damaging to purely online businesses such as gaming and SaaS providers.

And most respondents (59%) surveyed for the report say their organizations have suffered DDoS attacks.


 


Dark Reading’s Quick Hits delivers a brief synopsis and summary of the significance of breaking news events. For more information from the original source of the news item, please follow the link provided in this article. View Full Bio

Article source: https://www.darkreading.com/threat-intelligence/ddos-attacks-up-sharply-in-third-quarter-of-2019/d/d-id/1336409?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Attacker Mistake Botches Cyborg Ransomware Campaign

Cybercriminals attempted to install Cyborg ransomware on target machines by deceiving victims with a fraudulent Windows update.

Install Latest Microsoft Windows Update now!

Critical Microsoft Windows Update!

These are the two subject lines of fraudulent emails disguised to appear as Windows Update notifications while containing malicious attachments to infect targets with Cyborg ransomware. While the threat is not effective, experts warn its public builder can be used to create variants.

This campaign has been ongoing since at least Nov. 7, and likely as early as Nov. 3, when the malware’s GitHub repository was set up, says Karl Sigler, threat intelligence manager at Trustwave SpiderLabs, which discovered the campaign. Cyborg ransomware was also new to the research team, which found the attack spamming targets around the world.

“We have not seen this specific ransomware before, although our sample matches three other samples that have been uploaded to VirusTotal earlier this year,” Sigler says. This ransomware could be a variant of ransomware that appends the “777” extension to encrypted files, he adds. The name “Cyborg” is likely a nod to the first recorded ransomware from 1989: PC Cyborg.

The emails, which claim to come from Microsoft, contain a single sentence: “PLease install the latest critical update from Microsoft attached to this email.” Yes, “PLease” starts with two capital letters – a grammatical error that could tip users off to a potentially malicious message.

Trustwave researchers say the fake update attachment has a “.jpg” file extension but is, in fact, an executable file of around 28KB with a randomized filename. The file is a malicious .NET downloader that delivers Cyborg ransomware to the system from GitHub. Researchers say the GitHub account was briefly active during their investigation but has since been taken down.

If the attackers had properly named the executable, it would have encrypted a victim’s files once it landed on a machine. However, they changed the extension from “.exe” to “.jpg,” says Sigler. “We often see attackers use double extensions in order to trick users into opening a file,” he explains. For example, they may use “file.jpg.exe.” By eliminating the “.exe” extension, the file would never execute unless an administrator purposely launched it from the command line.
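The mismatch between a file's claimed extension and its actual contents is exactly what gateway filters look for. Below is a minimal detection sketch in the spirit of this campaign; the function name, filename patterns, and magic-byte list are illustrative assumptions, not Trustwave's detection logic.

```python
# Heuristic check for disguised executables: flag files whose extension
# claims an image but whose leading bytes identify a Windows PE binary,
# or whose name uses the double-extension trick described above.

EXECUTABLE_MAGIC = b"MZ"  # Windows PE files start with the bytes "MZ"

def looks_disguised(filename: str, content: bytes) -> bool:
    """Return True when the file's name and contents don't add up."""
    lowered = filename.lower()
    claimed_image = lowered.endswith((".jpg", ".jpeg", ".png", ".gif"))
    is_executable = content.startswith(EXECUTABLE_MAGIC)
    # A double extension like "invoice.jpg.exe" is suspicious on its own.
    double_ext = lowered.count(".") > 1 and lowered.endswith(".exe")
    return (claimed_image and is_executable) or double_ext
```

Against this campaign, the check fires both ways: a “.jpg” attachment carrying PE bytes is flagged, as is the “file.jpg.exe” pattern Sigler mentions.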

“This campaign may have been a ‘test balloon’ of some sort, but as launched it would affect no one,” Sigler says. It’s unclear why Cyborg’s operators chose to do this, but it’s good news for all the potential victims who would not be infected if they opened the malicious attachments.

Inside Cyborg
In a blog post published today detailing their findings, researchers explained how they looked for additional variants of Cyborg by searching VirusTotal for “syborg1finf.exe,” the original filename of the ransomware sample they had obtained. The search turned up three samples of the ransomware.

“The file extension these Cyborg ransomware samples will append to the encrypted files varies as observed from the samples found on VT,” researchers wrote. This indicated a builder for Cyborg existed somewhere. A Web search revealed a YouTube video about “Cyborg Builder Ransomware V1.0 [ Preview free version 2019 ],” with a link to the Cyborg builder on GitHub.

A description below the video emphasizes the tool is designed for penetration testing and that illegal use of the software may send violators to prison. While this specific campaign is ineffective, the builder software is still available on GitHub and could be reused by anyone.

“After customizing the malware to their own needs, new attackers can use any number of social engineering type attacks or known exploits to install the malware,” Sigler says.

The ransomware may also be spammed using other themes and attached in different forms to evade email gateways. Attackers can tailor the threat to use a known ransomware file extension, which can mislead infected users about the malware’s true identity, researchers explain.


Kelly Sheridan is the Staff Editor at Dark Reading, where she focuses on cybersecurity news and analysis. She is a business technology journalist who previously reported for InformationWeek, where she covered Microsoft, and Insurance Technology, where she covered financial … View Full Bio

Article source: https://www.darkreading.com/threat-intelligence/attacker-mistake-botches-cyborg-ransomware-campaign/d/d-id/1336410?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Ransomware Surge & Living-Off-the-Land Tactics Remain Big Threats

Group-IB’s and Rapid7’s separate analysis of attack activity in recent months shows threat actors are making life harder for enterprise organizations in a variety of ways.

Data from two new vendor reports summarizing threat activity over the past few months shows that ransomware and living-off-the-land attacks continue to top the list of threats facing enterprise organizations.

One of the reports, from Singapore-based Group-IB, is based on an analysis of data gathered by the vendor’s computer emergency response team.

It shows that more than half (54%) of all malicious emails in the first six months of 2019 contained ransomware — a sharp increase from just 14% during the same period last year. Ransomware activity topped all other threats between January and the end of June this year.

Meanwhile, a report from Rapid7, based on an analysis of threat activity in the third quarter of 2019, shows attackers are continuing to lean heavily on legitimate tools and services, such as PowerShell, to establish and advance malicious campaigns. The security vendor’s analysis shows that phishing continues to be the top cause of breaches, but most breach detections don’t happen until the malware-execution stage. Here are five takeaways from the two reports.

Ransomware Re-Emerges as a Major Threat
Ransomware re-emerged as a major threat after seemingly being on the way out for most of last year. In the first half of 2018, just 14% of the attacks that Group-IB tracked were ransomware-related, a sharp drop-off from the 40% recorded in 2017. Numerous vendor reports over the past year also have reported a steady decline in overall ransomware volumes and an increasing attacker focus on low-volume targeted attacks on enterprises. Group-IB’s data for the first half of 2019 suggests that overall ransomware volumes have begun rebounding once again.

Alexander Kalinin, head of Group-IB’s CERT, says a majority of the ransomware attacks observed in the first half of this year were of the mass-volume spray-and-pray variety that many had assumed was dying out.

However, many of these attacks showed certain similarities with targeted attacks in terms of their preparation, he notes. “The emails targeted [a] large number of people but within a specific industry,” Kalinin says. Emails containing ransomware were often drafted to be relevant to targets within a specific industry — a feature that is typically associated with targeted attacks, he says.

The most prolific ransomware strain that Group-IB tracked in the first half of 2019 was Troldesh, a malware tool that attackers used not just to encrypt files but also to mine cryptocurrency and generate phony traffic for ad-fraud campaigns, according to Group-IB.

Attackers Are Increasingly Using Delayed Action Links for Downloading Malware
To evade antimalware systems, cybercriminals are increasingly eschewing malicious attachments in favor of emailed links that download malware when clicked. Twenty-nine percent of the emails that Group-IB encountered in the first half of 2019 had links to malware rather than attachments, double the proportion seen in 2018.

The links are often inactive when a victim receives an email, Kalinin says. Clicking on the links would not result in any malware being downloaded. If anything does get downloaded, it is usually a benign file. Most anti-malware tools would scan the links in real time, mark them as safe, and send the email to the user’s inbox.

The links get “activated” after the basic, initial vetting is over. If security checks are not performed over again, the victim receives an email marked as safe and can get infected, he notes. “Unlike attachments, the content accessible via links can be customized and replaced over time to bypass antivirus systems’ checks,” Kalinin says.

Once security checks are over, a cybercriminal can replace content accessible via a link with malware. “The content accessible via such links can also be customized, depending on the victim’s location, operating system, and other parameters.”
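The delayed-activation trick described above is why some email gateways rewrite links and rescan the destination at click time rather than trusting the verdict from delivery time. Below is a minimal sketch of such a click-time verdict, assuming the gateway has already fetched the linked content; the signature list and function name are illustrative, not any vendor's implementation.

```python
# Click-time verdict sketch: inspect the bytes actually served when the
# user clicks, regardless of what the delivery-time scan concluded.
# Magic-byte prefixes identifying content worth blocking or sandboxing:

EXECUTABLE_SIGNATURES = (
    b"MZ",          # Windows PE executable
    b"\x7fELF",     # Linux ELF executable
    b"PK\x03\x04",  # ZIP container (may hide an executable; docx is also ZIP)
)

def click_time_verdict(content: bytes) -> str:
    """Return "block" if the served content looks executable, else "allow"."""
    if content.startswith(EXECUTABLE_SIGNATURES):
        return "block"
    return "allow"
```

If the attacker swaps the benign file for malware after the initial vetting, the second look catches the change; content that was genuinely benign at both points still passes.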

PowerShell Continues to Be an Attacker Favorite
Threat actors are increasing their use of legitimate admin, penetration testing, and other tools in attack campaigns. Among the most popular living-off-the-land tools that Rapid7 observed in use last quarter were cmd.exe, ADExplorer.exe, procdump64.exe, rundll32.exe, and mimikatz.exe.

Few of the legitimate tools, though, were as popular as PowerShell. Rapid7 found that attackers are increasingly exploiting PowerShell to stay hidden while executing different attacks. Tactics include using old, less-restrictive versions of PowerShell and bypassing policy restrictions with the ExecutionPolicy bypass switch, according to the vendor.
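The ExecutionPolicy bypass switch leaves a visible trace in process command lines, which makes it a natural detection hook. Here is a minimal detection sketch, not Rapid7's tooling: flag PowerShell invocations that request a policy bypass or force the old version 2 engine.

```python
import re

# Patterns for the two tactics mentioned above. "-ep bypass" is the
# common short form of "-ExecutionPolicy Bypass"; "-Version 2" requests
# the old, less-restrictive engine.
SUSPICIOUS = [
    re.compile(r"-e(xecution)?p(olicy)?\s+bypass", re.IGNORECASE),
    re.compile(r"-version\s+2\b", re.IGNORECASE),
]

def is_suspicious_powershell(cmdline: str) -> bool:
    """Flag PowerShell command lines using bypass or downgrade tactics."""
    if "powershell" not in cmdline.lower():
        return False
    return any(p.search(cmdline) for p in SUSPICIOUS)
```

A real deployment would feed this from process-creation telemetry (e.g. Windows event 4688 or EDR data) rather than ad hoc strings, and would cover more evasions, such as encoded commands.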

“PowerShell is installed on all systems and is extremely powerful,” says Wade Woolwine, principal threat intelligence researcher at Rapid7. “Uninstalling old versions and configuring PowerShell to run in Constrained Language Mode are two mitigations,” he says. Good endpoint detection and response capabilities are a must as well, he notes.

More Than 80% of Malicious Files Were Disguised as .ZIP and .RAR Files
Attackers sharply ramped up their use of .zip and .rar files to distribute malware in the first half of 2019. More than eight in 10 malicious objects that Group-IB detected in the first six months of 2019 were delivered in password-protected archive files. The most common formats were .zip (32%) and .rar (25%).

The benefit for attackers is that such files make it hard for a majority of corporate security systems to automatically identify malware contained in them, Kalinin says. In many instances, the cybercriminals included the passwords for accessing the contents in the subject of the email, in the name of the archive, or in their subsequent correspondence with the victim. “Once unzipped and opened, it would download and install malware on a victim’s computer,” Kalinin says.
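Password-protected archives defeat content scanning, but the encryption itself is detectable: the ZIP format marks each encrypted entry with a flag bit, so a gateway can at least flag such attachments for closer scrutiny. A minimal sketch for .zip only (reading .rar would need a third-party library such as rarfile, which is an assumption here):

```python
import io
import zipfile

def zip_is_encrypted(data: bytes) -> bool:
    """Return True if any entry in the ZIP attachment is password-protected."""
    with zipfile.ZipFile(io.BytesIO(data)) as zf:
        # Bit 0 of flag_bits is set when an entry is encrypted
        # (general purpose bit flag, PKZIP APPNOTE section 4.4.4).
        return any(info.flag_bits & 0x1 for info in zf.infolist())
```

Policy is then up to the gateway: quarantine, detonate in a sandbox with the password scraped from the email subject, or warn the recipient.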

Healthcare and Entertainment Industries Were Big Malware Targets
A majority of the breaches that Rapid7 investigated last quarter stemmed from phishing. When the initial compromise was not detected and contained, criminals executed malware — including ransomware — on target systems. Organizations in the healthcare and entertainment sectors were especially heavily targeted in such malware attacks.

Seventy-five percent of incidents that Rapid7 investigated at entertainment organizations and 62.5% of those at healthcare organizations involved some kind of malware. Both healthcare and entertainment organizations are popular targets because they have a reputation for paying ransoms, says Woolwine. “Attackers will always go after the easy money.”

Others that were relatively heavily targeted in malware attacks included organizations in the manufacturing, retail, and real estate vertical markets.


Jai Vijayan is a seasoned technology reporter with over 20 years of experience in IT trade journalism. He was most recently a Senior Editor at Computerworld, where he covered information security and data privacy issues for the publication. Over the course of his 20-year … View Full Bio

Article source: https://www.darkreading.com/attacks-breaches/ransomware-surge-and-living-off-the-land-tactics-remain-big-threats/d/d-id/1336411?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Ho Ho OUCH! There are 4x more fake retailer sites than real ones

Two more weeks until Cyber Monday!

Ready to shop? Got your list ready? Eyes peeled for deals? Psyched about brewing a nice pot of coffee, sitting down at your keyboard, typing in your favorite retailer’s site, tap-tap-tapping in your payment card info, hitting the buy button, and presto!

You’ve been phished!

OK, maybe you won’t stumble onto a copycat retailer site, but boy oh boy, the chances of that have blossomed like a jungle of parasitic mistletoes. According to research from Venafi, the total number of Transport Layer Security (TLS) certificates used by typosquatting domains to give themselves the aura of being safe and secure is now 400% greater than the number of authentic retail domains.

The specific numbers: Venafi found 109,045 TLS certificates on lookalike domains, compared with 19,890 on authentic retail sites. Over half of the certificates used on the imposter domains were certificates from Let’s Encrypt: an automated certificate authority that pumps out free certificates… including, say, the 15,270 “PayPal” certificates issued in 2017 to sites used for phishing.

The numbers are a bit mind-boggling: it means that there are now 4x the number of fake sites as legitimate retail sites. The number has more than doubled since 2018.

Be careful what you type

It also makes keyboard fumbles more dangerous than ever. You know how that goes: you quickly type a URL you use all the time, but this time, you fumble and accidentally swap, add, or delete a single letter and hit enter. Suddenly, you’re not in Kansas anymore, Toto. You’re lucky if you get a 404 message. You’re also at great risk of ending up at the phishers’ Nightmare Before Christmas site, with at least some of those lookalike sites just waiting to phish your credit card away.
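Those one-letter fumbles are exactly the space typosquatters register in, and defenders can enumerate it too: generating every domain one keystroke away from a brand name and checking which are registered is a common monitoring technique. A minimal sketch, with "paypal" as an illustrative example:

```python
import string

def one_key_variants(name: str) -> set[str]:
    """All strings reachable from name by one deletion, substitution,
    adjacent swap, or insertion of a lowercase letter."""
    letters = string.ascii_lowercase
    variants = set()
    for i in range(len(name)):
        variants.add(name[:i] + name[i + 1:])              # delete one character
        for c in letters:
            variants.add(name[:i] + c + name[i + 1:])      # replace one character
        if i < len(name) - 1:                              # swap adjacent characters
            variants.add(name[:i] + name[i + 1] + name[i] + name[i + 2:])
    for i in range(len(name) + 1):                         # insert one character
        for c in letters:
            variants.add(name[:i] + c + name[i:])
    variants.discard(name)
    return variants
```

Even a six-letter brand yields a few hundred candidates, which is small enough to run through WHOIS or certificate-transparency logs to see which lookalikes actually exist.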

Venafi found that there are over 49,500 typosquatting domains targeting the customers of the top US retailers. In the UK, there are over six times more imposter domains than valid domains among the top 20 online retailers.

Mind you, not all of those sites are necessarily run by phishers. In the past, we’ve looked at typosquatting domains and found that, despite what you’d expect from sites that purposefully register misspellings of common URLs, they weren’t rife with malware.

In fact, cybercrime made up just under 3% of the findings. Pop-ups and ads were far more common (15%) while IT and hosting – pages offering to sell you interesting domain names – made up 12%.

But Venafi said that it has indeed seen “rampant growth” in the number of malicious, lookalike domains that are specifically used in predatory phishing attacks.

Jing Xie, senior threat intelligence researcher at Venafi, said in a press release that the growth of TLS certificates showing up on typosquatting sites is a result of the push to encrypt more, and potentially all, web traffic, which he called:

A trend that generally improves security for users but inadvertently introduces a new challenge to existing methods of phishing detection.

It’s tough enough to detect the imposter retailer sites by look alone, given that they carefully mimic logos, color schemes, other aspects of branding, and how the real sites work. What makes it even tougher is that these wolves are hiding in the sheep’s clothing of TLS certificates.

The padlock is not a guarantee of a safe site

As we’ve previously explained, TLS certificates are used by websites communicating over encrypted, HTTPS connections. They’re used to sign a website’s public encryption key, which ensures that your communication with that website is private and secure: you know which site you’re talking to, and that nobody else is listening in.
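Looking past the padlock can be as simple as inspecting the certificate a site actually presents: who it was issued to, who issued it, and when it expires. A minimal sketch below; the helper operates on the nested-tuple structure that Python's `ssl.SSLSocket.getpeercert()` returns, and the values in the usage comment are illustrative.

```python
def cert_summary(cert: dict) -> dict:
    """Pull the human-readable basics out of a getpeercert()-style dict."""
    # subject/issuer are tuples of RDNs, each a tuple of (name, value) pairs.
    subject = dict(x[0] for x in cert.get("subject", ()))
    issuer = dict(x[0] for x in cert.get("issuer", ()))
    return {
        "common_name": subject.get("commonName"),
        "issued_by": issuer.get("organizationName"),
        "expires": cert.get("notAfter"),
    }

# Fetching a live certificate (requires network access), roughly:
#   import socket, ssl
#   ctx = ssl.create_default_context()
#   with ctx.wrap_socket(socket.create_connection(("example.com", 443)),
#                        server_hostname="example.com") as s:
#       print(cert_summary(s.getpeercert()))
```

A free domain-validated certificate from Let’s Encrypt on a lookalike domain will show a common name that is almost, but not quite, the retailer you meant to visit, which is the whole point of checking.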

But you can’t always trust a site just because it’s got a certificate: The proliferation of typosquatting retailer lookalike sites is only the latest example of why.

In June, the FBI issued a warning that too many web users view the padlock symbol and the ‘S’ on the end of HTTP as a tacit guarantee that a site is trustworthy.

Given how easy it is to get hold of a valid TLS certificate for nothing, as well as the possibility that a legitimate site has been hijacked, this assumption has become increasingly dangerous.

Unfortunately, cybercriminals have spotted the confusion about HTTPS, which accounts for the growing number of phishing attacks deploying it to catch people off guard. From the FBI alert:

They [phishing attackers] are more frequently incorporating website certificates – third-party verification that a site is secure – when they send potential victims emails that imitate trustworthy companies or email contacts.

What to do?

Of course, it pays to be careful as you type, but just try telling your pumpkin-pie-stupefied fingers that.

Failing typing perfection, you could try using a password manager. They’re a good line of defense because they don’t get fooled by URLs that look right to error-prone human eyes: rather, they spot the URL tweaks introduced by typosquatters that are often too subtle for us to pick up on.

If you do spot a phishing scam, please do your bit to help everyone else. You can report potential cyberthreats to Sophos via our Submit a Sample page. In the UK, report phishes to law enforcement via Action Fraud. In the USA, use the FBI’s Internet Crime Complaint Center.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/AqIuSVO4Uvg/

Brand new Android smartphones shipped with 146 security flaws

If you think brand new Android smartphones are immune from security vulnerabilities, think again – a new analysis by security company Kryptowire uncovered 146 CVE-level flaws in devices from 29 smartphone makers.

Without studying all 146 in detail, it’s not clear from the company’s list how many were critical flaws, but most users would agree that 146 during 2019 alone sounds like a lot.

The sorts of things these flaws might allow include modification of system properties (28.1%), app installation (23.3%), command execution (20.5%), and modification of wireless settings (17.8%).

Remember, these devices, which included Android smartphones made by Samsung and Xiaomi, had never even been turned on, let alone downloaded a dodgy app – these are the security problems shipped with your new phone, not ones that compromise the device during its use.

The culprit is a range of software specific to each manufacturer, installed in addition to Android itself or its Google applications.

But, in common with Android itself and the Google applications, this software can’t be de-installed. The only way to patch one of these flaws is for the smartphone maker to be told about it and to push out a fix.

Factory soiled

We’ve been here before, of course. In August 2019, Google Project Zero researcher Maddie Stone gave a presentation at Black Hat to highlight the issue of malware she and her colleagues had discovered being installed on Android devices in the supply chain.

While this related to software deliberately installed to do bad things rather than vulnerable software, the effect from the user’s point of view is that they are exposed without realising it.

In one example, the Chamois SMS and click fraud botnet managed to infect 21 million devices. Even after a concerted clean up, two years later it was still clinging to the devices of nearly 7.4 million victims.

Less is still more

What, then, is the fundamental problem at work here? Clearly, these devices are part of complex hardware and software supply chains, so perhaps vulnerable or compromised Android devices just go with that territory.

Not according to Kryptowire, whose CEO Angelos Stavrou made an important point in an interview with Wired:

We believe that if you are a vendor you should not trust anybody else to have the same level of permissions as you within the system. This should not be an automatic thing.

Arguably, it follows that perhaps vendors shouldn’t install so much hardwired software on Android devices that users can’t de-install. The suspicion is that some of it is only there for commercial reasons, a mildly scandalous motivation for risking the security of a device.

Our advice is to consider buying from a vendor that sells stock, or near-stock, Android (i.e. with a minimum of additional software).

The majority of the manufacturers found by Kryptowire to have vulnerable devices are brand names nobody outside of Asia is likely to encounter. On the other hand, a disproportionate number of the flaws were found in popular brands.

Undoubtedly, it would help if Android device makers spent more time examining their products for the sort of vulnerabilities security companies seem able to uncover quite easily once they ship. Will that happen? Over to you, Google.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/9O4sLIwsqWk/