
Power to the people! Google backtracks (a bit) on forced Chrome logins

Crowdpower!

Even the mighty, all-seeing EoG (eye of Google) can’t always predict how its users are going to feel about new features that are so “obviously” cool that they get turned on by default.

Here at Naked Security, we’ve always favoured opt in, where new features really are so nice to have that users can’t wait to enable them, rather than opt out, where users get the choice made for them and can’t wait to find out how to undo it.

So, we weren’t surprised that there was quite some backlash from Google Chrome users when the latest update to the world’s most widely-used browser changed the way that logging in worked.

As we reported earlier this week:

Users were complaining this week after discovering they’d been logged in to Google’s Chrome browser automatically after logging into a Google website.

In the past, by default, logging into Gmail and Chrome were two separate actions – if you fired up Chrome to read your Gmail, you wouldn’t end up logged in to Chrome as well.

You could choose to enable what’s often referred to as single sign-on, but it wasn’t out-of-the-box behaviour.

But Google – surprise, surprise – figured that what it calls “sign-in consistency” would be such a great help (to Google, if not necessarily to you) that it started doing a sort of single sign-on by default, instead of treating your various Google accounts separately.

As you can imagine, or have probably experienced for yourself if you are a Chrome user, that’s a rather serious sort of change to do without asking.

It’s a bit like a home automation system deciding that after the next firmware update it will start unlocking all your doors at the same time whenever you open your garage, even though that’s not how it worked before.

You can just imagine the marketing focus group arguing for this feature – it means you can park and then go straight on into your house with your arms full of groceries without fumbling for your keyfob a second time, and why did we never think of that before?

“Hey,” the focus group choruses, “A vocal minority has been shouting for this feature so we absolutely must have it, and everyone should get it right away just to prove that we’re clever!”

And you can imagine the techies arguing just as strongly against it – you could easily end up leaving your whole house unlocked unintentionally, and why did we ever think that was a good idea?

“Back off,” retort the techies, “You can’t loosen up security settings and then wait for users to realise by accident, so no one should get it without being asked first.”

Of course, you can also imagine the marketing team winning the battle: because frictionlessness; because cool; because shiny new feature; because progress; because, well, because “duh”! (Because lead generation opportunities, too, but let’s just think that instead of saying it out loud.)

Well, Google has capitulated, sort of.

The company still thinks you ought to appreciate the new feature of autologin, and it pretty much implies that you’re wrong if you feel otherwise, but it has had the good grace to pay attention nevertheless:

We’ve heard — and appreciate — your feedback. We’re going to make a few updates in the next release of Chrome (Version 70, released mid-October) to better communicate our changes and offer more control over the experience.

What Google doesn’t seem to have done is to revert the unpopular change to the status quo ante – by default, as far as we can see, you’ll still get logged into Chrome automatically whenever you log in to some Google service in Chrome, if you see what we mean.

There will now be a switch you can toggle to turn autologin off, but you’ll need to go there yourself and flip said switch.

Our preference is still for opt in by default, so we’d be happier to see Google add the toggle and ship it switched off until users deliberately turn it on.

But we’re willing to be thankful for small mercies, and to applaud Google nevertheless for listening at least in part, and reacting quickly.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/Gg2kL3a-D6I/

Mobile password managers vulnerable to phishing apps

Researchers have discovered that several leading Android-based password managers can be fooled into entering login credentials into fake phishing apps.

Password managers can be used to create, store, enter and autofill passwords into apps and websites. As well as allowing users to maintain scores of strong passwords, password managers can also provide some defence against phishing – their autofill features will enter passwords on the sites they’re associated with (and those sites’ mobile apps), but not on fakes.

The University of Genoa and EURECOM’s Phishing Attacks on Modern Android study explores the difference between accessing a service through its mobile app and accessing it through its website in a desktop browser.

With desktop browsers, when a site is visited for the first time the password manager creates an association between its domain (verified by its digital certificate) and the credentials used to access it.

However, when somebody uses the website credentials to log in to an app, the process of verifying the app is more complicated and potentially less secure.

The main way password managers tell good apps from bad ones is by associating the website domain with the corresponding app package name – a metadata ID – using static or heuristically generated mappings.

The flaw is that package names can be spoofed – all the attacker has to do is create a fake app with the correct package name and the password manager will trust it enough to present the correct credentials.
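To see why that matters, here’s a minimal sketch – invented for illustration, not any vendor’s actual code – of a package-name-keyed credential lookup of the kind the researchers describe. The domains, package names and helper names are all made up:

```python
# Static association table: website domain -> Android package name.
DOMAIN_TO_PACKAGE = {
    "bank.example.com": "com.example.bank",
}

# The manager's credential store, keyed by domain.
VAULT = {
    "bank.example.com": ("alice", "correct horse battery staple"),
}

def credentials_for_app(package_name):
    """Return stored credentials for the app requesting autofill.

    The only 'proof' of identity consulted is the package name the
    requesting app reports about itself - exactly the metadata a
    phishing app can spoof.
    """
    for domain, package in DOMAIN_TO_PACKAGE.items():
        if package == package_name:
            return VAULT.get(domain)
    return None

# A phishing app simply declares the same package name:
print(credentials_for_app("com.example.bank"))  # leaks the real credentials
```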

The researchers found that several popular password managers were vulnerable to this kind of mapping weakness – LastPass, 1Password, Dashlane, and Keeper – with only Google Smart Lock (which isn’t primarily a password manager) able to resist.

Instant trouble

Even Google’s recently introduced Instant Apps – designed to be tried without the need for a download – could be abused by a phishing website to trigger a password manager autofill, the team discovered during testing.

This is particularly dangerous because it means it might be possible to execute a phishing attack without the need to install a fake app spoofing a package name (something Google Play doesn’t allow).

Write the researchers:

We believe this attack strategy significantly lowers the bar, with respect to all known phishing attacks on the web and mobile devices: to the best of our knowledge, this is the first attack that does not assume a malicious app already installed on the phone.

What can be done?

The problem is that the way password managers map legitimate domains to apps on Android is governed by three mechanisms: the Accessibility Service (a11y); the Autofill Framework (Oreo 8.0 onwards); and OpenYOLO, a separate Google-Dashlane collaboration.

The first of these, a11y, was designed for people with disabilities but ended up being abused by malicious apps, which prompted Google to introduce the Autofill Framework, and Dashlane, with Google, to develop OpenYOLO. Unfortunately, all three mechanisms are vulnerable to package-name spoofing, which suggests fixing this problem won’t be easy.

The researchers’ solution is a new getVerifiedDomainNames() API that dispenses with package names in favour of checking a hardcoded association between a website domain (and subdomains) and the app connecting to it.

The drawback of this is that websites would need to start publishing an assets file containing this data, something the researchers discovered barely 2% of more than 8,000 sample domains currently bother to do.
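As a rough sketch of how domain-verified association can work, the snippet below borrows the shape of Android’s existing Digital Asset Links assets file (served from /.well-known/assetlinks.json), which binds a domain to an app’s package name and signing-certificate fingerprints. The helper names and surrounding logic are our own illustrative assumptions, not the researchers’ actual API:

```python
import json
import urllib.request

def verified_packages_for_domain(domain):
    """Fetch the app associations the *website* publishes about itself,
    in the style of Android's /.well-known/assetlinks.json file."""
    url = "https://%s/.well-known/assetlinks.json" % domain
    with urllib.request.urlopen(url, timeout=10) as resp:
        statements = json.load(resp)
    results = []
    for statement in statements:
        target = statement.get("target", {})
        if target.get("namespace") == "android_app":
            results.append((target.get("package_name"),
                            set(target.get("sha256_cert_fingerprints", []))))
    return results

def should_autofill(domain, app_package, app_cert_fingerprint):
    # Fill only if the domain vouches for this package AND the signing
    # certificate matches - a fingerprint, unlike a package name, can't
    # be spoofed without the developer's signing key.
    for package, fingerprints in verified_packages_for_domain(domain):
        if package == app_package and app_cert_fingerprint in fingerprints:
            return True
    return False
```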

For now, this leaves password managers to fall back on their own defences. LastPass, for one, told Naked Security that it did not believe that the weakness had led to any of its customers being compromised:

Our app now requires explicit user approval before filling any unknown apps, and we’ve increased the integrity of our app associations database in order to minimise the risk of any fake apps being filled/accepted.

Naked Security believes that using a password manager is still one of the simplest and most effective computer security steps you can take, and closer integration with mobile apps makes using a password manager easier.

You are much more likely to be burned by password reuse than by an autofill attack on a fake app. However, if you are concerned about this kind of attack, or similar attacks that exploit autofill features using hidden password fields, don’t abandon your password manager, just turn autofill off.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/5efK4MJkwlw/

WhatsApp cofounder: “I sold my users’ privacy”

WhatsApp cofounder Brian Acton has revealed that he left the Facebook-acquired company 10 months ago because Facebook wanted to do things that made him squirm. He told Forbes:

It was like, okay, well, you want to do these things I don’t want to do. It’s better if I get out of your way. And I did.

Yes, he did. He got way out of the way, leaving $850 million on the table because he left Facebook a year before his stock grants vested. He’s still worth $3.6 billion though.

The Forbes interview is the first time Acton has talked about his reasons publicly.

He did, though, wave a terse farewell to Facebook back when the Cambridge Analytica scandal hit, tweeting: “It is time. #deletefacebook”

That was his last tweet.

So, what was it that made Acton join the rapidly inflating ranks of the Silicon Valley mea-culpa-rati? A group that now includes an ex-Reddit mogul who’s apologised for making the world “a worse place”, and the former Facebook president wringing his hands over the company’s exploitation of “a vulnerability in human psychology”.

Users’ privacy, Acton said: a commodity he sold to the rather astonishing tune of $19 billion (a figure that would eventually reach $22 billion) to Facebook… the company that eats users’ personal data and then burps utterances like “no, sorry, we can’t tell you all the data we have on you: that’s hard.”

Acton:

At the end of the day, I sold my company. I sold my users’ privacy to a larger benefit. I made a choice and a compromise. And I live with that every day.

Essentially, the straw that broke the camel’s back was a disagreement over how to monetize WhatsApp: an app whose cofounders were known for despising ads.

As Forbes reports, Acton’s motto at the end-to-end encrypted messaging app maker was “No ads, no games, no gimmicks”. Not exactly the kind of company you’d think would sell itself to Facebook, which gets 98% of its revenue from advertising. Forbes stated:

Another motto had been “Take the time to get it right,” a stark contrast to “Move fast and break things.”

“Take the time to get it right” might have been its motto, but it hasn’t always been a good description of its actions. It didn’t wait for Facebook to buy it before breaking things: it was doing just fine breaking things before the 2014 mega-sale.

Take the joint Canadian-Dutch privacy probe in 2013: we gave WhatsApp credit at the time because, although it had made mistakes, it worked with the authorities to fix them. But it was back in hot water ten months later when it screwed up encryption again.

Little more than a year later, WhatsApp CEO Jan Koum was publicly asserting that “Respect for your privacy is coded into our DNA.”

But when WhatsApp’s code was put to the test again in April 2014, it turned out that it still didn’t care all that much about privacy. Researchers discovered that attackers who could sniff network traffic between a WhatsApp user’s phone and Google’s servers could pinpoint a user as soon as they shared their location with other WhatsApp users.

Once again, WhatsApp took the researchers’ findings to heart and promised to fix its blunder in the next release.

Hey, mistakes happen. WhatsApp’s early (pre-acquisition) history was that it at least tried to do the right thing. Except, well, sell itself to Facebook and somehow imagine that user privacy and data wouldn’t be digested in the process.

Facebook made Acton and WhatsApp cofounder Jan Koum a deal they couldn’t refuse. Facebook founder Mark Zuckerberg also promised the pair that there would be “zero pressure” to monetize for the next five years.

Well, that didn’t work out. From Forbes:

Within 18 months, a new WhatsApp terms of service linked the accounts and made Acton look like a liar. ‘I think everyone was gambling because they thought that the EU might have forgotten because enough time had passed.’

Europe didn’t forget. Europe instead fined Facebook $122 million for giving “incorrect or misleading information” to the EU about the acquisition. No matter: it was just a bump in the road, and the deal went through.

What does all this add up to? What does Acton’s remorse mean, in practical terms?

Not much. He missed out on a mega-payday but he’s still mega-rich, and WhatsApp users were still sold to Facebook under Acton’s leadership.

He’s suggested that Facebook used him to help get the WhatsApp acquisition past EU regulators who had been concerned it might be able to link accounts… which it subsequently did.

In August, Facebook said get ready – ads are coming to WhatsApp starting next year, along with new tools that will allow businesses to offer customer support via WhatsApp chat.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/Babx37wQ_fc/

VirusTotal slips on biz suit, says Google’s daddy will help the search for nasties

Alphabet-owned malware aggregator website VirusTotal has given itself an enterprise-focused makeover.

The firm said the reboot takes advantage of Alphabet’s “increased scalability of data collection, processing, and search” to help threat intel teams work faster.

Front and centre of the upgrade is the introduction of Private Graph. The feature will enable companies to shove their own data into VirusTotal to run analyses against billions of malware samples, visualising connections between certain strains and corporate entities including people, departments, servers and emails.

Private Graph is outfitted for secure team collaboration, making it more suitable for incident response. The tech will, among other things, allow an infosec crew to identify features that various waves of attack have in common and match them against indicators of compromise.

VirusTotal Enterprise also adds high-speed searching via a new interface and an expanded set of search variables.

The main tasks of searching for malware samples (using VT Intelligence) and visualising malware relationships (via VT Graph) will be offered through programming interfaces. New API management of corporate groups will allow synchronisation with internal user directories, and VirusTotal Enterprise accounts support two-factor authentication for improved security.

The service aggregates many anti-malware products under a single roof. Analysts can upload files they’re suspicious about to pick up on malware that their own preferred tools might have missed or to catch false positives. According to Chronicle, the subsidiary of Alphabet Inc that runs VT, the service also allows interrogation of suspect URLs and searching by file hash or suspect IP address, among other features.
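For a flavour of how analysts drive this programmatically, here’s a minimal file-hash lookup against VirusTotal’s public v3 REST API. The API key is a placeholder for one issued to your own account, and the hash below is the harmless EICAR antivirus test file:

```python
import requests  # pip install requests

API_KEY = "YOUR_VT_API_KEY"  # placeholder - issued per VirusTotal account
BASE = "https://www.virustotal.com/api/v3"

def file_report(sha256):
    """Fetch the existing analysis report for a file, by hash."""
    resp = requests.get(BASE + "/files/" + sha256,
                        headers={"x-apikey": API_KEY}, timeout=30)
    resp.raise_for_status()
    return resp.json()

# SHA256 of the EICAR test file - safe to look up.
report = file_report("275a021bbfb6489e54d471899f7db9d1663fc695ec2fe2a2c4538aabf651fd0f")
stats = report["data"]["attributes"]["last_analysis_stats"]
print("malicious:", stats["malicious"], "undetected:", stats["undetected"])
```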

Samples of “suspect” files are then shared with participating software vendors. The utility of the service is perhaps evidenced by the creation of black-hat alternatives that do the scanning against various anti-malware engines but not the sharing.

The 14-year-old VirusTotal was founded as a side project by Spanish security firm Hispasec Sistemas back in 2004. The service was acquired by Google Inc. in September 2012. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/09/28/virustotal_enterprise_revamp/

Google To Let Users Disable Automatic Login to Chrome

The decision comes days after a security researcher blasted the company for jeopardizing user privacy with a browser update.

This story was updated on 9/28 to clarify that with Chrome 70, Google will, by default, still automatically sign users in to Chrome when they sign into a Google account, but users will get the option to disable the link.

Google has reversed course on a controversial recent browser update it introduced with little notice that automatically logs users into Chrome whenever they sign into any Google Web account.

Starting with the next release of Chrome—Version 70—Google is adding a control that gives users the choice of linking Web-based sign-in with browser-based sign-in. Instead of automatically logging users into Chrome when they sign into Gmail or another Google account, Chrome will let users decide whether they want to be automatically signed into the browser or not.

“For users that disable this feature, signing into a Google website will not sign them into Chrome,” Chrome product manager Zach Koch announced in a blog post Sept 26.

Importantly though, the default setting in Chrome 70 will continue to be for users to get automatically signed into Chrome when they log into a Google account. What Google is making available with the next Chrome release is an option that lets users disable that setting, a Google spokeswoman clarified to Dark Reading Thursday. “Once the feature is disabled, it will stay disabled,” the spokeswoman says.

The decision essentially restores the status quo that existed before Chrome 69, where users could keep their sign-in to Google accounts completely separate from their sign-in to Chrome. A Gmail user concerned about Google collecting their browsing data, for instance, could use Chrome in basic browser mode without being signed into it.

Google’s change of heart comes days after security researcher Matthew Green from Johns Hopkins University had blasted the update in Chrome 69 as being sneaky and posing a substantial threat to user privacy. In a searing and widely quoted blog post, Green described the update as being unnecessary and deliberately putting users at risk of mistakenly allowing Google to collect their browsing data.

Google, meanwhile, described the update as harmless and providing a way to simplify the way Chrome handles log-ins. The company has maintained that when automatically signing users into Chrome, it would only collect browsing data if a user explicitly consents to that collection.

Currently with Chrome 69, when a user signs into a Google account, their account picture or icon will appear in the Chrome UI. This enables the user to easily verify their sign-in status, according to Google. Signing out of Chrome will automatically log the user out of all their Google accounts.

In the Google blog post, Koch claimed that Google had introduced the update in response to feedback from users on shared devices who were confused about their sign-in state. “We think these UI changes help prevent users from inadvertently performing searches or navigating to websites that could be saved to a different user’s synced account,” he said.

Koch’s blog made no reference to the concerns raised by Green and others over the recent update. He merely noted that Google had heard “feedback” and was making changes to Chrome 70 to give users back the control they had over Chrome log-ins.

Google is also updating its Chrome UIs so users can more easily understand if their browsing data is being synced—or collected. “We want to be clearer about your sign-in state and whether or not you’re syncing data to your Google Account,” Koch said.


Jai Vijayan is a seasoned technology reporter with over 20 years of experience in IT trade journalism. He was most recently a Senior Editor at Computerworld, where he covered information security and data privacy issues for the publication.

Article source: https://www.darkreading.com/endpoint/-google-to-let-users-disable-automatic-login-to-chrome/d/d-id/1332907?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

7 Most Prevalent Phishing Subject Lines

The most popular subject lines crafted to trick targets into opening malicious messages, gleaned from thousands of phishing emails.


Chances are good there’s a phishing scam lurking amid your emails right now. If there isn’t, then perhaps there will be tomorrow, or the next day. The question is, will you fall for it?

Phishing emails are getting tougher to block because attackers are crafting their bait to be more convincing to targets, researchers report. And employees are quick to open potentially malicious emails, even when they know they should be on alert, says Webroot CISO Gary Hayslip.

“I think it’s to the point where it’s getting commonplace,” he says. “Users are used to seeing phishing emails now. They suck at not responding to them or clicking on them … which is frightening, because [attackers] prey on human nature.”

People are curious and they want to help, he continues, and it’s these two qualities that make them susceptible to phishing attacks. When they do fall for scams, most employees are quick to realize it. “I’m really busy,” “I missed that,” “I should’ve caught that email,” are all commonly heard phrases from victims who have opened malicious emails and realized they did wrong.

“No matter how much technology you put in place to block them, stuff always gets through,” Hayslip adds.

Webroot recently scanned thousands of phishing emails from the past 18 months to learn more about the trends around common subject lines designed to trick targets. Hayslip presented the findings to about 100 fellow CISOs around the country and learned “almost everybody’s seeing the same thing,” he says. Financially related messages and notions of urgency are commonly seen in phishing emails, albeit under different subject lines.

John “Lex” Robinson, cybersecurity strategist at Cofense (formerly PhishMe), echoes Hayslip’s sentiments and says attackers are getting better and better at understanding the context of the emails they’re sending and whom they’re targeting.

“If you think about the way we communicate today versus 15, 20, or 30 years ago, it’s a lot less formal,” he says. Phishing doesn’t need to be formal; it needs to align with business jargon.

Here’s a look at the most commonly used phishing subject lines, the messages they include, and what they reveal about their attackers’ goals and tactics.

 


 

Kelly Sheridan is the Staff Editor at Dark Reading, where she focuses on cybersecurity news and analysis. She is a business technology journalist who previously reported for InformationWeek, where she covered Microsoft, and Insurance Technology.

Article source: https://www.darkreading.com/threat-intelligence/7-most-prevalent-phishing-subject-lines-/d/d-id/1332901?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

DEF CON hackers’ dossier on US voting machine security is just as grim as feared

Hackers probing America’s electronic voting systems have painted an astonishing picture of the state of US election security, less than six weeks before the November midterms.

The full 50-page report [PDF], released Thursday during a presentation in Washington DC, was put together by the organizers of the DEF CON hacking conference’s Voting Village. It recaps the findings of that village, during which attendees uncovered ways resourceful miscreants could compromise electoral computer systems and change vote tallies.

In short, the dossier outlines shortcomings in the electronic voting systems many US districts will use later this year for the midterm elections. The report focuses on vulnerabilities exploitable by scumbags with physical access to the hardware.

“The problems outlined in this report are not simply election administration flaws that need to be fixed for efficiency’s sake, but rather serious risks to our critical infrastructure and thus national security,” the report stated. “As our nation’s security is the responsibility of the federal government, Congress needs to codify basic security standards like those developed by local election officials.”

Criminally easy to hack

Researchers found that many of the systems tested were riddled with basic security blunders committed by their manufacturers, such as using default passwords and neglecting to install locks or tamper-proof seals on casings. These could be exploited by miscreants to do anything from adding extra votes to stuffing the ballot with entirely new candidates – though it would require the crooks to get their hands on the machines long enough to meddle with the hardware.

Some electronic ballot boxes use smart cards loaded with Java-written software, which executes once the card is inserted into the machine. Each citizen is given a card, which they slide into the machine when they go to vote. Unfortunately, it is possible to reprogram a card so that, once inserted, it can be used to vote multiple times. And if the card reader has wireless NFC support, an attacker can hold an NFC smartphone up to the voting machine and potentially cast a ballot many times over.

“Due to a lack of security mechanisms in the smart card implementation, researchers in the Voting Village demonstrated that it is possible to create a voter activation card, which after activating the election machine to cast a ballot can automatically reset itself and allow a malicious voter to cast a second (or more) unauthorized ballots,” the report read.

“Alternatively, an attacker can use his or her mobile phone to reprogram the smart card wirelessly.”
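To make the logic flaw concrete, here’s a toy model – entirely hypothetical Python, nothing like the machines’ real Java Card firmware – of why trusting a “used” flag stored on attacker-controlled hardware fails:

```python
class ActivationCard:
    """An honest card marks itself used after one ballot."""
    def __init__(self):
        self.used = False

    def is_valid(self):
        return not self.used

    def mark_used(self):
        self.used = True

class MaliciousCard(ActivationCard):
    """A reprogrammed card silently resets itself after each ballot."""
    def mark_used(self):
        self.used = False  # ignore the terminal's attempt to spend the card

def cast_ballot(card):
    # The terminal's only check is state held on the card itself -
    # hardware the voter, not the election official, controls.
    if not card.is_valid():
        return "rejected"
    card.mark_used()
    return "ballot recorded"

crooked = MaliciousCard()
print([cast_ballot(crooked) for _ in range(3)])
# ['ballot recorded', 'ballot recorded', 'ballot recorded'] - there is no
# server-side record of the card being spent, which is the missing control.
```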

The DEF CON village was not without its share of controversy. Voting machine maker ES&S condemned the conference’s workshops and contests as a security threat, while the organizers noted that the results of the gathering were limited because hackers were only able to access publicly obtainable machines – typically decommissioned devices bought on eBay – leading some to wonder how much damage a hacker could do to today’s in-production voting systems.


Ultimately, however, the researchers believe that the findings from the event show that there are more than enough holes to warrant a larger effort by US Congress to get national security standards in place for electoral computer systems.

“While many local election officials have worked tirelessly to advocate for Congress to act and fund robust security practices, it’s not enough. National security leaders must also remind Congress daily of the gravity of this threat and national security implications,” the report stated.

“It is the responsibility of both current and former national security leaders to ensure Congress does not myopically view these issues as election administration issues but rather the critical national security issues they are. Disclosing vulnerabilities does not seem to be enough to get them fixed, even years later.”

The dossier’s authors – Matt Blaze, University of Pennsylvania; Jake Braun, University of Chicago; Harri Hursti, Nordic Innovation Labs; David Jefferson, Verified Voting; Margaret MacAlpine, Nordic Innovation Labs; and Jeff Moss, DEF CON founder – shouldn’t hold their breath, though: despite repeated calls to improve election security ahead of the midterms, Congress has steadfastly refused to take any significant action. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/09/28/defcon_vote_hacking/

Resident evil: Inside a UEFI rootkit used to spy on govts, made by you-know-who (hi, Russia)

A UEFI rootkit, believed to have been built by Kremlin spies from an anti-theft software program to snoop on European governments, has been publicly picked apart by researchers.

A rootkit is a piece of software that hides itself on computer systems, and uses its root or administrator-level privileges to steal and alter documents, spy on users, and cause other mischief and headaches. A UEFI rootkit lurks in the motherboard firmware, meaning it starts up before the operating system and antivirus suites run, allowing it to bury itself deep in an infected machine, undetected and with high-level access privileges.

According to infosec biz ESET, a firmware rootkit dubbed LoJax targeted Windows PCs used by government organizations in the Balkans as well as in central and eastern Europe. The chief suspects behind the software nasty are the infamous Fancy Bear (aka Sednit aka Sofacy aka APT28) hacking crew, elsewhere identified as a unit of Russian military intelligence.

That’s the same Fancy Bear that’s said to have hacked the US Democratic Party’s servers, French telly network TV5, and others.

The malware is based on an old version of a legit application by Absolute Software called LoJack for Laptops, which is typically installed on notebooks by manufacturers so that stolen devices can be found. The code hides in the UEFI firmware, and phones home to a backend server over the internet. Thus, if the computer is nicked, it will silently reveal its current location to its real owner.


As we reported in May, eggheads at Netscout’s Arbor Networks spotted LoJack being reused by Fancy Bear agents to develop LoJax. Now, ESET has documented in detail [PDF] the spyware’s inner workings, and listed signatures that can be used to detect and remove it from your own networks.

Essentially, the miscreants compromise a machine, gain administrator privileges, and then try to alter the motherboard firmware to include a malicious UEFI module that, if successful, installs and runs LoJax every time the computer is normally booted.

This malicious code thus gets to work before the OS and antivirus tools kick in. Changing the hard drive or reinstalling the operating system is no good – the malware is stored in the system’s built-in SPI flash, and reinstalls itself on the new or wiped disk.

Once up and alive, LoJax contacts command-and-control servers that are disguised as normal websites and are known to be operated by Russian intelligence. It then downloads its orders to carry out.

On Thursday, the ESET team wrote:

We found a limited number of different LoJax samples during our research. Based on our telemetry data and on other Sednit tools found in the wild, we are confident that this particular module was rarely used compared to other malware components at their disposal. The targets were mostly government entities located in the Balkans as well as Central and Eastern Europe.

Our investigation has determined that this malicious actor was successful at least once in writing a malicious UEFI module into a system’s SPI flash memory. This module is able to drop and execute malware on disk during the boot process.

This persistence method is particularly invasive as it will not only survive an OS reinstall, but also a hard disk replacement. Moreover, cleaning a system’s UEFI firmware means re-flashing it, an operation not commonly done and certainly not by the typical user.


It turns out LoJack, otherwise known as Computrace, was a pretty decent template for designing a piece of hidden firmware-level spyware. “While researching LoJax, we found several interesting artifacts that led us to believe that these threat actors might have tried to mimic Computrace’s persistence method,” ESET stated.

LoJax uses a kernel driver, RwDrv.sys, to rewrite the UEFI flash firmware and its settings to store itself, so that when the PC starts up, it copies itself to disk and runs itself. This kernel driver was swiped from a legitimate utility called RWEverything.

We’re told by ESET that Secure Boot, if enabled, should stop LoJax from injecting itself into the firmware storage, because the code won’t have a valid digital signature and should be rejected during startup. Be aware, though, this requires a sufficiently strong Secure Boot configuration: it has to be able to thwart administrator-level malware with read-write access to the UEFI storage.

There are firmware settings that can thwart the flash installation simply by blocking write operations. If BIOS write-enable is off, BIOS lock-enable is on, and SMM BIOS write-protection is enabled, then the malware can’t write itself to the motherboard’s flash storage.
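As a sketch of what auditing those settings involves on Intel platforms: the three protections correspond to bits of the chipset’s BIOS_CNTL register (BIOSWE, BLE and SMM_BWP, per Intel’s public PCH datasheets). Actually reading the register requires a kernel helper or a tool such as CHIPSEC, so the read below is left as a stub:

```python
BIOSWE  = 1 << 0  # BIOS Write Enable - want 0 (firmware writes disabled)
BLE     = 1 << 1  # BIOS Lock Enable - want 1 (SMI fires if BIOSWE is set)
SMM_BWP = 1 << 5  # SMM BIOS Write Protect - want 1

def read_bios_cntl():
    """Platform-specific: e.g. CHIPSEC's common.bios_wp module does this."""
    raise NotImplementedError

def flash_write_protected(bios_cntl):
    """True if LoJax-style writes to the SPI flash should be blocked."""
    return (not (bios_cntl & BIOSWE)
            and bool(bios_cntl & BLE)
            and bool(bios_cntl & SMM_BWP))

# Example: BLE and SMM_BWP set, BIOSWE clear -> writes blocked.
print(flash_write_protected(0b100010))  # True
```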

Alternatively, wiping the disk and firmware storage will get rid of this particular rootkit strain.

Modern systems should be able to resist malicious firmware overwrites, we’re told, although ESET said it found at least one case of LoJax in the PC’s SPI flash.

“While it is hard to modify a system’s UEFI image, few solutions exists to scan system’s UEFI modules and detect malicious ones,” wrote Team ESET. “Moreover, cleaning a system’s UEFI firmware means re-flashing it, an operation not commonly done and certainly not by the average user. These advantages explain why determined and resourceful attackers will continue to target systems’ UEFI.”

While the steps taken to inject the malware into the firmware are somewhat involved, the end result is quite simple: creating a resident software evil that makes sure companion malware is loaded up when a compromised system boots up.

ESET presented its research on the UEFI rootkit it had uncovered at the 2018 Microsoft BlueHat conference on Thursday, September 27. See the above-linked PDF for more details in more depth. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/09/28/uefi_rootkit_apt28/

Your specialist subject? The bleedin’ obvious… Feds warn of RDP woe

The FBI and the US Department of Homeland Security have added their voices to warnings of insecure deployments of Remote Desktop Protocol (RDP) services.

RDP servers can be left misconfigured, or poorly secured, allowing scumbags to waltz into networks and cause further damage. Compromised logins are so abundant they fetch a mere $10 a pop on dark web souks, all-too-many people hand over their logins to scammers, and vulnerable systems wind up with ransomware scrambling their files, as Hancock Health in Indiana discovered earlier this year.

Of the RDP-spread ransomware infections the FBI’s advisory highlighted on Thursday, probably the one striking the most fear into sysadmin hearts was SamSam, a campaign that started in 2015 and has since then earned its operators an estimated US$5.9m in illicit gains.

SamSam rose to prominence following a Talos warning in 2016 and has plagued hospitals, schools, and US city administrations.


The FBI/DHS public service announcement reiterates what sysadmins (and home users) should know but all too often aren’t acting on. Businesses and home users alike, the statement said, should “review and understand what remote accesses their networks allow and take steps to reduce the likelihood of compromise, which may include disabling RDP if it is not needed.”

The most common vulnerabilities, the agencies said, are weak passwords enabling brute-force or dictionary attacks; old versions using CredSSP encryption and therefore allowing man-in-the-middle attacks; unrestricted access to TCP port 3389 from anywhere in the world; and allowing unlimited login attempts to RDP accounts.

The agencies’ advice is mundane, but worth reiterating: audit your use of RDP and disable it if you can (especially on critical devices), install all available patches, use strong and secret login credentials, and block TCP port 3389 from cloud VM instances and any IP address ranges you never use.
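Acting on the port-3389 advice can start with something as simple as checking, from outside your own network, whether anything answers on it. A minimal sketch – the addresses are placeholders for your own external hosts:

```python
import socket

def rdp_exposed(host, port=3389, timeout=3.0):
    """Return True if a TCP connection to the default RDP port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Placeholders: substitute the addresses your network presents to the world.
for host in ["203.0.113.10", "gateway.example.com"]:
    verdict = "EXPOSED - firewall or VPN this" if rdp_exposed(host) else "filtered/closed"
    print(host + ":3389 " + verdict)
```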

So, essentially, firewall RDP, use a VPN for access, enforce strong passwords and lockout policies, use multi-factor authentication, keep RDP access logs for 90 days and actually look at them for intrusion attempts, and make sure any contractors with RDP access stick to your policies. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/09/28/fbi_dhs_rdp/

Oslo clever clogs craft code to scan di mavens and snare dodgy staff

Researchers from the University of Oslo in Norway have developed a system that tries to combat rogue employees and inside jobs – by combining cyber and real-world security knowhow.

Known as PS0, the framework [PDF] marries traditional PC and network security systems with input from physical sensors and other surveillance hardware, such as cameras and ID badges, feeding all of it into a single database that administrators can query.

The idea, say the researchers, would be to give companies the ability to connect multiple events to help give them a picture of how an attack, particularly one from inside the organization, unfolded over time – and possibly stop one happening in the first place.


PS0 system diagram

“In an idealistic environment any type of malicious activity should be prevented or detected and mitigated but this is almost never the case, especially when the attacker is a trusted authority,” the report reads.

“It is the case that many times malicious activity goes undetected for a long period and incidents are not reported in a timely manner.”

In one example, the researchers said, an administrator would respond to an incident by querying the system in the SPARQL semantic query language with a set of parameters covering things like access logs, device permissions, and surveillance or sensor records. The system would then return matching records, or a provenance graph showing how the fields intersect.
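The paper’s actual schema isn’t reproduced here, but a query of that shape might look like the sketch below, which joins hypothetical badge-reader events with server logins by the same person. The endpoint URL and the ex: vocabulary are invented for illustration; SPARQLWrapper is a common Python client for SPARQL endpoints:

```python
from SPARQLWrapper import SPARQLWrapper, JSON  # pip install sparqlwrapper

QUERY = """
PREFIX ex: <http://example.org/ps0#>
SELECT ?person ?room ?server ?badgeTime ?loginTime WHERE {
  ?badge a ex:BadgeEvent ;
         ex:person ?person ;
         ex:room ?room ;
         ex:time ?badgeTime .
  ?login a ex:ServerLogin ;
         ex:person ?person ;
         ex:server ?server ;
         ex:time ?loginTime .
  FILTER (?loginTime >= ?badgeTime)   # physical entry precedes the login
}
ORDER BY ?person ?badgeTime
"""

sparql = SPARQLWrapper("http://localhost:3030/ps0/sparql")  # hypothetical endpoint
sparql.setQuery(QUERY)
sparql.setReturnFormat(JSON)
for row in sparql.query().convert()["results"]["bindings"]:
    print(row["person"]["value"], row["room"]["value"], row["server"]["value"])
```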


The result would be a clear picture of who was where and what they were doing, both on the network and on the floor of the office itself. The latter, the researchers suggest, is the key to catching insider threats: even someone with elevated permissions who might never trigger an alarm on the network or servers could get caught out by cameras or physical access logs.

While the system seems complex, the researchers say that early trials with student volunteer admins showed that the entire system was surprisingly easy to learn. In many cases the volunteers would learn how to perform basic queries on the system and solve the attack scenarios with minimal training.

“All eleven analysts without having any prior experience with the system succeeded to identify the insiders,” the researchers said.

“This is mainly based on the intuitive approach the analysts followed to investigate the incidents.”

In the end, the researchers believe the framework could be followed by organizations to create systems that would be more flexible than existing security offerings and give a deeper insight into logs and records, allowing admins to catch both isolated incidents and long-running espionage operations. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/09/28/insider_threat_software/