STE WILLIAMS

Microsoft Issues Advisory for Windows Hello for Business

An issue exists in Windows Hello for Business when public keys persist after a device is removed from Active Directory, so long as the directory itself remains, Microsoft reports.

Microsoft has issued an advisory (ADV190026) to provide guidance to businesses following the disclosure of an issue in Windows Hello for Business (WHfB). The problem arises when public keys persist after a device is removed from Active Directory, so long as the directory itself remains.

The issue was discovered by Michael Grafnetter, IT security researcher and trainer for CQURE and GOPAS, who has been investigating the inner workings of WHfB and discovered multiple attack vectors for the passwordless authentication tool. One of these vectors involves the msDS-KeyCredentialLink attribute, which an attacker could abuse to maintain persistence.

Today’s advisory addresses another of his findings. When someone sets up WHfB, the WHfB public key is written to the on-premises Active Directory, and the keys are tied to a user and a device that has been added to Azure AD. If the device is removed, its linked WHfB key is considered orphaned. However, these orphaned keys are not deleted even when their corresponding device is. While any authentication to Azure AD using an orphaned key will be rejected, some of these WHfB keys pose a security issue for Active Directory on Windows Server 2016 and 2019 in hybrid or on-premises environments.

An authenticated attacker could collect orphaned public keys created on Trusted Platform Modules (TPMs) affected by CVE-2017-15361, as detailed in the separate security advisory ADV170012, and use them to compute the corresponding WHfB private keys. With a computed private key, the attacker could authenticate as the user within the domain via Public Key Cryptography for Initial Authentication (PKINIT).

“This attack is possible even if firmware and software updates have been applied to TPMs that were affected by CVE-2017-15361 because the corresponding public keys might still exist in Active Directory,” Microsoft explains in its advisory. The advisory provides guidance for cleaning up orphaned public keys that were created on unpatched TPMs before the updates detailed in ADV170012 were applied.
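The CVE-2017-15361 (ROCA) weakness is detectable from the public key alone, which is why orphaned public keys left in a directory matter at all. As a hedged illustration, not the official detector, the published fingerprint idea is: a vulnerable modulus N has residues modulo small primes that always fall inside the multiplicative subgroup generated by 65537. The sketch below uses only a handful of the roughly 40 small primes the real test covers:

```rust
use std::collections::HashSet;

/// Real RSA moduli are 2048-bit, so take N as a decimal string and
/// reduce it digit by digit modulo a small prime.
fn mod_decimal(n: &str, p: u64) -> u64 {
    n.bytes().fold(0u64, |acc, d| (acc * 10 + u64::from(d - b'0')) % p)
}

/// The multiplicative subgroup {65537^k mod p}.
fn residues_of_65537(p: u64) -> HashSet<u64> {
    let mut set = HashSet::new();
    let mut x = 1u64;
    loop {
        set.insert(x);
        x = x * (65537 % p) % p;
        if x == 1 { break; }
    }
    set
}

/// ROCA fingerprint: N is suspect iff, for every prime tested,
/// N mod p lies in the subgroup generated by 65537. The published
/// detector tests ~40 primes up to 167; a few suffice to illustrate.
fn is_roca_suspect(n_decimal: &str) -> bool {
    const PRIMES: [u64; 5] = [11, 13, 17, 19, 37];
    PRIMES
        .iter()
        .all(|&p| residues_of_65537(p).contains(&mod_decimal(n_decimal, p)))
}

fn main() {
    // A modulus whose residue mod 11 is 2 (outside {1, 10}) cannot
    // have been generated by the flawed key-generation routine.
    assert!(!is_roca_suspect("2"));
}
```

A production scanner would use a bignum library, the full prime list, and parse keys straight from the directory; the point here is only that the test needs nothing but the public key.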

So far, there is no evidence to suggest this issue has been used to attack machines in the wild, officials say. Read mitigation steps in Microsoft’s full advisory here.

Check out The Edge, Dark Reading’s new section for features, threat data, and in-depth perspectives. Today’s top story: “A Cause You Care About Needs Your Cybersecurity Help.”

Kelly Sheridan is the Staff Editor at Dark Reading, where she focuses on cybersecurity news and analysis. She is a business technology journalist who previously reported for InformationWeek, where she covered Microsoft, and Insurance Technology, where she covered financial …

Article source: https://www.darkreading.com/application-security/microsoft-issues-advisory-for-windows-hello-for-business/d/d-id/1336514?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Navigating Security in the Cloud

Underestimating the security changes that need to accompany a shift to the cloud could be fatal to a business. Here’s why.

The cloud has changed a lot about the way we conduct business, but one of the most significant shifts has been in the realm of cybersecurity. The expansion of workloads running in the cloud has driven an uptick in security attacks focusing on cloud technologies. As organizations grapple with an increased attack surface, data breaches have become more common, wider-reaching, and costlier than ever.

How is this different from the workplace of years past? Companies used to run their software on-premises, which meant a firewall was all you needed to protect your employee and customer data. IT teams also relied on a monolithic tech stack, deploying apps from a single vendor that offered closed systems and thick client apps. And before the cloud made remote work easy and accessible, employees had to work in central office locations in order to access company technology.

The shift to the cloud has changed all that. Software has moved from on-premises to cloud-native or hybrid environments, and companies are implementing best-of-breed tech stacks that rely on multiple vendors. At the same time, cloud and mobile technologies are allowing employees to access their work from anywhere in the world, ushering in a new age of freedom while leaving the traditional approach to security with limited means of control. These four steps can set your company up for success in today’s technology landscape.

Step 1: Adopt a zero-trust mentality.
Because an increasing number of organizations rely on cloud services, companies can’t assume that users can be trusted simply based on the network they’re on. Rather, every user must be verified, regardless of device, location, or IP address, before gaining access to corporate data or applications.

There are a couple of ways to implement this approach. First, security teams must understand the true identity of who is accessing their network, and monitor and log all network traffic. This means establishing security checkpoints and enforcing rules about who can continue to access the network past each checkpoint. [Editor’s Note: Okta, along with other vendors, markets software to manage and secure user authentication processes in the cloud.] Second, companies need to keep a close eye on access permissions and give the minimum necessary amount of access to every user. For example, if a salesperson doesn’t need access to hiring information or customer login credentials, don’t give it to them. The more a user or employee can access, the higher the risk of a compromised account.
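The deny-by-default, minimum-access idea above can be sketched in a few lines. Everything here (role names, permission strings) is illustrative and not tied to any vendor’s product:

```rust
use std::collections::{HashMap, HashSet};

/// Minimal least-privilege sketch: a user's role holds only the
/// permissions explicitly granted to it; everything else is denied.
struct AccessPolicy {
    grants: HashMap<String, HashSet<String>>, // role -> permissions
}

impl AccessPolicy {
    fn new() -> Self {
        AccessPolicy { grants: HashMap::new() }
    }

    fn grant(&mut self, role: &str, permission: &str) {
        self.grants
            .entry(role.to_string())
            .or_default()
            .insert(permission.to_string());
    }

    /// Deny by default: unknown roles and ungranted permissions both fail.
    fn allowed(&self, role: &str, permission: &str) -> bool {
        self.grants
            .get(role)
            .map_or(false, |perms| perms.contains(permission))
    }
}

fn main() {
    let mut policy = AccessPolicy::new();
    policy.grant("sales", "crm:read");

    assert!(policy.allowed("sales", "crm:read"));
    // The salesperson was never granted HR access, so it is refused.
    assert!(!policy.allowed("sales", "hr:read"));
}
```

The design choice worth noting is that `allowed` can only return `true` for something explicitly granted; there is no fallback path that widens access.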

Step 2: Implement micro-segmentation.
Traditional security measures like firewalls are good at regulating what comes in and out of your network. But today, when the workloads themselves are in the cloud and virtual, and access is happening from all over, knowing who is coming in and out of your network doesn’t make you any more secure. This is where micro-segmentation comes in. Micro-segmentation will allow your team to establish customized policies for different segments, giving you more comprehensive security overall. These policies can also be deployed virtually, making a micro-segmented approach ideal for a cloud environment.

Step 3: Encrypt data and move to a passwordless experience.
Encryption is one of the simplest ways to secure your data. Only people with the correct passwords or keys can access encrypted data, so it’s a straightforward way to secure information stored in the cloud. However, Have I Been Pwned (HIBP) reports that hackers have managed to breach 555,278,657 passwords, and research Okta commissioned from Opinium in May revealed that over a third of users reuse the same passwords for multiple accounts.

Ultimately, this reinforces why password-specific policies should not be the last line of defense for your organization. In fact, because login credentials are compromised and reused so frequently, going passwordless is often the best long-term way to keep data safe. Your team can eliminate passwords altogether by investing in technology such as physical security keys and relying on more robust contextual access systems. As identity management continues to evolve, a passwordless future is becoming more realistic for organizations. But if going passwordless is not an option at your organization, you should, at a minimum, establish strong password rules. Greater password lengths encourage the use of passphrases, which provide better protection against brute-force attacks. Barring the reuse of old passwords likewise curbs the potential for future account compromises.
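The two minimum rules the paragraph above recommends, longer passphrases and no reuse, reduce to a simple check. This is a sketch under stated assumptions: a real system would compare against salted hashes of prior passwords, never plaintext, and the length threshold is illustrative:

```rust
/// Accept a candidate password only if it is long enough to be a
/// passphrase and has not been used before. `history` holds plaintext
/// strings purely to keep the sketch short.
fn acceptable(candidate: &str, history: &[&str], min_len: usize) -> bool {
    candidate.chars().count() >= min_len && !history.contains(&candidate)
}

fn main() {
    let history = ["correct horse battery staple"];

    // A fresh, long passphrase passes.
    assert!(acceptable("purple llama sings at dawn", &history, 16));
    // Too short to be a passphrase: rejected.
    assert!(!acceptable("hunter2", &history, 16));
    // Reused: rejected even though it is long enough.
    assert!(!acceptable("correct horse battery staple", &history, 16));
}
```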

Step 4: Don’t forget life-cycle management.
In November 2018, an employee who was fired by the Chicago Public Schools system stole personal data on 70,000 people from a private CPS database. This scenario is every HR and security executive’s worst nightmare: a disgruntled employee leaves the company and retaliates by taking sensitive data along. Unfortunately, even when an employee leaves on good terms, their cloud accounts could later be breached if they are left open, or orphaned. Although immediate offboarding can be daunting, it’s a vital part of security and worth the investment. As soon as someone stops working at your organization, you need to cut off their access to any data.

Onboarding is equally important. When a new employee starts, a streamlined onboarding process that requires them to set up secure accounts and participate in security training leaves less room for error and risk down the road.

Underestimating the security changes that need to accompany a shift to the cloud could be fatal to a business. As soon as your company starts leveraging cloud tools, you need to embed security in your plan from day one. By adopting a zero-trust approach and carefully managing who can access your data and network, you’ll go a long way toward preventing a crippling data breach.


Diya Jolly is chief product officer at Okta. As CPO, Diya leads product innovation for both workforce and customer identity. She plays an instrumental role in furthering Okta’s product leadership, enabling any organization to use any technology. Diya joined Okta from Google, …

Article source: https://www.darkreading.com/cloud/navigating-security-in-the-cloud/a/d-id/1336477?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

The Edge Cartoon Contest: You Better Watch Out …

Feeling creative this holiday season? Submit your caption in the comments, and our panel of experts will reward the winner with a $25 Amazon gift card.

We provide the cartoon. You write the caption!

Submit your caption in the comments, and our panel of experts will reward the winner with a $25 Amazon gift card. The contest ends Dec. 31. If you don’t want to enter a caption, help us pick a winner by voting on the submissions. Click thumbs-up for those you find funny; thumbs-down, not so. Editorial comments are encouraged and welcomed.

Click here for contest rules. For advice on how to beat the competition, check out How To Win A Cartoon Caption Contest.

John Klossner has been drawing technology cartoons for more than 15 years. His work regularly appears in Computerworld and Federal Computer Week. His illustrations and cartoons have also been published in The New Yorker, Barron’s, and The Wall Street Journal.

Article source: https://www.darkreading.com/edge/theedge/the-edge-cartoon-contest-you-better-watch-out--/b/d-id/1336517?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

(Literally) Put a Ring on It: Protecting Biometric Fingerprints

Kaspersky creates a prototype ring you can wear on your finger for authentication.

It sounds like science fiction, but it’s basically another mode of authentication: Kaspersky has developed a wearable ring with a stone storing a unique “fingerprint” for authenticating to biometric systems.

The security firm worked with a 3D accessory designer to create the ring, whose stone contains a unique fingerprint made up of conductive fibers set in a rubber compound.

“That ring can be used to authenticate the user with biometric systems, such as a phone or a smart home door lock. And if the data of the ring fingerprint leaks, the user can block this particular ring and replace it with a new one — and their own unique biometric data won’t be compromised,” the company said in a blog post announcing the concept.

When used to authenticate to a smartphone, the phone’s sensor “reads” the biometric stone, which Kaspersky says has the shape and texture of a real finger (ew?).

When the stone is pressed on a fingerprint sensor, the conductivity of the fibers activates the reader. The sensor then measures both the conductivity and the pattern of the fibers, comparing that physical pattern with the fingerprint enrolled on the device.

But don’t bother trying to find one for Christmas. “The ring is just a concept,” not a product, to raise awareness about security issues of biometrics, according to Kaspersky.

Read more here.


Dark Reading’s Quick Hits delivers a brief synopsis and summary of the significance of breaking news events. For more information from the original source of the news item, please follow the link provided in this article.

Article source: https://www.darkreading.com/application-security/(literally)-put-a-ring-on-it-protecting-biometric-fingerprints/d/d-id/1336516?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Black Hat Europe Q&A: Understanding the Ethics of Cybersecurity Journalism

Investigative journalist Geoff White chats about why now is the right time for his Black Hat Europe Briefing on hackers, journalists, and the ethical ramifications of cybersecurity journalism.

Now that major data leaks are a semi-regular occurrence, it’s more important than ever for cybersecurity professionals to understand how the media covers them, and there’s no better place to do that than Black Hat Europe in London this week.

In his Black Hat Europe Briefing this afternoon, “Hackers, Journalists and the Ethical Swamp,” investigative journalist Geoff White (who has covered technology for, among others, BBC News, Channel 4 News, and Forbes) takes five high-profile hacking incidents and analyzes how they reflect key trends and tactics for working with (and in some cases manipulating) the news media.

White chats with us a bit about why now is the right time for a talk like this, and what practical takeaways Black Hat Europe attendees can expect from him at the event.

Alex: Tell us a bit about yourself and your path into security work.

Geoff: I’m an investigative journalist and I’ve covered tech for, among others, BBC News, Channel 4 News and most recently The Times. I started doing security coverage while at Channel 4 News about ten years ago when I (and a lot of other people) started to realize the amount of power the tech companies were wielding, and how swaths of society were becoming increasingly reliant on the industry.

Since then I’ve covered election hacking, the Snowden leaks, Bitcoin fraud, bank hacking and energy sector attacks (I’ve got a book coming out in July next year covering all of this, called Crime Dot Com).

Alex: What inspired you to pitch this talk for Black Hat Europe?

Geoff: As part of my book I wrote a chapter about how “hackers” (in all senses of the word) interact with journalists. It’s an experience I’ve been through a few times, and of course there are many examples of data leakers, whistleblowers and occasionally cyber criminals leaking information to journalists either directly or indirectly.

But it struck me that there’s been a sea-change in tactics and approaches: we’re now at a stage where huge swathes of information can be strategically leaked either directly online or via the media, some sections of which are increasingly struggling for attention and therefore keen to cover the latest leaks regardless of the source.

For those whose information is leaked, the effects can be ruinous. It’s an area fraught with ethical challenges, and I don’t think there’s enough open and honest conversation about it.

Alex: What do you hope Black Hat Europe attendees will get out of attending your talk?

Geoff: I hope the talk will get them thinking about the ethics of data leaking and journalism. At one end of the spectrum there are (as there always have been) cases of anonymous whistleblowers working with diligent journalists to reveal stories of huge public importance. At the other end, there are cyber-criminals strategically and cynically dumping sensitive stolen information to harm a target.

But in between there is a big grey area, and if journalists and ethical data leakers want to get public support for their work, they need to be aware of the pitfalls.

Alex: Can you share a specific example of a leak in the big grey area between “of high public importance” and “weaponized disclosure” that you think was handled poorly, and how it could have been handled better?

Geoff: For me the most troubling example was the hacking of Sony Pictures Entertainment in late 2014, which has now been attributed by the US to the North Korean government.

Of course, at the time it was far from clear who was behind it, but nonetheless you had a situation where journalists were being emailed directly with stolen information by unknown sources. They were spoon-fed a series of increasingly damaging leaks that, in hindsight, had been calculated to do maximum damage to the company.

You could argue that it was in the public interest to expose sensitive information about a major media company, but the moral high ground got increasingly hard to occupy as more information emerged about who was behind the hack, and I worry that damages public support for journalism.

Learn more about Geoff’s Briefing (and lots of other interesting cybersecurity content) in the Briefings schedule for Black Hat Europe, which returns to the ExCeL in London December 2-5, 2019.

For more information on what’s happening at the event and how to register, check out the Black Hat website.

Article source: https://www.darkreading.com/black-hat/black-hat-europe-qanda-understanding-the-ethics-of-cybersecurity-journalism/d/d-id/1336519?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

FBI: Russia-based FaceApp is a ‘potential counterintelligence threat’

Last summer, users geeked out, privacy lovers freaked out, and at least one lawmaker fretted about an aging/expression-tweaking/gender-swapping mobile app called FaceApp (no relation to Facebook) that hails from Russia.

We’re on it, the FBI said last week, noting that it views any app or product coming out of Russia as a “potential counterintelligence threat.”

In a 25 November letter responding to concerns raised by Senator Chuck Schumer, FBI assistant director Jill Tyson said that the agency is investigating FaceApp over its ties to Russia.

FaceApp lets you do things like, say, get a handle on what you’ll look like if you still want to go to Hogwarts when you’re 80.

The app also pulls what you can think of as a FaceGrab: what its license calls a “perpetual, irrevocable, nonexclusive, royalty-free, worldwide, fully-paid, transferable sub-licensable license” not just to users’ manipulated likenesses, but also to associated profile data such as username, location, or profile photo.

In July 2019, Sen. Schumer had written to the director and chair of the Federal Trade Commission (FTC), calling on the FTC and FBI to look into the national security and privacy risks posed by the millions of Americans who were handing over full, irrevocable access to their personal photos and data to an app from a company – Wireless Lab – based in Russia.

What Schumer said at the time:

FaceApp’s location in Russia raises questions regarding how and when the company provides access to the data of US citizens to third parties, including potentially foreign governments.

At the time, Wireless Lab responded by saying that FaceApp only uploads photos selected by users for editing, and that it may store them in the cloud because that’s where it does its processing. It usually deletes images from its servers within 48 hours, the company said. Also, it said that it doesn’t send user data to Russia, instead processing images on US cloud providers’ infrastructure.

US, Russia, wherever: they’re all grabbing data

Forbes spoke with Ian Thornton-Trump, a CompTIA faculty member, who said that an adversarial nation like Russia, or any other adversarial country, would happily hoover up whatever data it can get from any advantageously positioned US person:

Russia would very much appreciate and encourage the use of FaceApp by anyone with a security clearance and their immediate family.

But how is a FaceApp data grab any different from a US company’s data grab?

We needn’t look far for examples of grabby US apps: Facebook, Google and Apple are all facing antitrust and privacy investigations.

At any rate, as far as nation state espionage goes, LinkedIn is a spy’s playground. A few years back, Germany’s spy agency – Bundesamt für Verfassungsschutz (BfV) – published eight of the most active profiles it says were being used on LinkedIn to contact and lure German officials for espionage purposes.

A more recent example: in June 2019, we saw a LinkedIn profile for “Katie Jones,” an extremely well-connected, attractive, young redhead and purportedly a Russia and Eurasia Fellow at the top think-tank Center for Strategic and International Studies (CSIS). An Associated Press investigation concluded that the profile was actually a deepfake created by artificial intelligence (AI) that managed to convince about 40 people to connect with it.

Thornton-Trump suggested that the hand-wringing over FaceApp’s potential risk to national security amounts to little more than pre-election “political pandering.”

I feel like this is a political opportunity to stoke the narrative of ‘tech company in Russia is bad; tech company in USA is good.’

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/nPnVqJv23S8/

Microsoft looks to Rust language to beat memory vulnerabilities

Microsoft is pressing ahead with an ambitious plan to de-fang common vulnerabilities hiding in old Windows code by using an implementation of the open-source Rust programming language.

The company’s been working on the research initiative, dubbed Project Verona, for some time, but a recently posted presentation from September’s Collaborators’ Workshop adds to the impression of its growing importance.

Traditionally, Windows software requiring fine-grained control, such as device drivers and low-level OS functions like storage and memory management, is written in C or C++.

But that control comes at the expense of mistakes that lead to insecure code, particularly memory issues, which account for up to 70% of the vulnerabilities Microsoft finds itself patching later.

Most of these flaws were introduced long ago and sit in legacy code that would take enormous resources to rewrite from scratch, with no guarantee the rewrite wouldn’t eventually suffer the same problems.

Memory safe

Rust, by contrast, has built-in protections against common memory problems such as use after free, type confusion, heap and stack corruption, and uninitialized use, which can afflict the C and C++ languages that Windows is written in.
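A small example shows the kind of compile-time guarantee at work. Moving a buffer transfers ownership, so a use-after-free that C or C++ would happily compile is rejected by the Rust compiler before the program ever runs:

```rust
/// Takes ownership of the buffer; the allocation is freed when this
/// function returns, because `buf` goes out of scope here.
fn consume(buf: Vec<u8>) -> usize {
    buf.len()
}

fn main() {
    let data = vec![1u8, 2, 3];

    // Ownership of the allocation moves into `consume`.
    let n = consume(data);

    // Uncommenting the next line is a compile error ("value used after
    // move"), which is exactly how Rust turns a would-be use-after-free
    // into a build failure instead of a runtime vulnerability.
    // println!("{:?}", data);

    assert_eq!(n, 3);
}
```

The same ownership rules also force initialization before use, covering the "uninitialized use" class mentioned above.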

Microsoft has been busy rewriting unnamed software components in Rust to see whether the concept works despite the language’s limitations, and the fact it is still mentioning it suggests it has found some success.

Project Verona’s Rust alternative now has a “production quality” runtime, a prototype interpreter, and a type checker, which the presentation said would be made available as an open-source tool within weeks.

It’s as if Microsoft is admitting that, rather than badgering its developers to write safe code for the next 10 years, a better option is simply to limit the parameters of the tools they use to create it.

If one were to pick holes, it might be to ask why it’s taken Microsoft so long to get around to adopting a memory-safe language such as Rust, years after Mozilla started sponsoring it to improve the security of its Firefox browser.

As the then-director of strategy at Mozilla Research, Dave Herman, noted in 2016:

Our preliminary measurements show the Rust component performing beautifully and delivering identical results to the original C++ component it’s replacing – but now implemented in a memory-safe programming language.

Microsoft’s Rust implementation is more complex than Mozilla’s because it needs more capabilities to work across a wider range of components. It’s still not clear when updated code might ship but it is starting to look inevitable at the current rate of progress.

If Microsoft’s enthusiasm for Rust reveals one thing, it’s how the company has become more open-minded about using open-source tools to improve its security, something that surely bodes well for users of its software, including Windows.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/Y3XJT2q7OEs/

Facebook made to ‘correct’ user’s post as Singapore flexes fake-news muscle

Over the past week or so, Singapore has flexed its new fake-news muscle twice. The result: two “amended” Facebook posts.

Singapore passed the law in question – the Protection From Online Falsehoods And Manipulation (POFMA) Act – in May 2019, and it went into effect on 2 October.

POFMA outlaws “false statements of fact”, including statements that an individual knows to be false or misleading and which threaten Singapore’s security, public health, friendly relations with other countries, or elections; or statements that stoke divisions between groups or that lead people to lose faith in the government.

The penalty for not complying with a correction direction order is up to a year in prison for an individual, and/or a fine of up to SG $20,000 (USD $14,650, £11,284). For a business – say, an online media platform like Facebook – the fine can be up to SG $500,000 (USD $366,249, £281,811). The fines and/or prison sentences shoot up for people who run fake online accounts or who use bots to spread fakery.

POFMA is considered one of the most far-reaching anti-fake-news laws in recent years, and it’s sparked imitation: Nigerian lawmakers have proposed a law that would jail people for lying on social media.

The first target: an opposition politician

Singapore first invoked the law last week, compelling an opposition politician to amend a 13 November post in which he blamed the government for its failing investment in a Turkish restaurant chain.

In the original post, British-born Brad Bowyer had accused the government of using “false and misleading statements” to smear reputations. Finance Minister Heng Swee Keat, under POFMA, asked Bowyer to retract implications that the Singaporean government had influenced investments made by two state investors that Bowyer had said had made bad financial moves.

He amended the post of his own accord, writing at the top that “this post contains false statements of fact” and telling readers to click on a government link for “the correct facts.”

On 24 November – last Monday – he noted in another Facebook post that he had no problem complying with the request, given that it’s fair to have “both points of view and clarifications and corrections of fact when necessary.”

I do my best to use public facts and make informed statements of opinion based on the details I have access too [sic].

I am not against being asked to make clarifications or corrections especially if it is in the public interest.

The second target: an ex-pat dissident

Singapore’s second target was less compliant, so the government instead flexed its muscle at Facebook. This time, it was an ex-pat living in Australia – Alex Tan – who refused to tweak his government-displeasing Facebook post, in spite of the fact that, as he admitted, it was based on hearsay.

According to BuzzFeed News, Alex Tan wasn’t even aware that he’d been served with a “correction direction” until a friend contacted him over Messenger on 28 November – last Thursday. The friend told Tan that Singaporean media outlets were reporting that one of his Facebook posts had been determined to be fake news.

The correction direction focused on a 23 November post on the anti-Singaporean government State Times Review Facebook page, run by Tan, that alleged that a Singaporean whistleblower had been arrested. The government denied the arrest.

Tan conceded that his claim about the arrest – based on a tip – may not have been accurate, calling it “sensationalized”. That doesn’t mean he’s going to slap the fake-news disclaimer on the post, though, he told BuzzFeed:

Basically, on the same day [I became aware of the order], I put up a post saying that I wouldn’t comply with orders from a foreign government.

So Singaporean authorities instead pulled their first-ever bypass: on Friday, Singaporean minister for home affairs K. Shanmugam instructed the POFMA office to order Facebook to correct the post… which it did.

Singaporean Facebook users saw this notice below Tan’s post:

Facebook is legally required to tell you that the Singapore government says this post has false information.

A Facebook spokesperson told The Straits Times that yes, the company had applied a label to the post, as required by Singapore law. Let’s hope free expression doesn’t suffer, the spokesperson said:

As it is early days of the law coming into effect, we hope the Singapore Government’s assurances that it will not impact free expression will lead to a measured and transparent approach to implementation.

As far as Tan is concerned, Facebook handled the situation well, given that it refrained from saying that his post contained misleading statements, only that the Singapore government says it contained false information:

Facebook did a great job. They didn’t say that this post contains a falsehood.

Facebook’s action is in keeping with its most recent step in the evolution of how it handles fake news. In April 2018, in its ongoing fight against fakery, Facebook said that it had started putting some context around the sources of news stories. That includes all news stories, whether from sources with good reputations, junk factories, or the junk-churning bot armies making money from them.

You might recall that in March 2017, Facebook started slapping “disputed” flags on what its panel of fact-checkers deemed fishy news.

You might also recall that the flags just made things worse. The flags did nothing to stop the spread of fake news, instead only causing traffic to some disputed stories to skyrocket as a backlash to what some groups saw as an attempt to bury “the truth”.

“I don’t really care”

As far as the threat of jail time or fines go, Tan couldn’t care less. He’s got no plans to leave Australia to return to Singapore, so what can the Singapore government do to him?

To be honest, as long as I stay in Australia, I don’t really care. This is way below my other priorities. I’ve got my full-time work here. I just bought a new apartment. I’m not really losing sleep over it.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/M8F1E6yjtlY/

Steam players – beware of fake skins as phishers try to hijack accounts

Phishing scammers have once again targeted users of the popular Steam gaming service, it was revealed this week.

The credential-stealing scam, first reported by security researcher ‘nullcookies’ on Twitter, offers new skins every day. A skin is a modification providing a new look and feel for items in Steam’s online games, and they are in hot demand. There are entire digital marketplaces dedicated to trading them.

The scammers post to a Steam user’s profile. A typical message reads:

Dear winner! Your SteamID is selected as winner of Weekly giveaway. Get your ☆ Karambit | Doppler on giveavvay.com.

A quick search reveals over a hundred Steam profiles displaying similar text.

The URL, which Cloudflare now flags as a suspected phishing scam, appears to be down. The screenshot posted on nullcookies’ Twitter account shows a site offering a $30,000 giveaway, featuring a selection of 26 loot boxes.

Bleeping Computer explains that the site asked for a user’s login credentials, promising that in exchange, the words STEAM RAIN would appear in a chat window on the left of the screen. Clicking on the link would score the victim one of the free skins on offer that day, said the scam site.

The chat window was, of course, a fake, as was the whole proposition. Victims who clicked on the link met a fake Steam login form that took their information for the crooks to use. That enabled them to perpetrate more fraud by using the victim’s account to post the same phishing link.

This phishing attack is notable because it is so convincing. Often, phishing websites feature poor language or spelling mistakes, but this scam went to extra lengths to convince victims that it was real. For example, the crooks reportedly used JavaScript to randomly select phrases from a list, periodically updating the chat window.
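The reported trick is simple to implement. The sketch below is a hypothetical reconstruction of the technique, not the actual scam code; the names (`fakeWinners`, `pickMessage`, `chatFeed`) are illustrative only.

```javascript
// Hypothetical sketch of how a scam page could fake a "live" chat feed
// by drawing canned messages at random and appending them on a timer.
const fakeWinners = [
  "user_882 just won a Karambit | Doppler!",
  "dragonSlayer claimed a free AWP skin!",
  "STEAM RAIN for player notaBot99!",
];

// Pick one canned message at random from the list.
function pickMessage(messages) {
  return messages[Math.floor(Math.random() * messages.length)];
}

// In a browser, a timer would then append a new "winner" line every
// few seconds, simulating live activity:
// setInterval(() => {
//   const line = document.createElement("div");
//   line.textContent = pickMessage(fakeWinners);
//   document.getElementById("chatFeed").appendChild(line);
// }, 3000);
```

A rotating feed like this costs the crooks a dozen lines of code but makes the giveaway look busy and legitimate to a casual visitor.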

The site even included a faux Steam Guard two-factor authentication (2FA) screen that sent a special access code to the address the user entered, just as Steam’s real 2FA mechanism does. This all helped to lull the user into a false sense of security.

Phishing scams gravitate towards heavily used online services like banks and popular email account providers. Steam is one of the most successful online gaming providers, peaking at around 14.5 million concurrent users this week. It’s no wonder, then, that this isn’t the only phishing attack that its users have endured.

Other scams have reportedly lured gamers into clicking on screenshots of items offered for sale, triggering drive-by downloads, while some phishers have pretended to be Steam’s operators warning of a security problem.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/uYhtPOqzZ_0/

Mozilla locks nosy Avast, AVG extensions out of Firefox store amid row over web privacy

The Firefox extensions built by Avast have been pulled from the open-source browser’s online add-on store over privacy fears.

Adblock Plus founder Wladimir Palant confirmed this week Mozilla has taken down the Avast Online Security and Avast-owned AVG Online Security extensions he reported to the browser maker, claiming the code was snooping on users’ web surfing.

The problem, as Palant has been documenting on his blog for some time, is that the extensions – which offer to do things like prevent malware infections and phishing – may collect far more user information than they need to perform their advertised functions.

According to Palant, the Avast extensions, when installed in your browser, track the URL and title of every webpage you visit, and how you got to that page, along with a per-user identifier and details about your operating system and browser version, plus other metadata, and then transmit all that info back to Avast’s backend servers. The user identifier is not always sent, according to Palant: it may not be disclosed if you have Avast Antivirus installed.

The rub seems to be that Avast says it needs this personal data to detect dodgy and fraudulent websites, while Palant argues the company goes too far and wanders into spyware territory. While Avast’s explanation is plausible, there are much better and safer ways to check visited pages for nastiness, typically involving cryptographic hashes of URLs, than firing off all visited web addresses to an Avast server, we note.

Palant also accused the Avast SafePrice and AVG SafePrice extensions of similarly harvesting people’s information: SafePrice checks you’re getting a good deal when shopping online.

He pointed out that in 2013, three years before it was acquired by Avast, AVG bought a company called Jumpshot, which touts “clickstream data” that includes “100 million global online shoppers and 20 million global app users. Analyze it however you want: track what users searched for, how they interacted with a particular brand or product, and what they bought. Look into any category, country, or domain” – which sounds a lot like the data the Avast and AVG extensions collect.

It’s just not a great look for the security outfit, even if there is no connection between the services. In any case, the approach taken by Avast appears to have fallen foul of Mozilla’s recently updated rules for extensions on privacy, and so, the add-ons were kicked out of the Firefox store.

“The amount of data collected here exceeds by far what would be considered necessary or appropriate even for the security extensions, for the shopping helpers this functionality isn’t justifiable at all,” Palant argued.

Banned but not disabled

While the extensions are no longer accessible from the official Firefox add-on service, they still work with the browser, so those currently using the extensions will still be able to do so.

Avast acknowledged the take-down, and told The Register it was working with Mozilla on a resolution.

“We have offered our Avast Online Security and SafePrice browser extensions for many years through the Mozilla store,” an Avast spinner told us. “Mozilla has recently updated its store policy and we are liaising with them in order to make the necessary adjustments to our extensions to align with new requirements.

“The Avast Online Security extension is a security tool that protects users online, including from infected websites and phishing attacks. It is necessary for this service to collect the URL history to deliver its expected functionality. Avast does this without collecting or storing a user’s identification.”

There seems to be some confusion over that last part: Avast says it doesn’t collect user identifiers, yet according to Palant, the extensions may generate a per-user identifier code, called userid, that’s sent with each URL.

Palant, meanwhile, is now hoping to convince Google to follow Mozilla’s lead and block the Avast add-ons for Chrome and Opera users.

“Google Chrome is where the overwhelming majority of these users are,” the programmer noted. “The only official way to report an extension here is the ‘report abuse’ link. I used that one of course, but previous experience shows that it never has any effect.” ®


Article source: https://go.theregister.co.uk/feed/www.theregister.co.uk/2019/12/04/avast_avg_mozilla_takedown/