STE WILLIAMS

IoT security? We’ve heard of it, says UK.gov waving new regs

The British government has finally woken up to the relatively lax security of IoT devices, and is lurching forward with legislation to make gadgets connected to the web more secure.

The Department of Digital, Culture, Media and Sport said it will require makers of IoT hardware to ship devices with unique passwords that cannot be reset to a factory default setting.
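
The regulation doesn’t prescribe how manufacturers should achieve this, but one common approach is to derive a unique default password for each unit from its serial number and a per-product-line factory secret. A minimal Python sketch of that idea – the scheme, names, and secret handling are illustrative assumptions, not anything mandated by the law:

```python
import hmac
import hashlib

# Hypothetical sketch: derive a unique, reproducible default password for each
# device from its serial number and a factory secret. The regulation mandates
# unique defaults; this derivation scheme is just one way a manufacturer
# might implement that.

ALPHABET = "abcdefghjkmnpqrstuvwxyz23456789"  # skip ambiguous chars (i, l, o, 0, 1)

def default_password(serial: str, factory_secret: bytes, length: int = 10) -> str:
    # HMAC-SHA256 gives a per-serial digest; map its bytes onto the alphabet.
    digest = hmac.new(factory_secret, serial.encode(), hashlib.sha256).digest()
    return "".join(ALPHABET[b % len(ALPHABET)] for b in digest[:length])

secret = b"per-product-line secret, kept in the factory HSM"
p1 = default_password("SN-000123", secret)
p2 = default_password("SN-000124", secret)
print(p1, p2)  # two devices, two different default passwords
```

Because the derivation is deterministic, the production line can print each password on the device label without storing a password database; a real deployment would keep the factory secret in an HSM and worry about modulo bias, which this sketch ignores.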

The regulation will also require these companies to “explicitly state” how long they will continue to support devices when customers purchase the product, and appoint someone – one throat to choke – to act as a point of contact so that punters can more easily report issues.

“Our new law will hold firms manufacturing and selling internet-connected devices to account and stop hackers threatening people’s privacy and safety,” Digital Minister Matt Warman – a former Telegraph hack – said in a statement. “It will mean robust security standards are built in from the design stage and not bolted on as an afterthought.”

The regulation is a belated step in the right direction, some in the infosec community told us. “The results of the consultation show strong support for regulation of the wild west that is IoT security,” said Ken Munro, a security researcher at infosec firm Pen Test Partners. “Next, the government needs to step up and legislate quickly to protect us from those smart device vendors who don’t treat our privacy and security with the respect they should do.”

But others, such as Jason Nurse, an assistant professor in cybersecurity at the University of Kent, worry how effective the regulations will be in practice. “If manufacturers require consumers to set up new passwords at product installation, these individuals will need to manage these passwords for each connected device,” he told us.

“This could significantly increase the number of passwords the average household has to manage – and there are also questions about what happens when such passwords are forgotten or misplaced.”

Smart devices have become a booming part of consumer electronics in recent years. But experts have warned that many devices are vulnerable to hackers and eavesdropping. In December, hackers were able to infiltrate the bedroom of an eight-year-old child via a Ring home security camera installed in her bedroom. The Amazon-owned company unveiled new privacy features at CES earlier this month. ®

Sponsored:
Detecting cyber attacks as a small to medium business

Article source: https://go.theregister.co.uk/feed/www.theregister.co.uk/2020/01/28/uk_government_cracks_down_on_iot_security/

Mozilla bans Firefox extensions for executing remote code

Every time it looks as if Mozilla is getting on top of the problem of malicious or risky extensions, it finds itself having to step in to block another batch.

In the latest action, noticed by a ZDNet reporter, Mozilla banned 197 extensions, 129 of which were published by one B2B software developer, 2Ring.

The nature of the banned extensions is difficult to determine – Mozilla lists them on Bugzilla using only the IDs they used on addons.mozilla.org (AMO) – however, 2Ring’s products appear to be designed for organisations using Cisco telephony and other software products.

These will now be disabled in Firefox and it will no longer be possible to install or update them.

What did the extensions do to incur Mozilla’s wrath? According to the reviewer who made the decision:

I’ve reviewed the add-ons and confirmed they are executing remote code.

Dredging a swamp

The hard ban on extensions that execute remote code seems to have happened around the time pre-release versions of Firefox 72 hove into view, but this was only noticed by some developers and users when the company abruptly banned several page translation extensions in November.

At the time, some developers complained that Mozilla hadn’t communicated the change and offered no workaround for a small number of cases where it might prove useful.

In fact, the policy on remote code goes back to the early days of Firefox but was apparently not always enforced. Mozilla’s policy is now unambiguous – add-ons must be self-contained and not load remote code, which opens up the user to all sorts of risks.
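
To see why the policy is enforceable at review time at all, consider what “loading remote code” looks like in extension source. The snippet below is a deliberately naive heuristic scanner of the kind a reviewer’s tooling might run – it is not Mozilla’s actual review pipeline, and the patterns are illustrative assumptions:

```python
import re

# Hypothetical heuristic for flagging remote-code loading in extension
# JavaScript -- NOT Mozilla's real review tooling. It looks for source that
# fetches a remote URL and evaluates or injects the result.
REMOTE_CODE_PATTERNS = [
    re.compile(r"""\beval\s*\("""),                         # eval(...) of anything
    re.compile(r"""importScripts\s*\(\s*['"]https?://"""),  # remote worker scripts
    re.compile(r"""\.src\s*=\s*['"]https?://.*\.js"""),     # injected <script src=...>
    re.compile(r"""new\s+Function\s*\("""),                 # Function constructor
]

def flag_remote_code(js_source: str) -> list[str]:
    """Return the regex patterns (as strings) that matched the source."""
    return [p.pattern for p in REMOTE_CODE_PATTERNS if p.search(js_source)]

suspicious = 'var s=document.createElement("script");s.src="https://cdn.example/payload.js";'
clean = 'browser.tabs.query({active:true}).then(console.log);'
print(flag_remote_code(suspicious))  # at least one pattern matches
print(flag_remote_code(clean))       # []
```

Static heuristics like this are exactly why the obfuscation ban matters: code that hides its intent defeats pattern matching, which pushes reviewers back to manual inspection.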

That implies that, prior to November, extensions loading such code could operate with more freedom, specifically those that were being self-hosted as unlisted extensions rather than served via the AMO.

That doesn’t mean that every extension loading remote code in the past was doing so for malicious reasons, but it underlines how Mozilla is having to tighten controls in the face of growing abuse.

It’s becoming a perpetual game of whack-a-mole.

Last year it slapped a ban on extensions using obfuscated code, such as JavaScript code where the purpose or intention is in some way hidden.

That followed an incident in 2018 where Mozilla banned 23 extensions for doing things the company claimed they shouldn’t have been.

In fact, a growing list of extensions have ended up in hot water in recent months for all sorts of things, including search hijacking, and siphoning off user credentials and other user data.

Once installed, browser extensions can acquire a great deal of power, as a study last year reported when it found a small number trying to bypass basic protections such as the Same Origin Policy (SOP).

One conclusion is that if browsers are mini software platforms, they are only as good as the security imposed by companies such as Mozilla on the developers uploading to them.

This places huge pressure on review teams to spot problems before they occur. Today, most enforcement still happens after the fact.

Browser extensions security tips

As Mozilla points out, many extensions aren’t written by well-known developers, so a deeper dive might be necessary.

  • Install as few extensions as possible, and only from official web stores.
  • Check the reviews and feedback from others who have installed the extension.
  • Pay attention to the developer’s reputation and how responsive they are to questions and how frequently they post version updates.
  • Study the permissions they ask for (in Firefox: Options > Extensions and Themes > Manage) and check they’re in line with the features of the extension. And if these permissions change, be suspicious.
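
The last tip can be made concrete: compare what an extension requests against what its advertised feature plausibly needs. A hypothetical helper – the feature-to-permission mapping here is an assumption for illustration, not a published baseline:

```python
# Hypothetical permission audit: anything an extension requests beyond what
# its feature category plausibly needs deserves scrutiny. The EXPECTED
# mapping below is illustrative, not an official baseline.
EXPECTED = {
    "ad blocker": {"webRequest", "webRequestBlocking", "storage"},
}

def suspicious_permissions(feature: str, requested: set[str]) -> set[str]:
    """Permissions requested beyond what the feature category expects."""
    return requested - EXPECTED.get(feature, set())

# An "ad blocker" asking for browsing history and every site is a red flag:
extra = suspicious_permissions(
    "ad blocker", {"webRequest", "storage", "history", "<all_urls>"}
)
print(extra)
```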

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/NvbQrtROsLI/

Cisco patches bugs in security admin center and Webex

Cisco has patched a critical bug that could give attackers unauthorised access to Firepower Management Centre (FMC), the device that controls all of its security products.

Cisco’s FMC is an administrative controller for the company’s network security products, giving administrators access to firewalls, application controllers, intrusion prevention, URL filtering, and malware protection systems. According to the company’s advisory, issued on 22 January, the vulnerability could allow a remote attacker to execute administrative commands on the device after bypassing authentication.

The problem lies in how the FMC handles authentication responses from Lightweight Directory Access Protocol (LDAP) servers. LDAP is a popular protocol that applications use to access directories (known as directory system agents). The directories hold information about users, including their access credentials.
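
Cisco’s advisory doesn’t say exactly how the FMC mishandles those LDAP responses, but a classic pitfall in this class of bug shows why response handling matters: RFC 4513 allows a simple bind with a username and an empty password – an “unauthenticated bind” – and many servers answer it with success. An application that treats any successful bind as proof of identity is then trivially bypassable. A pure-Python simulation of that pitfall (no real LDAP server; the actual Cisco flaw may well be different):

```python
# Simulation of a well-known LDAP authentication pitfall, offered only to
# illustrate why handling of bind responses matters; Cisco has not published
# the FMC bug's root cause.

def ldap_simple_bind(username: str, password: str) -> bool:
    """Simulated server: unauthenticated binds (empty password) 'succeed'."""
    if password == "":
        return True  # RFC 4513 unauthenticated bind -- reported as success
    return (username, password) in {("admin", "s3cret")}

def login_broken(username: str, password: str) -> bool:
    # BUG: trusts the bind result without rejecting empty credentials first.
    return ldap_simple_bind(username, password)

def login_fixed(username: str, password: str) -> bool:
    # FIX: refuse empty credentials before ever consulting the server.
    if not username or not password:
        return False
    return ldap_simple_bind(username, password)

print(login_broken("admin", ""))  # True  -- authentication bypass
print(login_fixed("admin", ""))   # False
```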

The FMC is only vulnerable if it uses an external LDAP server to authenticate users of its web-based management interface. Cisco advises customers to check these using the product’s administrative interface. Go to the System menu, then Users, and finally External Authentication. Look for an External Authentication Object that is enabled and lists LDAP as its authentication method.

The bug, CVE-2019-16028, has a CVSS score of 9.8. Cisco has patched it in maintenance releases for versions 6.4 and 6.5, which are both available now. It will also introduce maintenance releases for versions 6.2.3 and 6.3.0 in February and May respectively. Until then, customers can use hot fixes for those products. Those using earlier versions should migrate to a fixed release, the company said.

Webex

This was the one critical bug in a collection of 28 advisories that Cisco released last week. It also announced patches for several bugs with high severity, including some in its collaboration products.

One of these, a bug in its Webex Meetings Suite and Meetings Online websites, enables attackers using an iOS or Android device to join a password-protected meeting without authenticating. The flaw exposes unintended meeting information in the mobile app: an attacker who accesses a known meeting ID or URL from a device’s web browser triggers the mobile Webex application to launch and join the meeting.

Another vulnerability in the video endpoint API of its Cisco TelePresence software fails to properly validate user-supplied input. An attacker could exploit it to read and write arbitrary files to the system, but they would need an In-Room Control or administrator account to do so.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/RKqRqaSNXxc/

Facial recognition firm sued for scraping 3 billion faceprints

New York facial recognition startup Clearview AI – which has amassed a huge database of more than three billion images scraped from employment sites, news sites, educational sites, and social networks including Facebook, YouTube, Twitter, Instagram and Venmo – is being sued in a potential class action lawsuit that claims the company gobbled up photos out of “pure greed” to sell to law enforcement.

The complaint (posted courtesy of ZDNet) was filed in Illinois, which has the nation’s strictest biometrics privacy law – the Biometric Information Privacy Act (BIPA).

The suit against Clearview was just one chunk of shrapnel that flew after the New York Times published an exposé about how Clearview has been quietly selling access to faceprints and facial recognition software to law enforcement agencies across the US, claiming that it can identify a person based on a single photo, revealing their real name and far more. From the New York Times:

The tool could identify activists at a protest or an attractive stranger on the subway, revealing not just their names but where they lived, what they did and whom they knew.

Clearview told the Times that more than 600 law enforcement agencies have started using Clearview in the past year, and it’s sold the technology to a handful of companies for security purposes. Clearview declined to provide a list of its customers.

Eric Goldman, co-director of the High Tech Law Institute at Santa Clara University, told the newspaper that the “weaponization possibilities” of such a tool are “endless.”

Imagine a rogue law enforcement officer who wants to stalk potential romantic partners, or a foreign government using this to dig up secrets about people to blackmail them or throw them in jail.

The secretive company “might end privacy as we know it,” the Times predicted in its headline. From the report:

Even if Clearview doesn’t make its app publicly available, a copycat company might, now that the taboo is broken. Searching someone by face could become as easy as Googling a name. Strangers would be able to listen in on sensitive conversations, take photos of the participants and know personal secrets. Someone walking down the street would be immediately identifiable – and his or her home address would be only a few clicks away. It would herald the end of public anonymity.

The complaint claims that Clearview’s technology gravely threatens civil liberties.

Constitutional limits on the ability of the police to demand identification without reasonable suspicion, for instance, mean little if officers can determine with certainty a person’s identity, social connections, and all sorts of other personal details based on the visibility of his face alone.

The lawsuit claims that Clearview isn’t just selling this technology to law enforcement: it’s also allegedly sold its database to private entities including banks and retail loss prevention specialists; has “actively explored” using its technology to enable a white supremacist to conduct “extreme opposition research”; and has developed ways to implant its technology in wearable glasses that private individuals could use.

Clearview thus joins Facebook and Vimeo in being accused of violating BIPA by amassing biometric data without people’s consent.

Representatives of Facebook, YouTube, Twitter, Instagram and Venmo told the Times that their policies prohibit this type of scraping. Twitter said that it’s explicitly banned use of its data for facial recognition. Last week, Twitter also sent a cease-and-desist letter to Clearview, telling it to stop collecting its data and to delete whatever data it now has.

In interviews with the Times, Clearview founder Hoan Ton-That shrugged at the notion that scraping data violates site policies:

A lot of people are doing it. Facebook knows.

US lawmakers have expressed concern. Senator Ron Wyden said on Twitter that Clearview’s possible use of its technology to suppress media interest was “troubling”:

Senator Edward J. Markey echoed the Times’s “end of privacy as we know it” prediction, sending a letter to Clearview on Thursday in which he suggested that its technology could “facilitate dangerous behaviors and effectively destroy individuals’ ability to go about their lives anonymously”.

Clearview’s product appears to pose particularly chilling privacy risks, and I am deeply concerned that it is capable of fundamentally dismantling Americans’ expectation that they can move, assemble, or simply appear in public without being identified.

Markey called on Clearview to provide a list of all the law enforcement or intelligence agencies that Clearview has talked to about acquiring its technology and which ones are currently using it.

Not the first time

This is far from the first time that facial recognition has threatened the end of anonymity, mind you. In May 2019, we heard about a programmer who claimed to have cross-referenced 100,000 faces of women appearing in adult films with photos in their social media profiles.

Three years before that, porn actresses and sex workers were being outed to friends and family by people using a Russian facial recognition service to strip them of anonymity. Users of an imageboard called Dvach in early April 2016 began to use the “FindFace” service to match explicit photos with images posted to the Russian version of Facebook, the social network VK.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/9Wtt8BCE-R8/

States sue over rules that allow release of 3D-printed gun blueprints

A coalition of states is suing the Trump administration in an effort to stop it from making it easier for people to make 3D-printed guns.

Specifically, top law enforcement officials are trying to keep the administration from allowing people to post blueprints online to print what are sometimes called “ghost guns”: unregistered, untraceable firearms that are tough to detect, even with a metal detector.

The lawsuit was filed in Seattle on Thursday. The office for Washington state Attorney General Bob Ferguson said in an announcement that the lawsuit has been brought by attorneys general in 20 states and the District of Columbia.

Law enforcement officials have for years been trying to raise awareness of the dangers of 3D-printed ghost guns. One such weapon was used by Eric McGinnis: a Dallas man who was arrested in 2017 after police heard him shooting rounds in the woods.

McGinnis had tried to buy a gun but failed the background check after attacking his girlfriend the year before. When police searched him, they found a partially 3D-printed rifle, along with a hit list that included the names of federal lawmakers.

These things aren’t all plastic

A word about that “partially printed” 3D rifle: the notion of 3D printing will likely conjure images of an end product made entirely from printed plastic. However, most things aren’t made from a single material, and in the case of printed guns, that means printed plastic parts that are joined with essential metal components.

In other words, 3D printed parts don’t need to be the end product: they can, rather, assist in the fabrication of the end product – for example, besides the plastic bits of a printed gun, 3D printing can also assist in rifling the metal barrels for shotguns.

Multiple suits

Thursday’s lawsuit isn’t the first go-round when it comes to suing the government over the sharing of 3D-printed gun plans online. In July 2018, Ferguson led a similar multi-state lawsuit, suing the administration for “giving dangerous individuals access to 3D printed firearms.”

In November 2019, a federal judge in Seattle agreed with the plaintiffs, ruling that it’s illegal to deregulate downloadable gun files. Besides illegal, it’s also “arbitrary” and “capricious”, Judge Robert Lasnik ruled.

Here’s Ferguson, from Thursday’s press release:

Why is the Trump Administration working so hard to allow domestic abusers, felons and terrorists access to untraceable, undetectable 3D-printed guns?

Even the president himself said in a tweet that this decision didn’t make any sense – one of the rare instances when I agreed with him. We will continue to stand up against this unlawful, dangerous policy.

Proponents of the administration’s attempts say that it’s citizens’ constitutional right to get at 3D gun blueprints. Hampering the posting of such content would violate the First Amendment protection of freedom of expression, they claim, as well as the Second Amendment’s protection of Americans’ right to keep and bear arms.

Legal timeline

Here’s a timeline for the current suit’s genesis and for what the attorneys general say have been contradictory stances taken by the administration:

2015: a gun-file distributor sues the Obama administration. In June 2015, the State Department made it plain that it intended to regulate Americans’ publishing of online data that could enable someone to digitally fabricate a gun.

Defense Distributed, a global distributor of open-source, downloadable 3D-printed gun files, sued after the Department of State forced it to remove files from the internet. At the time, the federal government successfully argued that posting the files online violates firearm export laws and poses a serious threat to national security and public safety. The Supreme Court declined to hear the case.

2018: the government flip-flops. The Trump administration reversed the government’s stance and settled the Defense Distributed case, agreeing to allow unlimited public internet distribution of the downloadable files. A state coalition filed a lawsuit in July 2018. The government lost the case after the Seattle judge said that the administration’s decision to allow the distribution of the files was “arbitrary, capricious and unlawful.”

2019: 2nd try. The administration is now trying again, this time by publishing new rules that would transfer regulation of 3D-printed guns from the State Department to the Department of Commerce. The new rules were made available to the public the week prior to the new lawsuit and were finalized on Thursday, the day the suit was filed.

The states claim in this second lawsuit that the regulatory change would effectively allow unlimited distribution of the guns.

The administration itself has admitted that regulating ghost guns is legal. The Department of Commerce acknowledged in the rules that regulation doesn’t violate the First or Second Amendments:

Limitations on the dissemination of such functional technology and software do not violate the right to free expression under the First Amendment. Nor does the final rule violate the right to keep and bear arms under the Second Amendment.

The government also acknowledged the dangers posed by the distribution of 3D-printed gun blueprints in the new rules:

Such items could be easily used in the proliferation of conventional weapons, the acquisition of destabilizing numbers of such weapons, or for acts of terrorism. […] The potential for the ease of access to the software and technology, undetectable means of production, and potential to inflict harm on U.S. persons and allies abroad present a grave concern for the United States.

Besides the District of Columbia and Washington state, the suit was filed by the AGs of Illinois, California, Colorado, Connecticut, Delaware, Hawaii, Massachusetts, Maryland, Maine, Michigan, Minnesota, North Carolina, New Jersey, New York, Oregon, Pennsylvania, Rhode Island, Virginia, and Vermont.

The Justice Department hasn’t commented on the suit.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/JxB-qrD61WI/

NetWars! Let the SANS Tournaments commence: Compete and learn all about forensics, incident response, red teaming – and much more

Sponsored Attendees of SANS’ world-class courses consistently rate the hands-on exercises as the most valuable part of the experience. With NetWars, however, SANS has raised the ante with a set of cyber-tournaments that let participants work through a range of challenging levels and master the skills employed by information security professionals.

SANS certified instructor Steve Armstrong, with SANS since 2007, explains how NetWars works. “We have several different types of NetWars Tournaments,” he said. Core NetWars is middle of the road – you’re challenged with assignments that cover incident response, system understanding and management, Linux, web compromises, and penetration testing. Meanwhile, DFIR NetWars is much more into evidence-handling.

“Players are given disk and phone images and have to extract artefacts, web histories, and more, to determine what has happened during the various incidents,” Armstrong added. “We also run Grid, ICS, and Cyber Defense NetWars Tournaments.”

NetWars was created to encourage those of all levels, from junior up to advanced cyber professionals. Hints are offered if you feel your skills aren’t sufficiently advanced.

SANS NetWars delivers advanced challenges to test even the more technically gifted attendees, and returning or accomplished students will not go unchallenged. You can rest assured that you won’t get bored with NetWars.

“We change the games about every 18 months, so it doesn’t get stale,” Armstrong said. “That way, people attending a training course every couple of years or so will get a different NetWars experience every time. Plus, each game tends to be themed and immersive, and although the question engine stays the same, the questions are all based around the various themes. For example, the activities could revolve around themes ranging from Star Wars to Willy Wonka or Lord of the Rings.”

NetWars runs over two evenings, each for three hours, providing a full six hours of additional training – basically, a full extra day for no additional cost. “Students find it to be a nice way to relax and socialise with friends and to make new acquaintances. Taking on the challenges together and applying all the skills that they have learned, players build camaraderie with those around them as the evening progresses,” Armstrong said.

As multilevel, individual, or team-based capture-the-flag events, NetWars are structured so that they encourage people in the early part of their career to learn.

As participants proceed further through the game, they engage in a way that helps them improve their skills. You don’t have to know Python or C, and it all runs in virtual machines. “All we expect you to bring is an enthusiasm to learn, a willingness to apply yourself, and an open mind,” Armstrong adds.

By having participants compete against one another with a live scoreboard, NetWars puts people under the type of pressure that’s often experienced by security practitioners in their day-to-day roles.

The best way to participate is to enrol in any of SANS’s four- to six-day courses, which gives you the option to attend NetWars for free. You can experience world-class training and NetWars at the upcoming SANS London events.

In addition to live NetWars, SANS also offers NetWars Continuous, a four-month online subscription that offers students the chance to test their cyber skillset, take on challenges, and learn hands-on offensive and defensive skills 24 hours a day, seven days a week.

Sponsored by SANS Institute.

Article source: https://go.theregister.co.uk/feed/www.theregister.co.uk/2020/01/28/sans_tournaments_commence/

New Zoom Bug Prompts Security Fix, Platform Changes

A newly discovered Zoom vulnerability would have enabled an attacker to join active meetings and access audio, video, and documents shared.

CPX 360 – New Orleans, La. – A previously undisclosed and now patched vulnerability in the Zoom conferencing platform could have let attackers drop into active meetings by generating and verifying Zoom IDs.

Zoom users know the platform’s unique meeting IDs are made up of 9-, 10-, or 11-digit numbers. If hosts don’t require a conference password or enable the Waiting Room feature, the Zoom ID is the only factor protecting meetings from unauthorized attendees. Check Point researchers found it was possible for an attacker to generate potentially valid Zoom IDs and automate their verification.

“The number should be privately shared, and it should be that nobody should be able to guess it,” says Check Point head of cyber research Yaniv Balmas. “We found a vulnerability in Zoom that allows it to tell us whether a number is a meeting number in a matter of minutes.”

Researchers pre-generated a list of potential meeting IDs and prepared a URL string for joining a meeting. When the URL was entered with a random meeting ID number, they noticed the HTML body of the returned response indicated “Invalid meeting ID” or “Valid Meeting ID found,” depending on whether the ID was linked to an active conference. Automating this approach allowed them to quickly determine valid ID numbers and drop in on random ongoing calls.
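
The now-patched enumeration logic can be sketched against a mock endpoint (probing Zoom itself is neither possible post-fix nor acceptable). The ID space below is scaled down to a toy six-digit range so a noticeable fraction of guesses land on “active” meetings, echoing the researchers’ roughly four per cent hit rate; all names and numbers are illustrative:

```python
import random

# Toy simulation of the enumeration approach Check Point described, run
# against a mock endpoint rather than Zoom itself. ACTIVE_MEETINGS stands in
# for Zoom's backend; the 6-digit space and 4% density are illustrative.
random.seed(7)
ID_SPACE = range(100_000, 1_000_000)                      # toy 6-digit space
ACTIVE_MEETINGS = set(random.sample(ID_SPACE, k=36_000))  # ~4% of IDs "active"

def mock_join_page(meeting_id: int) -> str:
    # Pre-fix, the returned HTML body leaked validity, per the researchers.
    return "Valid Meeting ID" if meeting_id in ACTIVE_MEETINGS else "Invalid meeting ID"

def enumerate_ids(candidates) -> list[int]:
    """Automated check: keep every candidate the endpoint reports as valid."""
    return [m for m in candidates if "Valid" in mock_join_page(m)]

guesses = random.sample(ID_SPACE, k=500)
hits = enumerate_ids(guesses)
print(f"{len(hits)} joinable meetings found from {len(guesses)} guesses")
```

Zoom’s fix of no longer distinguishing valid from invalid IDs in the response removes exactly the oracle this loop depends on.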

“Although we know the number is valid, we don’t know whose chat it’s going to be,” Balmas notes. “You can call it Zoom roulette.”

Exploiting this vulnerability could grant an attacker the same privileges as any Zoom attendee, meaning they would have access to audio, video, and documents shared during the call. While the intruder would not be invisible, Balmas points out, it wouldn’t be difficult to go unnoticed.

“To be frank, when you have a meeting with 20 to 30 participants, do you take the time to validate each and every participant in the meeting?” he asks. Researchers were able to correctly predict roughly four percent of the randomly generated meeting IDs, which they consider a high chance of success compared with pure brute force.

This isn’t a simple attack, Balmas says, but an intermediate adversary could pull it off. Other vulnerabilities the research team uncovers are typically more technical, he explains. “I wouldn’t call this one easy, but the bar is definitely lower than what we usually do,” he adds.

The Check Point Research team discovered this flaw last year and contacted Zoom in July 2019. Following a responsible disclosure process, the communications company today released a fix and introduced several mitigations to its platform, so this type of attack is no longer possible. The patch released today must be deployed manually.

Zoom is adding passwords by default to all future scheduled meetings. Users can add a password to meetings they have scheduled; Zoom is sending instructions to users. Password settings can be enforced at the account and group levels by the account administrator.

Further, Zoom will no longer automatically indicate if a meeting ID is valid or invalid; this way, an attacker would not be able to narrow the pool of meetings in an attempt to join one. Repeated attempts to scan for IDs will cause a device to be blocked for a period of time.
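
Zoom hasn’t published how the scan-blocking works, but the standard mechanism is a failure counter over a sliding time window with a cooldown once the threshold is crossed. A hedged sketch of that pattern, with all thresholds and names assumed for illustration:

```python
import time

# Illustrative sliding-window rate limiter of the kind Zoom describes; the
# actual mechanism and thresholds are not public.
WINDOW, MAX_FAILURES, BLOCK_SECONDS = 60.0, 5, 300.0

failures: dict[str, list[float]] = {}       # client -> recent failure timestamps
blocked_until: dict[str, float] = {}        # client -> end of cooldown

def record_invalid_lookup(client: str, now: float) -> bool:
    """Record one invalid meeting-ID lookup; return True if client is blocked."""
    if blocked_until.get(client, 0.0) > now:
        return True                          # still in the cooldown period
    recent = [t for t in failures.get(client, []) if now - t < WINDOW]
    recent.append(now)
    failures[client] = recent
    if len(recent) >= MAX_FAILURES:
        blocked_until[client] = now + BLOCK_SECONDS
        return True
    return False

t = time.time()
for i in range(5):
    blocked = record_invalid_lookup("198.51.100.7", t + i)
print("blocked:", blocked)  # True after the 5th failure inside the window
```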

Who’s On the Line? Poking Holes in Conference Platforms

This is the latest in a series of vulnerabilities discovered in popular conference platforms. Late last year, the CQ Prime Threat Research Team disclosed the “Prying-Eye” flaw, which existed in the Zoom and Cisco Webex conferencing tools. Prying-Eye could let attackers scan for, and drop into, video meetings unprotected by a password.

More recently, Cisco issued a patch for CVE-2020-3142, a vulnerability in Cisco Webex Meetings Suite websites and Cisco Webex Meetings Online that would allow an unauthenticated, remote attendee to join a password-protected meeting without entering the password. An attacker could exploit this by accessing a known meeting ID or URL from a mobile device’s Web browser.

“Video chats are a very valid attack vector,” says Balmas. “I wouldn’t be surprised if [attackers] are trying to look for more vulnerabilities there.” Flaws like these could grant intruders access to directors’ meetings and other calls where participants discuss sensitive business matters.

Kelly Sheridan is the Staff Editor at Dark Reading, where she focuses on cybersecurity news and analysis. She is a business technology journalist who previously reported for InformationWeek, where she covered Microsoft, and Insurance Technology, where she covered financial …

Article source: https://www.darkreading.com/cloud/new-zoom-bug-prompts-security-fix-platform-changes/d/d-id/1336892?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Maryland: Make malware possession a crime! Yes, yes, researchers get a free pass

A US state that was struck by a ransomware attack last year is now proposing a local law that would ban possession of malicious software.

Local news website the Baltimore Fishbowl reported that Maryland’s Senate heard arguments on Senate Bill SB0030, a proposition that would “label the possession and intent to use ransomware in a malicious manner as a misdemeanor” punishable with up to 10 years in prison and/or a $10,000 fine.

A local US Democratic Party politician, Susan Lee of Montgomery County, is the bill’s lead sponsor. “It’s important to establish so criminals know it’s a crime,” Sen. Lee told a local news agency. “[The bill] gives prosecutors tools to charge offenders.”

Baltimore, the largest city in Maryland, was struck twice by ransomware in 2018 and 2019. Last year’s infection temporarily closed down various public sector institutions including the city council, mail servers for its police force and its legislative reference office, as we reported at the time.

A block-caps line of the bill itself (PDF, 5 pages) says: “THIS PARAGRAPH DOES NOT APPLY TO THE USE OF RANSOMWARE FOR RESEARCH PURPOSES.”

Brett Callow, a threat analyst from infosec biz Emsisoft, opined to The Register that this move to criminalise intentional possession of malware by miscreants is unlikely to “scare the pants off the anonymous cybercriminals who are already breaking myriad international laws and whose extortion schemes are earning them billions”.

He said: “First, I doubt that too many people in Maryland actually possess ransomware (except for the cities which have been reluctant recipients of it, that is). Second, making something illegal doesn’t help unless you can catch and prosecute those who break the law.”

Callow also told El Reg a cautionary tale about a company called Southwire, which was struck by a ransomware attack. The attackers threatened to publish the stolen data unless Southwire paid them off; Southwire went to a local (US) court and obtained a takedown order on their website as well as a legal demand to return the data.

The attackers, part of the Maze ransomware gang, responded by simply mirroring their site in China, publishing what they said was 10 per cent of the stolen data and threatening to keep publishing it in 10 per cent packets unless the company paid up, as BleepingComputer reported.

Legal remedies for ransomware only work if you know who your attacker is and what jurisdiction they’re in. Strangely enough, most ransomware gangs go to great lengths to ensure their victims can’t work this out. ®

Sponsored:
Detecting cyber attacks as a small to medium business

Article source: https://go.theregister.co.uk/feed/www.theregister.co.uk/2020/01/27/ransomware_possession_criminal_maryland/

Google halts paid-for Chrome extension updates amid fraud surge: Web Store in lockdown ‘due to the scale of abuse’

On Saturday, Google temporarily disabled the ability to publish paid Chrome apps, extensions, and themes in the Chrome Web Store due to a surge in fraud.

“Earlier this month the Chrome Web Store team detected a significant increase in the number of fraudulent transactions involving paid Chrome extensions that aim to exploit users,” said Simeon Vincent, developer advocate for Chrome Extensions, in a post to the Chromium Extensions forum. “Due to the scale of this abuse, we have temporarily disabled publishing paid items.”

Vincent said the shutdown is temporary while Google looks for a long-term way to address the problem. Developers who have paid extensions, subscriptions, or in-app purchases and who have received a rejection notice for “Spam and Placement in the Store,” he said, can probably attribute the notification to the fraud-fighting shutdown.

Vincent said those who have received a rejection notice can reply to the email and request an appeal. This process needs to be done for each new version published or updated while the fraud block is in place.

Google did not respond to a request to clarify how Chrome Web Store fraud was being carried out.

The Chocolate Factory provides developers with several payment options for selling apps, extensions, and themes. The Chrome Web Store has a one-time payment system called Chrome Web Store Payments. For Chrome Apps, which will soon be phased out in Chrome (and later in Chrome OS), developers can use a Google Payments Merchant Account and the Chrome Web Store API to sell in-app virtual goods.

Over the past few days, developers of Chrome extensions have been reporting account suspensions and app rejections that appear to be related to the fraud emergency. KodeMuse Software, an India-based software biz that makes several Chrome extensions, insists its code complies with laws and Google policies, and says its account was inexplicably suspended.

The anti-fraud measures may have gone into effect prior to Vincent’s announcement. Developers began reporting that they’d received “Spam and Placement in the Store” warnings on January 19, and more reports followed over the next few days.

In an email to The Register, Jeff Johnson, who runs Lapcat Software, which makes macOS and iOS audio apps and a privacy extension for Chrome and Safari called StopTheMadness, said that existing extensions remain accessible in the Chrome Web Store, but updates and new extensions are being rejected.

“I submitted a minor bug fix update on January 19, and I received an email on January 22 from Chrome Web Store Developer Support titled ‘Chrome Web Store: Removal notification for StopTheMadness’,” he explained, noting that the extension was not removed but the update was rejected.

“There have been many complaints in Google’s Chromium Extensions forum in the past few weeks, but Google provided no useful information until now.”

Johnson said that he has a Safari app extension in the Mac App Store and while developer support isn’t great, the Chrome Web Store is worse and feels understaffed – a charge other software makers have made.

“The Mac App Store usually reviews my updates within 24 hours, and if something goes wrong, I can contact support and get a response within a reasonable amount of time,” he said. “With the Chrome Web Store, however, my updates can take up to a week to get reviewed, and if something goes wrong, you’re almost hopelessly lost.”

“Google seems to want to automate things as much as possible and avoid employing human staff,” he continued. “There’s no phone # you can call. There is email, but when they finally respond – if they ever do respond – you get the feeling that the response was written by AI rather than a real person.”

Johnson blamed Google’s lack of communication with developers for the current situation, in which a large number of developers encounter problems after a sudden policy change and have nowhere to turn for help. ®

Speaking of browser extensions… Cast your mind back to December and you may recall antivirus-maker Avast ran into trouble with its Firefox add-on. The extensions were booted out of Mozilla’s web store for breaking its privacy rules.

It appears the extensions harvested a lot of information about their users and sent it all back to Avast – including URLs of sites visited, along with a per-device unique ID. Avast-owned Jumpshot then sold that, supposedly anonymized, data on “100 million global online shoppers and 20 million global app users,” boasting to customers: “Analyze it however you want: track what users searched for, how they interacted with a particular brand or product, and what they bought. Look into any category, country, or domain.”

This hosepipe-like feed includes things like web search terms, videos watched, links clicked on, and so on.

And, crucially, it is seemingly easy to deanonymize this data. If you’re a big brand, or any website, really, and you get told by Jumpshot that device ID ABC123 was used to buy some stuff at 10.05am from your dot-com, and you see that purchase in your own logs at that time, you now know ABC123 is used by a particular shopper, and you can identify them in all their other Jumpshot-collected web activity.
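The correlation step described above is trivial to script. Here is a minimal sketch – with entirely hypothetical device IDs, log formats, and domain names – of how a retailer holding its own order log could match a “pseudonymous” clickstream device ID to a named customer by timestamp:

```python
from datetime import datetime, timedelta

# Hypothetical Jumpshot-style clickstream rows: (device_id, timestamp, url)
clickstream = [
    ("ABC123", datetime(2020, 1, 27, 10, 5, 2), "https://shop.example.com/checkout"),
    ("XYZ789", datetime(2020, 1, 27, 10, 7, 41), "https://shop.example.com/"),
]

# The retailer's own order log: (customer_email, timestamp)
orders = [
    ("alice@example.com", datetime(2020, 1, 27, 10, 5, 4)),
]

def deanonymize(clickstream, orders, window=timedelta(seconds=30)):
    """Match 'anonymous' device IDs to named customers whose purchase
    times fall within a small window of a checkout click."""
    matches = {}
    for device_id, click_time, url in clickstream:
        if "checkout" not in url:
            continue
        for email, order_time in orders:
            if abs(order_time - click_time) <= window:
                matches[device_id] = email
    return matches

print(deanonymize(clickstream, orders))  # {'ABC123': 'alice@example.com'}
```

Once a device ID is tied to one identified purchase, every other page view, search, and video recorded against that ID is no longer anonymous.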

And all that Jumpshot data appears to have been sold to big names, too, such as Unilever, Nestle Purina, and Kimberly-Clark, judging by the outfit’s marketing.

Avast told PC Mag today it has stopped all user info harvesting “for any other purpose than the core security engine, including sharing with Jumpshot.” However, according to the web magazine’s Michael Kan:

Nevertheless, Avast’s Jumpshot division can still collect your browser histories through Avast’s main antivirus applications on desktop and mobile. This includes AVG antivirus, which Avast also owns. The data harvesting occurs through the software’s Web Shield component, which will also scan URLs on your browser to detect malicious or fraudulent websites. For this reason, PCMag can no longer recommend Avast Free Antivirus as an Editors’ Choice in the category of free antivirus protection.

Avoid Avast.

Article source: https://go.theregister.co.uk/feed/www.theregister.co.uk/2020/01/27/google_disables_web_store/

Remember the Clipper chip? NSA’s botched backdoor-for-Feds from 1993 still influences today’s encryption debates

Enigma More than a quarter century after its introduction, the failed rollout of hardware deliberately backdoored by the NSA is still having an impact on the modern encryption debate.

Known as Clipper, the encryption chipset developed and championed by the US government only lasted a few years, from 1993 to 1996. However, the project remains a cautionary tale for security professionals and some policy-makers. In the latter case, however, the lessons appear to have been forgotten, Matt Blaze, McDevitt Professor of Computer Science and Law at Georgetown University in the US, told the USENIX Enigma security conference today in San Francisco.

In short, Clipper was an effort by the NSA to create a secure encryption system, aimed at telephones and other gear, that could be cracked by investigators if needed. It boiled down to a microchip that contained an 80-bit key burned in during fabrication, with a copy of the key held in escrow for g-men to use with proper clearance. Thus, any data encrypted by the chip could be decrypted as needed by the government. The Diffie-Hellman key exchange algorithm was used to exchange data securely between devices.
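The escrow mechanism can be sketched conceptually as follows. This is a toy illustration only – it uses a stand-in XOR stream cipher rather than Clipper's actual Skipjack algorithm, and the `leaf` variable is only loosely analogous to Clipper's Law Enforcement Access Field format:

```python
import hashlib
import os

def keystream_encrypt(key: bytes, data: bytes) -> bytes:
    """Toy stream cipher (NOT real crypto): XOR data against a
    SHA-256-derived keystream. Applying it twice decrypts."""
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

# Each "chip" gets a key burned in at fabrication; a copy is escrowed.
unit_key = os.urandom(10)                  # 80 bits, like Clipper's key
escrow_database = {"chip-001": unit_key}   # held for government use

# The device encrypts traffic with a fresh session key, and transmits
# that session key wrapped under the unit key (the LEAF's rough role).
session_key = os.urandom(10)
ciphertext = keystream_encrypt(session_key, b"attack at dawn")
leaf = keystream_encrypt(unit_key, session_key)

# An investigator with escrow access unwraps the LEAF, recovers the
# session key, and decrypts the intercepted traffic.
recovered_session = keystream_encrypt(escrow_database["chip-001"], leaf)
print(keystream_encrypt(recovered_session, ciphertext))  # b'attack at dawn'
```

The structural weakness critics pointed to is visible even in the toy version: whoever holds (or steals, or subpoenas) the escrow database can decrypt everything every chip ever protected.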

Not surprisingly, the project met stiff resistance from security and privacy advocates who, even in the early days of the worldwide web, saw the massive risk posed by the chipset: for one thing, if someone outside the US government was able to get hold of the keys or deduce them, Clipper-secured devices would be vulnerable to eavesdropping. The implementation was also buggy and lacking. Some of the people on the Clipper team were so alarmed they secretly briefed opponents of the project, alerting them to insecurities in the design, The Register understands.

Blaze, meanwhile, recounted how Clipper was doomed from the start, in part because of a hardware-based approach that was expensive and inconvenient to implement, and because technical vulnerabilities in the encryption and escrow method would be difficult to fix. Each chip cost about $30 when programmed, we note, and the relatively short keys could be broken by future computers.

In the years following Clipper’s unveiling, a period dubbed the “first crypto wars,” Blaze said, the chipset was snubbed and faded into obscurity while software-based encryption rose and led to the loosening of government restrictions on its sale and use.

It is important to note, said Blaze, that the pace of innovation and unpredictability of how technologies will develop makes it incredibly difficult to legislate an approach to encryption and backdoors. In other words, security mechanisms made mandatory today, such as another escrow system, could be broken within a few years, by force or by exploiting flaws, leading to disaster.

This unpredictability in technological development, said Blaze, thus undercuts the entire concept of backdoors and key escrow. The FBI and Trump administration (and the Obama one before that) pushed hard for such a system but need to learn the lessons of history, Blaze opined.

“Any key escrow mechanism is going to be designed from the same position of ignorance that Clipper was designed with in the 1990s,” explained the Georgetown Prof. “We are going to be looking back at those engineering decisions 10 years from now as being equally laughably wrong.”

Daniel Weitzner, founding director of the MIT Internet Policy Research Initiative, said this problem is not lost on all governments trying to work out new encryption laws and policies in the 21st century. He sees a number of administrations trying to address the issue by bringing developers and telcos in on the process.

“What the legislators hear is a complicated problem that they don’t know how to resolve,” Weitzner noted. “Moving the debate to experts on one hand gets you down to details, but it is not necessarily easy.” ®

Article source: https://go.theregister.co.uk/feed/www.theregister.co.uk/2020/01/27/clipper_lessons_learned/