Silence Group Quietly Emerges as New Threat to Banks

Though only two members strong, the hackers pose a credible threat to banks in Russia and multiple other countries.

A pair of Russian-speaking hackers, likely working in legitimate information security roles, has quietly emerged as a major threat to banks in Russia and numerous other former Soviet republics in recent months.

The duo, who security vendor Group-IB is tracking as “Silence,” is known to have stolen at least $800,000 from banks in Russia, Ukraine, Belarus, Poland, Kazakhstan, and Azerbaijan over the past year. The actual financial damages caused by the pair could be a lot higher given the likelihood that many incidents remain undiscovered or unattributed to Silence because of the group’s relative newness to security researchers, Group-IB said in a report this week.

Group-IB researchers first began tracking Silence in 2016 following a failed attempt to steal money from a Russian bank. The hackers disappeared from sight for more than a year after that, but then resurfaced in October 2017, when they attacked a bank’s ATM network and stole over $100,000 in a single night. Since then, Group-IB says it has identified Silence as being responsible for at least two more bank thefts — one in February 2018, when they netted $550,000 via a bank’s ATMs, and the second in March, involving $150,000.

Several aspects of Silence make it interesting, Group-IB says. One distinctive feature is its unusually small size, especially considering the damage it has been causing. The Silence group appears to comprise just two people: an operator and a developer.

The operator appears to be the one in charge, with in-depth knowledge about tools for conducting pen tests on banking systems, navigating inside a bank’s network, and gaining access to protected systems. The developer seems to be an adept reverse-engineer who is responsible for developing the tools and exploits that Silence has been using to break into bank networks and steal money.

The pair’s tactics and behavior suggest that both are either currently working in a legitimate information security role or were recently in one, says Rustam Mirkasymov, head of dynamic analysis of malicious code at Group-IB. For example, Silence appears to have ready access to unique, non-public malware samples that typically only security researchers have. The developer’s seemingly deep knowledge of ATMs and ATM processes suggests the individual is an insider or was one recently. The pair’s behavior during incidents also suggests they are analyzing and closely following security reports, Mirkasymov says.

Because of the group’s small size, the hackers have so far been somewhat limited in their ability to carry out attacks. Typically, they have averaged about three months between incidents, which is about three times as long as other financially motivated threat groups, such as Carbanak/Cobalt/FIN7 and MoneyTaker, usually take.

The two-person threat group has also shown a tendency to observe and learn from the actions of other threat actors, Mirkasymov says. Initially, Silence used third-party tools in its attacks but over time developed its own sophisticated toolkit. The unique set of card processing and ATM attack tools Silence has developed includes “Atmosphere,” a tool for getting ATMs to dispense large amounts of cash on demand; “Farse,” a utility for grabbing passwords from infected systems; and “Cleaner,” for getting rid of incriminating logs.

Like many other advanced persistent threat (APT) actors, Silence uses several borrowed tools in its capers, including a bot for conducting initial attacks and a tool for launching distributed denial-of-service (DDoS) attacks. Initially, the Silence duo used hacked servers and compromised accounts to carry out their campaigns, but they have since evolved to using phishing domains and self-signed certificates to drop malware on target networks.

“Now [that] they have tested the waters, they are formed, experienced, and ready to conduct sophisticated attacks on banking systems,” Mirkasymov says. Rather than reinventing the wheel, “they prefer to use well-known techniques, such as logical attacks on ATMs, and attacks on payment systems and card processing, employed by other financially motivated cybercriminals,” he says.

So far, Silence’s successful attacks have been confined to the so-called Commonwealth of Independent States (CIS), or nations that once belonged to the Soviet Union. But its ambitions appear much broader. According to Mirkasymov, the group has sent phishing emails to bank employees in some 25 countries, including Germany, Great Britain, the Czech Republic, Romania, Malaysia, Kenya, Israel, Cyprus, and Greece.

Silence does not attack only banks, Mirkasymov cautions. The group has also shown a tendency to attack online stores, news agencies, and insurance companies, using those organizations’ infrastructure to conduct attacks on financial institutions.

Article source: https://www.darkreading.com/attacks-breaches/silence-group-quietly-emerges-as-new-threat-to-banks/d/d-id/1332742?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

PowerPool Malware Uses Windows Zero-Day Posted on Twitter

Researchers detected the vulnerability in an attack campaign two days after it was posted on social media.

There are several good reasons why you shouldn’t post zero-day exploits on social media. For starters, lurking attackers will snatch the code and leverage it in a malware campaign.

Such is the case with a Microsoft Windows zero-day bug shared on Twitter last week. Two days after the vulnerability and a proof-of-concept were posted on Twitter and GitHub, respectively, ESET researchers identified the exploit in a campaign from the PowerPool threat group.

The vulnerability, first shared in a (now deleted) tweet on August 27, affects the Advanced Local Procedure Call (ALPC) interface used by the Windows Task Scheduler in Windows 7 through Windows 10. The flaw allows Local Privilege Escalation (LPE), letting a process launched by a restricted user escalate its privileges and gain administrative control.

Twitter user SandboxEscaper, who sent the initial post, linked back to a GitHub repository with PoC code. It didn’t take long for attackers to modify and recompile the exploit. PowerPool, which has a range of tools already at its disposal, took advantage.

PowerPool has a small pool of targets, researchers explain in a blog post on the discovery. It may be too early to tell, but the small number of detections suggests recipients are carefully chosen rather than hit as part of a mass spam campaign. ESET telemetry and uploads to VirusTotal (the researchers counted only manual uploads from the web interface) indicate affected countries include Chile, Germany, India, the Philippines, Poland, Russia, the United Kingdom, the United States, and Ukraine.

“We guess this is an espionage campaign, due to the nature of their backdoors,” says ESET malware researcher Matthieu Faou. “However, their malware are basic and cannot be compared to the ones developed by most APT groups.”

While this campaign is more targeted, PowerPool has previously launched spam attacks. ESET data shows the group has been active since 2017 but hasn’t been linked to any public breaches.

But First, They Changed the Code

PowerPool didn’t use the exact binary that SandboxEscaper posted. Instead, they modified and recompiled the source code to insert their own malware and gain system privileges. The binary provided at the time of disclosure is a PoC showing how to exploit the flaw, Faou explains. It’s not really malicious, he says, because it will ultimately execute notepad.exe with system privileges. PowerPool wanted to execute their own malware.

The flaw is in the SchRpcSetSecurity API function, which fails to check the caller’s permissions. This lets anyone rewrite the access controls on files in the Task Scheduler’s folder regardless of their rights; as a result, a user with read-only access can replace the content of a write-protected file, or create a hard link in that folder pointing to a target file and thereby gain write access to it.

The exploit can also be used to replace the content of a protected file with malicious code, giving malware admin rights. PowerPool chose to weaponize the vulnerability by changing the content of GoogleUpdate.exe, the updater for Google applications, which is typically run with administrative privileges by a Windows task. Once they had write access, the attackers overwrote GoogleUpdate.exe with a copy of their second-stage malware, which gains system rights the next time the updater is called.
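To make the mechanics concrete, here is a minimal sketch of the hard-link trick in Python. This is an illustration, not PowerPool’s actual code: the real exploit was native Windows code driving the Task Scheduler’s RPC interface, and the paths, file name, and elided SchRpcSetSecurity call below are assumptions for the sake of the example.

    import os

    TASKS_DIR = r"C:\Windows\Tasks"   # world-writable folder used by the Task Scheduler
    TARGET = r"C:\Program Files (x86)\Google\Update\GoogleUpdate.exe"  # privileged binary

    # Step 1: plant a hard link inside the Tasks folder that resolves to the
    # target. Creating a hard link requires only read access to the target.
    link_path = os.path.join(TASKS_DIR, "UpdateTask.job")
    os.link(TARGET, link_path)

    # Step 2: call the buggy RPC method on the link. SchRpcSetSecurity runs as
    # SYSTEM and skips the permission check, so it rewrites the security
    # descriptor of whatever the link resolves to -- the protected target.
    # (The RPC call itself needs native bindings and is elided here.)
    # SchRpcSetSecurity("UpdateTask.job", permissive_security_descriptor)

    # Step 3: with permissions loosened, overwrite the privileged binary with
    # second-stage malware and wait for Windows to run it with system rights.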

The group uses a few different tactics for initial compromise, one of which involves emails carrying its first-stage malware as an attachment. From there, the attackers rely primarily on two backdoors: a first-stage backdoor deployed right after the initial compromise, and a second-stage backdoor delivered later to selected machines.

The first-stage backdoor performs reconnaissance on the machine and consists of two executables. The first is the main backdoor: it establishes persistence through a service and collects proxy information. The address of the command-and-control (C&C) server is included in this binary, which can execute commands and send information about the target device back to the C&C server. The second executable captures a screenshot of the target’s display and exfiltrates it through the backdoor.

Next up is the second backdoor, malware downloaded via the first stage. Researchers believe it is deployed only once the operators have decided a machine is interesting enough to warrant further analysis; even so, “it is clearly not a state-of-the-art APT backdoor,” they report.

Once attackers gain persistent access to a machine with the second backdoor, they leverage open-source tools (mostly written in PowerShell) to move laterally throughout the network.

Vulnerability Disclosure 101

Faou says the nature of this disclosure made weaponization simple for PowerPool.

“First, what is really important in this vulnerability disclosure is the release of the source code of the exploit, and not only a compiled version of it,” he explains. “Thus, this is easy for malware developers to reuse it in their malware.”

In contrast, when only a compiled version is available, malware developers must first reverse-engineer the exploit before they can incorporate it into their malware. That process can be time-consuming, he says, and difficult to finish before a patch is issued for the bug.

Security researchers who discover vulnerabilities should coordinate disclosure with the vendor, giving it time to issue a fix before the bug is made public, Faou continues. This protects users, since vulnerabilities are unlikely to be used in massive campaigns before public disclosure.

While this campaign only targets a limited pool of victims, ESET researchers still urge caution: “…it shows that cybercriminals also follow the news and work on employing exploits as soon as they are publicly available,” they say.

Article source: https://www.darkreading.com/application-security/powerpool-malware-uses-windows-zero-day-posted-on-twitter/d/d-id/1332743?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

7 Ways Blockchain is Being Used for Security

Blockchain is being used as a security tool. If you haven’t thought about adopting it, you might want to reconsider your take.

The distributed ledger of blockchain has found application in many fields, from cryptocurrency to supply chain. Much of the excitement about blockchain is due to its reputation as an inherently secure technology. But can that inherent security be applied to the field of security itself?

In a growing number of cases, the answer is “yes.” Security professionals are finding that the qualities blockchain brings to a solution are effective in securing data, networks, identities, critical infrastructure, and more. As with other emerging technologies, the biggest question is not whether blockchain can be used in security, but which applications it best suits today.

Blockchain is being used in a number of security applications, ranging from record-keeping to acting as part of the active data infrastructure, and more options likely are on the horizon.

But while excitement over blockchain’s potential grows, it’s important to keep that potential in perspective.

One of the claims frequently made about blockchain is that it is an “un-hackable” technology. While its underlying cryptography has yet to be broken, it’s wrong to say that blockchain can’t be hacked. In early 2018, a “51% attack” – in which a threat actor gained control of more than half of a blockchain’s compute power and corrupted the integrity of the ledger – showed that attacks which sidestep the cryptography entirely can be effective. While this particular attack is expensive and difficult, the fact that it worked means security professionals should treat blockchain as a useful technology – not a magical answer to all problems.
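A toy simulation shows why the 50% line matters. The sketch below is a simplified random-walk model, not a reconstruction of any real incident: an attacker secretly mines a private fork and wins as soon as it outgrows the honest chain.

    import random

    def overtake_rate(attacker_share, confirmations=6, trials=20_000):
        # Fraction of races in which a private fork starting `confirmations`
        # blocks behind eventually outgrows the honest chain.
        wins = 0
        for _ in range(trials):
            deficit = confirmations
            for _ in range(10_000):                  # cap the length of each race
                if random.random() < attacker_share:
                    deficit -= 1                     # attacker finds the next block
                else:
                    deficit += 1                     # honest miners find it
                if deficit < 0:                      # private fork is now longest
                    wins += 1
                    break
        return wins / trials

    for share in (0.30, 0.45, 0.51, 0.60):
        print(f"{share:.0%} of hash power: overtakes {overtake_rate(share):.1%} of races")

Below half the hash power, the chance of catching up decays exponentially with the deficit; above it, the attacker eventually wins every race.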

Here are some ways blockchain is being used or considered as a security tool. 

Article source: https://www.darkreading.com/application-security/7-ways-blockchain-is-being-used-for-security/d/d-id/1332735?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

MEGA secure upload service gets its Chrome extension hacked

Remember MEGA – or, more precisely, Megaupload as it once was?

Sure you do!

It was a New Zealand cloud storage business masterminded by Kim Dotcom, a larger-than-life digital-era entrepreneur (Dotcom is literally as well as figuratively big, standing more than 2m tall).

Megaupload is no more, having ended up embroiled in piracy allegations that led to a controversial raid on Dotcom’s home, Dotcom’s high-profile arrest, and the demise of the company.

Dotcom himself is still in New Zealand, where he’s been fighting extradition to the US for the past six years.

As far as we know, three Kiwi courts have already pronounced that his extradition can go ahead, so Dotcom is down to his final legal appeal now, assuming he can persuade the Supreme Court to hear his case.

After the bust

After the bust, the Megaupload service noisily reinvented itself, minus the controversial word “upload”, as the capital-lettered MEGA, bullishly and very pointedly launching on the anniversary of Dotcom’s arrest.

MEGA took the approach that by doing all its cryptography right in your browser, instead of relying on encrypted sessions terminating at the company’s servers, it wouldn’t know and would never be able to tell what you had uploaded.

The only person who would ever have copies of the cryptographic keys used for scrambling and unscrambling your files would be you – just as if you encrypted them offline on a USB drive and then uploaded a sector-level disk image of the already-encrypted data.
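The principle is simple enough to sketch. The snippet below is a minimal illustration using Python’s cryptography package – MEGA’s real scheme uses its own chunked AES construction and key wrapping, so treat this as the model rather than the company’s code:

    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    def encrypt_for_upload(path: str) -> tuple[bytes, bytes]:
        key = AESGCM.generate_key(bit_length=256)   # never leaves the user's machine
        nonce = os.urandom(12)                      # fresh for every file
        with open(path, "rb") as f:
            ciphertext = AESGCM(key).encrypt(nonce, f.read(), None)
        return key, nonce + ciphertext              # only the second value is uploaded

    def decrypt_after_download(key: bytes, blob: bytes) -> bytes:
        nonce, ciphertext = blob[:12], blob[12:]
        return AESGCM(key).decrypt(nonce, ciphertext, None)

Because the server only ever stores the second value, a demand for plaintext is a request the provider is technically unable to satisfy.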

The new-look MEGA service announced itself as truly secure cloud storage and argued that it could never again be accused of knowingly contributing to copyright infringement.

Similarly, there would be no point in any law enforcement agency appealing to MEGA to decrypt customer data, with or without a warrant.

The company simply couldn’t comply with any such request in the first place, so it could never be accused of refusing to comply.

If this sounds familiar in 2018, it’s because true end-to-end encryption has become mainstream since MEGA’s launch in 2013, and is now implemented in many of today’s mobile and web-based products, notably messaging apps and password managers.

As for Kim Dotcom, well, he fell out with MEGA in 2015, claiming that he no longer trusted the site for a variety of rather vague reasons related to Chinese investment, New Zealand government involvement and Hollywood interference.

MEGA, for its part, is sailing along without Dotcom, dubbing itself as “The Privacy Company,” with an enviably simple tagline of user-encrypted cloud services.

OK, that’s enough by way of introduction.

(We took our time about it because we thought the company’s history, both legally and cryptographically, was interesting – and intriguing! – enough to repeat here.)

Today’s story isn’t about any of that – it deals with a security advisory issued yesterday by MEGA, warning that a hacked version of its Chrome browser plugin ended up in the Chrome webstore for several hours.

Somehow, crooks got hold of MEGA’s webstore upload credentials, built a bogus version of the company’s plugin that was Trojanised with password-stealing code, and uploaded it as the latest official release.

One of Chrome’s big security features, of course, is automatic updating, so anyone who was online during the danger period (2018-09-04T14:30Z to 2018-09-04T18:30Z) may very well have received the malware-laden version.

According to MEGA, the infected extension sniffed out and stole “credentials for sites including amazon.com, live.com, github.com, google.com (for webstore login), myetherwallet.com, mymonero.com, [and] idex.market.”

Additionally, says MEGA, “HTTP POST requests to other sites” were logged and exfiltrated, too.

What does this mean?

As far as we can make out from MEGA’s rather brief statement, what this means is that any credentials for the abovementioned sites would have been sniffed out and stolen by the crooks.

Also, just about any data you entered in a web form (or any file you uploaded) on any non-HTTPS website was probably stolen, too.

Ironically, it looks as though Google’s walled garden safety procedures only kicked in after five hours, an hour after MEGA had managed to overwrite the bogus update (3.39.4) with a legitimate one (3.39.5).

As a result, the MEGA extension is currently no longer available at all on the webstore – not even the updated one that overwrote the imposter.

(At the time of writing [2018-09-05T16:30Z], there were several extensions using MEGA’s logo and brand name, apparently none of which were the real deal.)

In another irony, noted by MEGA in its security report, Chrome extensions accepted for the webstore are digitally signed by Google on behalf of their creators.

This official digital signature is therefore applied by Google after an unsigned extension is uploaded, rather than applied by the creator before the upload happens.

In other words, once the crooks had got hold of MEGA’s webstore login credentials, they’d already hit a home run because they didn’t need MEGA’s code signing keys as well – they could upload unsigned code that Google would sign for them.

And, in a final irony, passwords for and data stored on MEGA itself weren’t targeted by the poisoned extension – whether that was a backhanded compliment from the crooks, or a bit of a slap in the face, we can’t say.

What to do?

  • If you don’t use MEGA, you can relax.
  • If you use MEGA but don’t use Chrome, you can relax.
  • If you use MEGA and Chrome but have never installed the MEGA extension, you can relax.
  • If you had the affected extension installed during the time window listed above, consider changing all your passwords.
  • If you aren’t using a password manager, consider trying one now. Password managers are particularly helpful when you need to change a whole lot of passwords at the same time.
  • If you aren’t using two-factor authentication (2FA), consider it now. We’re guessing that at least some of MEGA’s developers weren’t using 2FA, and that the crooks got in more easily as a result.

As for Google’s code signing policies, we’re inclined to agree with MEGA here: requiring signed uploads would be a good thing.

Even if the extension ultimately ends up signed by Google instead of the creator, surely the additional step of digital validation that Google could carry out when updating an extension would make things harder for the crooks?


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/J32ZyP0wo5c/

Everything DM gets direct message slap: Marketing biz cops £60k ICO fine

A scurrilous marketing agency that fired 1.42 million emails at prospective customers was today saddled with a £60,000 fine by the UK’s data watchdog.

The Information Commissioner’s Office said Stevenage-based Everything DM Ltd (EDML) pestered people for a year from May 2016 via its direct marketing system, Touchpoint.

EDML, which was paid to send the mailers on behalf of agency clients, was unable to prove the recipients had ever agreed to receive them from itself or its customers.

A probe of the circumstances by the ICO found EDML had relied on the consent of third parties but had failed to take the necessary steps to ensure the data complied with the Privacy and Electronic Communications Regulations (PECR).

ICO director of investigations Steve Eckersley said:

“Firms providing marketing services to other organisations need to double-check whether they have valid consent from people to send marketing emails to them.”

He added that “generic third party consent is not enough” and that companies that don’t verify their position will discover they are breaking the law.

EDML has also been issued with an enforcement notice requiring it to comply with PECR.

According to Companies House, EDML was previously called Marketing File but changed its name in November last year. The business files abbreviated accounts, meaning it has a turnover of less than £6.5m or fewer than 50 employees. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/09/05/ico_slaps_marketing_biz_with_60k_fine/

Thoughts on the Latest Apache Struts Vulnerability

CVE-2018-11776 operates at a far deeper level within the code than all prior Struts vulnerabilities. This requires a greater understanding of the Struts code itself as well as the various libraries used by Struts.

About a week ago, a security researcher disclosed a critical remote code execution vulnerability in the Apache Struts web application framework that could allow remote attackers to run malicious code on the affected servers.

The vulnerability (CVE-2018-11776) affects all supported versions of Struts 2 and was patched by the Apache Software Foundation on August 22. Users of Struts 2.3 should upgrade to 2.3.35; users of Struts 2.5 need to upgrade to 2.5.17. They should do so as soon as possible, given that bad actors are already working on exploits.

“On the whole, this is more critical than the highly critical Struts RCE vulnerability that the Semmle Security Research Team discovered and announced last September,” Man Yue Mo, the researcher who uncovered the flaw, told the media, referring to the vulnerability (CVE-2017-9805) that hackers used to compromise Equifax last year, which led to the lifting of personal details of over 148 million consumers.

More Critical than Equifax Vulnerability
Struts, an open source framework for developing web applications, is widely used by enterprises worldwide, including many Fortune 100 companies. In 2017, the Equifax credit reporting agency used Struts in an online portal; because Equifax failed to identify and patch a vulnerable version of Struts, attackers were able to capture personal consumer information – names, Social Security numbers, birth dates, and addresses – of over 148 million US consumers, nearly 700,000 UK residents, and more than 19,000 Canadian customers.

Over the past year, I’ve lost track of the number of people asking whether they should migrate from Apache Struts to some other framework. Behind all those requests was an implicit fear that more critical issues were present.

Unfortunately, changing application frameworks isn’t as easy as adopting a new pizza chain or even buying a new car. Rather, it’s more akin to dumping your favorite sports team for another. The investment your development team made in understanding the framework and implementing features supported by that framework likely won’t transfer to an alternate option — assuming a compatible one even exists.

Instead, development teams need to accept that frameworks will have issues and that those issues are likely the result of attempts by the framework developers to appeal to the broadest audience. This means that you shouldn’t expect the framework to magically protect against security issues you can address in your code. Input validation is a perfect example of this — bad data going in often results in bad data coming out, or in operations that we never anticipated would be performed.
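In practice, that means validating at the trust boundary yourself rather than hoping the framework will. A minimal sketch follows; the field name and allowlist rule are hypothetical, chosen to echo the Struts namespace case:

    import re

    # Allow only path-like values: letters, digits, and a few safe separators.
    NAMESPACE_RE = re.compile(r"^/[A-Za-z0-9_/-]{0,64}$")

    def validate_namespace(value: str) -> str:
        # Reject anything that could smuggle in an OGNL expression like ${...}.
        if not NAMESPACE_RE.fullmatch(value):
            raise ValueError(f"rejected namespace: {value!r}")
        return value

    for candidate in ("/actions/user", "${(1+1)}/index"):
        try:
            print("accepted:", validate_namespace(candidate))
        except ValueError as err:
            print(err)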

Modern software is increasingly complex, and identifying how data passes through it should be a priority for all software development teams. To give some background, developers commonly use libraries of code, or development paradigms that have proven efficient, when creating new applications or features. This reuse is a positive when the library or paradigm is of high quality, but when a security defect is uncovered, the same reuse often produces a pattern of security issues.

Root Causes
In the case of CVE-2018-11776, the root cause was a lack of input validation on the URL passed to the Struts framework. In both 2016 and 2017, the Apache Struts community disclosed a series of remote code execution vulnerabilities. These vulnerabilities all related to the improper handling of unvalidated data. However, unlike CVE-2018-11776, the prior vulnerabilities were all in code within a single functional area of the Struts code. This meant that developers familiar with that functional area could quickly identify and resolve issues without introducing new functional behaviors.

CVE-2018-11776, on the other hand, operates at a far deeper level within the code, which in turn requires a deeper understanding not only of the Struts code itself but also of the various libraries used by Struts. It is this level of understanding that is of greatest concern — and the concern applies to any library or framework. Validating the input to a function requires a clear definition of what is acceptable.

Equally critical is that the documentation for public functions clearly describe how those functions use and operate on any data passed to them. This forms a contract between the framework and the consumer and sets expectations surrounding proper usage. Absent this contract and its documentation, it’s difficult to determine whether the code is operating correctly. The contract becomes critical when patches to libraries are issued, because it is unrealistic to assume that all patches are free from behavioral changes.

Shortly after the Apache Software Foundation released its patch, a proof-of-concept (PoC) exploit of the vulnerability was posted on GitHub. The PoC included a Python script that allows for easy exploitation. The firm that discovered the PoC, threat intelligence company Recorded Future, also said that it has spotted chatter on underground forums revolving around the flaw’s exploitation. Companies that don’t want to become the next Equifax should immediately identify where and what version of Apache Struts they have in use and apply the patch as needed.
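For Maven-built Java applications, even a crude scan of pom.xml files can surface obviously outdated Struts dependencies. A rough sketch – it assumes version numbers are written literally in the pom rather than through Maven properties, so real inventory tooling must do more:

    import re
    import sys
    from pathlib import Path

    PATCHED = {(2, 3): (2, 3, 35), (2, 5): (2, 5, 17)}   # first fixed releases
    DEP_RE = re.compile(
        r"<artifactId>struts2-core</artifactId>\s*<version>([\d.]+)</version>", re.S)

    def vulnerable(version: str) -> bool:
        parts = tuple(int(p) for p in version.split("."))
        fixed = PATCHED.get(parts[:2])
        return fixed is not None and parts < fixed

    root = Path(sys.argv[1] if len(sys.argv) > 1 else ".")
    for pom in root.rglob("pom.xml"):
        for version in DEP_RE.findall(pom.read_text(errors="ignore")):
            if vulnerable(version):
                print(f"{pom}: struts2-core {version} needs upgrading")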

Article source: https://www.darkreading.com/application-security/thoughts-on-the-latest-apache-struts-vulnerability-/a/d-id/1332716?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Google Issues Chrome Updates for Windows, Mac, Linux, Android

Chrome 69 for the desktop platforms, as well as Chrome for Android 69, will be available over the next few weeks.

A new version of Google Chrome is rolling out to Windows, Mac, Linux, and Android over the next few weeks, the company announced this week.

Chrome version 69.0.3497.81 is being promoted to the stable channel for Windows, Mac, and Linux. The update packs 40 security fixes, including patches for vulnerabilities an attacker could abuse to assume control over a target system, according to a US-CERT advisory on the update.

Many of the bugs were detected by Google’s security team and outside researchers using AddressSanitizer, MemorySanitizer, UndefinedBehaviorSanitizer, Control Flow Integrity, libFuzzer, or AFL.

Chrome for Android 69 (69.0.3497.76) has also been released and will be available on Google Play over the next few weeks, Google reports. The 10th-anniversary edition brings improved mobile payment security with third-party payment apps, password generation on more websites, stability and performance improvements, and a cleaner, modernized design.

Article source: https://www.darkreading.com/endpoint/google-issues-chrome-updates-for-windows-mac-linux-android/d/d-id/1332736?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Google releases free AI tool to stamp out child sexual abuse material

Since 2008, the National Center for Missing & Exploited Children (NCMEC) has made available a list of hash values for known child sexual abuse images. Provided by ISPs, these hash values (which are like a digital fingerprint) enable companies to check large volumes of files for matches without those companies themselves having to keep copies of offending images or to actually pry open people’s private messages.
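Mechanically, the matching side is straightforward, as the sketch below shows. SHA-256 is used here purely for illustration – production systems use the hash formats the lists actually ship with, including perceptual hashes such as PhotoDNA that survive resizing and recompression, which an exact cryptographic hash does not:

    import hashlib
    from pathlib import Path

    def sha256_of(path: Path) -> str:
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):   # read 1MB at a time
                h.update(chunk)
        return h.hexdigest()

    def scan(directory: str, known_hashes: set[str]) -> list[Path]:
        # Flag files whose hash appears on the shared list -- no copies of the
        # offending images are ever stored by the scanning company.
        return [p for p in Path(directory).rglob("*")
                if p.is_file() and sha256_of(p) in known_hashes]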

More recently, in 2015, the Internet Watch Foundation (IWF) announced that it would share hashes of such vile imagery with the online industry in a bid to speed up its identification and removal, working with web giants Google, Facebook, Twitter, Microsoft and Yahoo to remove child sexual abuse material (CSAM) from the web.

It’s been worthy work, but it’s had one problem: you can only get a hash of an image after you’ve identified it. That means that a lot of human analysts have to analyze a lot of content – onerous work for reviewers, and also an approach that doesn’t scale well when it comes to keeping up with the scourge.

On Monday, Google announced that it’s releasing a free artificial intelligence (AI) tool to address that problem: technology that can identify, and report, online CSAM at scale, easing the need for human analysts to do all the work of catching new material that hasn’t yet been hashed.

Google Engineering Lead Nikola Todorovic and Product Manager Abhi Chaudhuri said in the post that the AI “significantly advances” Google’s existing technologies to “dramatically improve how service providers, NGOs, and other technology companies review violative content at scale.”

Google says that the use of deep neural networks for image processing will assist reviewers who’ve been sorting through images, by prioritizing the most likely CSAM content for review.

The classifier builds on the historical approach of detecting such content – matching hashes of known CSAM – by also targeting content that hasn’t yet been confirmed as CSAM.
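The workflow change is essentially one of queue ordering: score everything with the classifier and put the highest-risk items in front of human reviewers first. Google has published the Content Safety API rather than the classifier itself, so the fields and threshold in this sketch are illustrative:

    from dataclasses import dataclass

    @dataclass
    class Upload:
        item_id: str
        risk_score: float   # classifier output in [0, 1]; higher = more likely CSAM

    def review_queue(uploads: list[Upload], threshold: float = 0.5) -> list[Upload]:
        # Send likely matches to human reviewers, highest-risk first.
        flagged = [u for u in uploads if u.risk_score >= threshold]
        return sorted(flagged, key=lambda u: u.risk_score, reverse=True)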

The faster the identification, the faster children can be rescued, Google said:

Quick identification of new images means that children who are being sexually abused today are much more likely to be identified and protected from further abuse.

Google is making the tool available for free to NGOs and its industry partners via its Content Safety API: “a toolkit to increase the capacity to review content in a way that requires fewer people to be exposed to it.”

Susie Hargreaves, CEO of the IWF:

We, and in particular our expert analysts, are excited about the development of an artificial intelligence tool which could help our human experts review material to an even greater scale and keep up with offenders, by targeting imagery that hasn’t previously been marked as illegal material. By sharing this new technology, the identification of images could be speeded up, which in turn could make the internet a safer place for both survivors and users.

How much is it “speeded up”? Google says that it’s seen the system help a reviewer find and take action on 700% more CSAM content than could be reviewed and reported without the aid of AI.

Google said that those interested in using the Content Safety API should reach out to the company by using this API request form.

This won’t be enough to stop the spread of what Google called this “abhorrent” content, but the fight will go on, the company said:

Identifying and fighting the spread of CSAM is an ongoing challenge, and governments, law enforcement, NGOs and industry all have a critically important role in protecting children from this horrific crime.

While technology alone is not a panacea for this societal challenge, this work marks a big step forward in helping more organizations do this challenging work at scale. We will continue to invest in technology and organizations to help fight the perpetrators of CSAM and to keep our platforms and our users safe from this type of abhorrent content. We look forward to working alongside even more partners in the industry to help them do the same.

Fred Langford, deputy CEO of the IWF, told The Verge that the organization – one of the largest dedicated to stopping the spread of CSAM online – first plans to test Google’s new AI tool thoroughly.

As it is, there’s been a lot of hype about AI, he said, noting the “fantastical claims” made about such technologies.

While tools like Google’s are building towards fully automated systems that can identify previously unseen material without human interaction, such a prospect is “a bit like the Holy Grail in our arena,” Langford said.

The human moderators aren’t going away, in other words. At least, not yet. The IWF will keep running its tip lines and employing teams of humans to identify abuse imagery; will keep investigating sites to find where CSAM is shared; and will keep working with law enforcement to shut them down.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/RTSmxV-Nm_Y/

Can ‘sonar’ sniff out your Android’s lock code?

Researchers have demonstrated a novel – if slightly James Bond – technique for clandestinely discovering the unlock pattern used to secure an Android smartphone.

Dubbed ‘SonarSnoop’ by a combined team from Lancaster University in the UK and Linköping University in Sweden, the idea is reminiscent of the way bats locate objects in space by bouncing sound waves off them.

Sound frequencies between 18kHz and 20kHz, inaudible to most humans, are emitted from the smartphone’s speaker under the control of a malicious app that has been sneaked on to the target device.

These bounce off the user’s fingers as the pattern lock is entered, before being recorded through the microphone. With the application of machine learning algorithms specific to each device (whose speaker and microphone positions vary), an attacker can use this echo to infer finger position and movement.
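To give a feel for the probe signal, the sketch below generates one second of an 18–20kHz chirp and writes it to a WAV file. The signal design in the actual SonarSnoop work differs; this just makes the “inaudible sonar” idea concrete:

    import wave
    import numpy as np

    RATE = 48_000                  # samples per second; comfortably above Nyquist for 20kHz
    t = np.arange(RATE) / RATE     # one second of sample timestamps

    # Linear chirp from 18kHz to 20kHz: phase is the running integral of frequency.
    freq = 18_000 + 2_000 * t
    phase = 2 * np.pi * np.cumsum(freq) / RATE
    samples = (0.5 * np.sin(phase) * 32767).astype(np.int16)

    with wave.open("chirp.wav", "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)          # 16-bit PCM
        w.setframerate(RATE)
        w.writeframes(samples.tobytes())

    # Echoes of this chirp, picked up by the microphone, encode the finger's
    # position; machine learning then maps echo profiles to likely strokes.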

Technically, this is known as a side-channel attack because it exploits the characteristics of the system without the need to discover a specific weakness or vulnerability in its makeup (the Meltdown and Spectre CPU cache timing attacks from earlier this year are famous examples of this principle).

In the context of acoustic attacks, this method is considered to be active because sound frequencies must be generated to make it work, as opposed to a passive method where naturally-occurring sounds would be captured.

Does it work?

Under testing against a Samsung S4, SonarSnoop worked well enough that it was able to reduce the range of possible unlock patterns by 70% without targets being aware they were under observation.

This doesn’t sound terribly impressive given that there are several hundred thousand possible patterns on Android’s nine-dot grid, but it turns out that most people choose from a small subset of these, with the dozen most common being chosen by one in five.
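That “several hundred thousand” figure is easy to verify by brute force. Counting every path of four to nine dots on the 3x3 grid – where a stroke may only pass over a dot that has already been visited – yields 389,112 valid patterns:

    # Dots are numbered:  1 2 3
    #                     4 5 6
    #                     7 8 9
    SKIP = {}
    for a, b, mid in [(1, 3, 2), (4, 6, 5), (7, 9, 8),    # horizontal jumps
                      (1, 7, 4), (2, 8, 5), (3, 9, 6),    # vertical jumps
                      (1, 9, 5), (3, 7, 5)]:              # diagonal jumps
        SKIP[(a, b)] = SKIP[(b, a)] = mid                 # b reachable from a only if mid visited

    def extend(current, visited, length):
        count = 1 if length >= 4 else 0                   # every prefix of 4+ dots is a pattern
        for nxt in range(1, 10):
            mid = SKIP.get((current, nxt))
            if nxt in visited or (mid is not None and mid not in visited):
                continue
            count += extend(nxt, visited | {nxt}, length + 1)
        return count

    print(sum(extend(start, {start}, 1) for start in range(1, 10)))   # prints 389112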

Because devices impose limits on the number of incorrect tries, narrowing down the search range is a useful endeavour for any attacker.

Say the authors:

Our approach can be easily applied to other application scenarios and device types. Overall, our work highlights a new family of security threat.

The concept is just the latest example of researchers using sound to bypass security in unexpected ways. Researchers recently discovered it’s possible to “read” your screen just by listening to it. And earlier this year, an Israeli team revealed how speakers could be used to jump air-gapped defences, which followed similar ‘Fansmitter’ experiments to use fan noise as a covert channel.

The researchers even figured out how data might be sneaked out of a device inside a Faraday cage, which sounds impossible until you read up on the MAGNETO and ODINI proofs of concept.

The ‘SonarSnoop’ attack relies on a malicious app being installed on your phone, so it’s not as if someone can sniff your lock code in your local coffee shop, at least not without some pre-planning. There is no evidence that any of these techniques have ever been used in real attacks.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/0M8h7AE-65c/

Knock, knock: Digital key flaw unlocks door control systems

Attackers could unlock doors in office buildings, factories and other corporate buildings at will, thanks to a flaw in a popular door controller discovered by a Google security researcher.

David Tomaschik, who works as senior security engineer and tech lead at Google, uncovered the flaw in devices made by Software House, a Johnson Controls company. Forbes reports that he conducted his research on Google’s own door control system.

Tomaschik, who described his project at a talk in August at DEF CON’s IoT Village, explored two devices. The first was iStar Ultra, a Linux-based door controller that supports hardwired and wireless locks. The second was the IP-ACM Ethernet Door Module, a door controller that communicates with iStar.

When a user presents an RFID badge, the door controller sends the information to the iStar device, which checks to see if the user is authorised. It then returns an instruction to the door controller, telling it to either unlock the door or to deny access.

Software House’s website still promotes the original version of its IP-ACM as a “highly secure option to manage their security”. But judging from Tomaschik’s research, that’s a bit wide of the mark.

The devices were using encryption to protect their network communication – however, digging through their network traffic, Tomaschik found that Software House had apparently been rolling its own crypto rather than relying on tried and tested solutions.

The devices were sending hardcoded encryption keys over the network and were using a fixed initialization vector – an input to the cipher that is supposed to change with every message, so that identical plaintexts don’t produce identical ciphertexts. Moreover, the devices didn’t include any message signing, meaning that an imposter could easily send messages pretending to be from a legitimate device, and the recipient had no way to check.

This key unlocked the kingdom, so to speak. It enabled him to impersonate Software House devices on the network, doing anything that they could. This included the power to unlock doors, or stop others from unlocking them.

To engineer such an attack, all an intruder would need is access to the same IP network used by the Software House devices. If a company hasn’t carefully segmented and locked down its network and lets these devices communicate over a general office network, and if the attacker can gain access to that, then it presents a potential intrusion point.
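Why that combination is fatal is easy to demonstrate. In the sketch below, identical commands always encrypt to identical bytes, so an eavesdropper can recognise an “unlock” message and replay it without ever recovering the key. (The cipher choice is illustrative – the point is the fixed key and IV, not which algorithm Software House actually used.)

    # Requires: pip install cryptography
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    KEY = bytes(16)    # stand-in for the key hardcoded into every device
    IV = bytes(16)     # fixed initialization vector reused for every message

    def encrypt(msg: bytes) -> bytes:
        enc = Cipher(algorithms.AES(KEY), modes.CBC(IV)).encryptor()
        padded = msg + b"\x00" * (-len(msg) % 16)    # crude block padding
        return enc.update(padded) + enc.finalize()

    first = encrypt(b"UNLOCK DOOR 12")
    second = encrypt(b"UNLOCK DOOR 12")
    assert first == second    # deterministic: capture once, replay forever

    # With no signature or MAC on top, the receiving device also has no way to
    # tell a replayed or attacker-crafted ciphertext from a genuine command.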

We asked Software House for a statement about this, and a spokesperson said:

This issue was publicly reported at the end of December 2017. In early January 2018, we notified our customers of the issue and our plans to address it with a new version of the product.  We released that new version addressing the issue in early February 2018.

Tomaschik blogged about it last December, and a CVE bug relating to this issue was published on 30 December last year. He discovered the issue in July 2017, telling Software House about it in the same month. The company acknowledged the flaw and proposed a fix.

The fix involved a change in the encryption system to an algorithm based on TLS encryption that doesn’t consistently send the same keys across a public network. On its product page, the company says of the v2 Ethernet Door Module:

IP-ACM v2 now supports 802.1X and TLS 1.2 secure network protocols for added protection against the threat of cyber attacks.

However, Tomaschik has argued that this alone might not be enough, because Software House systems’ original IP-ACM units didn’t have enough memory to cope with installing new firmware.

Software House admitted the inability to update the firmware for existing devices in an emailed statement to Naked Security:

We did notify customers that v1 of the product did not have enough physical memory to upgrade to TLS.

Google reportedly took steps to protect its offices by segmenting its network, but there are likely to be many pre-version 2 units installed in the wild that cannot be updated to fix the encryption key problem, and many companies that do not fix this issue. Ethernet-based door unlockers don’t have a speedy refresh cycle, after all.

The situation also highlights the difference between conventional and IoT products, Tomaschik said when blogging about his DEF CON talk (blog also contains slides):

It’s not meant to be a vendor-shaming session. It’s meant to be a discussion of the difficulty of getting physical access control systems that have IP communications features right. It’s meant to show that the designs we use to build a secure system when you have a classic user interface don’t work the same way in the IoT world.

Until a company replaces its door controllers with the new hardware, anyone with the code to execute an attack could theoretically gain access to its facilities. Tomaschik hasn’t released his proof of concept code, but that doesn’t mean someone else couldn’t engineer it.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/ljAXoesSViM/