Cyber-Risks Hiding Inside Mobile App Stores

As the number of blacklisted apps on Google Play continues to drop, attackers find new ways to compromise smartphones.

Mobile devices – pervasive in the workplace, heavily used, and often unregulated – present a wealth of opportunity to cybercriminals aiming to access employees’ sensitive information.

The mobile threat landscape is always shifting, says Jordan Herman, researcher at RiskIQ, which recently published its “Mobile Threat Landscape Q1 2019” report. Researchers scanned more than 120 app stores and nearly 2 billion resources to detect mobile apps in the wild. In the past four quarters, RiskIQ has categorized 8 million mobile apps, of which 217,982 were blacklisted.

A rush of apps continues to flood mobile marketplaces. In the first quarter of 2019, RiskIQ saw 2.26 million new apps, nearly 6% more than the fourth quarter of 2018. Given the sheer size, scope, and complexity of the global app ecosystem, it’s tough for organizations to monitor their mobile presence and protect customers and employees from an evolving range of threats.

“The fact that it changes from quarter to quarter goes to show how many different ways there are to attack mobile,” Herman says. “Mobile is so ubiquitous and so ingrained in our day-to-day lives that threat actors can target users in hundreds of ways and keep trying until something works.” Threats range from fake antivirus apps to phishing attempts to Magecart incidents.

As Herman points out, there are several ways to develop and distribute malicious apps. Some may sign up the user for paid subscription services without the user’s knowledge, granting the developer monetary gain. Others may steal personal data that can be used for identity theft. Some may try to disguise themselves as popular apps, while yet others may appear benign (a flashlight app, for example) but request excessive permissions to steal data stored on the phone.

Following three consecutive quarters of decline, the number of blacklisted apps rose 15% between the fourth quarter of 2018 and the first quarter of 2019. Google Play had 1.4 million apps – more than three times that of the Apple App Store – and accounted for 58% of all blacklisted apps in 2018. The next highest blacklisted store was 9Apps, which made up about 19% of the blacklist total. Feral apps (those listed on the open Web) accounted for nearly 9% of blacklisted mobile apps.

But Google Play is falling as a hot spot for malicious applications: The number of blacklisted apps in the store fell for the second consecutive quarter, down nearly 64% since Q3 2018. “Our data indicates Google is getting better at policing the Play store,” Herman says. Rogue apps still appear given Android is the world’s most popular mobile platform and the Play store is more open to developers, but new app stores are emerging with far more malicious intent.

Inside Malicious Apps  
After Google Play, which had nearly 38,000 blacklisted apps between the fourth quarter of 2018 and the first quarter of 2019, 9Game was the second most blacklisted store. Most (96%) of the applications on 9Game.com and 30% of apps in “Vmallapps” were blacklisted, RiskIQ reports.

The company regularly removes blacklisted apps and does so quickly once they are identified, Herman adds.

9Game appears to be a “wholly malicious” store, with nearly every app requesting permission for the camera, location data, Wi-Fi, file system, Internet, and settings. With these permissions, any app downloaded from the store has full rein over the device that installed it. The app can install more malicious apps without the user’s knowledge and send anything it finds on the phone wherever it wants. AndroidAPKDescargar is another example of a malicious store; it targeted Spanish-speaking Android users and was the most blacklisted app store in 2017.

Whether an application is obviously malicious depends on the developer’s sophistication and user’s awareness. Some malicious apps require permissions far beyond their function – for example, a flashlight app that requires GPS or microphone access. This is seemingly obvious; however, an app with hidden code that changes settings or downloads malware may not be.
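
As a rough illustration of that heuristic, the sketch below flags permissions that go beyond what an app’s stated category plausibly needs. The categories, permission sets, and sample app are invented for the example, not drawn from RiskIQ’s data.

```python
# Toy heuristic for the "flashlight app that wants GPS" problem: compare the
# permissions an app requests against what its category plausibly needs.
# The categories, expected sets, and sample app below are invented.
EXPECTED_PERMISSIONS = {
    "flashlight": {"CAMERA"},                              # torch control uses the camera flash
    "navigation": {"ACCESS_FINE_LOCATION", "INTERNET"},
}

def excessive_permissions(category, requested):
    """Return the requested permissions that go beyond the app's stated purpose."""
    return requested - EXPECTED_PERMISSIONS.get(category, set())

suspicious = excessive_permissions(
    "flashlight",
    {"CAMERA", "ACCESS_FINE_LOCATION", "RECORD_AUDIO", "READ_CONTACTS"},
)
print(sorted(suspicious))  # ['ACCESS_FINE_LOCATION', 'READ_CONTACTS', 'RECORD_AUDIO']
```

A real app store or mobile threat defense product would weigh many more signals, but the mismatch between an app’s declared purpose and the access it requests is the part a user can spot on the install screen.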

When Good Apps Go Bad
Mobile apps created with good intentions can prove harmful if they’re not properly developed. Positive Technologies explores this further in its “Vulnerabilities and Threats in Mobile Applications 2019” report, also released this week. High-risk vulnerabilities were found in 38% of iOS apps and 43% of Android apps. Insecure data storage, detected in 76% of mobile apps overall, was the most common issue. Most (89%) vulnerabilities can be exploited remotely.

Leigh-Anne Galloway, Positive Technologies’ cybersecurity resilience lead, points to top security flaws: incorrect session termination, by which an attacker can access a user’s session after they log out; insecure interprocess communication, by which user data can be accessed; and the absence of certificate pinning, which allows a man-in-the-middle attack with fake certificates.
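
Certificate pinning simply means the app refuses to talk to a server whose certificate it does not already expect, instead of trusting any certificate a recognized authority has signed. The sketch below shows the idea in Python; the host name and fingerprint are placeholders, not values from the report.

```python
# Minimal certificate-pinning sketch: reject a TLS peer whose leaf certificate
# does not match a fingerprint baked into the client. Host and fingerprint
# below are hypothetical placeholders.
import hashlib
import socket
import ssl

PINNED_HOST = "api.example.com"
PINNED_SHA256 = "0f1e2d..."  # hex SHA-256 of the expected certificate (placeholder)

def connect_with_pinning(host=PINNED_HOST, port=443):
    context = ssl.create_default_context()      # normal CA and hostname checks still apply
    with socket.create_connection((host, port)) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            der = tls.getpeercert(binary_form=True)
            if hashlib.sha256(der).hexdigest() != PINNED_SHA256:
                # An unexpected certificate is exactly what a man-in-the-middle
                # proxy presenting its own (even CA-signed) certificate looks like.
                raise ssl.SSLError(f"certificate pin mismatch for {host}")
            # ...safe to send the request over `tls` from here...
```

Without a check along these lines, a client that accepts any CA-signed certificate will happily accept one minted by an interception proxy.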

Mobile device users’ data is at risk, she adds, as 71% of mobile apps leave information exposed to unauthorized access. “Most vulnerabilities appear at the design stage of the application, before writing the code, and they can be fixed only by making changes to the code,” Galloway explains, adding that unauthorized access to user data is the most common mobile app threat.

While the report often distinguishes between iOS and Android apps, it’s not worth fixating on the security of one platform over the other, she adds. Most flaws (74% in iOS apps and 57% in Android apps) are related to shortcomings in protection mechanisms that arise during the design phase.

“Developers do not provide security when planning functionality,” she explains. “So when developing an application, many security platform capabilities are simply not used or are used incorrectly.” This contributes to similar vulnerabilities appearing in an app across platforms.

Kelly Sheridan is the Staff Editor at Dark Reading, where she focuses on cybersecurity news and analysis. She is a business technology journalist who previously reported for InformationWeek, where she covered Microsoft, and Insurance Technology, where she covered financial … View Full Bio

Article source: https://www.darkreading.com/mobile/cyber-risks-hiding-inside-mobile-app-stores/d/d-id/1335031?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Used Nest cams were letting previous owners spy on you

A former Nest cam owner recently found that he could still see images from his old security camera, even after performing the factory reset you’re supposed to do before you offload your gizmos.

The real problem: he wasn’t seeing a feed of his own property. Instead, he was seeing the new owner’s place, via his Wink account. Wink is a brand of software and hardware that connects with, and controls, smart-home devices.

According to a report from Wirecutter, the original owner – a member of the Facebook Wink Users Group – said that he’d connected the Nest Cam to his Wink smart-home hub. Somehow, resetting it didn’t cut the cord: his Wink account kept receiving a feed of still images from his former camera.

After the Wirecutter report was published on Wednesday, Google – owner of Nest – sent a statement to the publication to say that it had fixed the issue and that users’ devices would be automatically updated:

We were recently made aware of an issue affecting some Nest cameras connected to third-party partner services via Works with Nest. We’ve since rolled out a fix for this issue that will update automatically, so if you own a Nest camera, there’s no need to take any action.

Re-testing of a Nest Indoor Cam and the Wink Hub confirmed that the issue has indeed been corrected.

We don’t know what the problem was, or how Google fixed it.

In fact, there’s a lot of “we dunno!” to go around in the Internet of Things (IoT) – things that are going to be plugged into your life, your living room, your bedroom or what have you. It might be wise to keep that in mind when you’re considering purchasing preowned versions of these kinds of cameras, locks and other devices outfitted with microphones.

At Christmas we told you about Mozilla’s IoT gift guide, which ranked popular IoT gifts in terms of their security. If you buy a second-hand connected device, always perform a factory reset on it and set up new credentials. If you’re buying new, make sure the device can receive security updates and replace any default passwords with strong, unique ones of your own, straight away.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/3i5HGHIkdNA/

Microsoft uses AI to push Windows 10 upgrade to users

On 12 November 2019, users running Windows 10 version 1803 (the April 2018 update) or earlier on non-enterprise licenses will be required to upgrade to version 1903 (May 2019 update) or find themselves no longer able to receive monthly security updates.

According to Microsoft, from that date, Windows 10 version 1803 Home, Windows 10 version 1803 Pro, Windows 10 version 1803 Pro for Workstations, and Windows 10 version 1803 IoT Core will reach end of servicing, which means:

Windows Update will automatically initiate a feature update to ensure device security and ecosystem health.

Support for Windows 10 Enterprise users still on 1803 will end in November 2020.

Needless to say, despite recent warnings this was coming, the update demand is likely to come as a shock to many 1803 users who only received what is still the most widely used version of Windows 10 as recently as May 2018.

Machine learning model will push updates to users

How will users of 1803 or earlier know they must update? In fact, it seems they won’t – Microsoft will automatically make that decision for them.

The intriguing detail of how this will be done emerged earlier this week in a tweet from the @WindowsUpdate account.

In short, from this week, 1803’s days are numbered, but it is a machine that will make the decision about how numbered.

Factors influencing the upgrade decision will include whether the machine in question suffers from any of a list of known version 1903 compatibility issues, which will need to be resolved first.

The caution is well-founded and painfully ironic. Last October, Microsoft was forced to pause upgrades to version 1809 when users reported losing access to files – precisely why some users decided to stick with 1803 in the first place. That was despite Microsoft boasting at the time…

One of our most recent improvements is to use a machine learning (ML) model to select the devices that are offered updates first.

And versions 1803 and later weren’t immune from problems either; Microsoft had to put updates on hold for some users with USB and SD card storage, for example.

What’s so great about 1903 anyway?

Eye-catching features which emerged from these struggles include the ability to pause updates for up to 35 days (before trying and perhaps pausing again) and the ability for Windows 10 to automatically roll back an update should it encounter a problem.

If you’re lucky enough to be running a Pro or Enterprise license, you’ll also get the intriguing Windows Sandbox, a “lightweight” hypervisor-based virtual machine which can be launched within Windows to isolate apps and suspect websites, stopping them from doing bad things to the user’s PC.

All the talk of ML updating can’t avoid the impression that the Windows 10 age is turning out to be more complex than Microsoft expected.

Once upon a time, Windows users upgraded Windows infrequently and then, security updates aside, stayed on that version until the next big upgrade years later.

These days, Windows changes itself twice a year, with the odd feature update between times, and life has become more fraught. Are there Windows engineers who long for the old days – just like the users who reckon they’re happy with Windows 10 version 1803?

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/a9zjyMIsdiM/

Customers of 3 MSPs Hit in Ransomware Attacks

Early information suggests threat actors gained access to remote monitoring and management tools from Webroot and Kaseya to distribute malware.

UPDATE: 06/21/2019 This story has been updated to reflect the fact that customers of at least three MSPs were impacted in the attacks, not just one MSP as previously reported.

Computers belonging to customers of at least three managed services providers have been hit with ransomware after attackers somehow gained access to tools used by the MSPs to remotely manage and monitor client systems.

Details of the attacks are still emerging, and neither the full scope of the incidents nor the names of the MSPs is currently available. But early information suggests that attackers likely used two remote management tools at the MSPs — one from Webroot, the other from Kaseya — to distribute the ransomware. Both vendors have said the attackers appear to have used stolen credentials to access their tools at the MSP locations.

Comments on an MSP forum on Reddit, including from security researchers claiming close knowledge of the incidents, suggest one MSP is a large company and that many of its clients have been impacted.

A researcher from Huntress Labs, a firm that provides security services to MSPs, claimed on Reddit to have confirmation that the attackers used a remote management console from Webroot to execute a PowerShell-based payload that in turn downloaded the ransomware on client systems. Webroot describes the console as allowing administrators to view and manage devices protected by the company’s AV software.

According to the Huntress Labs researcher, the payload was likely ‘Sodinokibi’, a ransomware tool that encrypts data on infected systems and deletes shadow copy backups as well.

Kyle Hanslovan, CEO and co-founder of Huntress Labs, says a customer of one MSP that was attacked contacted his company Thursday and provided its Webroot management console logs for analysis. “We don’t know how the attacker gained access into the Webroot console,” Hanslovan says.

But based on the timestamps, the Webroot console was used to download payloads onto all managed systems very quickly and possibly in an automated fashion. “This affected customer had 67 computers targeted by malicious PowerShell delivered by Webroot,” Hanslovan says. “We’re not sure how many computers were successfully encrypted by the ransomware.”

What’s also not clear is how the attackers are managing to gain access to the Webroot console so efficiently, he says. “We’ve yet to see anything that would suggest the issue is a global Webroot vulnerability.” However, three MSP incidents in less than 48 hours involved compromised Webroot management console credentials, he notes.

One Reddit poster using the handle “Jimmybgood22” claimed Thursday afternoon that almost all of its systems were down. “One of our clients getting hit with ransomware is a nightmare, but all of our clients getting hit at the same time is on another level completely,” Jimmybgood22 wrote.

Huntress Labs posted a copy of an email that Webroot purportedly sent out to customers following the incident, informing them about two-factor authentication (2FA) now being enforced on the remote management portal. The email noted that threat actors who might have been “thwarted with more consistent cyber hygiene” had impacted a small number of Webroot customers. The company immediately began working with the customers to remediate any impact.

Effective early morning June 20, Webroot also initiated an automated console logoff and implemented mandatory 2FA in the Webroot Management Console, the security vendor said. Chad Bacher, senior vice president of products at Webroot, says the company’s product has not been compromised. “We all know that two-factor authentication (2FA) is a cyber hygiene best practice, and we’ve encouraged customers to use the Webroot Management Console’s built-in 2FA for some time,” Bacher says.

Meanwhile, another researcher with UBX Cloud, a firm that provides triage and consulting services to MSPs, claimed on Reddit to have knowledge that the attacker had leveraged a remote monitoring and management tool from Kaseya to deliver the ransomware.

“Kaseya was the only common touch point between the MSPs’ clients and it is obvious that the delivery method leveraged Kaseya’s automation by dropping a batch file on the target machine and executing via agent procedure or PowerShell,” the researcher claimed. As with the Webroot console, the MSP did not appear to have implemented 2FA for accessing the Kaseya console.

In emailed comments, John Durant, CTO at Kaseya, confirmed the incident. “We are aware of limited instances where customers were targeted by threat actors who leveraged compromised credentials to gain unauthorized access to privileged resources,” Durant says. “All available evidence at our disposal points to the use of compromised credentials.”

In February, attackers pulled off an almost identical attack against another US-based MSP. In that incident, between 1,500 and 2,000 computers belonging to the MSP’s customers were simultaneously encrypted with GandCrab ransomware. Then, as now, the attackers are believed to have used Kaseya’s remote monitoring and management tool to distribute the malware.

MSPs and IT administrators continue to be targets for attackers looking to gain credentials for unauthorized access, Durant says. “We continue to urge customers to employ best practices around securing their credentials, regularly rotating passwords, and strengthening their security hygiene,” he says.

Jai Vijayan is a seasoned technology reporter with over 20 years of experience in IT trade journalism. He was most recently a Senior Editor at Computerworld, where he covered information security and data privacy issues for the publication. Over the course of his 20-year … View Full Bio

Article source: https://www.darkreading.com/attacks-breaches/customers-of-3-msps-hit-in-ransomware-attacks/d/d-id/1335025?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Apply Military Strategy to Cybersecurity at Black Hat Trainings Virginia

This special October event in Alexandria, Virginia offers unique, practical courses in everything from data breach response to military strategy for cybersecurity.

Discover new opportunities to sharpen your skills as a cybersecurity expert at Black Hat Trainings in Virginia this October, where you can take part in practical, hands-on courses taught by some of the best in the business.

For example, Military Strategy and Tactics for Cyber Security offers a unique chance to learn how to apply military-grade cybersecurity strategy to your own work. Designed and taught by career Army officers with a combined 50+ years of experience, this 2-Day Training is for security professionals who would like to apply military cyber operations concepts to their own cybersecurity efforts.

You’ll learn how enemies can use military strategies to attack your network, and how you can use similar strategies to defend it. Ideally, this course will help defenders generate the information, disinformation, and intelligence strategies that will put you and your assets in a position of tactical strength, not weakness. Don’t miss it!

You may also want to check out Active Directory Attacks for Red and Blue Teams – Advanced Edition, an advanced 2-Day Training course in which you’ll master the art of attacking modern Active Directory environments using built-in tools like PowerShell and other trusted OS resources.

The training is based on real-world penetration tests and Red Team engagements for highly secured environments. It will help you hone your penetration skills with a fun mix of demos, lectures, and hands-on exercises.

If you’re more interested in masterfully responding to (and preventing) data breaches, check out the 2-Day Data Breaches – Detection, Investigation and Response Training. In this practical class you’ll dig into different types of breach scenarios, including cloud account breaches (using Office365 as an example), internal compromises, lost/stolen devices, and ransomware.

You’ll also learn strategies for detection and evidence preservation, and techniques for quickly scoping/containing a breach. Each module includes a hands-on lab where you actually analyze and scope the breach yourself, making this a prime hands-on learning opportunity.

These cutting-edge Black Hat Trainings and many more will be taking place October 17th and 18th at the Hilton Alexandria Mark Center in Alexandria, Virginia. From malware development to incident response, there’s a course for hackers and security pros of all experience levels, so register today!

Article source: https://www.darkreading.com/black-hat/apply-military-strategy-to-cybersecurity-at-black-hat-trainings-virginia/d/d-id/1335020?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Patrolling the New Cybersecurity Perimeter

Remote work and other developments demand a shift to managing people rather than devices.

The consumerization of IT has eroded the traditional line between “work” and “play.” Propelled by the bring-your-own-device (BYOD) era, our personal devices are commonly used for work.

This is especially true as more companies embrace the flexibility of working remotely, and as new devices and networks are used for work purposes. Personal smartphones are loaded with business email accounts, and personal computers and laptops used for remote work have business software, email, and documentation that may contain confidential information.

To top it all off, we aren’t just using work devices in the office. We’re using them on airplanes, at client offices, in coffee shops, and at home. All this means that the idea of protecting a perimeter is outdated. Instead, as “the workplace” becomes impossible to define as a physical location, technology professionals and IT teams must shift from managing devices to managing people in order to stay one step ahead of such a rapidly evolving reality.

Protect the Crown Jewels
One easy way to begin implementing this new risk management strategy is to follow the Pareto principle (also known as the 80/20 rule), where companies treat 80% of the people one way while treating the riskier 20% of users with a higher level of security. For that riskier 20%, access should be allowed only via corporate devices, with multifactor authentication mandatory, behavioral analytics applied, and full auditing carried out regularly.

For example, the head of HR will be able to access data on all employees within an organization — and accessing this information from an untrusted, insecure device presents a huge risk. In this scenario, an organization’s IT team will want to ensure that the device is controlled and that it hasn’t been compromised.

Essentially, if a person within an organization has the keys to the kingdom, it’s crucial to make sure that his or her device isn’t dirty, the network isn’t compromised, and activity is completely monitored. There then needs to be a division between most of the staff and the VIPs, and between most data and the “crown jewels” (in other words, the most important and most sensitive parts of a business that would be most appealing to an attacker).

Zero Trust: Suspect Everyone
At the same time, by doing away with a perimeter-based security model, where those inside the perimeter are trusted, organizations now need to implement a new model that better matches the vulnerabilities inherent to today’s mobile workforce. We must suspect everyone — we can’t afford not to.

A Zero Trust policy assumes untrusted actors exist both inside and outside the network and, as a result, every user access request must be authorized. When implemented correctly, Zero Trust networks can improve security while also increasing productivity. The key to true Zero Trust environments is adaptive controls that are contextually aware. Without context, we always need to put the strongest possible security in place; with context, we can adapt the level of security based on risk.

For example, there should only be a prompt for additional credentials when a user comes from an unknown machine, an unknown location, or when performing a sensitive function. Businesses need to understand their users’ behavior and, if things are normal, allow for minimal authentication; if things have changed or the risk is greater, add additional checks.
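
As an illustration of what such contextual checks might look like, the sketch below prompts for a second factor only when the request deviates from a user’s known devices or locations, or touches a sensitive function. The device registry, locations, and action names are invented for the example; they are not from the article.

```python
# Illustrative step-up authentication check: require extra credentials only
# when the context looks unusual or the action is sensitive. All data here
# is hypothetical example data.
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user: str
    device_id: str
    location: str
    action: str

KNOWN_DEVICES = {"alice": {"laptop-7f3a"}}     # devices previously enrolled by the user
KNOWN_LOCATIONS = {"alice": {"London"}}        # locations seen in past, normal activity
SENSITIVE_ACTIONS = {"export_hr_records", "change_payroll"}

def requires_step_up(req: AccessRequest) -> bool:
    unknown_device = req.device_id not in KNOWN_DEVICES.get(req.user, set())
    unknown_location = req.location not in KNOWN_LOCATIONS.get(req.user, set())
    sensitive = req.action in SENSITIVE_ACTIONS
    return unknown_device or unknown_location or sensitive

# Normal behavior from a known device and location: minimal authentication.
print(requires_step_up(AccessRequest("alice", "laptop-7f3a", "London", "read_docs")))  # False
# New location, or a sensitive function: add an additional check.
print(requires_step_up(AccessRequest("alice", "laptop-7f3a", "Madrid", "read_docs")))  # True
```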

Still, Zero Trust is a work in progress. Until it’s mainstream, password management products – from complete privileged management systems to password vaults – will help to reduce the complexity of users remembering multiple passwords while encouraging stronger password use.

What Comes Next: Cyberhygiene
We know the modern workplace is no longer in one fixed location. At the same time, the nature of cyberattacks is shifting because of how efficiently cybercriminals get paid. From a hacker’s perspective, fewer steps equals faster profitability — and all too often, organizations with remote work policies are ripe for attack.

There are more devices to compromise, which means more machines that will likely be unpatched and not secure. Identities may be implemented in a weak fashion and allowed too much access. Similarly, the rise of collaboration tools such as Slack presents new opportunities to infiltrate networks and take advantage of liabilities. These types of accounts often do not get terminated — so when that user eventually leaves a company, their account remains active and open to infiltration or exploitation by cybercriminals. The more software there is, and the more people experiment with new ways of working, the greater the attack surface will be.

For these reasons, implementing basic cyber hygiene within your organization is critical as the workplace continues to evolve and become increasingly distributed. To meet the basic tenets of good cyber hygiene, organizations should always:

  • Understand the IT environment: Produce a comprehensive understanding of IT environments to uncover hidden data risks and help explain key elements to business leaders.
  • Educate business and IT leaders: Tell them about the risks to their data and implications of a breach — including showing data risk in financial terms.
  • Implement threat monitoring and detection: Deploy the right IT security management tools to detect and respond to potential threats.
  • Use data to show the value of IT efforts: Use data to understand an IT environment, get useful insights, solve problems faster, and demonstrate value.
  • Establish a solid security process: Ensure your organization is completing routine security updates such as managing and patching machines, ensuring a backup is in place, etc.

To stay ahead of this rapidly changing workplace paradigm, technology and security professionals alike should combine good cyberhygiene best practices with additional strategies like Zero Trust and the 80/20 rule. Ultimately, employees need to be the new “endpoints,” with the risk they pose to the organization assessed individually rather than having them deemed safe or unsafe simply because they are inside or outside a perimeter.

Tim Brown is the VP of Security for SolarWinds, with responsibility spanning internal IT security, product security, and security strategy. As a former Dell Fellow, CTO, chief product officer, chief architect, distinguished engineer, and director of security strategy, Tim … View Full Bio

Article source: https://www.darkreading.com/perimeter/patrolling-the-new-cybersecurity-perimeter-/a/d-id/1334985?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Startup Raises $13.7M to Stop Breaches with Behavioral Analytics

TrueFort plans to use the funding to expand sales, marketing, R&D, customer support, and go-to-market initiatives.

TrueFort, an application behavior security analytics company, has raised $13.7 million in Series A funding, which will be used to expand its business of protecting organizations from breaches.

Former bank technology executives developed TrueFort’s behavioral analytics technology, which it calls “a last line of defense” against threats to core business applications. Its system, based on transaction processing architectures in financial services, monitors baseline application behavior and end-to-end interdependencies to spot malicious activity as it happens.

The idea is to provide greater visibility in core enterprise applications, which are often a blind spot from a security and regulatory reporting standpoint — and a pain point for CIOs and CISOs. TrueFort aims to up breach defenses by protecting the apps that execute business processes.

“Until now, organizations had no way to map, understand and monitor the dependencies and behavior of their business applications, which explains why so many security breaches go undetected for so long,” said Nazario Parsacala, TrueFort’s CTO, in a statement. The goal is to automatically block activity outside the predefined operating parameters for each application.
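
TrueFort has not published its detection logic, so the following is only a generic sketch of the underlying idea: declare (or learn) the connections an application is expected to make, then flag anything outside that envelope. The application names and endpoints are invented.

```python
# Generic behavioral-baseline check, not TrueFort's actual product logic:
# anything an application does outside its approved operating parameters
# gets flagged (or blocked). All names below are invented examples.
BASELINE = {
    "payments-service": {("db01.internal", 5432), ("ledger.internal", 8443)},
}

def within_baseline(app, dest_host, dest_port):
    return (dest_host, dest_port) in BASELINE.get(app, set())

event = ("payments-service", "198.51.100.7", 443)   # unexpected outbound connection
if not within_baseline(*event):
    print(f"ALERT: {event[0]} contacted {event[1]}:{event[2]} outside its baseline")
```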

This funding round was led by Evolution Equity Partners with participation from Lytical Ventures and Emerald Development Managers. As part of the round, Evolution partner Karthik Subramanian will join TrueFort’s board of directors, the company says. It plans to use its funding to add to sales, marketing, R&D, customer support, and go-to-market projects.

Read more details here.

Dark Reading’s Quick Hits delivers a brief synopsis and summary of the significance of breaking news events. For more information from the original source of the news item, please follow the link provided in this article. View Full Bio

Article source: https://www.darkreading.com/perimeter/startup-raises-$137m-to-stop-breaches-with-behavioral-analytics/d/d-id/1335028?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Government is exposing identities of child abuse victims

Investigators at the FBI and the US Department of Homeland Security (DHS) are exposing the identities of victims of child abuse, Forbes has discovered.

The discovery was made by looking closely at court filings, in which the government discloses teenagers’ initials and their Facebook identifying numbers – a unique code linked to each Facebook account. You can find yours with an online lookup service: plug in a user name and it returns the numeric Facebook ID, and appending that number to the URL https://www.facebook.com/ takes you straight to the profile – and, of course, the name on it.

I did it just now to my own profile. Well, that was easy. So much for protecting the identity of child abuse victims, or, for that matter, of any other victim whose identity is supposed to be redacted but whose Facebook profile ID – in other words, their identity – is there for all to see.
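
To make concrete just how thin this “redaction” is, the entire transformation from a court-filed numeric ID to a clickable profile link is a string concatenation. The ID below is a made-up placeholder.

```python
# A numeric Facebook ID maps to a profile with nothing more than string
# concatenation. The ID used here is a made-up placeholder.
FACEBOOK_BASE = "https://www.facebook.com/"

def profile_url(numeric_id):
    return FACEBOOK_BASE + numeric_id

print(profile_url("100001234567890"))
# -> https://www.facebook.com/100001234567890 (resolves to the account's profile page)
```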

Forbes likewise found that it could find accounts belonging to minors, along with their names and other personal information, by simply plugging in the Facebook profile IDs listed in court documents.

The magazine did just that and found two identities that were supposed to be redacted from court cases unsealed this month. Thomas Brewster reports that it was as simple as copying and pasting Facebook IDs from the court filings into Facebook’s web address. The publication withheld their names and other identifying information in order to protect the victims’ identities.

In one such case, a recently unsealed case in Nebraska, the FBI search warrant gave away the initials and Facebook profile number of a then 14-year-old girl, plus the detailed, lewd conversations she had with a 28-year-old man.

Facebook had intercepted the chats, then sent them in as a tip to the National Center for Missing & Exploited Children (NCMEC). From there, the chats went to the police, who requested and obtained details about the two accounts from Facebook.

Forbes reports that those same chats were disclosed in an unsealed complaint against the perpetrator, who’s since pleaded guilty and who was sentenced to 36 months in prison last year. That same complaint described how the victim sent 27 explicit images of herself to the guy who was grooming her.

Who knows how many federal law enforcement agencies and investigators are guilty of this casual, clumsy use of profile IDs in court documents?

The FBI for one, and the DHS for two, at the very least. Forbes came across another search warrant application from a DHS investigator in Wisconsin. The warrant disclosed three Facebook IDs of girls whom the investigators believed to be targets of grooming in the Philippines.

The warrant also included details of the 63-year-old man who used Facebook to find the girls and to groom them, as well as transcripts of private Facebook chats between him and what were presumably underage girls.

Anybody home?

Forbes contacted the Department of Justice (DOJ) regarding both cases. As of Wednesday, when it published its report, the publication hadn’t received a reply.

Nor had the DOJ sealed the court documents that revealed the Facebook profile IDs that are a snap to connect to victims’ identities. Those court documents were still displaying sexual abuse victims’ identities, along with the details of their victimization, serving it all up on a silver platter to anybody who has the interest to look for it… and the skills to navigate the Public Access to Court Electronic Records (PACER) system, which serves as an entryway to researching court documents.

You can see why the DOJ, FBI and DHS would be mired in inertia when it comes to responding to something like this. Who reads court documents, after all? Searching the finicky PACER portal isn’t easy, or consistent, and the results it renders up don’t come anywhere close to the easy-to-understand narrative you’d get from a news article.

Be that as it may, Forbes isn’t the first newshound that’s surfaced these blunders after sticking its arm into the PACER swamp of legal records. Nor is it the first to walk head-on into a wall of silence when trying to get the DOJ to stop inadvertently exposing people’s identities.

Brewster talked to Seamus Hughes, a part-time consultant and expert on searching PACER. Hughes said that he’s found similar cases. He’s a court records geek who teaches news reporters how to mine breaking news out of PACER. His expertise at doing so means that Hughes is constantly coming up with breaking news that’s escaped most people’s attention. Just one of the many news nuggets he’s dragged out: what was supposed to be the US’s secret charges filed against Julian Assange that prosecutors mistakenly revealed.

Hughes said that the inadvertent exposure of victims’ real identities happens “with too much regularity.” He thinks the DOJ should be taking their responsibility to redact victims’ personal information more seriously.

It’s not like he hasn’t tried to get somebody’s attention: Hughes told Forbes that over the past six months, he’s contacted attorneys’ offices across three different districts to alert them to the problem.

Nobody’s responded.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/g54k4Mwl9OM/

Florida city will pay over $600,000 to ransomware attackers

The small city of Riviera Beach, Florida, has agreed to pay attackers over $600,000 three weeks after its systems were crippled by ransomware.

The city council has authorised its insurance company to pay 65 bitcoins to the cybercriminals who infected their system on 29 May 2019.

The Palm Beach Post reported that an employee in the City Police Department infected machines across its network by opening an email.

The attack on the city, a suburb of West Palm Beach with a population of 35,000, took all its operations offline. Email went down and officials had to resort to hand-printed cheques to pay employees. 911 dispatchers were also unable to enter calls into computer systems, said reports.

On 5 June 2019 the City posted a terse online notice reporting a ‘data security event’. No further updates appeared on its website or Twitter account.

Councillors had already authorized $941,000 to pay for 310 new desktop computers and 90 laptops after the attack, expediting an already overdue refresh of old equipment.

In paying the ransom, the council is relying on advice from external security consultants, said spokesperson Rose Anne Brown, adding that there was no guarantee the files would be restored.

Waiting to make the payment has cost Riviera Beach even more money. On 30 May 2019, the day after the infection, the ransom equated to $540,765 at Bitcoin’s closing price (via CoinMarketCap). As of yesterday, 20 June 2019, it amounted to $619,265. Bitcoin’s volatility can make an already tense situation even more problematic for victims.
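
Because the ransom is a fixed 65 bitcoins, its dollar cost simply tracks Bitcoin’s closing price. The per-coin figures below are back-calculated from the totals quoted above rather than looked up independently.

```python
# The ransom is fixed at 65 BTC, so its dollar cost moves with Bitcoin's price.
# Per-coin prices are derived from the article's own totals.
RANSOM_BTC = 65

totals_usd = {
    "2019-05-30": 540_765,   # day after the infection
    "2019-06-20": 619_265,   # "as of yesterday" in the article
}

for date, usd in totals_usd.items():
    print(f"{date}: ${usd:,} total, or about ${usd / RANSOM_BTC:,.0f} per BTC")

print(f"Cost of the three-week delay: ${totals_usd['2019-06-20'] - totals_usd['2019-05-30']:,}")
# -> Cost of the three-week delay: $78,500
```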

Coveware, which advises companies on ransomware recovery, said in its Q1 2019 report that 96% of companies paying a ransom received a working decryption tool, but the recovery success rate varied according to the type of ransomware used. GandCrab’s attackers issued the decryption tool reliably (their system was automated), while Dharma was much riskier. On average, decryption tools recovered 93% of the encrypted data, again varying by ransomware type.
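
Treating those two figures as independent, which is an assumption of this sketch rather than a claim in Coveware’s report, a victim who pays can expect to recover roughly nine-tenths of their data on average.

```python
# Rough expected-value reading of the Coveware Q1 2019 figures quoted above,
# assuming independence between the two rates (our assumption, not the report's).
p_working_decryptor = 0.96    # payers who received a working decryption tool
avg_share_recovered = 0.93    # average share of encrypted data the tools recovered

print(f"Expected share of data recovered after paying: {p_working_decryptor * avg_share_recovered:.0%}")
# -> 89%
```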

Email phishing – the technique that nobbled Riviera Beach – accounted for 30.4% of ransomware infections during Q1, Coveware added.

This attack follows a ransomware attack on the City of Baltimore, which refused to pay its attackers 13 bitcoins (worth about US $100,000 at the time). The attack will ultimately cost over $18m, including lost or deferred revenue due to slowed payments.

Successful attacks on local governments in the US demonstrate a need for better cybersecurity. In 2016, the International City/County Management Association (ICMA) surveyed 2,423 local US governments and got 411 responses. It found that only 34% had a formal, written breach recovery plan and only 48% had a formal, written cybersecurity plan. The biggest barrier to effective cybersecurity was a lack of funds.

This isn’t the only cyberattack this month on a Florida community. The state’s Lake City suffered a malware infection on 10 June 2019, which used three attack methods in concert. The attack, which was not followed by a ransom demand, took down city email systems, landline phones and credit card services, according to a statement from the City. Two days later, it was recovering from the attack and had emails back online.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/BtouoJLPeio/

Good old British ‘fair play’ is the answer to vexed Huawei question, claims security minister

Solving the Huawei 5G security problem is a question of convincing the Chinese to embrace British “fair play”, security minister Ben Wallace said yesterday without the slightest hint of irony.

During a Q&A at Chatham House’s Cyber 2019 conference, Wallace said the issue of allowing companies from non-democratic countries access to critical national infrastructure was about getting them to abide by, er, Western norms.

The former Scots Guards officer explained: “I take the view: we’re British, we believe in fair play. If you want access to our networks, infrastructure, economy, you should work within the norms of international law, you should play fair and not take advantage of that market.”

Someone speaking later in the conference, who cannot be named thanks to the famous Chatham House Rule*, commented: “If we don’t trust them in the core, why should we trust them in the edge?”

Nonetheless, he later expressed regret at Chinese dominance of the 5G technology world, saying: “The big question for us in the West is actually, how did we get so dependent on one or another? Who is going to be driving 6G? How are we, in our society, going to shape the next technology to ensure our principles are embedded in that tech? That’s a question we should ask ourselves: were we asleep at the wheel for the development of 5G in the first place?”

The security minister also doubled down on GCHQ’s controversial and deeply resented proposal to backdoor all encrypted communications by adding themselves as a silent third participant to chats and calls – thus rendering encryption all but useless.

“Under the British government,” he said, “there is an ambition that there is no no-go area for properly warranted access when required. We would like, obviously, where necessary, to have access to the content of communications if that is properly warranted, oversighted, approved by Parliament through the legislation, of course we would. We’re not going to give up on that ambition… there are methods we can use but it just changes our focus. As long as we do it within the law, well warranted and oversighted.”

This contrasts sharply with previous statements by GCHQ offshoot, the National Cyber Security Centre (NCSC), that the government needs a measure of public support before it starts harming vital online protections. At present, Britain’s notoriously lax surveillance laws allow police to hoover up the contents of your online chats and your web browsing history, including precise URLs. This is subject to an ongoing legal challenge led by the Liberty human rights pressure group.

As the minister of state for security and economic crime, Wallace’s wide-ranging brief covers all national security matters, from terrorism to surveillance powers to seeing hackers locked up.

In his keynote address to the conference, Wallace also declared he wants the British public “protected online as well as they are offline” as he gave the audience of high-level government and private sector executives a whistle-stop tour of current UK.gov policy and spending on cybersecurity. One part of that is a push to get better security baked into Internet of Things devices, part of which is the NCSC-sponsored Secure by Design quasi-standard.

The government has also begun prodding police forces to start setting up cyber crime units, with Wallace confirming that “each of the 43 forces [in England and Wales] now have a dedicated cyber crime unit in place”. ®

* The Chatham House Rule states that what is said at a particular meeting or event may be repeated but not attributed.

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/06/21/ben_wallace_speech_chatham_house/