STE WILLIAMS

Facebook Got Tagged, but not Hard Enough

Ensuring that our valuable biometric information is protected is worth more than a $550 million settlement.

On January 29, Facebook agreed to a $550 million settlement of a class-action suit based on violations of Illinois’ Biometric Information Privacy Act (BIPA). The settlement will compensate Facebook users in Illinois for Facebook’s use of facial recognition technology, known as “tagging,” without their consent and in violation of BIPA. While many people were surprised by the amount of the settlement, more were shocked that Facebook agreed to pay it.

The technology at issue was the nearly automatic tagging of friends and acquaintances in photos that users uploaded to Facebook. During the uploading process, Facebook’s systems scanned the pictures, found matches using facial recognition technology, and suggested that users “tag” their Facebook friends who resembled those in the photographs. Given the number of photos that have been uploaded to Facebook, many speculate Facebook could have faced about $35 billion in fines under BIPA. Rather than balking at the $550 million settlement, perhaps we should ask why the amount wasn’t larger.

Over the past few years, there has been a substantial increase in the number of laws that protect personal information, including biometrics, throughout the world. However, there are relatively few specific biometric privacy laws in the United States. Biometrics is the measurement and analysis of unique physical or behavioral characteristics such as fingerprints, DNA, or voice patterns, particularly as a means of validating an individual’s identity. Accordingly, biometric privacy is the right of an individual to keep their biometric information private and to control how that information may be collected and used by third parties. This freedom arises out of a person’s general right of privacy.

The right of privacy is one of the most hotly debated topics arising from the Bill of Rights. Often, the debates over the right of privacy involve people’s religious beliefs, social mores, and opinions about what people can do in their own homes. But, in this instance, the right of privacy confronts something even more powerful and more difficult to overcome — the desire of businesses to make more money by using the resources available to them.

In this case, the resource is information: data about individuals and what makes each of them unique, including their DNA, facial features, fingerprints, and voices. Consequently, this right-to-privacy debate is over whether people get to control how businesses collect and use their personal information.

Facebook was using facial recognition to add a feature that kept people interested, kept them on its site longer, and gave its advertisers more opportunities to market products. And it worked. For instance, my friends and I trawl Facebook the day after an event to see what pictures of ourselves have been posted. In doing so, we also view advertisements in our feeds, and many of us have purchased items we’ve seen.

So, what’s so wrong with that? In reality, Facebook’s practice probably isn’t that offensive to many people. We expect our pictures to be posted and for other people to recognize us. We also accept that most companies are constantly trying to entice us to buy their products.

But what if you had to give your fingerprints to enter a building you were visiting, and the building manager sold those fingerprints to a third party on the Dark Web? Our fingerprints and other biometric information are specific to us; therefore, their unauthorized use can have disastrous effects. You don’t have to watch crime shows to imagine how these fingerprints could be used by nefarious actors.

It’s fair to say most people would not be happy about the sale of their fingerprints, but would that sale be illegal? It depends. Biometric privacy laws are meant to protect individuals from having their fingerprints and other biometric information stolen or used in an unauthorized manner; where such a law applies, it provides a definitive answer regarding the legality of such sales.

I believe I should be able to control all uses of my personal information. I don’t want people or businesses using my name, telephone number, or email address without my consent, but I’m even more protective of my biometric information. It is unacceptable to think that the DNA I provide to a genetic testing agency to learn about my ancestors could be used for other purposes. I just want to know if my family truly came from Ireland. I don’t want a pharmaceutical company reaching out because it got my results and wants to sell me a drug for a disease that runs in my family.

To avoid these types of liabilities, businesses that wish to utilize biometrics should first determine whether BIPA or another biometric privacy law applies to their situation. Compliance under each of these laws is slightly different. If BIPA applies, then the business is required to obtain the type of informed consent referenced above. To that end, businesses must:

  • Provide written notice to affected individuals of the collection and use of the biometrics, including the specific reason for collection and use of the information and how long it will use and retain the biometric information (before collecting the biometrics).

  • Obtain each individual’s written consent to such collection and use of the biometrics (again, before collecting the biometrics).

  • Keep the biometric information confidential and only disclose the information if the individual consents, it is required for the completion of a financial transaction requested by the individual, or disclosure is required by law, warrant, or subpoena.

  • Institute appropriate administrative, technical, and physical safeguards for the protection of biometric information in its care.

  • Implement retention and destruction policies documenting that the biometrics will only be retained for so long as they are needed or within three years of the individual’s last interaction with the business, whichever occurs first, and ensuring that the information is appropriately disposed of at the end of such period.

Businesses should be guided by the basic principle of “only collect that which you need and only keep it for so long as it is needed,” and they cannot sell, lease, or otherwise profit from another person’s biometric information.
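BIPA’s retention-and-destruction requirement above lends itself to a small sketch. The helper below is illustrative only; the function name, record shape, and the 3-years-as-days approximation are assumptions, not drawn from the statute or any real compliance tool:

```python
from datetime import datetime, timedelta

# BIPA's rule: destroy biometric data once the purpose of collection is
# satisfied, or within 3 years of the individual's last interaction with
# the business, whichever occurs first. (3 years approximated as 3 * 365 days.)
THREE_YEARS = timedelta(days=3 * 365)

def destruction_deadline(purpose_satisfied: datetime,
                         last_interaction: datetime) -> datetime:
    """Return the earlier of the two dates by which the record must be destroyed."""
    return min(purpose_satisfied, last_interaction + THREE_YEARS)
```

Under these assumptions, a record whose collection purpose is satisfied in mid-2022 must be destroyed then, even if three years from the last interaction would fall later.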

I hold that more states should follow Illinois’ example and enact biometric privacy laws so individuals have control over the use of their biometrics and companies that use biometric information without consent can be held accountable. Furthermore, states that have enacted these laws should be more proactive in enforcement. A $35 billion fine will have a far greater deterrent effect than a $550 million settlement. I say, tag a few companies hard. The others will fall in line, and our information will be protected.

Related Content:

Check out The Edge, Dark Reading’s new section for features, threat data, and in-depth perspectives. Today’s featured story: “Beyond Burnout: What Is Cybersecurity Doing to Us?”

Billee Elliott McAuliffe is a member of Lewis Rice practicing in the firm’s corporate department. Although she focuses on information technology and privacy, Billee also has extensive experience in corporate law, including technology licensing, cybersecurity and data privacy, …

Article source: https://www.darkreading.com/risk/facebook-got-tagged-but-not-hard-enough/a/d-id/1337285?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

500,000 Documents Exposed in Open S3 Bucket Incident

The open database exposed highly sensitive financial and business documents related to two financial organizations.

An unprotected AWS S3 bucket exposed some 425 GB of data, representing approximately 500,000 documents related to MCA Wizard, an iOS and Android app developed by Advantage Capital Funding and Argus Capital Funding. According to vpnMentor researcher Noam Rotem, who led the team of researchers who found the open database, the app appears to be a tool for a Merchant Cash Advance (MCA), which provides relatively small, high-interest business loans typically made to small companies.

In a blog post, Rotem shared examples of the types of documents found in the database, many of which seemed to have no relationship to the app itself. Information in the documents included credit reports, bank statements, contracts, legal documents, driver’s license copies, purchase orders and receipts, tax returns, Social Security information, and transaction reports.

According to vpnMentor, its researchers tried to contact Advantage and Argus, which may actually be the same company, to inform them of the open bucket. When they were unable to do so, they informed AWS, which shut off access to the database.
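Incidents like this usually trace back to a bucket policy or ACL that grants access to everyone. As a rough sketch (not vpnMentor’s methodology, and with an invented function name), an S3 bucket policy document can be audited for wide-open statements:

```python
import json

# Flag bucket-policy statements that allow access to everyone
# ("Principal": "*" or {"AWS": "*"}), a common cause of exposed S3 data.
# Illustrative only; real audits should also check ACLs and Block Public
# Access settings.
def public_statements(policy_json: str) -> list:
    policy = json.loads(policy_json)
    flagged = []
    for stmt in policy.get("Statement", []):
        principal = stmt.get("Principal")
        is_public = principal == "*" or (
            isinstance(principal, dict) and principal.get("AWS") == "*")
        if stmt.get("Effect") == "Allow" and is_public:
            flagged.append(stmt.get("Sid", "<no Sid>"))
    return flagged
```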

For more, read here and here.


Dark Reading’s Quick Hits delivers a brief synopsis and summary of the significance of breaking news events. For more information from the original source of the news item, please follow the link provided in this article.

Article source: https://www.darkreading.com/cloud/500000-documents-exposed-in-open-s3-bucket-incident/d/d-id/1337343?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Process Injection Tops Attacker Techniques for 2019

Attackers commonly use remote administration and network management tools for lateral movement, a new pool of threat data shows.

The threat landscape of 2019 was dominated by worm-like activity, researchers report in a new analysis of confirmed threats from the past year. Attackers are growing more focused on lateral movement, with an emphasis on using remote administration and network management tools to execute it.

Red Canary’s “2020 Threat Detection Report” contains an analysis of 15,000 confirmed threats that appeared in customer environments throughout 2019. Researchers mapped these threats to MITRE ATT&CK techniques to determine which were most prevalent over the past year. Their findings illustrate which methods are most common and how attackers are using them.

The popularity of automated lateral movement is largely driven by TrickBot, the data-stealing Trojan that contributed to thousands of detections. TrickBot and the remote admin and network management tools it leverages are not solely responsible for the frequency of common attack techniques, but together they play a major role in why cybercriminals choose specific tactics.

TrickBot is typically seen as part of a string of infections that starts with the Emotet Trojan and ends in a Ryuk ransomware infection. Emotet lands on a device and loads TrickBot, which steals credentials from infected devices as it moves laterally across a network. When TrickBot is done, it launches Ryuk, which encrypts the infected machines on a network and demands a ransom.

“Overwhelmingly, ransomware was the trend in 2019 in terms of payloads and what adversaries set out to do,” says Keith McCammon, co-founder and chief security officer at Red Canary, of a general pattern the research team noticed in analyzing the data. Another prominent trend is threats to confidentiality: Attackers will lock up target systems and demand money to return system access — or they threaten to publish the company’s data online.

“If someone takes system access away, you might not have great options for getting that access back, but you have some options,” says McCammon. This shift is “a different calculus” because organizations may not know what the adversary has. Without that insight, “you kind of have to assume the worst.” For many organizations, this data dump could pose an existential threat.

The most common attack technique researchers list is process injection, which TrickBot uses to run malicious code inside the Windows Service Host (svchost.exe). Why isn’t an Emotet technique, used to land on a machine, more popular? As researchers explain in a blog post, a growing portion of their visibility comes from incident response, much of which brought them into environments where Emotet had completed its actions and TrickBot had arrived on a number of devices. As a result, they couldn’t detect initial access or early-stage payloads, only the threats left behind.

Many of the companies Red Canary worked with in incident response were “really large, well-established organizations with a high percentage of systems impacted,” says McCammon, noting this can be attributed to tactics, automation, and refinement that enable attackers to get into a complex enterprise and infect several systems at the same time. “We saw more big companies hit with very, very impactful attacks than we’ve seen before.”

Process injection, which makes up 17% of all threats analyzed, affects 35% of organizations and appeared in 2,734 confirmed threats in 2019, the researchers report. It was the top attack technique from 2018 into 2019 due to the widespread TrickBot and Emotet outbreaks that occurred throughout the same time frame. Using this method, attackers can conduct malicious activity in the context of a legitimate process, so they blend in.

The second-most-popular attack technique is scheduled task, which, like process injection, is seen in worm-like and TrickBot activity. This tactic, which schedules tasks to launch malicious binaries and persist on target devices, affects 33% of businesses and makes up 13% of threats overall. It’s handy for attackers because it allows them to schedule tasks remotely; it’s also useful for execution and persistence alongside common scripting languages such as PowerShell.

Tying with scheduled task is Windows Admin Shares, a technique that also made up 13% of total threats and affected 28% of organizations in 2019. This enables worm-like activity and falls under the category of remote/network admin tools. Self-propagating threats — in particular, those that used EternalBlue — drove Windows Admin Shares from the 10th-most-popular threat in 2018 to third place in 2019. Administrators often use them for remote host management, giving attackers a subtle means to move laterally throughout an environment.
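As a toy illustration of how figures like “17% of threats” and “35% of organizations” are derived, technique prevalence can be aggregated from confirmed-threat records. The `(technique, organization)` record shape below is invented for illustration and is not Red Canary’s actual schema:

```python
from collections import defaultdict

# Aggregate confirmed-threat records into (share of all threats,
# share of all organizations affected) per technique.
def technique_prevalence(threats):
    """threats: list of (technique, organization) pairs."""
    threat_counts = defaultdict(int)   # threats observed per technique
    orgs_hit = defaultdict(set)        # distinct orgs affected per technique
    all_orgs = set()
    for technique, org in threats:
        threat_counts[technique] += 1
        orgs_hit[technique].add(org)
        all_orgs.add(org)
    return {t: (threat_counts[t] / len(threats),
                len(orgs_hit[t]) / len(all_orgs))
            for t in threat_counts}
```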

Eight of the top 10 attack techniques involve features of a platform being misused, McCammon says. They’re not standout strategies that would normally put teams on alert.

“The [techniques] I think we are definitely starting to see more of, and will continue to see escalate and refined, are going to be a lot of the lateral movement techniques … almost entirely the ones that depend on living off the land,” says McCammon, listing PowerShell and WMI as examples. Attackers are “using the features of these platforms that businesses rely on to operate their network and can’t just turn off.” As it gets harder to put malware onto a system, the adversaries are getting better at using tools that are already there, he explains.


Kelly Sheridan is the Staff Editor at Dark Reading, where she focuses on cybersecurity news and analysis. She is a business technology journalist who previously reported for InformationWeek, where she covered Microsoft, and Insurance Technology, where she covered financial …

Article source: https://www.darkreading.com/vulnerabilities---threats/process-injection-tops-attacker-techniques-for-2019/d/d-id/1337344?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Human traffickers use social media oversharing to gain victims’ trust

Does your life suck?

If so, like many of us, you may have posted about your money troubles, your low self-esteem, or your relationship problems on social media or dating sites. But while it may feel good to vent, and while such posts may garner sympathy that can soothe the pain, the FBI is warning that human traffickers are attracted to the details of our misery like bees to honey.

On Monday, the FBI’s online crime division – the Internet Crime Complaint Center (IC3) – issued a warning that human traffickers are increasingly using online platforms, including popular social media and dating platforms, to recruit and to advertise sex trafficking victims.

They’re also increasingly harvesting personally identifiable information (PII) by putting up fake job listings, the IC3 warned in January, and are recruiting labor trafficking victims who are “bought, sold, and smuggled like modern-day slaves,” the FBI says:

Human trafficking victims are beaten, starved, deceived, and forced into sex work or agricultural, domestic, restaurant, or factory jobs with little to no pay.

Many of us in the US unknowingly encounter trafficking victims as we go about our days, the FBI says, given that both the perpetrators and their prey come from all backgrounds and work in all areas. The bureau says that victims have been recovered in rural areas, small towns, the suburbs, and large cities.

Have you gotten an offer from somebody who said they were recruiting for a job? Or perhaps they claimed to be a modeling agent? Those are some of the fronts that traffickers hide behind, the FBI says, and it often starts with online grooming as they offer opportunities for a better life or a better job.

Human traffickers target vulnerable individuals by preying on their personal situations. Online platforms make it easier for traffickers to find potential victims, especially those who post personal information, such as their financial hardships, their struggles with low self-esteem, or their family problems.

Human traffickers target and recruit their victims by appearing to offer help, or pretending to be a friend or potential romantic partner. They leverage their victims’ vulnerabilities and coerce them to meet in person. After establishing a false sense of trust, traffickers may force victims into sex work or forced labor.

As the FBI warned in August 2019, it’s also seen an increase in recruitment of money mules through dating sites.

Forced into slavery and prostitution

The FBI gave a few examples of victims who were recruited through popular online platforms:

  • In July 2019, Ryan Russell Parks, a 26-year-old Baltimore man, was convicted on two counts of sex trafficking – of a 15-year-old girl and a 16-year-old girl. After he struck up a conversation with the 16-year-old online, she told Parks that she was hungry and homeless. He sent a car to collect her and within a day, her photo was being used online to advertise her as a prostitute. According to court testimony, Parks targeted both girls after they posted information online about their difficult living and financial situations.
  • In March 2019, a married couple was found guilty of conspiracy to obtain forced labor and two counts of obtaining forced labor, having looked for workers on the internet and through ads in newspapers based in India. The Department of Justice (DOJ) says that they lied about the wages and the work. Once the victims showed up at their home in California, Satish Kartan and his wife, Sharmistha Barai, forced them to work 18 hours a day, gave them scant food, paid them meager wages or not at all, and sometimes hit or burned them. It got worse for victims who said they wanted to leave.
  • In October 2017, a sex trafficker was convicted on 17 counts of trafficking adults and minors. Additional charges included child pornography and obstruction of justice. The perpetrator received a 33-year sentence. A victim from the Seattle area met the sex trafficker’s accomplice on a dating website. The trafficker and his accomplice later promised to help the victim with her acting career. After a few months, the victim was abused and forced into prostitution.

Save yourself

In the US, victims in immediate danger should call 911.

When the danger isn’t immediate, but if you or someone you know needs help, call the National Human Trafficking Hotline toll-free, 24 hours a day, 7 days a week, at 1-888-373-7888 (TTY: 711) or text 233733.

Specially trained Anti-Trafficking Hotline Advocates can give support in more than 200 languages. As of Tuesday, the hotline was fully operational, though the pandemic has forced it to stop answering general questions about trafficking or volunteering.

The FBI also provides a list of other agencies or centers you can contact. Find details in their bulletin.

In the UK, if you suspect human trafficking, Citizens Advice says you can…

  • Call the police. Call 999 if it’s an emergency, or 101 if it’s not urgent.
  • If you’d prefer to stay anonymous, call Crimestoppers on 0800 555 111.

Save your communications

If you’ve been victimized or think you’ve been targeted, the IC3 asks that you keep all original documentation, emails, text messages, and other communication logs. Don’t delete anything before law enforcement has a chance to review it.

If you file a complaint about online scams, they ask that you be as descriptive as possible in the complaint form by providing:

  • Name and/or user name of the subject;
  • Email addresses and phone numbers used by the subject;
  • Websites used by the subject; and
  • Descriptions of all interactions with the subject.

“It is helpful for law enforcement to have as much information as possible to investigate these incidents,” the FBI says, but you don’t need to provide all of that to submit a complaint.

Stay safe online

That nice person who reached out to you online and then offered you food and a place to stay? Or who told you you’re the love of their life? Or that they’ve got a great job for you, in a new country with great pay and opportunity galore? Or how about a chance at a modeling career?

They might not be nice at all. They might well be peddling moonbeams. They could even be a predator whose real aim is to sell you. Increasingly, that’s exactly what they’re after, as the IC3 relates.

It’s hard to choose where to start when it comes to offering advice on staying safe online, but here’s as good a spot as any: How to have the difficult stay-safe conversation with kids. Those of us adults who frequent dating sites, or who could use a decent-paying job, are also targets. Check out our tips for staying safe on social media and on dating sites.

Please remember, if it’s coming from the internet and sounds too good to be true, it likely is.


Latest Naked Security podcast

LISTEN NOW

You can also listen directly on Soundcloud.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/HfUZ3TW3ABg/

DDoS attack on US Health agency part of coordinated campaign

Just because a website offers critical public information about the COVID-19 virus pandemic doesn’t mean Distributed Denial of Service (DDoS) attackers won’t be out to get it.

It’s a point underscored by the news that on Sunday cybercriminals attempted to disrupt the US Department of Health and Human Services (HHS) website using an unidentified flood of DDoS traffic.

The HHS site is one of the first ports of call for US citizens looking for a range of health information, including HHS announcements and links to COVID-19 updates from the Centers for Disease Control and Prevention (CDC).

It seems attackers – later described by officials as a “foreign actor” – twigged its importance too.

According to a Bloomberg report, the attack slowed the site but didn’t cause it to go offline. DDoS attacks come in different sizes and types and it’s not been revealed which methods were used beyond the fact the attacks lasted for hours.

HHS spokesperson Caitlin Oakley told Bloomberg:

On Sunday, we became aware of a significant increase in activity on HHS cyber infrastructure and are fully operational as we actively investigate the matter.

These days, DDoS attacks are not the potent weapon they once were, primarily because large websites are protected by a newer generation of defences honed against a succession of large attacks that have hijacked a widening range of protocols.

It’s all relative of course, but downplaying it might be to miss the point, because this attack was unusual in another way – officials said it coincided with a disinformation campaign carried out via SMS, email and social media that reportedly claimed a national quarantine of the US was imminent. Again, few details of this campaign have been released, and news of it emerged only when the National Security Council (NSC) tweeted a refutation.

This sort of coordination is something commercial cybercriminals would be unlikely to bother with, hence the claim that a nation-state was behind it.

The purpose, then, might have been to spread a rumour that citizens could not verify, because the DDoS attack kept them from the HHS site. The ultimate aim: sowing confusion and mistrust of government.

To emphasise how seriously it was taking the attack, a US government source told Bloomberg:

Secretary of State Michael Pompeo and other Trump administration officials are aware of the incident.

This reading is impossible to confirm, of course, but what matters on this occasion is that the attacks were detected and were not left unchallenged.

Far from deterring cybercriminals, major events such as a global pandemic enhance the effect of attacks by disrupting services in ways people are more likely to notice.

Of course, cyberattacks against health-related sites happen all the time but few people beyond those immediately affected pay them much heed. If Sunday’s DDoS attack on the HHS is only the start, COVID-19 might yet change this indifference.



Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/eIFGE7W8MpI/

Uber to file federal suit against LA over users’ real-time location data

Uber is poised to file a federal lawsuit over what the company (as well as privacy advocates and, presumably, state law) considers Los Angeles’s privacy-invading demands for real-time location data of its users.

Uber provided an embargoed draft of the lawsuit, which a spokesperson said the company will file later this week.

Uber had already threatened to sue the city in October 2019 after the LA Department of Transportation (LADOT) instituted data demands on ride-hailing and scooter/bike-sharing companies. Uber wound up delaying that suit as it tried to hash things out with the city. LADOT suspended Uber’s permit but still allowed Uber to operate its scooters during the discussions.

Uber had presented a compromise: we’ll give you location data, but only 24 hours after trips start and stop, it proposed. That will give LADOT data to use for traffic planning, but it won’t affect user privacy, Uber said. As well, it would, at least potentially, give the company at least a small window of time in which to challenge a specific LADOT request, which is impossible to do when the city demands data in real-time.
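Uber’s proposed compromise amounts to a delay filter over trip records. A minimal sketch, with invented field names (this is not Uber’s or LADOT’s actual data format):

```python
from datetime import timedelta

# Only release trip location records once 24 hours have passed since the
# trip ended, per Uber's proposed compromise. Field names are invented.
def reportable_trips(trips, now):
    cutoff = now - timedelta(hours=24)
    return [t for t in trips if t["ended_at"] <= cutoff]
```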

According to its federal lawsuit, that wasn’t good enough for LADOT. Uber’s counsel said in the suit that they suspect that the proposal merely galled LADOT. At any rate, on 25 October 2019, LADOT suspended Uber-owned JUMP’s permit and ordered its bikes and scooters off the streets lest they be swept up by the city’s trash collectors.

What’s so special about real-time data, unless – and this is Uber’s speculation – it’s wanted for surveillance purposes?

This isn’t an answer – LADOT hasn’t been able to give one – but in general, LA wants the data for a new data standard called the Mobility Data Specification (MDS).

MDS is based on a standard set of application programming interfaces (APIs) through which mobility companies are required to provide real-time information about how many of their vehicles are in use at any given time, where they are at all times, their physical condition, anonymized trip start and stop times, destinations, and routes, among other data. Besides LA, other cities now using MDS to collect data to manage their own dockless vehicles include Seattle, WA; Austin, TX; San Jose and Santa Monica, CA; Providence, RI; and Louisville, KY.

LA, like other cities, is trying to pull data from newly chaotic traffic situations in which Uber and Lyft drivers are whizzing around, picking up, dropping off or waiting for fares, while city buses, bicyclists and scooter riders – some using rent-by-the-hour bikes and scooters – jostle for space.

The request for real-time location data is in a policy the city instituted in September 2018 for dockless scooters. While other companies in the industry – including Lime, Lyft, Bird and Spin – have complied, Uber has refused, saying that demanding real-time location data is taking it too far.

Privacy experts agree with Uber

Privacy experts have backed Uber up on this. While LA promises it’s anonymizing the data, not collecting personally identifiable information (PII) such as name, age, gender or address, that really doesn’t matter. As has been demonstrated time and time again, Big Data can be dissected, compared and contrasted to look for patterns from which to draw inferences about individuals. In other words, it’s not hard to re-identify people – or cats, for that matter – from anonymized records.
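A toy example shows why stripping names doesn’t anonymize trips: a repeated start/end pair such as home and work often matches only one pseudonymous rider. All data and field names here are invented:

```python
# Toy re-identification: anonymized trips still carry start/end points, and a
# home-to-work pair is often unique to one rider in the dataset.
def candidates(anon_trips, home, work):
    """Return pseudonymous rider IDs whose trips connect the two locations."""
    return {t["rider_pseudonym"] for t in anon_trips
            if t["start"] == home and t["end"] == work}
```

If only one pseudonym ever travels between a particular home and workplace, knowing those two addresses is enough to link every trip under that pseudonym to a real person.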

The Center for Democracy & Technology (CDT) has said that LADOT’s collection of location data has the potential to seriously jeopardize riders’ privacy:

People’s movements from place to place can reveal sexual partners, religious activities, and health information. The US Supreme Court has recognized a strong privacy interest in location data, holding that historical cell site location information is protected by the Fourth Amendment warrant requirement […] Even de-identified location data can be re-identified with relative ease.

The Electronic Frontier Foundation has added to that list of sensitive PII that can be determined from tracking people:

Los Angeles riders deserve privacy in the bike and scooter trips they take – be they for work, medical appointments, social engagements, prayer, or other First Amendment-protected activities.

There are Fourth Amendment protections against unreasonable search at stake here as well, Uber claims; the company also says that LA’s plan violates California’s Electronic Communications Privacy Act (CalECPA), a law passed in 2015 designed to prevent law enforcement agencies from accessing people’s data without a warrant.

Uber does share scooter location data with several cities. It’s the “real-time” part of LADOT’s demands that it’s balked at.

According to the draft of the federal lawsuit Uber is planning to file on behalf of its scooter division, JUMP, when LADOT had a chance to explain why it requires time-stamped geolocation data in real-time, it was stumped. It “dissembled,” as Uber’s legal counsel put it, and “its reasoning collapsed.”

That’s simply because there’s no good reason for LADOT to require it, the lawsuit maintains:

Real-time in-trip geolocation data is not good for planning bike lanes, or figuring out deployment patterns in different neighborhoods, or dealing with complaints about devices that are parked in the wrong place, or monitoring compliance with permit requirements

What it is good for is surveillance.



Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/_S-lBWeihUQ/

VMware patches virtualisation bugs

Virtualisation company VMware patched two bugs this week that affect a large proportion of its client-side virtualisation products.

VMware made its name offering server virtualisation products that recreate server hardware in software, allowing admins to run many virtual servers on the same physical box at once. Most ‘type one’ server hypervisors, including VMware’s, run directly on the bare metal rather than on top of an installed operating system.

The company also has another strand to its business, though: ‘type two’ hypervisors that enable people to run guest operating systems in virtual machines (VMs) on their client devices, too. These let you run Windows or Linux on a Mac, for example. They work differently, running on top of the client operating system as applications, meaning that you don’t have to replace your core operating system to run VMs.

Finally, its desktop virtualisation system, called Horizon, puts the whole desktop environment on a server so that users can access it from anywhere.

Between them, these bugs affect all of these services in some way. CVE-2020-3950, to which VMware assigns a CVSS v3 score of 7.3, affects version 11 of Fusion, its type 2 hypervisor for Macs. It’s a privilege elevation vulnerability stemming from the improper use of setuid binaries (setuid is a *nix mechanism that lets users run certain programs with elevated privileges). It also affects two other programs for the Mac: versions 5 and prior of the Horizon client that lets Mac users log into virtual Horizon desktops, and versions 11 and prior of the Virtual Machine Remote Console that lets Mac users access remote virtual machines.
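Setuid binaries are easy to enumerate locally. As a rough illustration (this is a generic audit check, not VMware’s code), a short Python scan can list the files on a *nix system that carry the setuid bit:

```python
import os
import stat

def find_setuid_binaries(root):
    """Walk a directory tree and return paths of regular files with the
    setuid bit set. Such files run with their owner's privileges (often
    root), which is why a bug in how an application uses setuid helpers
    can become a privilege-elevation hole."""
    hits = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                mode = os.lstat(path).st_mode
            except OSError:
                continue  # unreadable or vanished file
            # Regular file with the setuid (S_ISUID) bit set
            if stat.S_ISREG(mode) and mode & stat.S_ISUID:
                hits.append(path)
    return hits
```

Running it over `/usr/bin` on a typical Linux box turns up familiar setuid programs such as `passwd` and `sudo`.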

CVE-2020-3951 is less dire, getting a CVSS v3 score of 3.2 and a low severity ranking (that comes from VMware, as the National Vulnerability Database entry hadn’t been updated at press time). It’s a denial of service vulnerability in Cortado ThinPrint, a third-party software tool that VMware integrates natively into virtual machines to give them printing functionality.

This bug affects version 15 of the VMware Workstation type 2 hypervisor for Windows, along with version 5 and prior of the Windows Horizon client. It’s a heap overflow problem that allows a non-administrative VM user to “create a denial-of-service condition of the ThinPrint service”.

Last week’s bugs

This is the second advisory in five days for VMware, which announced three other bugs on 12 March. These included a critical flaw, CVE-2020-3947, which affected Workstation on all platforms and the Fusion Mac software. This use-after-free flaw could enable code execution on the host computer from the guest OS, it said.

The other two bugs were ranked important. A privilege elevation in the Horizon, VMRC, and Workstation clients on Windows (CVE-2019-5543, CVSS v3 7.3) allowed a local system user to run commands as any user, while another bug in ThinPrint (CVE-2020-3948, CVSS v3 7.8) allowed a privilege elevation that could give non-administrative users root access on Linux VMs.



Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/s5oj-f3fq7U/

Small business loans app blamed as 500,000 financial records leak out of … you guessed it, an open S3 bucket

A now-defunct mobile app for loaning money to small business owners has been pinned down as the source of an exposed archive containing roughly 500,000 personal and business financial records.

The research team at vpnMentor said it traced an exposed database of financial records back to a former Android/iOS app called MCA Wizard, developed jointly by Advantage Capital Funding and Argus Capital Funding back in 2018.

The app, which has been pulled from both the Google and Apple stores, was apparently designed to allow businesses to apply for and manage merchant cash advance (MCA) short-term loans.

According to the vpnMentor crew, the app stored documents like bank statements, photocopies of driver’s licenses, credit checks, and even tax and social security information – all in an unsecured AWS S3 storage bucket. Though the app was defunct, that bucket remained online and configured for public access.
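vpnMentor hasn’t published the bucket’s exact configuration, but the underlying mistake is a detectable one. As a sketch of the kind of audit that would have flagged it (a hypothetical helper, assuming ACL grants in the shape returned by boto3’s `get_bucket_acl` call), a few lines of Python can report any permissions granted to public groups:

```python
# Grantee URIs that mean "the whole world" (AllUsers) or "any AWS
# account holder" (AuthenticatedUsers) -- a grant to either one makes
# a bucket effectively public.
PUBLIC_GRANTEE_URIS = {
    "http://acs.amazonaws.com/groups/global/AllUsers",
    "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",
}

def public_permissions(grants):
    """Given an ACL grant list (the 'Grants' list that boto3's
    get_bucket_acl returns), list the permissions handed to public
    groups. An empty result means no public ACL grants."""
    exposed = []
    for grant in grants:
        grantee = grant.get("Grantee", {})
        if grantee.get("Type") == "Group" and grantee.get("URI") in PUBLIC_GRANTEE_URIS:
            exposed.append(grant.get("Permission"))
    return exposed
```

A bucket whose ACL grants `READ` to the AllUsers group, as the MCA Wizard bucket evidently did, would come back as `["READ"]`; in practice AWS’s account-level “Block Public Access” setting is the blunter, safer fix.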

“These files didn’t just compromise the privacy and security of Advantage and Argus, but also the customers, clients, contractors, employees, and partners,” vpnMentor noted in its report.

While the exposure of information on thousands of people and small businesses is bad enough, there at least seems to be nothing to indicate that the database was found by criminals prior to being reported and taken down by AWS on January 9, more than two weeks after being discovered by the white hat researchers.

Interestingly, although the app is no longer available, the researchers noted that new documents were being added to the storage instance right up until its removal, suggesting another application could also be using the bucket.

More worrisome, though, is that the researchers were unable to reach either of the companies credited with developing the app (The Register was also unable to get comment from either Argus or Advantage), and the two might not even be separate entities.

“While the database’s URL contained ‘MCA Wizard,’ most files had no relation to the app. Instead, they originated from both Advantage and Argus. Furthermore, throughout our research, files were still being uploaded to the database, even though MCA Wizard seems to have been closed down,” vpnMentor said.

“Information on all three entities is scarce, but they appear to be owned and operated by the same people. However, there is no clear connection between MCA Wizard and the two companies that own it anywhere online.”

Business owners and others who used the app and are concerned about their data being misused are advised to keep a close eye on their bank statements and, if they notice unauthorised activity or new accounts, to report this and consider a credit freeze. ®


Article source: https://go.theregister.co.uk/feed/www.theregister.co.uk/2020/03/18/smb_loan_app_leaks/

Freedom of Information coverup clerk stung for £2k after deleting council audio recording

A town clerk in the English county of Shropshire has been the subject of the first ever successful Freedom of Information prosecution after lying to a member of the public who made an FoI request.

Nicola Young, clerk of Shropshire’s Whitchurch Town Council, was fined £400, ordered to pay legal costs of £1,493 and a victim surcharge tax of £40, leaving her with a total bill of £1,933.

She pleaded guilty last week to breaking section 77 of the Freedom of Information Act 2000 by deleting a council meeting recording that had been requested under the Act.

A member of the public sent an FoI request to Whitchurch council asking for the audio recording of the council’s March 2019 meeting. Council meeting minutes [PDF] from April 2019 showed one councillor claimed the previous month’s minutes were inaccurate, with the meeting chairman rebutting this by stating they were “written from an audio recording.”

That member of the public wanted to compare the recording with the published minutes. Yet, instead of doing her duty as the council’s “proper officer” in charge of responding to FoI requests, Young, of Shrewsbury Street, Whitchurch, Shropshire, deleted the recording once she became aware of the request – and then lied to the member of the public, saying that the file had been previously deleted in line with council protocol.

The Information Commissioner’s Office, once alerted to the suspicious disappearance of the recording, prosecuted Young – and on 11th March she pleaded guilty to her crime at Crewe Magistrates’ Court.

Mike Shaw, the ICO’s group manager in enforcement, said in a statement: “People should have trust and confidence that they can access public information without the danger of it being doctored, fabricated or corrupted in any way.”

A woman who answered Whitchurch Town Council’s public phone number yesterday said to The Reg: “no comment, no comment at all”.

Young’s criminal conviction (other breaches of the Act are unlawful but not criminal) marked the third anniversary of her appointment as town clerk, while council meeting minutes show that she stopped attending full council meetings from November 2019 onwards. In December, the minutes merely recorded that, under the heading “staffing matters,” Whitchurch’s mayor “gave an update on the Town Clerk’s absence.”

The Register understands the recording was recovered during the ICO’s investigation.

Destroying public records in response to FoI requests is a crime under section 77(1) of the Freedom of Information Act 2000, which makes it illegal to deliberately obstruct access to public records “with the intent to prevent disclosure.” A similar offence exists in section 100H(4) of the Local Government Act 1972, as amended, and, unlike the FoI offence, a prosecution can be brought by any public body or private person instead of being restricted to the Information Commissioner and the Director of Public Prosecutions alone.

The ICO has been flexing its regulatory muscles in recent times. Back in late 2018, a car repairman was prosecuted for stealing customers’ data from a previous employer and flogging it on to telemarketing scammers. Council corruption has previously featured in ICO prosecutions brought under the Data Protection Act, with a head of building control being convicted in 2019 of trying to skew a recruitment process by stealing job applicants’ CVs from council systems and sending them to his partner, who had applied for the same position.

Nearly a decade ago, the ICO complained that it had just six months from the date of the crime to prosecute public sector FoI criminals – and not six months from the time when someone complains. Oddly enough, this hasn’t changed in the intervening years.

Even with Brexit and the spread of the novel coronavirus looming large in the British national consciousness, data crimes are still being detected and perps prosecuted. ®


Article source: https://go.theregister.co.uk/feed/www.theregister.co.uk/2020/03/18/first_successful_foi_act_prosecution/

What the Battle of Britain Can Teach Us About Cybersecurity’s Human Element

During WWII, the British leveraged both technology and human intelligence to help win the war. Security leaders must learn the lessons of history and consider how the human element can make their machine-based systems more effective.

The theme for this year’s RSA Conference was the “Human Element,” which explored the role of humans in the context of machine intelligence. The RSA Conference organizers described this year’s theme as follows:

New technologies like artificial intelligence and machine learning promise to fight the bad actors more efficiently than we ever could. And the wider, cheaper availability of advanced nefarious tools has democratized cybercrime. Humans, it seems, have been forgotten as key elements in this global fight.

Indeed, as our world grows more automated and our machines achieve greater intelligence, it’s only natural to wonder: What role will humans play in the cyber battlefield of tomorrow?

Which made me recall a seminal moment in world history with an analogous theme: The Battle of Britain, a turning point in World War II as well as one of the first and perhaps finest examples of how an emerging technology was paired with human intelligence that, in turn, changed the course of history.

Defending a Sprawling Perimeter
By the spring of 1940, Hitler’s army had run roughshod over much of Western Europe due in large part to the overwhelming superiority of the Luftwaffe, the largest and most powerful air force in Europe. Because the Nazis had taken considerable amounts of territory, the prospect of an invasion of the United Kingdom was no longer a question of if, but when.

The Nazi generals understood that occupying Britain would be far more challenging than the rest of the European continent because it was afforded protection by the English Channel. For a seaborne invasion to be viable, the Luftwaffe would have to soften the target through sustained air attacks with the goal of destroying the British Royal Air Force, its formidable Navy, and other critical infrastructure.

Meanwhile, the British forces were faced with a still more daunting challenge: How do you defend thousands of miles of unprotected coastline and quickly communicate verified air attacks back to central command in a coordinated fashion?

Machine + Human Intelligence
Unbeknownst to the Nazis, British intelligence had been secretly building and deploying a new early-warning radar system known as the Dowding System, named after Hugh “Stuffy” Dowding, the Commanding Officer of the Royal Air Force and the architect of Britain’s first fully coordinated air defense system.

The Dowding System comprised three interconnected layers, two of which were based on the latest innovations in radar while the third was perhaps the most crucial, yet also the most primitive. The first layer, dubbed Chain Home, consisted of a series of 360-foot radar masts that dotted the southern and eastern coasts and could detect enemy aircraft from 120 miles away. A second array of co-located smaller radar, Chain Home Low, was deployed to spot aircraft flying below the sight line of the taller Chain Home system.

While early radar systems were effective in providing advance warning of an approaching formation, they couldn’t provide important contextual information such as the altitude at which enemy aircraft were flying, or most critically, the types of planes being deployed.

To provide this critical context, the first two layers of radar were reinforced by the “human element” — a reconnaissance corps of 30,000 volunteers manning observation posts day and night, up and down the entire coast.

These observers were responsible for spotting and reporting enemy planes, providing essential intelligence to central command, including the distance and height of observed aircraft, their approximate bearings, and the types of planes in formation. This enabled confirmed reports of enemy raids to be relayed back to command headquarters in under 40 seconds, a remarkable feat that provided ample time for central command to scramble an appropriate response.

The genius of the Dowding System was not in its sophisticated radar capabilities but its ability to orchestrate these disparate machine and human intelligence feeds into a unified early-warning system. While the Germans were well acquainted with radar and were themselves utilizing it, they did not fully appreciate how the British were applying it within the context of an integrated air defense system.

Applying the Lessons of the Dowding System
So, what does all this have to do with cybersecurity and how might we as security leaders employ these lessons? There are a number of parallels that can be drawn from the Dowding System and applied to the modern application of real-time threat intelligence:

  1. Humans excel at providing context: Modern artificial intelligence (AI) engines can pattern match at a scale that humans simply cannot. But understanding context is something that even the most sophisticated AI struggles with.

  2. Orchestration enables self-learning: The ability to synthesize human insight and feed it back into the machine in an orchestrated manner is foundational for building a self-learning system.

  3. Crowdsourcing threat intelligence: A number of leading network and email security tools today are discovering the power of crowdsourcing threat intelligence by providing a mechanism to automatically share real-time threat intelligence across the network.

  4. A multilayered approach is key: No single system should be relied upon to protect your network. A defense-in-depth approach requires the layered application of multiple tools to ensure resiliency.
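The feedback loop described in lessons 1 and 2 can be sketched in a few lines of Python. This is an illustrative toy (the class name and the simple threshold-nudging rule are invented for the example, not any vendor’s implementation): a machine layer scores events at scale, and verified human verdicts are fed back to tune it.

```python
from dataclasses import dataclass, field

@dataclass
class HumanInTheLoopDetector:
    """Toy sketch of machine scoring plus human context: analyst
    verdicts are fed back to adjust the alerting threshold."""
    threshold: float = 0.5   # alert when the machine's score reaches this
    step: float = 0.05       # how far one human verdict moves the threshold
    history: list = field(default_factory=list)

    def triage(self, score):
        # Machine layer: a pattern-matching score decides whether to alert.
        return score >= self.threshold

    def feedback(self, score, analyst_says_malicious):
        # Human layer: a verified verdict tunes the machine's behaviour.
        self.history.append((score, analyst_says_malicious))
        if analyst_says_malicious and score < self.threshold:
            # Missed threat: lower the bar so similar events alert next time.
            self.threshold = max(0.0, self.threshold - self.step)
        elif not analyst_says_malicious and score >= self.threshold:
            # False alarm: raise the bar to cut noise.
            self.threshold = min(1.0, self.threshold + self.step)
```

The design point mirrors the Dowding System: neither layer is sufficient alone, but orchestrating the two produces a system that gets sharper with every confirmed report.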

Interestingly, when we talk about cybersecurity, humans are often considered the “weakest link” in the cybersecurity chain. Whether it’s the user who carelessly clicks on a phishing link or a network admin who applies the wrong software patch, we are imperfect by nature and bound to make mistakes. At the same time, individuals with specific domain expertise are able to understand and interpret nuance in a way that even the smartest machines cannot.

Some 80 years ago, the British leveraged a combination of technology and human intelligence to turn the tide of the war. Security leaders would be wise to learn the lessons of history and consider how the human element can make their machine-based systems smarter, more responsive, and ultimately, more effective.

Related Content:

Check out The Edge, Dark Reading’s new section for features, threat data, and in-depth perspectives. Today’s featured story: “Beyond Burnout: What Is Cybersecurity Doing to Us?”

Eyal Benishti has spent more than a decade in the information security industry, with a focus on software R&D for startups and enterprises. Before establishing IRONSCALES, he served as security researcher and malware analyst at Radware, where he filed two patents in the …

Article source: https://www.darkreading.com/attacks-breaches/what-the-battle-of-britain-can-teach-us-about-cybersecuritys-human-element-/a/d-id/1337282?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple