
There May Be a Ceiling on Vulnerability Remediation

Most organizations are doing all they can to keep up with the release of vulnerabilities, new research shows.

Security has no shortage of metrics — everything from the number of vulnerabilities and attacks to the number of bytes per second in a denial-of-service attack. Now a new report focuses on how long it takes organizations to remediate vulnerabilities in their systems — and just how many of the vulnerabilities they face they’re actually able to fix.

The report, “Prioritization to Prediction Volume 3: Winning the Remediation Race,” by Kenna Security and the Cyentia Institute, contains both discouraging and surprising findings.

Among the discouraging findings are statistics that show companies have the capacity to close only about 10% of all the vulnerabilities on their networks. This percentage doesn’t change much by company size.

“Whether it was a small business that had, on average, 10 to 100 open vulnerabilities at any given time, they had roughly the same percentage of vulnerabilities they could remediate as the large enterprises, where they had 10 million-plus open vulnerabilities per month,” says Ed Bellis, CTO and co-founder of Kenna Security.

In other words, Bellis says, the capacity to remediate seems to increase at approximately the same rate as the need to remediate. “The size thing tipped us off that there might be some upper threshold on what organizations were able to fix in a given time frame,” says Wade Baker, partner and co-founder of the Cyentia Institute.

The time frame for remediating vulnerabilities differs depending on the software’s publisher. Microsoft and Google tend to have software with vulnerabilities remediated most quickly by organizations both large and small, Bellis says. The software with the longest remediation time? Legacy software and code developed in-house.

There are also dramatic differences in time to remediate between companies in different industries. Investment, transportation, and oil/gas/energy led the way in the shortest time to close 75% of exploited vulnerabilities, hitting that mark in as few as 112 days. Healthcare, insurance, and retail/trade took the longest for remediation, needing as much as 447 days to hit the milestone.

Data suggests that some organizations are able to do better than the averages — in some cases, remediating more vulnerabilities than were discovered and actually getting ahead of the problem. What the researchers don’t yet know is precisely what those high-performing companies are doing that is different. That, they say, is the subject of the next volume of their research.


Join Dark Reading LIVE for two cybersecurity summits at Interop 2019. Learn from the industry’s most knowledgeable IT security experts. Check out the Interop agenda here.

Curtis Franklin Jr. is Senior Editor at Dark Reading. In this role he focuses on product and technology coverage for the publication. In addition he works on audio and video programming for Dark Reading and contributes to activities at Interop ITX, Black Hat, INsecurity, and … View Full Bio

Article source: https://www.darkreading.com/vulnerabilities---threats/there-may-be-a-ceiling-on-vulnerability-remediation/d/d-id/1334142?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

The Case for Transparency in End-User License Agreements

Why it behooves technology companies to consider EULAs as an opportunity to accurately inform customers about privacy issues and other important information.

Imagine walking into your favorite coffee shop to make an order. Due to recent legislation, your baristas are now obligated to give you a 60-page booklet about the dangers of substances commonly found in caffeinated beverages. This includes lengthy warnings about caffeine, lactose, dairy substitutes, and flavored sugar syrups, among other things. You must agree to accept these risks before they can even begin grinding the beans.

The booklets are thick with medicolegal jargon; they’re intended to cover the shop’s compliance responsibilities more than they’re meant to help you make informed dietary decisions. You initially intend to read all the way through the booklet, but due to pressure from a crowd of cranky and undercaffeinated customers building up behind you, you’ll just skim a few paragraphs before giving up.

After that first visit, you’ll likely just hastily wave the booklet away to speed up the process and the arrival of your much-needed brew.

If you are in the cybersecurity business (or even if you’re not), it shouldn’t take a great leap to figure out I am making an analogy about end-user license agreements (EULAs) and how useless they are for gaining actual, informed consent about giving up potentially sensitive information. But let’s consider another example.

If you’ve had any sort of medical procedure done in the US during the last decade or so, you’re probably aware that you’ll be required to sign a scary-looking consent form first. The paperwork is all about informing you of the risk of medical procedures and may list possible negative outcomes or your after-care responsibilities.

On one level, they are meant to protect doctors against the risk of malpractice suits. Some doctors present these without any explanation at all, which can result in varying, sometimes terrifying, reactions depending on the seriousness of the procedure. But not all doctors leave it at this.

Better doctors will have someone explain these documents to you before you sign them. They’ll rephrase the document using easily understood language. They’ll include some context for the actual risk levels. Then, they’ll make sure all your questions are answered so that you fully understand what you’re agreeing to. When patients understand the situation completely, they are more likely to have a successful outcome.

Towards a Better EULA
As we’re seeing with the many recent privacy gaffes by global megacorporations, EULAs written only to be read or understood by lawyers are causing massive consumer distrust. These companies are fulfilling compliance obligations at the expense of their customers’ ability to fully understand what they’re agreeing to. While this may be a good corporate legal strategy, the approach makes many of us (myself included) reluctant to engage fully with their products.

The biggest problem with EULAs is that they are simply not readable. Part of this is due to their length, but even the shortest EULA can be written inscrutably. Formulas, such as the Flesch-Kincaid readability test, use the average number of words per sentence and syllables per word to score text. My first draft of the previous sentence was rated “grade 20,” which indicates it was written at a post-graduate level of complexity. It’s now rated “grade 11.”

I don’t have a graduate degree, much less a post-graduate degree, so this doesn’t indicate that I had initially applied some sort of master’s degree mojo. My first draft was just really convoluted. The score simply measures the complexity of a sentence and assigns a grade level that represents how challenging it is to understand. So, in applying readability to the creation of a sensible EULA, it is important to take under consideration the many variables that can affect people’s ability to comprehend text. For example:

  • Harry Potter books are written at a 7th to 9th grade level.
  • Newspapers typically are written at an 11th grade level.
  • Time magazine is written at an undergraduate level.
  • Harvard Law Review is written at a graduate level.
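To make the scoring concrete, here is a minimal sketch of the Flesch-Kincaid grade-level formula in Python. The syllable counter is a deliberately naive vowel-group heuristic of my own, so treat the resulting grades as approximate; dedicated tools (such as the textstat library) count syllables more carefully.

```python
import re

def count_syllables(word):
    """Very naive syllable estimate: count groups of consecutive vowels."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_kincaid_grade(text):
    """Flesch-Kincaid grade = 0.39*(words/sentence) + 11.8*(syllables/word) - 15.59."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * len(words) / len(sentences)
            + 11.8 * syllables / len(words)
            - 15.59)

print(flesch_kincaid_grade("The cat sat on the mat."))  # short words, very low grade
```

Running the same function over a EULA draft before publication is a cheap way to catch the “grade 20” sentences before a lawyer-free reader ever sees them.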

Depending on the target audience, it’s entirely appropriate to tailor a EULA’s readability level to its readers. A variety of organizations and industries already use these standards to evaluate text before it’s published. This usually occurs when there’s a specific concern for the reader’s welfare or understanding, such as with insurance policies and federal tax guides.

Right now, most people view EULAs both as meaningless and as a way to secretly “pull one over” on consumers. It would behoove more companies, particularly the largest and most omnipresent ones, to consider EULAs as an opportunity to accurately inform customers about privacy issues and other important information. This transparency could go a long way toward regaining the public’s trust.

It would be naive to think legalistic EULAs will ever completely disappear, but it’s my hope that one day the adversarial interaction we now have will cease to be a customer’s first impression of a new software product, application, or service. Technology has the power to make people’s lives better; we tech providers should interact with potential customers as if we believe that is the unequivocal truth.


Lysa Myers began her tenure in malware research labs in the weeks before the Melissa virus outbreak in 1999. She has watched both the malware landscape and the security technologies used to prevent threats from growing and changing dramatically. Because keeping up with all … View Full Bio

Article source: https://www.darkreading.com/endpoint/the-case-for-transparency-in-end-user-license-agreements/a/d-id/1334106?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

‘SimBad’: Android Adware Hits 210 Apps with 150M Downloads

Google has removed infected applications from the Google Play store after a form of adware potentially affected millions of users.

SimBad, a newly discovered form of adware, was found in 210 Android apps on the Google Play store. About 150 million people had downloaded the apps, Check Point reports.

This particular malware exists in the RXDrioder software development kit, researchers report. They believe developers were tricked into using the SDK, unaware it was malicious. They also point out the campaign did not target a specific country or apps created by the same developer.

When a user downloads and installs an infected application, most of which are simulator games, SimBad can perform actions after the device finishes booting and while the user is actively using the device. The malware connects to a command-and-control server and can carry out any of several actions. For example, researchers say, its operators could display background ads for their own profit.

SimBad’s authors also could open a given URL in a browser, a capability they could use to generate phishing pages across platforms and launch spear-phishing attacks. They also could open market applications (Google Play, 9Apps) to specific apps and increase their profits.

Google “was swiftly notified” and has removed the infected apps from the Google Play store.

Read more details here.


Dark Reading’s Quick Hits delivers a brief synopsis and summary of the significance of breaking news events. For more information from the original source of the news item, please follow the link provided in this article. View Full Bio

Article source: https://www.darkreading.com/threat-intelligence/simbad-android-adware-hits-210-apps-with-150m-downloads/d/d-id/1334145?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

IoT Anomaly Detection 101: Data Science to Predict the Unexpected

Yes! You can predict the chance of a mechanical failure or security breach before it happens. Part one of a two-part series.

Data science and artificial intelligence (AI) techniques have been applied successfully for a number of years to predict or detect all kinds of events in very different domains.

If you run a quick web search on “machine learning use cases,” you will find pages and pages of links to documents describing machine learning (ML) algorithms to detect or predict some kind of event group in some kind of data domain.

Generally, the key to a successful machine learning-based application is a sufficiently general training set. The ML model, during training, should have a sufficient number of available examples to learn about each event group. This is one of the key points to any data science project: the availability of a sufficiently large number of event examples to train the algorithm.

Applying Machine Learning to IoT Event Prediction
Can security teams apply a machine learning algorithm to predict or recognize deterioration of mechanical pieces, or to detect cybersecurity breaches? The answer is, yes! Data science techniques have already been successfully utilized in the field of IoT and cybersecurity. For example, a classic usage of machine learning in IoT is demand prediction. How many customers will visit the restaurant this evening? How many cartons of milk will be sold? How much energy will be consumed tomorrow? Knowing the numbers in advance allows for better planning.

Healthcare is another very common usage of data science in IoT. There are many sports fitness applications and devices to monitor our vital signs, making available an abundance of data available in near real time that can be studied and used to assess a person’s health condition.

Another common case study in IoT is predictive maintenance. The capability to predict if and when a mechanical piece will need maintenance leads to an optimum maintenance schedule and extends the lifespan of the machinery until its last breath. Considering that many machinery pieces are quite sophisticated and expensive, this is not a small advantage. This approach works well if a data set is available — and even better if the data set has been labeled. Labeled data means that each vector of numbers describing an event has been preassigned to a given class of events.

Anomaly Discovery: Looking for the Unexpected
A special branch of data science, however, is dedicated to discovering anomalies. What is an anomaly? An anomaly is an extremely rare episode, hard to assign to a specific class, and hard to predict. It is an unexpected event, unclassifiable with current knowledge. It’s one of the hardest use cases to crack in data science because:

  • The current knowledge is not enough to define a class.
  • More often than not, no examples are available in the data to describe the anomaly.

So, the problem of anomaly detection can be easily summarized as looking for an unexpected, abnormal event of which we know nothing and of which we have no data examples. As hopeless as this may seem, it is not an uncommon use case.

  • Fraudulent transactions, for example, rarely happen and often occur in an unexpected modality.
  • Expensive mechanical pieces in IoT will break at some point without much indication on how they will break.
  • A new arrhythmic heart beat with an unrecognizable shape sometimes shows up in ECG tracks.
  • A cybersecurity threat might appear and not be easily recognized because it has never been seen before.

In these cases, the classic data science approach, based on a set of labeled data examples, cannot be applied. The solution to this problem is a twist on the usual algorithm learning from examples.

Anomaly Detection in IoT

Anomaly detection problems do not offer a classic training set with labeled examples for both classes: a signal from a normally functioning system and a signal from a system with an anomaly. In this case, we can only train a machine learning model on a training set with “normal” examples and use a distance measure between the original signal and the predicted signal to trigger an anomaly alarm.

In IoT data, signal time series are produced by sensors strategically located on or around a mechanical component. A time series is the sequence of values of a variable over time. In this case, the variable describes a mechanical property of the object, and it is measured via one or more sensors.

Usually, the mechanical piece is working correctly. As a consequence, we have tons of examples for the piece working in normal conditions and close to zero examples for the piece failure. This is especially true if the piece plays a critical role in a mechanical chain because it is usually retired before any failure happens and compromises the whole machinery.

In IoT, a critical problem is to predict the chance of a mechanical failure before it actually happens. In this way, we can use the mechanical piece throughout its entire life cycle without endangering the other pieces in the mechanical chain. This task of predicting possible signs of mechanical failure is called anomaly detection in predictive maintenance.
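The train-on-normal-only approach described above can be sketched in a few lines. This is a minimal illustration, not a production method: it assumes a simple trailing moving average as the “predictor” (the article does not prescribe a particular model; in practice an autoregressive model or autoencoder would be used), learns an alarm threshold from prediction residuals on normal data, and flags any point whose error exceeds it.

```python
import numpy as np

def _residuals(signal, window):
    """Absolute error between each point and a trailing moving-average prediction."""
    pred = np.convolve(signal, np.ones(window) / window, mode="valid")
    return np.abs(signal[window - 1:] - pred)

def fit_threshold(normal_signal, window=5, k=3.0):
    """Learn an alarm threshold (mean + k*std of residuals) from normal data only."""
    resid = _residuals(normal_signal, window)
    return resid.mean() + k * resid.std()

def detect(signal, threshold, window=5):
    """Return indices whose prediction error exceeds the learned threshold."""
    resid = _residuals(signal, window)
    return np.where(resid > threshold)[0] + window - 1

# A smooth "normal" sensor signal, then the same signal with an injected fault.
normal = np.sin(np.linspace(0, 20, 200))
threshold = fit_threshold(normal)
faulty = normal.copy()
faulty[100] += 5.0
print(detect(faulty, threshold))  # the fault region around index 100 is flagged
```

Note that no failure examples were needed at any point: the alarm fires simply because the faulty signal is far from what a model of normal behavior predicts.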


Rosaria Silipo, Ph.D., principal data scientist at KNIME, is the author of 50+ technical publications, including her most recent book “Practicing Data Science: A Collection of Case Studies”. She holds a doctorate degree in bio-engineering and has spent more than 25 years … View Full Bio

Article source: https://www.darkreading.com/threat-intelligence/iot-anomaly-detection-101-data-science-to-predict-the-unexpected-/a/d-id/1334090?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Enterprise Cloud Infrastructure a Big Target for Cryptomining Attacks

Despite the declining values of cryptocurrencies, criminals continue to hammer away at container management platforms, cloud APIs, and control panels.

The cloud-based infrastructures that enterprise organizations are increasingly using to run their business applications have become a major target for illicit cryptomining operations.

According to new research from AT&T Cybersecurity, cryptomining has become the primary reason for most cloud infrastructure attacks these days. There’s no sign the attacks will let up soon, either, despite the drop in values of major cryptocurrencies, the vendor said in a report Wednesday.

Cryptojacking — or attacks where an organization’s (or an individual’s) computers are surreptitiously used to mine for Monero and other cryptocurrencies — has emerged as a major problem over the last 18 months or so.

Cybercriminals have been extensively planting mining tools such as Coinhive on hacked websites and quietly using the systems of people visiting the sites to mine for cryptocurrencies. They have also been deploying mining software on larger, more powerful enterprise servers and on cloud infrastructure for the same purpose.

“Hijacking servers to mine currency really picked up in 2017, at the height of the cryptocurrency boom when prices were at the highest and the potential rewards were very significant,” says Chris Doman, security researcher at AT&T Cybersecurity. “Even though bitcoin prices have dropped 80% since their peak, the prevalence of server cryptojacking continues.”

AT&T Cybersecurity’s researchers examined cryptomining attacks against a range of cloud infrastructure targets. Container management platforms are one of them. The security vendor says its researchers have observed attackers using unauthenticated management interfaces and open APIs to compromise container management platforms and use them for cryptomining.

As one example, the researchers pointed to an attack that security vendor RedLock first reported last year, where a threat actor compromised an AWS-hosted Kubernetes server belonging to electric carmaker Tesla and then used it to mine for Monero. AT&T Cybersecurity said it has investigated other similar incidents involving malware served from the same domain that was used in the Tesla attack.

Attackers have also been frequently targeting the control panels of web hosting services, as well. In April 2018, for instance, an adversary took advantage of a previously unknown vulnerability in the open source Vesta hosting control panel (VestaCP) to install a Monero miner on web hosts running the vulnerable software.

Container management systems and control panels are not the only cloud infrastructure targets. API keys are another favorite. AT&T Cybersecurity says many attackers are running automatic scans of the web and of sites such as GitHub for openly accessible API keys, which they then use to compromise the associated accounts.

The trend requires due diligence on multiple fronts. Almost all server-side exploits in the cloud, for instance, stem from vulnerabilities in software such as Apache Struts and Drupal, Doman says. “Typically, we see the attackers start scanning the Internet for machines to compromise within two or three days of an exploit becoming available,” he notes. So, keeping machines patched fairly quickly is key.

Similarly, ensuring complex password use and enforcing account lockouts is critical to preventing attackers from simply brute-forcing passwords to cloud servers, he says.

In terms of cloud accounts being compromised — when an attacker steals the root AWS key, for instance — there are free tools available to check all public source code and to verify if any credentials have been accidentally published, Doman notes.
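As a toy illustration of what such credential scans look for, the sketch below matches the documented format of AWS access key IDs (“AKIA” followed by 16 uppercase alphanumerics) across a directory tree. It is a bare heuristic of my own for illustration only; real scanners such as truffleHog or git-secrets cover many more credential formats, search git history, and work to reduce false positives.

```python
import re
from pathlib import Path

# Documented AWS access key ID format: "AKIA" + 16 uppercase letters/digits.
AWS_KEY_RE = re.compile(r"\b(AKIA[0-9A-Z]{16})\b")

def find_exposed_keys(text):
    """Return anything in `text` that matches the AWS access-key-ID pattern."""
    return AWS_KEY_RE.findall(text)

def scan_tree(root):
    """Scan every file under `root` and report files containing key-like strings."""
    hits = {}
    for path in Path(root).rglob("*"):
        if path.is_file():
            keys = find_exposed_keys(path.read_text(errors="ignore"))
            if keys:
                hits[str(path)] = keys
    return hits

# AWS's documented example key ID, not a live credential:
print(find_exposed_keys("aws_key = 'AKIAIOSFODNN7EXAMPLE'"))
```

Attackers automate exactly this kind of pattern matching against public repositories, which is why running the same check on your own code before pushing it is cheap insurance.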

Malicious Docker images are yet another avenue of attack. Cybercriminals are hiding cryptominers in prebuilt Docker images and uploading them to Docker Hub, AT&T Cybersecurity said. Prebuilt images are popular among administrators because they can help reduce the time required to set up and configure a container app. However, if the image is malicious, organizations can end up running a cryptominer as well. So far, though, only a relatively small number of organizations have reported downloading and running malicious containers, AT&T Cybersecurity said.

For enterprises, cryptomining attacks in the cloud are a little trickier to address than attacks on on-premises systems. Deploying network detection tools, for instance, typically tends to be more difficult in the cloud. “You may have to rely upon your cloud provider letting you know if they see malicious traffic,” Doman says.

It’s also important to centralize all logs provided by your cloud provider and to ensure that alerts are generated off of suspicious events. “For example, if you see someone log in to your root AWS account, and that isn’t normal for your environment, you should investigate immediately.”


Jai Vijayan is a seasoned technology reporter with over 20 years of experience in IT trade journalism. He was most recently a Senior Editor at Computerworld, where he covered information security and data privacy issues for the publication. Over the course of his 20-year … View Full Bio

Article source: https://www.darkreading.com/attacks-breaches/enterprise-cloud-infrastructure-a-big-target-for-cryptomining-attacks/d/d-id/1334146?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

New bill would give parents an ‘Eraser Button’ to delete kids’ data

Two US senators on Tuesday proposed a major overhaul of the Children’s Online Privacy Protection Act (COPPA) that would give parents and kids an “Eraser Button” to wipe out personal information scooped up online on kids.

The bipartisan bill, put forward by Senators Edward J. Markey (D-Mass.) and Josh Hawley (R-Mo.), would also expand COPPA protection beyond its current coverage of children under 13 in order to protect kids up until the age of 15.

The COPPA update also packs an outright ban on targeting ads at children under 13 without parental consent, and from anyone up until the age of 15 without user consent. The bill also includes a “Digital Marketing Bill of Rights for Minors” that limits the collection of personal information on minors.

The proposed bill would also establish a first-of-its-kind Youth Privacy and Marketing Division at the Federal Trade Commission (FTC) that would be responsible for addressing the privacy of children and minors and marketing directed at them.

“Rampant and nonstop” marketing at kids

Markey said in a press release that COPPA will remain the “constitution for kids’ privacy online,” and that the senators’ proposed changes would introduce “an accompanying bill of rights.”

As it is, Markey said, marketing at kids nowadays is rampant and nonstop:

In 2019, children and adolescents’ every move is monitored online, and even the youngest are bombarded with advertising when they go online to do their homework, talk to friends, and play games. In the 21st century, we need to pass bipartisan and bicameral COPPA 2.0 legislation that puts children’s well-being at the top of Congress’s priority list. If we can agree on anything, it should be that children deserve strong and effective protections online.

The right of kids to be forgotten

The proposed law has the flavor of the EU General Data Protection Regulation (GDPR), what with the greater control it grants citizens over how their personal data is obtained, processed, and shared, as well as visibility into how and where that data is used.

The citizens, in this case, would be children and their parents, who would be entitled to get their hands on any personal information of the child or minor that’s been collected, “within a reasonable time” after making a request, without having to pay through the nose to get it, and in a form that a child or minor would find intelligible.

The bill also requires that online operators provide a “clear and prominent means” to correct, complete, amend, or erase any personal information about a child or minor that’s inaccurate: in other words, what the senators are calling an Eraser Button.

What would change?

These are the specific privacy protections that the bill would strengthen:

  • Prohibiting internet companies from collecting personal and location information from anyone under 13 without parental consent, and from anyone 13 to 15 years old without the user’s consent.
  • Banning targeted advertising directed at children.
  • Revising COPPA’s “actual knowledge” standard to a “constructive knowledge” standard for the definition of covered operators. Here’s a discussion of the difference.
  • Requiring online companies to explain the types of personal information collected, how that information is used and disclosed, and the policies for the collection of personal information.
  • Prohibiting the sale of internet-connected devices targeted towards children and minors unless they meet robust cybersecurity standards.
  • Requiring manufacturers of connected devices targeted to children and minors to prominently display on their packaging a privacy dashboard detailing how sensitive information is collected, transmitted, retained, used, and protected.

Recently, the FTC has been flexing its COPPA bicep like never before. Last week, video-streaming app TikTok agreed to pay a record $5.7 million fine for allegedly collecting names, email addresses, pictures and locations of children younger than 13 – all illegal under COPPA.

These tech companies know too much about our kids, and we don’t know what they’re doing with that data, Senator Hawley was quoted as saying in Markey’s press release:

Big tech companies know too much about our kids, and even as parents, we know too little about what they are doing with our kids’ personal data. It’s time to hold them accountable. Congress needs to get serious about keeping our children’s information safe, and it begins with safeguarding their digital footprint online.

“Landmark legislation”

Markey’s press release quoted multiple children’s rights campaigners who lauded the bill. One was Josh Golin, Executive Director, Campaign for Commercial-Free Children, who called it “landmark legislation.”

The Markey-Hawley bill rightly recognizes that the internet’s prevailing business model is harmful to young people. The bill’s strict limits on how kids’ data can be collected, stored, and used – and its all-out ban on targeted ads for children under 13 – would give kids a chance to develop a healthy relationship with media without being ensnared by Big Tech’s surveillance and marketing apparatuses. We commend Senators Markey and Hawley for introducing this landmark legislation and urge Congress to act quickly to put children’s needs ahead of commercial interests.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/eXnlYEPK-OI/

New bill would give parents an ‘Eraser Button’ to delete kids’ data

Two US senators on Tuesday proposed a major overhaul of the Children’s Online Privacy Protection Act (COPPA) that would give parents and kids an “Eraser Button” to wipe out personal information scooped up online on kids.

The bipartisan bill, put forward by Senators Edward J. Markey (D-Mass.) and Josh Hawley (R-Mo.), would also expand COPPA protection beyond its current coverage of children under 13 in order to protect kids up until the age of 15.

The COPPA update also packs an outright ban on targeting ads at children under 13 without parental consent, and from anyone up until the age of 15 without user consent. The bill also includes a “Digital Marketing Bill of Rights for Minors” that limits the collection of personal information on minors.

The proposed bill would also establish a first-of-its-kind Youth Privacy and Marketing Division at the Federal Trade Commission (FTC) that would be responsible for addressing the privacy of children and minors and marketing directed at them.

“Rampant and nonstop” marketing at kids

Markey said in a press release that COPPA will remain the “constitution for kids’ privacy online,” and that the senators’ proposed changes would introduce “an accompanying bill of rights.”

As it is, Markey said, marketing at kids nowadays is rampant and nonstop:

In 2019, children and adolescents’ every move is monitored online, and even the youngest are bombarded with advertising when they go online to do their homework, talk to friends, and play games. In the 21st century, we need to pass bipartisan and bicameral COPPA 2.0 legislation that puts children’s well-being at the top of Congress’s priority list. If we can agree on anything, it should be that children deserve strong and effective protections online.

The right of kids to be forgotten

The proposed law has the flavor of the EU General Protection Data Regulation (GDPR), what with the greater control it grants citizens over how their personal data is obtained, processed, and shared, as well as visibility into how and where that data is used.

The citizens, in this case, would be children and their parents, who would be entitled to get their hands on any personal information of the child or minor that’s been collected, “within a reasonable time” after making a request, without having to pay through the nose to get it, and in a form that a child or minor would find intelligible.

The bill also requires that online operators provide a “clear and prominent means” to correct, complete, amend, or erase any personal information about a child or minor that’s inaccurate: in other words, what the senators are calling an Eraser Button.

What would change?

These are the specific privacy protections that the bill would strengthen:

  • Prohibiting internet companies from collecting personal and location information from anyone under 13 without parental consent, and from anyone 13 to 15 years old without the user’s consent.
  • Banning targeted advertising directed at children.
  • Revising COPPA’s “actual knowledge” standard to a “constructive knowledge” standard for the definition of covered operators. Here’s a discussion of the difference.
  • Requiring online companies to explain the types of personal information collected, how that information is used and disclosed, and the policies for the collection of personal information.
  • Prohibiting the sale of internet-connected devices targeted towards children and minors unless they meet robust cybersecurity standards.
  • Requiring manufacturers of connected devices targeted to children and minors to prominently display on their packaging a privacy dashboard detailing how sensitive information is collected, transmitted, retained, used, and protected.

Recently, the FTC has been flexing its COPPA bicep like never before. Last week, video-streaming app TikTok agreed to pay a record $5.7 million fine for allegedly collecting names, email addresses, pictures and locations of children younger than 13 – all illegal under COPPA.

These tech companies know too much about our kids, and we don’t know what they’re doing with that data, Senator Hawley was quoted as saying in Markey’s press release:

Big tech companies know too much about our kids, and even as parents, we know too little about what they are doing with our kids’ personal data. It’s time to hold them accountable. Congress needs to get serious about keeping our children’s information safe, and it begins with safeguarding their digital footprint online.

“Landmark legislation”

Markey’s press release quoted multiple children’s rights campaigners who lauded the bill. One was Josh Golin, Executive Director of the Campaign for a Commercial-Free Childhood, who called it “landmark legislation.”

The Markey-Hawley bill rightly recognizes that the internet’s prevailing business model is harmful to young people. The bill’s strict limits on how kids’ data can be collected, stored, and used – and its all-out ban on targeted ads for children under 13 – would give kids a chance to develop a healthy relationship with media without being ensnared by Big Tech’s surveillance and marketing apparatuses. We commend Senators Markey and Hawley for introducing this landmark legislation and urge Congress to act quickly to put children’s needs ahead of commercial interests.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/eXnlYEPK-OI/

“FINAL WARNING” email – have they really hacked your webcam?

Sextortion is back!

In fact, it never went away. Most of us get dozens of sextortion scam emails every month to our work and personal accounts, demanding that we PAY MONEY OR ELSE!! In the crime of sextortion, the “OR ELSE” part is a threat to release a video of a sexual nature in which you are visible.

For example:

FINAL WARNING. You have the last chance to save your social life. I am not kidding. I give you the last 72 hours to make the payment before I send the video to all your friends and associates.

How did the crooks obtain this x-rated film in which you’re the star? They apparently filmed you using malware planted on your computer when you visited a porn site:

I’ve been watching you for a while because I hacked you through a trojan virus in an ad on a porn website. If you are not familiar with this, I will explain this. A trojan virus gives you full access and control over a computer, or any other device. This means that I can see everything on your screen and switch on your camera and microphone without you being aware of it.

The good news is that it’s all a pack of lies, so you can relax.

But the bad news is that this sort of cybercrime is nevertheless confronting and scary, because of how the crooks claim to have spied on you.

Even if you don’t watch porn, what else might they know about you if they have spyware on your laptop?

Is it technically possible?

If you’ve ever heard of RATs, short for Remote Access Trojans, you’ll know that malware does exist that makes it possible for a crook to turn on your webcam remotely.

Indeed, in a high-profile criminal case back in 2014, US youngster Jared James Abrahams, a college student in California who was studying computer science, was sentenced to 18 months in federal prison for spying on women via their webcams.

Abrahams pleaded guilty to hacking and extortion charges relating to 150 women, including Miss Teen USA, Cassidy Wolf, who went public about the threats made against her.

(As an aside, Wolf also said that she had a risky habit of using the same password everywhere, which may well have been how she got attacked and infected in the first place – so if you aren’t smart about passwords, change yours now!)

So the sextortionists could be telling the truth?

No. If you receive a sextortion email like the one we showed above (without any stills from the video as proof or a link to view the file), rest assured the crooks don’t have malware on your computer and they don’t have a video of you – they’re just trying to scare you into paying them something.

Remember, they send out these sextortions by the million – in the last 24 hours, SophosLabs received 1,700 samples of just one new sextortion spam campaign in its spamtraps.

So even if only a few recipients get scared enough to pay, the crooks end up making thousands of dollars with almost no outlay.

Our simple advice is: DON’T PAY, DON’T REPLY.

Delete the offending emails, and don’t engage with the crooks at all.

But they seem to know all about me!

We’ve had numerous emails from readers who never watch porn, don’t even have a webcam, and yet get scared by some of the claims made in these emails.

That’s because the crooks often try to convince you that they really do have “insider knowledge” about you.

They include personal details in the email that allegedly “prove” that there must be some sort of active spyware infection on your computer.

For example:

  • The crooks include one of your passwords. Often, it’s an old password, but usually it is (or was) genuinely yours. That’s scary, but don’t panic – these stolen passwords come from data breaches, where your data was lost by someone else. The crooks didn’t steal the password directly from you.
  • The crooks include your phone number. Same again – the crooks use phone numbers, paired up with email addresses, acquired through a data breach. The data wasn’t lifted directly from your computer.
  • The crooks send the email from your own account. Except that they don’t – the name that shows up in the From: field in an email is actually part of the email itself. Crooks can put anything they like in there, in just the same way that they could send you a snail-mail and sign off the “Yours sincerely” part in your name.
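That last trick – putting your own address in the From: field – can be demonstrated in a few lines of Python with the standard-library email module. This is a minimal sketch; the addresses are made up:

```python
# Minimal sketch: the From: header is just text inside the message,
# chosen entirely by the sender. Nothing stops a crook from putting
# the victim's own address there. (Addresses below are illustrative.)
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "victim@example.com"   # forged: claims to be the victim
msg["To"] = "victim@example.com"
msg["Subject"] = "FINAL WARNING"
msg.set_content("I have hacked your account...")

# The message happily carries the forged header; only receiving-side
# checks such as SPF, DKIM and DMARC can flag the mismatch.
print(msg["From"])  # -> victim@example.com
```

In other words, a forged From: address proves nothing about who actually sent the message, which is exactly why mail providers layer sender-authentication checks on top.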

What to do?

Nothing!

OK, delete the email – but don’t panic, don’t reply to the crooks, and certainly don’t pay up.

As several of our readers have pointed out, if the crooks really wanted to prove they had a “sex tape” of you, they’d send you a still image, or a link where you could preview the file they claim to have.

But they don’t – they just threaten you and present vague and unconvincing evidence that they know something about you.

So, don’t panic, delete the email and don’t let the crooks trick you into contacting them at all.

For further information

Sextortion scams are nothing new. Learn how these crooks spoof your own email address to make you think they have access to your computer. And then read more about recent sextortion emails.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/9KfD15jODqk/

Misconfigured Box accounts leak terabytes of companies’ sensitive data

If your company uses Box for cloud-based file sharing, security researchers are advising you to stop reading right now and immediately disable public file sharing: vanity-named subdomains and URLs are “easily brute-forceable,” leaving companies’ publicly shared data open to extremely easy attacks.

Security firm Adversis published a report on Monday after using a “relatively large” wordlist to uncover hundreds of Box customers’ subdomains, through which they could access hundreds of thousands of documents and terabytes of extremely sensitive data.

A sampling of what the researchers found:

  • Hundreds of passport photos
  • Social Security and bank account numbers
  • High-profile technology prototype and design files
  • Lists of employees
  • Financial data, invoices, internal issue trackers
  • Customer lists and archives of years’ worth of internal meetings
  • IT data, VPN configurations, network diagrams

Adversis says its initial impulse was to reach out to all the affected companies, but the scale of the task ruled that out. After finding that a large percentage of Box customer accounts that it tested had thousands of exposed, sensitive documents, the firm alerted some of those companies, gave Box a heads-up – that was on 24 September – and published its report.

As Box Chief Customer Officer Jon Herstein said in a blog post on Sunday, Box offers various ways for its customers to allow content sharing both between employees and outside the company.

Data stored in Box enterprise accounts is private by default. But in order to make it easy for its customers to share content with large groups – be it privately or publicly – Box offers the “Custom Shared Link” feature, which enables its customers to customize the default secure shared links so they’re easier to find. Box gives the example of a car company that wants to distribute public press releases for a product launch: you can see where the car company would like the idea of customizing the URL to read something like this: https://carcompanyname.app.box.com/v/pressrelease

This is neither a bug nor a vulnerability, mind you. It’s simply a way to easily make data publicly accessible with a single link. In fact, Adversis noted, it was called out as an easy attack method back in June 2018.

The problem: with this type of predictable URL formulation, these “secret” links are easy to discover. So that’s what Adversis did: its researchers whipped up a script to scan for and enumerate Box accounts with lists of company names and wildcard searches. It easily found Box customer accounts by checking https://companyname.account.box.com. If that link returned a target company’s logo, the company was a paying customer and “probably susceptible,” the firm said.
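The enumeration idea is simple enough to sketch. The URL pattern comes from the article; the wordlist and the check are illustrative assumptions, and actually probing accounts you don’t own would require authorization:

```python
# Sketch of the enumeration approach described above: build candidate
# Box account URLs from a wordlist of company-name guesses. The names
# here are hypothetical; a real scan needs permission from the target.

def candidate_urls(wordlist):
    """Yield a Box account URL for each company-name guess."""
    for name in wordlist:
        yield f"https://{name}.account.box.com"

guesses = ["examplecorp", "acme-inc", "contoso"]
urls = list(candidate_urls(guesses))
print(urls[0])  # https://examplecorp.account.box.com

# A real scanner would then issue an HTTP GET per URL and treat a
# customer-branded response (e.g. a company logo) as a hit, before
# guessing /v/<name> shared-link paths under each hit the same way.
```

The point is how little attacker effort this takes: a dictionary of company names and a loop are the entire reconnaissance phase.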

Then, the researchers sat back and watched the wave come in:

At that point, we began brute forcing folder and file names which began returning results faster than we could review them.

Much of the data, found leaking in subdomains of dozens of companies, was harmless, in that it was meant to be public. But then too, there was all that “oh, dear!” data:

These included passport photos, prototype details with raw CAD files for some very prominent new and coming tech, Social Security Numbers, financial documents, internal IT data including network diagrams and asset information, and innumerable “confidential” slide decks.

Who’s leaking data?

Adversis says it contacted a “small minority” of affected companies and vendors, most of which promptly closed the leak. Box acknowledged the issue and updated its file-sharing guidelines.

Adversis gave TechCrunch a list of some of the exposed Box accounts, and the publication contacted several of the big names on that list. Those big companies represent a smorgasbord of industries: a flight reservation system maker, a nonprofit that handles corpse donations, a TV network, Apple (though the tech behemoth apparently exposed only what looked like non-sensitive internal data, such as logs and regional price lists), and more.

The data exposed included default passwords and, in some cases, backdoor access passwords in case of forgotten passwords; a PR firm’s detailed proposal plans and more than a dozen resumes of potential staff for the project, including names, email addresses, and phone numbers.

That list of exposed accounts included even Box itself. From TechCrunch:

Box, which initially had no comment when we reached out, had several folders exposed. The company exposed signed non-disclosure agreements on their clients, including several U.S. schools, as well as performance metrics of its own staff, the researchers said.

What to do

Box recommended making these changes to deal with the issue of URL guessing and subsequent leakage:

  • Administrators configure Shared Link default access to ‘People in your company’ to reduce accidental creation of public (open) links by users.
  • Administrators regularly run a shared link report (as described here) to find and manage public custom shared links.
  • Security Administrators leverage third-party SIEM or log tools to consistently review suspicious content activity across your enterprise.
  • Users do not create public (open) custom shared links to content that is not intended for public consumption.
  • Users only post shared content with open shared links on public web pages if they want the content to be indexed by Google and available for public consumption.
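The first two recommendations boil down to an audit: walk your content inventory and flag anything with a public (“open”) shared link. Here is a minimal sketch; the records are shaped like the Box API’s shared_link object, which reports an access level such as “open” or “company” – treat the exact field names as assumptions:

```python
# Sketch of a shared-link audit: given item records shaped roughly like
# the Box API's shared_link objects (field names assumed here), return
# the names of items whose link is publicly reachable ("open" access).

def open_links(items):
    """Return names of items whose shared link grants public access."""
    flagged = []
    for item in items:
        link = item.get("shared_link")
        if link and link.get("access") == "open":
            flagged.append(item["name"])
    return flagged

# Hypothetical inventory: one deliberately public file, one
# company-restricted file, one with no shared link at all.
inventory = [
    {"name": "press-release.pdf", "shared_link": {"access": "open"}},
    {"name": "payroll.xlsx", "shared_link": {"access": "company"}},
    {"name": "draft.docx", "shared_link": None},
]
print(open_links(inventory))  # ['press-release.pdf']
```

Anything the audit flags that isn’t meant for public consumption should have its link access tightened or removed, per Box’s guidance above.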

Box says it’s working on improving Box security by…

  • Adding more user education to the link settings tool on Box to make the potential implications of public link access even more clear, and advising that no sensitive content ever be shared with this level of permission.
  • Improved admin policies for public shared links, including changing the default setting in the Box Admin console to disable public custom shared link URLs until a company Box Admin decides to enable them; and setting the default access level for shared links in the Admin console to “people in your company.” That default can only be changed by a company’s Box Admin. As a result, in a default configuration of Box, end users will need to expressly change the shared link setting to “people with the link” to make the link externally accessible.
  • More stringent controls to reduce unintended content access. Box says it’s working on a variety of methods to limit the unintended discovery of open/public links and prevent content access by external parties.

For its part, Adversis has open-sourced and published the scanning tool it used to find the exposed accounts. Aptly enough, the tool’s name is PandorasBox.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/DQNVPkrRHYM/