STE WILLIAMS

Equifax peeks under couch, finds 2.4 million more folk hit by breach

Embattled credit-reporting company Equifax has done some data crunching and discovered another 2.4 million people who had their information slurped by hackers.

The biz, which was subject to one of the biggest data breaches in US history last May, has already had to revise up the number of affected individuals.

The total stood at 145 million in the US and hundreds of thousands in the UK and Canada – but it has now found a few more people who previously escaped its “forensic” testing.

In a statement released today, Equifax said that ongoing analysis of the stolen data had allowed it to confirm the identities of an additional 2.4 million US consumers who had partial driving licence information taken.

“This information was partial because, in the vast majority of cases, it did not include consumers’ home addresses, or their respective driver’s license states, dates of issuance, or expiration dates,” the statement said.


The business said that it had – as recommended by forensic experts – focused its initial assessments on Social Security Numbers and names as a way of identifying who was affected by the hack.

The newly identified batch of people, however, did not have their SSNs stolen so weren’t picked up by the previous investigations.

Equifax’s interim boss Paulino do Rego Barros said in a statement that the announcement “is not about newly discovered stolen data”.

Rather, he said, it was about “sifting through the previously identified stolen data” and comparing it with information in the business’s database that was not stolen to identify people who had previously slipped through the net.

These people – who had presumably until now thought they were off the hook – will be contacted directly and offered free identity theft protection and credit file monitoring services.

The company is to report its fourth quarter results tomorrow. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/03/01/equifax_discovers_24_million_more_customers_hit_by_breach/

What Enterprises Can Learn from Medical Device Security

In today’s cloud-native world, organizations need a highly distributed approach that ties security to the workload itself in order to prevent targeted attacks.

Recently, I had an enlightening conversation with a customer who works at a medical device manufacturer of laboratory diagnostic equipment. This company has thousands of medical devices in the field — visualize racks of test tubes, all computerized with a large instrument and a Windows system that’s running the test equipment in the hospital.

Scott T. Nichols is responsible for product privacy and security at this company, which means it’s his job to figure out how that data — each patient’s name, Social Security number, and test results (basically, the most sensitive data there is) — remains protected.

The interesting thing about this situation is that these devices are computers that sit, as trusted hosts, on someone else’s network in someone else’s data center. That means the company doesn’t control the firewall, so there’s a lot of risk involved in keeping its devices secure. Think about the latest outbreak of ransomware. What can this company do to assure customers and shareholders that its equipment (and everyone who uses it) is not vulnerable to these attacks?

And in this case, it’s not just commodity malware. There’s the very real threat of targeted attacks. In the hospital environment, we see attacks directed at individual doctors, hospital administrators, and other staff members. These attacks come from within, not from an attacker breaching the firewall. Consider this common scenario: a field tech comes in to pull a report using a USB stick, and that stick is infected. Even though the device sits apart from the network, the drive gives a sophisticated attack a way in. So even in a closed-circuit system with an isolated network for medical devices, malware can still get in.

“The threat is people,” says Nichols. “People are the weakest link.” He’s right: People are always the weakest link and always will be. They are trusting, hardworking, and earnest — they don’t realize what they are doing is oftentimes propagating infection.

How can this company respond? As Nichols figured out, it needs a new approach to security — one that doesn’t protect the network alone or rely on a physical perimeter.

Thus, the company implemented an “onion” strategy, with several layers of protection attached to an individual workload. At the heart of this strategy is the data layer, where it uses encryption of data on the device itself. Think of it as the crown in a castle that needs to be protected. Imagine building a safe for the crown inside the castle and then a moat all around the castle. Protecting the data layer is the network layer, where firewalling turns network security on and off. After the network layer is the server layer, which allows only applications that are recognized. On top of that is the user layer, where access controls allow the company to see who logged in and who logged out, check their user ID, and add password complexity requirements. They also put protections on the back end.
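
The onion strategy described above can be sketched as a stack of independent per-request checks, outermost first and the data layer (the “crown”) innermost. Everything in this sketch beyond the four layer names is made up for illustration: the application allow-list, the address range, and the request fields are stand-ins, not the manufacturer’s actual controls.

```python
# Illustrative only: four "onion" layers as independent predicates per request.
LAYERS = [
    ("user",    lambda req: req["user_authenticated"]),            # access controls
    ("server",  lambda req: req["app"] in {"diagnostics-suite"}),  # application allow-list
    ("network", lambda req: req["src_ip"].startswith("10.")),      # host firewall rule
    ("data",    lambda req: req["payload_encrypted"]),             # innermost: the "crown"
]

def admit(req: dict) -> bool:
    """A request is admitted only if every layer's check passes."""
    return all(check(req) for _name, check in LAYERS)

ok = {"user_authenticated": True, "app": "diagnostics-suite",
      "src_ip": "10.0.4.7", "payload_encrypted": True}
print(admit(ok))  # -> True
```

The point of the layering is that the checks are attached to the workload itself, so they travel with the device regardless of whose network it lands on.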

Why was I so fascinated with this example? Because the parallel to what the enterprise faces as it moves to the cloud is obvious. The workload is put in an environment that the enterprise doesn’t control. The traditional security controls are dissolving, and the self-service model has made things worse by blurring the separation of duties.

The enterprise needs a new model. It needs to rip a page from the playbook of this medical device company and implement the same kind of highly distributed security approach that’s tied to the workload itself. I’m hardly the only one who’s thinking this. A recently published Gartner report says security needs to be attached to the workload and to be multilayered — looking at data, network, computing, and users.

Migrating a workload to the cloud is like moving from one house to another: If you simply box up everything and move it to the new address, you are missing a major opportunity to clean up the old and make way for the new, an opportunity to streamline operations and to improve the effectiveness of your defenses. In the worst case, migrating a workload without revisiting the security controls can expose new vulnerabilities that were never even possible before, such as the often-experienced data leakage that comes from a misconfigured S3 bucket on Amazon that publishes sensitive data to the public Internet.  
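
The S3 misstep mentioned above usually comes down to a bucket policy (or ACL) whose `Allow` statement names the wildcard principal `*`, i.e. everyone. A minimal, pure-logic sketch of that audit check follows; the bucket name and policy document are hypothetical, and a real audit would fetch live policies through the AWS API rather than inspect a hard-coded dict.

```python
def bucket_policy_is_public(policy: dict) -> bool:
    """Flag any Allow statement whose principal is the wildcard '*' (everyone)."""
    for stmt in policy.get("Statement", []):
        if stmt.get("Effect") != "Allow":
            continue
        principal = stmt.get("Principal")
        if principal == "*" or (isinstance(principal, dict) and principal.get("AWS") == "*"):
            return True
    return False

# A hypothetical policy of the kind behind many S3 leaks: it serves every
# object in the bucket to any anonymous reader on the public Internet.
leaky = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": "*",
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::example-bucket/*",
    }],
}
print(bucket_policy_is_public(leaky))  # -> True
```

Running a check like this as part of the migration, rather than after the breach, is exactly the “clean up the old” opportunity the house-moving analogy points at.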

In a cloud-native world, we have an opportunity to implement security controls that are:

  1. Fully automated
  2. Host-centric
  3. Auto-scaling
  4. Immutable
  5. Independent of infrastructure

The multitenant public cloud has revolutionized IT. For the security team, it’s a new world with a new set of constraints and a new set of possibilities. The medical device community has been operating in this mindset for some time, and there are lessons to be learned from them on building a cloud-native security architecture.  


Black Hat Asia returns to Singapore with hands-on technical Trainings, cutting-edge Briefings, Arsenal open-source tool demonstrations, top-tier solutions and service providers in the Business Hall. Click for information on the conference and to register.

Tom Gillis co-founded Bracket Computing with the goal of delivering enterprise computing driven by business needs, not hardware limitations. Prior to Bracket, Tom was vice president/general manager of Cisco’s Security Technology Group, leading business units responsible for … View Full Bio

Article source: https://www.darkreading.com/cloud/what-enterprises-can-learn-from-medical-device-security-/a/d-id/1331145?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Equifax Finds 2.4 Million Additional US Victims of its Data Breach

The total number of victims now stands at 147.9 million US consumers.

For the second time in several months, Equifax has discovered additional victims of its record-breaking data breach: the firm found 2.4 million more customers had their data exposed in the 2017 hack.

In October of last year, Equifax revealed that forensics investigators had concluded that some 2.5 million more US consumers were affected by the data breach it revealed in September, bringing the total number at that time to 145.5 million. 

Today’s new revelation raises the number of victims in the US to 147.9 million people. According to Equifax, the latest list of victims had their names and partial driver’s license information stolen in the hack.

“This is not about newly discovered stolen data,” said Paulino do Rego Barros, Jr., interim CEO of Equifax. “It’s about sifting through the previously identified stolen data, analyzing other information in our databases that was not taken by the attackers, and making connections that enabled us to identify additional individuals.”

Equifax – which last month hired former Home Depot CISO Jamil Farshchi as its new chief information security officer – said it will directly alert these consumers and provide them with free identity theft protection and credit file monitoring services.

Read more here.

 


Dark Reading’s Quick Hits delivers a brief synopsis and summary of the significance of breaking news events. For more information from the original source of the news item, please follow the link provided in this article. View Full Bio

Article source: https://www.darkreading.com/attacks-breaches/equifax-finds-24-million-additional-us-victims-of-its-data-breach/d/d-id/1331165?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

ICS Under Fire in 2017

A new Dragos report finds a rising number of public vulnerability advisories affecting ICS, with too little practical guidance on how to deal with the flaws.

The security of industrial control systems (ICS) had its nose bloodied considerably in 2017 with several high-profile targeted malware outbreaks and an alarming set of vulnerability trends arising around these systems. So says a new report out by Dragos, which laid out the lowlights of ICS security vulnerabilities from last year.

Dragos last year tracked 163 vulnerability advisories that impacted ICS products. Among these vulnerabilities, 61% made it possible for attackers to inflict a scary double-whammy of both loss of view and loss of control of the impacted asset.

“This means that a large percentage of ICS-related vulnerabilities will cause severe operational impact if exploited,” the report explained.

One of the perennial problems with vulnerabilities in ICS products is the great difficulty organizations face in patching them. The touchy and critical nature of these systems tends to delay patch cycles – sometimes indefinitely. Dragos believes that to get over this hump organizations need to work harder to develop better test systems that can reliably vet patches so that impacted organizations can roll them out more quickly with confidence.

To build these test environments, the most fundamental first step is getting executive buy-in for the investment, says Reid Wightman, senior vulnerability analyst for Dragos and author of the report. It may require not only new software and computers but potentially additional controllers as well.

However, it may be easy to argue for this capital given that test environments provide benefits beyond the security realm.

“Engineers are likely to benefit from it in that they can test new setups prior to a maintenance window, and it can really speed up the time that it takes to repair software systems during that maintenance window,” Wightman explains. “A test system can really boost profit in a lot of ways, it isn’t just a cost sink.”

Nevertheless, even if organizations work hard to shrink the patch window, they need better support from vendors and the security community to deal with the risk between disclosure and patching. According to Wightman, public flaw advisories don’t do enough to provide information about alternative mitigations of the risk beyond applying the patch or isolating systems.

“When end users can’t patch – and they often can’t patch, at least not right away – they absolutely should be told what they can do to reduce their risk,” he says. “They aren’t getting that information from ICS-CERT nor from the vendors in many cases.”

There also needs to be more acknowledgement that patching won’t necessarily zero out the risk equation. One of the more startling statistics from this report is that of the crop of ICS-related vulnerabilities last year, 64% impacted components that were insecure by design. In other words, the patch wouldn’t fully eliminate the risk of compromise.

Wightman believes that the single most important thing an organization can do to strengthen its risk posture on the ICS front is to “know thyself.” Organizations need to do a better job of understanding what’s in their control systems networks: which assets communicate with one another, and specifically which services are used. This is the only way to set the targeted access controls that reduce the most risk.

“A prime example is understanding that the engineering protocol for a field device almost always uses a different service from the data access,” Wightman says. “Let your engineering systems have access to the engineering service, and let your operator systems have access to the data service. Vendors can provide this information, and should give it to you for free.”


Ericka Chickowski specializes in coverage of information technology and business innovation. She has focused on information security for the better part of a decade and regularly writes about the security industry as a contributor to Dark Reading.  View Full Bio

Article source: https://www.darkreading.com/vulnerabilities---threats/ics-under-fire-in-2017/d/d-id/1331163?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Phishers Target Social Media

Financial institutions still the number one target, according to a new report by RiskIQ.

Social media platforms now rank as one of the most phished brands, right behind financial institutions, according to a new report.

The RiskIQ Q4 Phishing Roundup report also has some good news: the number of blacklisted phishing domains declined 2% from Q3. The number of brands targeted was also down, to 259, a drop of 7% over the same period.

But social media has graduated to the Big Time: this was the first quarter where social media platforms were among the top targeted brands for threat actors.

Sam Curcuruto, head of product marketing at RiskIQ, says that phishing actors will always stay “close to the money,” but that they’re finding that money is following the social media trail. Curcuruto says phishers use social media as an attack vector and to phish log-in credentials: they use the stolen credentials to log into the social media accounts to learn more about the consumer so they can craft more targeted and effective spear-phishing attacks.

Phishing attacks targeting social media accounts aren’t new, however. In RiskIQ’s Q3 Report, 10% of phishing targets were social media platforms, Curcuruto says. By Q4, though, a full 20% of attacks were against social media targets, putting social media on an equal footing with large tech company and digital transaction provider targets.

Financial institutions still lead the target pack with 40% of targeted domains, but the rise of social media is striking.

Curcuruto says that social media’s contribution to a full picture of the phishing target is one reason for the increase in phishing activity. “You can learn a lot about someone by looking at the information on one of those accounts, so you can start to do things like get around authentication questions,” Curcuruto says. The other is the rise in social media capabilities for transferring funds from one party to another, making social media “close to the money,” in Curcuruto’s words.

While broad media attention has shifted to threat actors’ involvement with political and election issues, Curcuruto says he doesn’t see anything in the trends to indicate that phishing attackers will refocus their attention. “When you think about phishing, it all comes down to the money, so I’m not entirely sure we’ll see a lot of phishing affected by things like the upcoming elections,” he says. “I think that in the next quarter we’ll see financial institutions remaining the top target.”

New research from Kaspersky Lab seems to confirm that financial targets sit at the top of phishing criminals’ lists. Attacks against banks, payment systems, and online retailers made up the top three categories in overall phishing attacks in 2017, a sign that criminals are now focusing on accessing money directly, according to the security vendor.

RiskIQ’s Curcuruto says there’s another new phishing target emerging as well: cryptocurrency wallets. “We’re seeing an increase in cryptocurrency, so I think we’ll see more phishing aimed at cryptocurrency wallets,” Curcuruto says. “I think we’ll see people using these [wallets] who aren’t familiar with the security behind them, so we’ll see more phishing.”

Phishing remains an active attack method, with 27,285 uniquely blacklisted phishing domains spotted in Q4. This despite consumers and businesses becoming more diligent in defending against phishing attacks. “People are becoming more and more educated about the phishing threat, so fewer are being successful,” Curcuruto says, though phishers are becoming more sophisticated in their attacks in response.

Two-factor authentication is another defense strategy making an impact on phishing success. “The more the platforms move to 2FA, the more difficult it is for a [phishing] attempt to be successful,” Curcuruto says.
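
Curcuruto’s point about 2FA is concrete enough to sketch. Time-based one-time passwords (RFC 6238), the scheme behind most authenticator apps, derive a short-lived code from a shared secret and the current 30-second window, so a phished static password alone no longer grants access. A minimal standard-library implementation, illustrative rather than any vendor’s code:

```python
import hashlib
import hmac
import struct

def totp(secret: bytes, for_time: int, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 TOTP: HMAC-SHA1 over the time-step counter, dynamically truncated."""
    counter = struct.pack(">Q", for_time // step)
    digest = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

# RFC 6238 Appendix B test vector: ASCII secret "12345678901234567890",
# time = 59 seconds (counter 1), 8 digits -> "94287082"
print(totp(b"12345678901234567890", 59, digits=8))
```

Because the code expires with the time window, a phisher who captures it has at most seconds to replay it, which is what raises the bar Curcuruto describes.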


Curtis Franklin Jr. is executive editor for technical content at InformationWeek. In this role he oversees product and technology coverage for the publication. In addition he acts as executive producer for InformationWeek Radio and Interop Radio where he works with … View Full Bio

Article source: https://www.darkreading.com/vulnerabilities---threats/phishers-target-social-media/d/d-id/1331162?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

How & Why the Cybersecurity Landscape Is Changing

A comprehensive new report from Cisco should “scare the pants off” enterprise security leaders.

Cisco recently published its 2018 annual cybersecurity report. The study is far more comprehensive than previous surveys and includes threat research from its Talos group and a number of technology partners, along with a survey of 3,600 chief security officers and security operations managers from all over the world. Even more important: the report underscores the need to change the way cybersecurity is done. It should scare the pants off today’s security leaders.

Highlights of the study include four key assertions:

1. Malware is becoming self-propagating. 
Historically, malware required a user to click on a link, open an attachment, or take some other kind of action before it could spread. Today, newer forms of malware, like ransomware cryptoworms, are network-based, which obviates the need for humans to spread it. Self-propagating malware is much more difficult to find and can propagate at network speeds. Cisco warns that self-propagating malware has the potential to take down the Internet.

2. Ransomware isn’t only for ransom. 
In security circles, 2017 may well be remembered as the year of ransomware. It made headlines as it wreaked havoc on companies large and small. One healthcare security professional I talked to admitted his company had been breached several times and now just pays the ransom as a “cost of doing business.” Ransomware even hit pop culture as shown by a recent episode of one of my favorite TV shows, Homeland, when the ex-CIA agent and heroine Carrie Mathison got stung by it. The Cisco security report indicates that some hackers aren’t just looking to make a few bucks from the ransom threat. Rather, their main goal is the destruction of systems and data. The recent Nyetya (NotPetya) threat posed as tax software but was actually something called “wiper malware” that killed organizations’ supply chain systems.

3. Adversaries are stepping up their evasion capabilities.
The ability to skirt sandboxes has been something that the bad guys have been getting steadily better at executing. In no way am I saying sandboxes don’t work. They do, but some malware has gotten smart enough to evade detection. A growing technique is to hide the threat in encrypted traffic. The use of encryption has grown as a way of protecting payloads but it can also conceal bad traffic from security systems. Threat actors are also using popular cloud services for command and control, making malware very difficult to find with traditional security tools because it looks like normal traffic. The graphic below, from Anomali, offers some examples of cloud services that were abused by malware for command and control.

4. The Internet of Things (IoT) is becoming a significant threat vector.
Businesses of all sizes are deploying IoT devices at a furious rate. This may be a critical component of digital transformation, but it also poses a number of new security problems, according to the study, because of the following:

  • 60% of IoT devices are deployed by operational technology teams, not IT.
  • Many IoT devices are unmonitored.
  • IoT devices create “back doors” into other systems.
  • Patching of IoT devices is often done poorly.
  • IoT endpoints often have no inherent security capabilities.

Any one of these points can be problematic, but put them together and it spells disaster for many companies. Mirai and Reaper may have been the first couple of highly publicized IoT botnets, but they certainly won’t be the last.

The lesson of the report is that the bad guys are getting smarter, are creating more damage, and have more tools at their disposal. But the big issue for security professionals is what to do about it. Clearly, doing what you did before isn’t going to protect your business. If the hackers and threats keep evolving, so must an organization’s security strategies. Here are four “no-brainer” recommendations.

Recommendation 1: Implement segmentation. One of the facts of life in security is that businesses will get breached. The challenge is to minimize the blast radius when this happens. Segmentation ensures that when a breach occurs, damage is limited to a small, confined area. 

Recommendation 2: Use machine learning-based tools. Machine learning can connect dots faster than people can. That’s why the bad guys are using it. Signature-based anti-malware was once state-of-the-art, but the industry changes too fast. Instead, shift to machine learning-based tools that can sift through the massive amounts of data and find those small anomalies before they become big security problems.

Recommendation 3: Automate the onboarding and security of IoT devices. This requires coordination of IT, security operations, and operational technology. There are a number of access control solutions that enable the entire IoT onboarding process to be automated so the devices can reach only the other devices they need to and, if a breach occurs, it won’t impact other critical systems.

Recommendation 4: Extend security to the cloud. The concept that security teams are only responsible for traffic and data on the company network is a complete fallacy today. Security tools must extend to public cloud services. The first step in this is to implement a process of doing periodic audits to ensure that all the cloud services that are being used are actually known.


Zeus Kerravala provides a mix of tactical advice and long term strategic advice to help his clients in the current business climate. Kerravala provides research and advice to the following constituents: end user IT and network managers, vendors of IT hardware, software and … View Full Bio

Article source: https://www.darkreading.com/cloud/how-and-why-the-cybersecurity-landscape-is-changing/a/d-id/1331157?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

GitHub Among Victims of Massive DDoS Attack Wave

GitHub reports its site was unavailable this week when attackers leveraged Memcached servers to generate large, widespread UDP attacks.

GitHub has informed users of a distributed denial-of-service (DDoS) attack, which brought down the site from 17:21 to 17:26 UTC and made it sporadically unavailable from 17:26 to 17:30 UTC. The incident did not compromise the confidentiality or integrity of users’ data, it reports.

In a first, attackers last month exploited unsecured Memcached servers to amplify DDoS attacks against target organizations. Memcached is open-source software used among many businesses to increase servers’ performance speed; however, it’s not always used securely. Organizations often deploy Memcached hosts so they’re accessible to the public Internet and all attackers have to do is search for hosts and use them to direct high-volume DDoS traffic.

GitHub identified and mitigated the Feb. 28 attack, which came from more than 1,000 unique autonomous systems (ASNs) across tens of thousands of different endpoints. The amplification attack used the memcached approach and peaked at 1.35Tbps via 126.9 million packets per second. One facility had an increase in inbound transit bandwidth exceeding 100Gbps.
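
As a sanity check, the two peak figures GitHub reported imply an average packet size of roughly 1,330 bytes, consistent with large amplified memcached responses filling the pipe rather than a flood of small packets:

```python
# Peaks reported for the Feb. 28 attack on GitHub
peak_bits_per_sec = 1.35e12      # 1.35 Tbps
peak_packets_per_sec = 126.9e6   # 126.9 million packets per second

avg_packet_bytes = peak_bits_per_sec / peak_packets_per_sec / 8
print(round(avg_packet_bytes))  # -> 1330
```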

The site tells users it’s investigating the use of monitoring infrastructure to automate enabling DDoS mitigation providers.

Read more details here.


Dark Reading’s Quick Hits delivers a brief synopsis and summary of the significance of breaking news events. For more information from the original source of the news item, please follow the link provided in this article. View Full Bio

Article source: https://www.darkreading.com/attacks-breaches/github-among-victims-of-massive-ddos-attack-wave/d/d-id/1331169?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Right to be Forgotten requests stagnate, Google refuses most anyway

Is the EU’s much-vaunted Right to be Forgotten (RTBF) privacy law working as intended?

Studying Google’s latest transparency report, it’s clear that demand by Europeans to have URLs removed (or “delisted”) from the search giant’s results has dropped off.

Since the process launched in May 2014, delisting requests have covered a total of 2.44 million URLs across 655,000 individual requests, or roughly 3.7 URLs per request.

Year one accounted for 40% of these requests, year two 25%, year three 22% and, in seven months of year four, 13% (which indicates a similar rate to year three).
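
The headline figures are easy to sanity-check; the per-request average and the yearly shares cited above both line up:

```python
# Figures from Google's transparency report as cited above
total_urls, total_requests = 2_440_000, 655_000
print(round(total_urls / total_requests, 1))  # -> 3.7 URLs per request

# Yearly shares of requests (year four covers only seven months)
shares = {"year 1": 40, "year 2": 25, "year 3": 22, "year 4 (partial)": 13}
print(sum(shares.values()))  # -> 100
```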

It could be that the early period saw a rush and requests will settle at around the same number each year, or perhaps it’s dawned on requesters that getting URLs removed isn’t as easy as filling in an online form, because Google can say no.

In fact, the percentage of URLs Google accepted for removal has remained steady at between 42% and 44% every year, which is to say the majority of requests are rejected.

Alternatively, a better measure might be the type of information people are asking to have delisted.

For example, the ‘personal information’ category was 6.9% of requests, 98% of which were accepted. Compare that with ‘professional wrongdoing’, which was 7.1% of requests, of which only 14.5% were delisted.

It also seems to matter which type of site a URL relates to: directories and social media sites (especially Facebook) had URLs delisted over 50% of the time, while for news sites the rate dropped to 32%, and for government sites to 19%.

Perhaps the most curious statistic, however, is the tiny number of individuals who seem to be generating lots of work for Google, with the top 1,000 requesters (0.25% of the total) accounting for nearly 15% of requests, 20% of which were accepted.

Google commented:

These mostly included law firms and reputation management agencies, as well as some requesters with a sizable online presence.

This doesn’t mean that RTBF isn’t working but does tend to suggest that a minority of Europeans who can afford to employ people to make requests for them account for a disproportionate amount of its activity.

What happens when Google refuses to delist a URL?

In a UK High Court case, an unnamed businessman is suing Google for refusing to remove search results relating to his conviction for false accounting during the late 1990s.

It’s the perfect stand-off between what RTBF is supposed to be about and what Google says is the public right to know.

It also reminds us that even when URLs are delisted, RTBF has its limits. Should this individual win, Google will remove his name from a set of search results, but not, of course, from the sites themselves; the information can still become common knowledge in other ways.

At least Google has dramatically improved the speed with which it processes URL delisting requests. In 2014, these took an average of 85 days (causing it some trouble), but by 2017 this had dropped to four.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/MAQhoXbUrgY/

Microsoft still refusing to hand over private email data stored in Ireland

Remember the US government’s years-long battle to get Microsoft to cough up a customer’s private email stored on its servers in Ireland?

It’s back.

Actually, it never went away. A US appeals court in January 2017 rejected a government appeal to rethink its denial of the government’s attempts to get Microsoft to hand over the Outlook.com email, which has something to do with a drug trafficking prosecution.

So the government took the case to the Supreme Court of the US (SCOTUS) for more legal wrangling in this case – a case that troubles Silicon Valley tech giants, who are worried that data stored abroad could be made vulnerable to government grabbing.

On Tuesday morning, liberal justices Sonia Sotomayor and Ruth Bader Ginsburg wondered why Congress isn’t regulating “this brave new world” of cloud storage, rather than expecting the nation’s top court to interpret the legality of a warrant obtained under the dusty old 1986 Stored Communications Act.

Justice Ginsburg, from oral arguments:

[In 1986,] no one ever heard of clouds. This kind of storage didn’t exist… If Congress takes a look at this, realizing that much time and… innovation has occurred since 1986, it can write a statute that takes account of various interests.

Wouldn’t it be wiser to say, ‘Let’s leave things as they are. If Congress wants to regulate this “brave new world,” let them do it’?

In fact, a bill was introduced last month, with bipartisan support, that seeks to address the potential problems that the government’s interpretation of the SCA would create – namely, in Microsoft’s view, the act of “unilaterally” reaching into a foreign land to “search for, copy, and import private customer correspondence” in spite of it being protected by foreign law.

It’s known as the Clarifying Lawful Overseas Use of Data (CLOUD) Act.

That bill won’t come into play here, however, as the Department of Justice (DOJ) pointed out. It’s still freshly hatched and not yet law, which leaves SCOTUS stuck with interpreting the SCA. The issue can’t wait, according to DOJ lawyer Michael Dreeben: the 2nd Circuit Court of Appeals ruling has already “caused grave and immediate harm to the government’s ability to enforce federal criminal law.” It has to be resolved now, Dreeben said.

DOJ prosecutors have been claiming that allowing Microsoft to spurn its warrant is tantamount to encouraging companies to evade the law by keeping sensitive data overseas. No, Microsoft says – that’s how we keep our customers from ditching us over a lack of data privacy.

Microsoft’s lawyer, E. Joshua Rosenkranz:

If people want to break the law and put their emails outside the reach of the US government, they simply wouldn’t use Microsoft.

Dreeben said that Microsoft is creating a “mirage” with its insistence that complying with the government’s warrant would break strong overseas privacy laws.

Justice Sotomayor’s response, in essence: Really? All the amicus briefs from other technology companies tell us the same thing as Microsoft – are those companies also fabricating a mirage?

Such briefs echo the arguments made in amicus briefs submitted in March 2017 by Apple, Amazon, Microsoft and Cisco as they lined up to support Google, which is also resisting a warrant to seize email stored on overseas servers.

Rosenkranz said that the old law just doesn’t jibe with our current technological reality:

This is a very new phenomenon, this whole notion of cloud storage in another country. We didn’t start doing it until 2010. So the fact that we analyzed what our legal obligations were and realized, ‘Wait a minute, this is actually an extraterritorial act that is unauthorized by the US Government.’ The fact that we were sober-minded about it shouldn’t be held against us.

Sober, as in, not summarily marching into Ireland and demanding information.

Justice Anthony Kennedy wondered about the “binary” choice of focusing on disclosure of the email (which would be against foreign law) vs. focusing on Microsoft’s compliance with the order to hand it over (given that Microsoft is located in the US and hence within jurisdiction):

Why should we have a binary choice between a focus on the location of the data and the location of the disclosure? Aren’t there some other factors, where the owner of the email lives or where the service provider has its headquarters?

Actually, we don’t know where the owner of the email lives, Justice Samuel Alito said. So what does Ireland have to do with anything?

Well, if this person is not Irish, and Ireland played no part in your decision to store the information there, and there’s nothing that Ireland could do about it if you chose tomorrow to move it someplace else, it is a little difficult for me to see what Ireland’s interest is in this.

On the contrary, Ireland’s interest in this case is the same as ours, Rosenkranz responded – or, for that matter, the same as that of any sovereign nation with laws to protect data stored within its borders:

Your Honor, Ireland’s interests are the same interest of any sovereign who protects information stored where – within their domain. We protect information stored within the United States, and we don’t actually care whose information it is because we have laws that guard the information for everyone.

SCOTUS is expected to return a decision within the coming months.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/ben3JEyCH8E/

27% of under-18s have been sexted, and it’s on the rise

The chance that somebody younger than 18 has received a sext now stands at 27.4%, according to a new meta-analysis of 39 studies published in JAMA Pediatrics on Monday.

The chance that they’ve sent a sext, though, is only 14.8%. So how is it that nearly twice as many young people have received explicit content as have sent it?

Some researchers have suggested that the discrepancy could have multiple causes: some respondents may underreport their active engagement in sexting, some sexters may send the same picture to multiple people, and/or those who receive a sext might not reciprocate.

At any rate, it’s one of the many nuances that need more study, according to researchers led by psychologist Sheri Madigan of the University of Calgary. The team sorted through dozens of studies to select the 39 they focused on: studies they deemed to be comparable and of high quality.

The studies, largely completed between 2008 and 2016, involved a total of 110,380 participants, with mean ages ranging from 11.9 to 17 years and an overall mean age of 15.2. Most studies were conducted in the US or Europe, along with one each from Australia, Canada, South Africa, and South Korea.

The researchers concluded that 1) with every passing year of age, it becomes more likely a young person will have received explicit images, videos, or messages; and 2) sexting is on the rise.

Conclusions from studies on the prevalence of youth sexting over the years have been all over the map: from 1.3% to 60%. That’s making it tough for healthcare professionals, schools, policymakers and parents to know how much they should worry about the behavior or to figure out what exactly they should do about it, the researchers said.

Likewise, research into who sexts more – boys or girls? – has also been inconsistent, but the meta-analysis concluded that sexting appears to be evenly divided between the genders.

Regarding sexting rising with age, here are some specifics picked up by various studies, both within and outside the 39 that the meta-analysis considered:

  • Adolescent sexting is estimated to be between 10% and 16%.
  • Mean prevalence for young adults is approximately 48% to 53%.
  • Sexting among college students has risen from 27% in 2012 to 44% in 2015.
  • A study of Spanish adults found that a majority – 58% – of those studied reported sending erotic texts, while 28% admitted to sending photos, images, or videos with erotic or sexual content.
  • Another study found that 24% of participants aged 14 to 17 years admitted to sexting, compared with 33% for participants aged 18 to 24 years.

Psychologists Elizabeth Englander and Meghan McCoy from Bridgewater State University wrote in an editorial accompanying the meta-analysis that youth sexting, with the broad range of sexting behaviors, motivations, and outcomes it spans, might provide too much material for one review to cover.

They said that when sexting rose along with the ubiquity of mobile phones, its prevalence among the under-18 crowd was viewed as a critical topic. That was particularly true given various high-profile prosecutions of those who engaged in felonious sharing of nude images depicting minors.

You might remember some of the more ridiculous court dramas from those days: say, the sexting teen who was accused of sexually exploiting himself in 2015. (He and his girlfriend were eventually punished by being banned from using their phones for a year.)

Englander and McCoy noted that times have changed: at least in the US, sexting between minors is generally not prosecuted. Nowadays, it’s seen more as “a psychological and developmental concern rather than a legal risk,” they said.

But then again, it’s one thing to sext within a relationship. It’s another thing entirely when material is shared without the subject(s)’ permission: such abuse gets into the much uglier realms of peer bullying or harassment, trouble with parents or school authorities, or having the picture posted online.

The meta-analysis found that in a few studies, researchers noted that most people who sext felt it was a positive experience. Those positive feelings are apparently often associated with sexting within established relationships.

Unauthorized redistribution of sexts is, fortunately, not the norm: fewer than 5% of people who sext reported negative outcomes. One in eight, or 12.5%, of young people reported that they’ve forwarded a sext. And, not surprisingly, the likelihood of a negative outcome such as unauthorized sharing appears to increase when people who sext report that they did so under pressure.

While 5% may seem low, we know that the results of bullying associated with unauthorized sharing of sexual content can be tragic.

To prevent the negative outcomes that encompass everything from harassment to suicide, Englander and McCoy said that we’ll need more studies of rapidly evolving sexting behavior:

The accuracy of our understanding about it defines our prevention and intervention efforts.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/cB9PM8t56AI/