3 Tips to Stay Secure When You Lose an Employee

Whether they leave for a better job or get fired, and whether they mean to cause problems or do so out of ignorance, ex-workers can pose a threat to your company.

Research indicates that January and February are the most popular job-hunting months, and that one of the main reasons people leave their jobs is for more money. Indeed, with unemployment currently low and the number of overall jobs growing, many workers are currently in a good position to land somewhere else in the next month or two and earn a higher salary. And if they have new jobs, that means they’re leaving their old companies — maybe yours.

With the annual job shuffle in full swing, it’s worth remembering that nobody wants a bad professional breakup: burned bridges, hurt feelings, and lost (sometimes stolen) property.

By “property” I mean valuable assets such as proprietary data, confidential information, and identity credentials. Unfortunately, that property can be compromised — often unintentionally.

Various studies agree on the following:

  • A large percentage of departing employees — including executives — take their company’s intellectual property with them because they created it and think it’s theirs.
  • A significant number of enterprises don’t have policies or technologies in place to prevent loss of intellectual property (IP).
  • Most IP loss from unintentional inside breaches occurs from employee ignorance or a lack of awareness of company policy and IP law.
  • Over half of small business owners say they have no user access policy for remote workers, according to information security company Shred-it.

With so many people leaving their jobs to start new ones, now is a good time to review how to protect your IP. Here are three tips, followed by a closer look at each:

  1. Have maximum visibility into who has access to your network and systems, and how much access they have.
  2. Vigilantly enforce that access with identity governance and administration (IGA) policies.
  3. Deprovision access when situations change, especially when employees leave.

Visibility
Visibility
Long-standing best practices dictate that, by default, users should be granted the least privilege required for their roles and responsibilities. Generally speaking, most enterprises have three types of users:

  • Regular users who are provided with access only to the applications and data specific to their jobs.
  • High-end users who can provide additional access to regular users when necessary, but who are limited in the scope of their ability to grant that access.
  • Admins who can both access and provide other users with access to anything that exists in their domain.

Two primary challenges exist, however.

First, longtime employees often change positions without losing the access tied to their former roles, which means a large number of workers have too much access. Enterprises are also known to duplicate those users’ access for new hires, who then start out with too much privilege.

Second, access reviews are cumbersome and time-consuming. It’s not uncommon for resource-strained managers to automatically authorize access without paying close attention to what an individual user really should have access to.

In both cases, role-based access control solutions are available to mitigate these issues.
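
To make the model concrete, here’s a minimal sketch of role-based access control in Python. It’s illustrative only: the role names and permission strings are hypothetical, not taken from any particular product.

```python
from dataclasses import dataclass, field

# Permissions attach to roles, never directly to users, so a position
# change means swapping roles rather than accumulating entitlements.
ROLE_PERMISSIONS = {
    "regular":  {"crm:read", "crm:write"},
    "high_end": {"crm:read", "crm:write", "crm:grant_read"},
    "admin":    {"crm:*"},  # wildcard: everything in the crm domain
}

@dataclass
class User:
    name: str
    roles: set = field(default_factory=set)

def permissions(user: User) -> set:
    """Union of the permissions granted by each of the user's roles."""
    perms = set()
    for role in user.roles:
        perms |= ROLE_PERMISSIONS.get(role, set())
    return perms

def can(user: User, action: str) -> bool:
    """Check a specific permission, honoring domain wildcards."""
    perms = permissions(user)
    domain = action.split(":")[0]
    return action in perms or f"{domain}:*" in perms

alice = User("alice", {"regular"})
alice.roles = {"high_end"}           # transfer: replace roles, don't add
print(can(alice, "crm:grant_read"))  # True
print(can(alice, "billing:read"))    # False
```

Because entitlements hang off roles rather than individual users, a transfer is handled by swapping roles instead of accumulating them, which addresses the first challenge above.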

Enforcement
Embracing IGA policies enables enterprises to define, audit, monitor, and enforce compliance with internal, industry, and government regulations. IGA provides automated access visibility, schedules automated access reviews, and helps manage third-party contractors, remote workers, and short-term interns.
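
For a concrete sense of what an automated access review does, here is a hedged sketch of the core check: compare each user’s actual entitlements against the baseline for their role and flag the excess for a human decision. The data structures are hypothetical stand-ins for whatever an IGA tool actually stores.

```python
# Hypothetical inventory: the entitlement baseline for each role, and the
# grants each user actually holds, as (role, set-of-entitlements).
ROLE_BASELINE = {
    "accountant": {"erp:read", "erp:post_journal"},
    "engineer":   {"repo:read", "repo:write", "ci:run"},
}

ACTUAL_GRANTS = {
    "maria": ("accountant", {"erp:read", "erp:post_journal", "repo:write"}),
    "sam":   ("engineer",   {"repo:read", "repo:write", "ci:run"}),
}

def review(actual: dict, baseline: dict):
    """Yield (user, excess entitlements) for anyone over their role baseline."""
    for user, (role, grants) in actual.items():
        excess = grants - baseline.get(role, set())
        if excess:
            yield user, sorted(excess)

for user, excess in review(ACTUAL_GRANTS, ROLE_BASELINE):
    # A real IGA workflow would open a certification task for the user's
    # manager here instead of printing.
    print(f"review needed: {user} holds extra entitlements: {excess}")
```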

On a personal/professional level, education is paramount. Employees should be reminded about enterprise IP policies on a regular basis. It’s also a good idea to post policies on message boards, in corporate newsletters, and on intranet portals. Right before employees do leave, remind them again what is OK and not OK to take with them. And make sure to monitor their activities within the network once they give notice. This is one reason that organizations should inspect all traffic in and out of their systems using some form of SSL decryption.

Deprovision Access
Here’s the tricky part: Before leaving, the former employee must relinquish access to the company’s network and IP. In addition to collecting all physical items, such as computers, access fobs, security badges, and keys, appointed officials need to deprovision all of the applications the employee has been using while also terminating email and network accounts. The same types of automated identity and access management (IAM) and IGA solutions that provision and enforce access can also handle deprovisioning activities.
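
What deprovisioning automation looks like in practice depends entirely on the IAM suite in use, but the shape is roughly the following. This is a sketch with hypothetical connector classes, not a real vendor API.

```python
import logging

logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("offboarding")

class DirectoryClient:
    """Hypothetical directory connector; real IAM suites expose similar calls."""

    def revoke_sessions(self, user: str) -> None:
        log.info("revoked active sessions for %s", user)

    def disable_account(self, user: str) -> None:
        log.info("disabled directory account for %s", user)

class AppClient:
    """Hypothetical per-application connector."""

    def __init__(self, name: str):
        self.name = name

    def remove_user(self, user: str) -> None:
        log.info("removed %s from %s", user, self.name)

def offboard(user: str, directory: DirectoryClient, apps: list) -> None:
    # Order matters: kill live sessions first so cached tokens stop working,
    # then disable the account, then strip per-application entitlements.
    directory.revoke_sessions(user)
    directory.disable_account(user)
    for app in apps:
        app.remove_user(user)
    log.info("offboarding complete for %s", user)

offboard("jdoe", DirectoryClient(), [AppClient("email"), AppClient("vpn"), AppClient("crm")])
```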

Another issue not to be overlooked: When employees are terminated involuntarily, they’re sometimes allowed to return to their desks unsupervised to clear their belongings while they still have network access. This happens ostensibly to spare them from public shaming. But it should never be allowed, as it poses huge risks to all parties. Respectfully accompany them so they don’t do something emotional and potentially damaging.

Look on the bright side: You may be losing a valued employee, but you’ll be preserving the integrity of your most valuable assets.

Bil Harmer is the CISO and chief evangelist of SecureAuth. He brings more than 30 years of experience in leading security initiatives for startups, government, and established financial institutions. He’s CISSP, CISM, and CIPP certified — and is recognized for …

Article source: https://www.darkreading.com/vulnerabilities---threats/3-tips-to-stay-secure-when-you-lose-an-employee/a/d-id/1337220?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Bitsight and Microsoft Disrupt Necurs Botnet

But roughly 2 million infected systems remain in the wild and could be reactivated at any time.

Bitsight and Microsoft have taken joint action against the Necurs botnet, analyzing the client software and disrupting the command-and-control (C&C) infrastructure. Necurs has been one of the largest botnets since it was first detected in 2012.

Necurs is known as a “dropper” botnet, acting as a carrier for malware including GameOver Zeus, Dridex, Locky, and Trickbot. According to researchers, 11 Necurs botnets were identified, with the four largest responsible for 95% of the total infections.

While the two companies say they have disrupted some known C&C servers, they estimate that roughly 2 million infected systems remain in the wild and note that infected systems could be reactivated at any time. Bitsight and Microsoft are passing signatures and other information to other security professionals in the hope that many of the infected systems can be cleaned before any reactivation occurs.

For more, read here.

Article source: https://www.darkreading.com/vulnerabilities---threats/bitsight-and-microsoft-disrupt-necurs-botnet/d/d-id/1337286?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Microsoft Patches Over 100 Vulnerabilities

Patch Tuesday features several remote code execution flaws in Microsoft Word.

Microsoft today issued fixes for some 115 vulnerabilities in a relatively hefty Patch Tuesday that includes 26 critical flaws spanning browsers, Microsoft Word, and Media Foundation.

The critical flaws break down to 17 in browser and scripting engines, four in Media Foundation, two in GDI+, one in LNK files, one in Microsoft Word, and one in Dynamics Business Central. Among the more notable patches is one for a remote code execution (RCE) flaw in Microsoft Word (CVE-2020-0852), which can be exploited via the Outlook Preview Pane without even opening a Word file, according to Recorded Future.

Microsoft also patched three other remote code execution flaws in Word (CVE-2020-0850, CVE-2020-0851, and CVE-2020-0855), which could be exploited via a malicious Word file or website.

“As Recorded Future has previously noted, Microsoft Office is among the most popular attack vectors for cybercriminals. We expect one or more of these vulnerabilities will be weaponized sooner rather than later,” said Allan Liska, intelligence analyst at Recorded Future.

Qualys, meantime, recommends prioritizing the patching of an RCE in Application Inspector (CVE-2020-0872), which is rated as “important” by Microsoft.

“The Scripting Engine, LNK files (CVE-2020-0684), GDI+ (CVE-2020-0881, CVE-2020-0883), and Media Foundation (CVE-2020-0801, CVE-2020-0809, CVE-2020-0807, CVE-2020-0869) patches should be prioritized for workstation-type devices, meaning any system that is used for email or to access the internet via a browser,” said Animesh Jain, a vulnerability management research team expert at Qualys. “This includes multi-user servers that are used as remote desktops for users.”

Read more here.

Article source: https://www.darkreading.com/vulnerabilities---threats/microsoft-patches-over-100-vulnerabilities/d/d-id/1337284?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Phone carriers may soon be forced to adopt anti-robocall tech

US carriers haven’t been doing enough to block robocalls, according to the Federal Communications Commission (FCC), so its chairman, Ajit Pai, has proposed a set of rules that would force carriers to block robocalls.

According to the FCC, spam robocalls cost $3bn in wasted time and money each year, and that doesn’t even take fraud into account. The Commission estimates that scammers use robocalls to milk an annual $10bn from Americans, who are flooded with these calls – up to 200 million each day.

In November 2018, Pai asked the phone carriers to adopt a technology framework called SHAKEN/STIR to help solve the problem.

STIR (Secure Telephone Identity Revisited) defines a set of protocols used on SIP networks for applying digital signatures to telephone numbers from calling parties. SHAKEN (Signature-based Handling of Asserted information using toKENs) is a framework for STIR, providing implementation guidelines for carriers to roll out STIR so that it is compatible with all their networks and operates in real-time.

In a SHAKEN/STIR interaction, the originating caller’s phone sends an authentication request along with their phone number to a STIR authentication service (which would typically be operated by their carrier). The authentication server checks that the caller has the right to use that number, and signs a digital token which is sent to the recipient’s STIR verification service. That service checks the authentication service’s repository of digital certificates to ensure that the invitation is legit. If the certificate matches, the call goes through to the recipient. If not, the carrier can drop it.
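
To make the token itself concrete: STIR identity tokens (PASSporTs, defined in RFC 8225) are JSON Web Tokens signed with ES256. The following is a simplified, illustrative sketch using the PyJWT and cryptography libraries, with a throwaway key pair and a hypothetical x5u certificate URL standing in for a carrier’s real signing infrastructure; production SHAKEN additionally involves carrier certificates and carries the token in the SIP Identity header.

```python
# pip install pyjwt cryptography
import time

import jwt  # PyJWT
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import ec

# Throwaway P-256 key pair; a real carrier's key is bound to a certificate
# published at the x5u URL so other carriers can verify the signature.
private_key = ec.generate_private_key(ec.SECP256R1())
priv_pem = private_key.private_bytes(
    serialization.Encoding.PEM,
    serialization.PrivateFormat.PKCS8,
    serialization.NoEncryption(),
)
pub_pem = private_key.public_key().public_bytes(
    serialization.Encoding.PEM,
    serialization.PublicFormat.SubjectPublicKeyInfo,
)

headers = {
    "ppt": "shaken",
    "typ": "passport",
    "x5u": "https://certs.example-carrier.test/sti.pem",  # hypothetical URL
}
claims = {
    "attest": "A",                    # "A" = full attestation of the caller's number
    "dest": {"tn": ["14045551002"]},  # called number
    "iat": int(time.time()),
    "orig": {"tn": "14045551001"},    # calling number the carrier authenticated
    "origid": "a2f6e9c1-0000-0000-0000-000000000000",
}

# The originating side signs the PASSporT...
token = jwt.encode(claims, priv_pem, algorithm="ES256", headers=headers)

# ...and the terminating side verifies it before completing the call.
verified = jwt.decode(token, pub_pem, algorithms=["ES256"])
print("attestation:", verified["attest"], "caller:", verified["orig"]["tn"])
```

If the verification step fails, the terminating carrier can treat the call as unattested and drop or flag it, which is the enforcement step described above.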

The industry’s response to Pai’s request was muted, so in February 2019 he warned that if carriers didn’t step up, he’d introduce regulations to make them use the technology to block robocalls. Following a still-sluggish response, that’s what he’s done.

Pai said that the new rules would now make the technology mandatory:

It’s clear that FCC action is needed to spur across-the-board deployment of this important technology. There is no silver bullet when it comes to eradicating robocalls, but this is a critical shot at the target.

Carriers would have to adopt the technology by 30 June 2021, although a proposed extension would give small and rural providers an extra year.

He needed legislative support to propose these rules. It came in December 2019, when the Telephone Robocall Abuse Criminal Enforcement and Deterrence (TRACED) Act was signed into law. That will force carriers to implement the technology, and will also increase fines while making them easier to collect. That’s an important step for the FCC, which has drawn flak for failing to collect the penalties it imposes on robocallers.

The FCC will vote on the rules on 31 March 2020.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/cpQV4IZHrdM/

Ex-Inspector General indicted for stealing data on 250k govt colleagues

A former acting Inspector General for the US Department of Homeland Security (DHS) was indicted on Friday for allegedly ripping off proprietary software and confidential databases and packaging it all up so that his company – Delta Business Solutions – could sell an enhanced version back to the government at a profit.

A federal grand jury in the District of Columbia returned a 16-count indictment against Charles K. Edwards, 59, of Sandy Spring, Maryland, and one of his underlings, Murali Yamazula Venkata, 54, of Aldie, Virginia, the Justice Department (DOJ) said.

They’re both looking at charges of conspiracy to commit theft of government property and to defraud the government, theft of government property, wire fraud, and aggravated identity theft. Venkata is also charged with destroying records.

They were working the scheme for a number of years, according to the indictment. It alleges that Edwards had a network of insiders working on stealing data from the DHS Office of Inspector General (OIG) from October 2014 to April 2017.

According to court documents, the stolen data included sensitive government databases that contained personal identifying information (PII) of DHS and Postal Service (USPS) employees. The plan was to sell the enhanced, so-not-free-anymore version of DHS-OIG’s software to the OIG for the Department of Agriculture. Prosecutors also allege that the scheme continued even though Edwards had left DHS-OIG in December 2013: he maintained his relationship with Venkata and other DHS-OIG employees to keep the intellectual property (IP) and PII flowing.

His ex-colleagues did more than just keep up the alleged insider theft, the indictment says: Venkata and others are also said to have served as Edwards’ help desk. They allegedly reconfigured Edwards’ laptop so as to better upload the stolen software and databases, gave him troubleshooting support whenever he needed it, and helped him set up a testing server at his home with the stolen software and databases.

Edwards also allegedly hired software developers in India to help him develop his commercial alternative to DHS-OIG’s software. That’s an aspect of the alleged crimes that could make his penalties yet more severe if he’s convicted, given that he’s not only charged with stealing proprietary software and the PII of government employees, but also with sending it overseas.

The indictment is part of an ongoing investigation by DHS and Postal Service inspectors general and was announced by the two agencies, the DOJ and the US attorney’s office for the District of Columbia, which is prosecuting the case.

If Edwards and Venkata are convicted, they’ll be looking at a maximum of five years for conspiracy to commit theft, 10 years for theft of government property, 20 years for wire fraud, and a minimum of two years for a count of aggravated identity theft. Venkata also faces another 20 years for destruction of records. They would also face fines of up to $250,000 for each count. Having said that, maximum sentences are rarely handed down.

The Washington Post has been following this case for a while. As it reports, it looks like Friday’s indictment is connected to an April 2019 guilty plea from a DHS federal technology manager, Sonal Patel.

Ms. Patel admitted to conspiring with a former acting Inspector General to steal a database containing the PII of nearly 250,000 DHS employees, according to court filings. From The Washington Post:

Patel in court papers acknowledged instructing a subordinate to send her directions on how to install the copy, and steering the Agriculture Department inspector general’s office to using the commercial version instead of the free government version. In June 2016, her plea statement said, she handed two DVDs with copied data to Co-Conspirator 1 ‘on the side of the road’ before the latter boarded a flight from Dulles International Airport to meet with ‘software developers in India’, who would create the copycat program.

After pleading guilty to one count of conspiracy to commit theft of government property and agreeing to cooperate with prosecutors regarding “a scheme that ran from 2014 to 2017,” Patel was looking at a maximum of five years in prison.

Crime doesn’t pay, even if you have the audacity to try to gussy up and sell your employer its own, free software and – ouch! – PII from your own colleagues.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/ZDbJ_Jy5qD4/

Google data puts innocent man at the scene of a crime

You’ve assuredly heard this before about ubiquitous surveillance, or perhaps even said it yourself: “If you have nothing to hide, and you’ve done nothing wrong, why should you worry?”

Zachary McCoy, of Florida, offers this answer:

If you’re innocent, that doesn’t mean you can’t be in the wrong place at the wrong time, like going on a bike ride in which your GPS puts you in a position where police suspect you of a crime you didn’t commit.

As NBC News reports, McCoy, an avid cyclist, got an email from Google in January.

It was from Google’s legal investigations support team. They were writing to let the 30-year-old know that local police had demanded information related to his Google account. He had seven days in which to appear in court if he wanted to block the release of that data, Google told him.

He was, understandably, terrified, in spite of being one of those innocent people who should have nothing to hide. NBC News quotes him:

I was hit with a really deep fear.

I didn’t know what it was about, but I knew the police wanted to get something from me. I was afraid I was going to get charged with something, I don’t know what.

How is it that McCoy didn’t know what police were inquiring about? Because his Android phone had been swept up in a surveillance dragnet called a geofence warrant – a type of warrant done in secret.

McCoy’s device had been located near the scene of a burglary that had taken place near the route he takes to bicycle to his job. Investigators had used the geofence warrant to try to suss out the identities of people whose devices were located near the scene of a crime around the time it occurred.

As NBC News reports, police hadn’t discovered his identity. The first stage of data collection doesn’t return identifying information – only data about devices that might be of interest. It’s during the next stage, when police sift through the data looking for suspicious devices, that they turn to Google to ask that it identify users.

Like many of us, McCoy had an Android phone linked to his Google account, and he used plenty of apps that store location data: Gmail, YouTube, and an exercise-tracking app called RunKeeper, which draws on Google location data to help users track their workouts.

You can look up your location history to find out exactly what Google knows about you, by date. On the day of the burglary – 29 March 2019 – Google knew that McCoy had passed the scene of the crime three times within an hour as he looped through his neighborhood during his workout.

It was a “nightmare scenario,” McCoy said:

I was using an app to see how many miles I rode my bike and now it was putting me at the scene of the crime. And I was the lead suspect.

How McCoy fought his way out of the dragnet

When it receives a request about a user from a government agency, Google’s general policy is to email that user before disclosing information.

There wasn’t much of anything in that notice about why police were asking about him, McCoy said. However, there was one clue: a case number.

McCoy ran a search for that case number on the Gainesville, Florida, police department’s website. What he found was a one-page investigation report on the burglary of an elderly woman’s home 10 months earlier. She lived less than a mile from where McCoy was living.

He knew he had nothing to do with the break-in, but he had very little time – seven days – in which to prove it. So McCoy hired a lawyer, Caleb Kenyon, who did some research and learned that Google’s notice had been prompted by a geofence warrant: one that swept up the GPS, Bluetooth, Wi-Fi and cellular connections of everyone nearby.

After they figured out why police were trying to track McCoy down, Kenyon told NBC News that he called the detective on the case and told him, “You’re looking at the wrong guy.”

On 31 January, Kenyon filed a motion in civil court to render the warrant “null and void” and to block the release of any further information about McCoy, identifying him only as “John Doe.” If he hadn’t done so, Google would have turned over data that would have identified McCoy. In his motion, Kenyon argued that the warrant was unconstitutional because it allowed police to conduct sweeping searches of phone data from untold numbers of people in order to find a single suspect.

Kenyon’s motion gave investigators pause. Kenyon told NBC News that not long after he filed it, a lawyer in the state attorney’s office assigned to represent the Gainesville Police Department told him there were details in the motion that led them to believe that his client wasn’t the culprit. The state attorney’s office withdrew the warrant, saying in a court filing that it was no longer necessary.

Even after police acknowledged that McCoy wasn’t a suspect anymore, Kenyon wanted to make sure they wouldn’t harbor suspicions about his client, whom they still only knew as “John Doe.” So the lawyer met with the detective in order to show him screenshots of McCoy’s Google location history, including data recorded by RunKeeper. The maps showed months of bike rides past the burglarized home, NBC News reports.

McCoy was lucky. He and his family are also a bit poorer because of the incident. If his parents hadn’t helped him out by giving him thousands of dollars to hire a lawyer, things could have turned out differently, he says.

I’m definitely sorry [the burglary] happened to her, and I’m glad police were trying to solve it. But it just seems like a really broad net for them to cast. What’s the cost-benefit? How many innocent people do we have to harass?

Geolocation data: It’s hit or miss

Geolocation data sometimes gets it right when it comes to tracking down criminals. For example, last year, a homicidal cycling and running fanatic known for his meticulous nature in tracking his victims was undone by location data from his Garmin GPS watch.

Other convictions based on location data have included the pivotal Carpenter v. United States, which concerned a Radio Shack robbery – the legal arguments from this case have gone on to inform subsequent decisions, including one from January 2019 in which a judge ruled that in the US, the Feds can’t force you to unlock your phone with biometrics.

Geofence warrants, however, are a whole other thing.

Privacy and civil liberties advocates have voiced concerns about the warrants potentially violating constitutional protections against unreasonable search. Police have countered by insisting that they don’t charge somebody with a crime unless they have evidence to go on besides a device being co-located with a crime scene.

These searches are becoming increasingly widespread, however. In December 2019, Forbes reported that Google had complied with geofence warrants that, at that time, had resulted in what the magazine called an unprecedented data haul for law enforcement.

Google had combed through its gargantuan Sensorvault database to find 1,494 device identifiers for phones in the vicinities of multiple crimes. Sensorvault is where Google stores location data that flows from all its applications. If you’ve got the Location History setting turned on in your Google account, you’re feeding this ocean of data, which is stuffed with detailed location records from what The New York Times reports to be at least hundreds of millions of devices worldwide.

To investigators, this is gold: a geofence demand enables them to pore through location records as they seek devices that may be of interest to an investigation.

Geofence data demands are also known as ‘reverse location searches’. Investigators stipulate a timeframe and an area on Google Maps and ask Google to give them the record of each and every Google user who was in the area at the time.

When police find devices of interest, they’ll ask Google for more personal information about the device owner, such as name, address, when they signed up for Google services and which services – such as Google Maps – they used.

Google’s location history data is routinely shared with police. Detectives have used these warrants as they investigate a variety of crimes, including bank robberies, sexual assaults, arsons, murders, and bombings.

And it’s not just Google. As Fast Company reported last month, recently discovered court documents confirm that prosecutors have issued geofence warrants for data stored by Apple, Uber, Lyft, and Snapchat.

Fast Company reported that it didn’t know what data, if any, the companies had handed over (Apple, for one, has said that it doesn’t have the ability to perform these kinds of searches). All it knew was that the warrants had been served.

How to turn off Google’s location history

If you don’t like the notion of Google being able to track your every movement, you can turn off location history.

To do so, sign into your Google account and click on your profile picture, then the Google account button. From there, go to Data & personalization and select Pause next to Location History. To turn off location tracking altogether, you have to do the same for Web & App Activity in the same section.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/2jBBtn5NhAM/

Watch out for Office 365 and G Suite scams, FBI warns businesses

The menace of Business Email Compromise (BEC) is often overshadowed by ransomware but it’s something small and medium-sized businesses shouldn’t lose sight of.

Bang on cue, the FBI Internet Crime Complaint Center (IC3) has alerted US businesses to ongoing attacks targeting organisations using Microsoft Office 365 and Google G Suite.

Warnings about BEC are ten-a-penny but this one refers specifically to those carried out against the two largest hosted email services, and the FBI believes that SMEs, with their limited IT resources, are most at risk of these types of scams:

Between January 2014 and October 2019, the Internet Crime Complaint Center (IC3) received complaints totaling over $2.1 billion in actual losses from BEC scams targeting Microsoft Office 365 and Google G Suite.

As organisations move to hosted email, criminals migrate to follow them.

As with all types of BEC, after breaking into the account, criminals look for evidence of financial transactions, later impersonating employees to redirect payments to themselves.

For good measure, they’ll often also launch phishing attacks on contacts to grab even more credentials, and so the crime feeds itself a steady supply of new victims.

The deeper question is why BEC scams continue to be such a problem when it’s well understood that they can be defended against using technologies such as multi-factor authentication (MFA).

One answer is that older email systems don’t support such technologies, a point Microsoft made recently when the company revealed that legacy protocols such as SMTP and IMAP correlated to a markedly higher chance of compromise.

Lacking that, such accounts immediately become vulnerable to password weaknesses such as re-use.

Turn on MFA

One takeaway is that, despite the rise in BEC attacks on hosted email, this type of email is still more secure than the alternatives, provided admins turn on the security features that come with it.

For organisations worried about BEC, the FBI has the following general advice:

  • Enable multi-factor authentication for all email accounts
  • Verify all payment changes via a known telephone number or in person

And for hosted email admins:

  • Prohibit automatic forwarding of email to external addresses
  • Add an email banner to messages coming from outside your organization
  • Ensure mailbox logon and settings changes are logged and retained for at least 90 days
  • Enable alerts for suspicious activity such as foreign logins
  • Enable security features that block malicious email such as anti-phishing and anti-spoofing policies
  • Configure Sender Policy Framework (SPF), DomainKeys Identified Mail (DKIM), and Domain-based Message Authentication Reporting and Conformance (DMARC) to prevent spoofing and to validate email

The FBI also recommends that you prohibit legacy protocols that can be used to circumvent multi-factor authentication, although this needs to be done with care as some older applications might still depend on these.
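
On the SPF/DKIM/DMARC item above, here’s a quick, hedged way to check what a domain currently publishes, using the dnspython package. DKIM is omitted because it lives at a selector-specific name that varies per sender, and example.com is a placeholder for your own domain.

```python
# pip install dnspython
import dns.resolver

def txt_records(name: str) -> list:
    """Return the TXT records for a DNS name, or an empty list if none exist."""
    try:
        return [r.to_text().strip('"') for r in dns.resolver.resolve(name, "TXT")]
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return []

domain = "example.com"  # placeholder: substitute your own domain

spf = [t for t in txt_records(domain) if t.startswith("v=spf1")]
dmarc = [t for t in txt_records(f"_dmarc.{domain}") if t.startswith("v=DMARC1")]

print("SPF:  ", spf or "missing")
print("DMARC:", dmarc or "missing")
```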

It’s a pity the IC3 sometimes puts out useful advice like this using Private Industry Notifications (PINs), a narrowcast version of the public warnings issued on the organisation’s website.

Report a BEC

Law enforcement agencies can’t fight what they don’t know about. To that end, please do make sure to report it if you’ve been targeted in one of these scams.

In the US, victims can file a complaint with the IC3. In the UK, BEC complaints should go to Action Fraud. If you’d like to know how Sophos can help protect you against BEC, read the Sophos News article Would you fall for a BEC attack?


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/9k6271x43JA/

AMD, boffins clash over chip data-leak claims: New side-channel holes in decades of cores, CPU maker disagrees

AMD processors sold between 2011 and 2019 are vulnerable to two side-channel attacks that can extract kernel data and secrets, according to a new research paper.

In a paper [PDF] titled, “Take A Way: Exploring the Security Implications of AMD’s Cache Way Predictors,” six boffins – Moritz Lipp, Vedad Hadžić, Michael Schwarz, and Daniel Gruss (Graz University of Technology), Clémentine Maurice (University of Rennes), and Arthur Perais (unaffiliated) – explain how they reverse-engineered AMD’s L1D cache way predictor to expose sensitive data in memory.

To save power when looking up a cache line in a set-associative cache, AMD’s CPUs rely on something called way prediction. The way predictor allows the CPU to predict the correct cache location required, rather than test all the possible cache locations, for a given memory address. This speeds up operations, though it can also add latency when misprediction occurs.

The cache location is, in part, determined by a hash function, undocumented by AMD, that hashes the virtual address of the memory load. By reverse engineering this hash function, the researchers were able to create cache collisions that produce observable timing effects – increased access times or L1 cache misses – allowing covert exfiltration of kernel data, recovery of cryptographic keys, and weakening of ASLR defenses on a fully patched Linux system, in the hypervisor, or in the JavaScript sandbox.

Timing attacks of this sort allow the attacker to infer protected data based on the time the system takes to respond to specific inputs.
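
The paper’s attacks require carefully engineered native code to probe the way predictor, but the timing principle itself is easy to demonstrate. The toy sketch below uses a deliberately non-constant-time comparison, not the researchers’ cache attack, to show how response-time differences alone can leak a secret one byte at a time.

```python
import secrets
import statistics
import time

SECRET = secrets.token_bytes(4)

def naive_compare(guess: bytes) -> bool:
    # Early-exit comparison: runtime grows with the number of leading bytes
    # that match, which is exactly the signal a timing attack measures.
    for a, b in zip(SECRET, guess):
        if a != b:
            return False
        time.sleep(1e-4)  # exaggerate per-byte work so the effect is visible
    return True

def time_guess(guess: bytes, reps: int = 20) -> float:
    samples = []
    for _ in range(reps):
        start = time.perf_counter()
        naive_compare(guess)
        samples.append(time.perf_counter() - start)
    return statistics.median(samples)

# Recover the first secret byte: the correct value takes measurably longer
# because the comparison survives one extra iteration before bailing out.
timings = {b: time_guess(bytes([b]) + b"\x00" * 3) for b in range(256)}
print("likely first byte:", max(timings, key=timings.get), "actual:", SECRET[0])
```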

The two attacks are called Collide+Probe and Load+Reload, in reference to the operations involved. The former exploits cache tag collisions, while the latter exploits the way predictor’s behavior when virtual addresses are mapped to the same physical address.

“With Collide+Probe, an attacker can monitor a victim’s memory accesses without knowledge of physical addresses or shared memory when time-sharing a logical core,” the paper explains, noting that the technique has been demonstrated with a data transmission rate of up to 588.9 kB/s. “With Load+Reload, we exploit the way predictor to obtain highly-accurate memory-access traces of victims on the same physical core.”

For Collide+Probe, the attacker is assumed to be able to run unprivileged native code on the target machine that’s also on the same logical CPU core as the victim. It’s also assumed the victim’s code will respond to input from the attacker, such as a function call in a library or a system call.

For Load+Reload, the ability to run unprivileged native code on the target machine is also assumed, with the attacker and victim on the same physical but different logical CPU thread.

Local access is not a requirement for these attacks; the researchers demonstrated their techniques from sandboxed JavaScript and in virtualized cloud environments.

The boffins said that at least the following AMD chips, manufactured from 2011 to 2019, have a way predictor that can be exploited:

  • AMD FX-4100 Bulldozer
  • AMD FX-8350 Piledriver
  • AMD A10-7870K Steamroller
  • AMD Ryzen Threadripper 1920X Zen
  • AMD Ryzen Threadripper 1950X Zen
  • AMD Ryzen 7 1700X Zen
  • AMD Ryzen Threadripper 2970WX Zen+
  • AMD Ryzen 7 3700X Zen 2
  • AMD EPYC 7401p Zen
  • AMD EPYC 7571 Zen

“This is a software-only attack that only needs unprivileged code execution,” said Michael Schwarz, one of the paper’s co-authors, via Twitter. “Any application can do that, and one of the attacks (Collide+Probe) has also been demonstrated from JavaScript in a browser without requiring any user interaction.”

The researchers propose several mitigations: a mechanism to disable the cache way predictor if there are too many misses; using additional data when creating address hashes to make them more secure; clearing the way predictor when switching to another user-space application or returning from the kernel; and an optimized AES T-table implementation that prevents the attacker from monitoring cache tags.

In a response to the paper, AMD on Saturday suggested no additional actions need to be taken to prevent these attacks.

“We are aware of a new white paper that claims potential security exploits in AMD CPUs, whereby a malicious actor could manipulate a cache-related feature to potentially transmit user data in an unintended way,” the company said. “The researchers then pair this data path with known and mitigated software or speculative execution side channel vulnerabilities. AMD believes these are not new speculation-based attacks.”

Daniel Gruss, another one of the researchers, said via Twitter that this side channel has not been fixed. But he also expressed skepticism that this technique presents an imminent threat, noting that Meltdown, a far stronger attack, doesn’t appear to have been weaponized by anyone. ®

Article source: https://go.theregister.co.uk/feed/www.theregister.co.uk/2020/03/09/amd_sidechannel_leak_report/

Keys to Hiring Cybersecurity Pros When Certification Can’t Help

There just aren’t enough certified cybersecurity pros to go around — and there likely never will be enough. So how do you fill out your cybersecurity team? Executives and hiring managers share their top tips on recognizing solid candidates.

There’s a general acknowledgement that there aren’t enough trained cybersecurity professionals to go around. Conversations at cybersecurity conferences are often centered on where to find top pros, how much to pay them, and what string of letters behind their names means the most.

Even the organizations that provide cybersecurity certification admit that there aren’t enough certified pros to meet the need — and that there never will be enough. So what’s a manager charged with finding cybersecurity talent to do?

Many executives and hiring managers say the key to finding solid talent is flexibility in the search. “The process is very much like drafting professional athletes,” says Mike Jordan, vice president of research with Shared Assessments. “When you can’t find a position player that you need, you look for individuals who have the skill sets relevant to the position. Find ones that are smart and hardworking and they should be able to fill the position nicely.”

Heather Paunet, vice president of product management at Untangle, says that it’s important to get it right. “Searching for candidates to fill cybersecurity positions beyond certifications and years of experience can seem counterintuitive, but there are many other interests and logical business skills that are just as important to consider,” she explains.

We asked executives what they would look for in filling cybersecurity positions. What they provided was less a checklist of specific skills than an indication of the broad skills, experiences, and personality traits that make someone a great candidate for the cybersecurity team. What they didn’t provide was a simple way to look for those on a resumé — but no one said that solving the hiring problem was going to be easy.

Of course, not everyone agrees that there is, in fact, a shortage of cybersecurity professionals.

“The premise that we are short of cybersec pros is BS spread by businesses with a vested interest in importing H-1B workers,” says Colin Bastable, CEO of Lucy Security. “There is no shortage of cybersec pros — just a shortage of good ones, and that is a good thing. The market decides. Certification is a scam — it just gets us a load of talentless credentialed people who make the world less secure. You want to hire someone who understands how the enemy thinks but without the moral baggage of being a cybercrook. Most employers with a four-year degree will hire someone with a four-year degree, but zero talent.” All you have to do is find that elusive thinker.

What do you think — is it possible to hire a great cybersecurity professional in the absence of security certification? If it is, what do you look for in a great candidate? We’d like to know your thoughts; please talk to us in the Comments section, below.

Read on to see what other security hiring managers had to say.

Curtis Franklin Jr. is Senior Editor at Dark Reading. In this role he focuses on product and technology coverage for the publication. In addition he works on audio and video programming for Dark Reading and contributes to activities at Interop ITX, Black Hat, INsecurity, and …

Article source: https://www.darkreading.com/theedge/keys-to-hiring-cybersecurity-pros-when-certification-cant-help/b/d-id/1337272?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Over 80% of Medical Imaging Devices Run on Outdated Operating Systems

New data on live Internet of Things devices in healthcare and other organizations shines a light on security risks.

The state of real-world IoT: Printers and cameras sitting on enterprise networks harbor the most vulnerabilities, misconfigurations, and compromises. In healthcare networks, some 83% of medical imaging devices run on outdated operating systems, mostly old Windows versions that have either been retired by Microsoft or are under limited support.

A study of 1.2 million Internet of Things (IoT) devices in thousands of healthcare and other enterprises in the years 2018 and 2019 underscores the reality of the IoT as one of the weakest links in organizations. Palo Alto Networks, using data gathered from its Zingbox IoT inventory and management service, studied some 73.2 billion network sessions of more than 8,000 different types of IoT devices.

Microsoft’s recent sunsetting of Windows 7 accounts for most of the older operating system data: Fifty-six percent of the imaging devices run on Win 7, which gets limited support and patching from Microsoft now, and another 27% of these devices run on the long-dead Windows XP, as well as old and decommissioned versions of Linux, Unix, Windows, and other embedded software.

Exacerbating the problem: Some 72% of virtual LANs contain a mix of IoT devices and other computing systems. “It’s concerning that IoT medical devices – such as an infusion pump, medical imaging systems like MRIs or CAT scans or X-rays – if they are on the same network as a doctor clicking on a phishing email, that’s a dangerous situation,” says Ryan Olson, vice president of threat intelligence for PAN’s Unit 42 research team. “That’s an indicator to us that these networks are not being properly managed.”

One positive sign is that the share of US hospitals with more than 20 VLANs tripled between 2018 and 2019, reaching 44%. “On the bright side, it’s getting better,” Olson says. The key is ensuring that IoT devices sit on a separate VLAN from IT systems and that these devices don’t have unnecessary network connections or access.

“A lot of IoT device management today is from a static inventory perspective,” Olson says, where the hospital or enterprise knows the device type and serial number, for example. “They are not tracking how long it’s on the network, how they secure it, update it, nor is it managed through its life cycle.”

That means knowing when a device is retired or no longer needed so it can be removed from the network before it becomes an exposure, he says. As it is, the report shows that 98% of all IoT device traffic travels unencrypted, leaving potentially sensitive patient and other data exposed to attackers. In addition, some 57% of the IoT devices contained vulnerabilities or misconfigurations, and some 20% of healthcare organizations at one time or another had been infected with the old-school Conficker worm.

The stakes are high for the cross-contamination from IT systems to IoT. Some 72% of healthcare organizations say they have experienced an email-based cyberthreat in the past year that resulted in downtime, according to a new Mimecast and HIMSS Media study. The main losses for these victims were productivity (55%) and data (34%), along with financial losses.

And according to a new Enterprise Strategy Group report also released today, 77% of organizations say they don’t have a full accounting or visibility into IoT devices on their networks. Less than half (47%) of organizations that have a strategy in place for getting a handle on this are confident about that initiative, according to the study, which was commissioned by asset management company Axonius.

A full-blown IT asset inventory of all computing, bring-your-own-devices (BYOD), and IoT devices can take more than two weeks, or some 89 person-hours of labor, and occur, on average, 19 times per year to stay on top of the ever-changing network population, according to the report.

Picture This
Outside of medical devices, the riskiest IoT devices in enterprises are (ironically) security cameras and printers, PAN data shows. While IP phones make up some 44% of all enterprise IoT devices, they only pose 5% of security issues, such as vulnerabilities, misconfigurations, default passwords, or compromises.

Security cameras, meanwhile, represent just 5% of the IoT devices in the study, but they harbor 33% of the security issues. “This is because many cameras are designed to be consumer-grade, focusing on simplicity of use and deployment over security,” the report said.

Printers are the second-most risky, accounting for 24% of security issues, mainly because they, too, come with little baked-in security and can be abused via browser access. Print logs can contain sensitive and valuable information for an attacker, and the devices can be abused like many other IoT devices – as a stepping-stone to other systems in the network.

Kelly Jackson Higgins is the Executive Editor of Dark Reading. She is an award-winning veteran technology and business journalist with more than two decades of experience in reporting and editing for various publications, including Network Computing, Secure Enterprise …

Article source: https://www.darkreading.com/iot/over-80--of-medical-imaging-devices-run-on-outdated-operating-systems/d/d-id/1337273?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple