How Network Metadata Can Transform Compromise Assessment

Listen more closely and your network’s metadata will surrender insights the bad guys counted on keeping secret

In the 1979 cult classic When a Stranger Calls, a babysitter receives numerous telephone calls from a strange man, only to discover the calls are coming from inside the house!

Indeed, the notion of a stranger lurking inside your home is terrifying. For the modern enterprise, however, it has become the new normal. Even more frightening, most businesses have no idea that their network has been compromised in the first place.

According to an IBM study, it takes the typical enterprise 197 days to identify a breach in its network and 69 days to contain it. Despite the profusion of network monitoring and traffic analysis tools on the market today, security teams are unable to pick out the faint signal of a genuine network incursion from the din of perpetual alerts.

But as any TV detective will tell you, a criminal always leaves something behind. And just like a CSI forensics team might use luminol to detect trace amounts of blood at a crime scene, security analysts can harness the vast amount of network metadata to identify and isolate a network compromise.

The Medium Is the Message
Taking the metaphor of a house a step further, doors and windows represent both points of ingress and egress for a potential intruder. Network IP addresses, proxy servers, and email boxes are the doors and windows of the enterprise network that digital prowlers exploit to gain access and exfiltrate data. But because these intruders must use the network itself, they also can’t help but leave traces of their presence in the form of network metadata.

Metadata is often defined as data about data, or information that makes data useful. Every digital photograph carries metadata offering detailed information about the photo: when it was taken, the type of camera used, even its GPS coordinates. That information, attached to the digital file, gives us a simple way to sort and organize our photo libraries.

Similarly, metadata is attached to the many hardware devices and software components that every network infrastructure needs to run. From email and application servers to network firewalls and cloud gateways, the attendant metadata of each system provides a telling strand of information. On its own, an individual thread of data may not tell you very much. But weave enough of those threads together and take a step back, and a clear picture begins to emerge.

Converting Network Metadata into Useful Intel
For security teams, network metadata represents a vital yet underutilized threat intelligence resource that analysts must begin to incorporate into their compromise detection toolbox. Some of the primary sources of network metadata that can be correlated into actionable threat intelligence include:

  • DNS data: The Domain Name System (DNS) translates human-readable domain names into numerical IP addresses, mapping devices and services to the underlying network. Metadata from DNS queries provides a crucial contextual layer that records every connection attempt from an adversary’s device to an organization’s infrastructure and can be used to discern the specific route an attacker is using to infiltrate a network.
  • Network flows: Understanding how packets move across the network can offer valuable insights into which devices are being controlled by an attacker and whether or not they are using the network to move laterally. 
  • Perimeter proxy and firewall access logs: In cases where an attack avoids domain resolution, the remnants of an adversarial connection can often be found buried in the access logs of network firewalls or proxies.
  • Spambox filter: Often overlooked, archived spambox filter metadata can provide valuable intelligence about the type of attack an organization is receiving; more telling, if end users are being targeted by similar attacks, the organization is more likely to be compromised.
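
Correlating these sources often starts with simple aggregation. As one minimal sketch, the snippet below flags domains in DNS query metadata that only a single internal host resolves, a pattern often associated with beaconing to attacker-controlled infrastructure. The log format and domain names here are invented for illustration; real deployments would parse DNS server logs or packet captures.

```python
from collections import defaultdict

# Hypothetical, simplified DNS query records: (client_ip, queried_domain).
def rare_domains(queries, max_clients=1):
    """Flag domains contacted by very few clients -- a common indicator
    of beaconing to attacker-controlled infrastructure."""
    clients_per_domain = defaultdict(set)
    for client_ip, domain in queries:
        clients_per_domain[domain].add(client_ip)
    return sorted(d for d, clients in clients_per_domain.items()
                  if len(clients) <= max_clients)

queries = [
    ("10.0.0.5", "update.example.com"),
    ("10.0.0.6", "update.example.com"),
    ("10.0.0.7", "xk2f9.badhost.example"),  # only one host resolves this
]
print(rare_domains(queries))  # ['xk2f9.badhost.example']
```

In practice the same aggregation would be joined against flow records and proxy logs, which is exactly the correlation work that cloud-scale storage and compute make feasible.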

While much of this network metadata has been available for years, harnessing it into something useful has not been practical, for a number of reasons. Until recently, storing and processing all of this data was cost prohibitive. However, as public cloud services have matured, the cost of storage has dropped exponentially: from $12.40 per gigabyte in 2000 to less than $0.004 today.

Meanwhile, computing power has increased by a factor of 10,000 over this same time period, creating the perfect scenario for the collection and administration of large and growing volumes of metadata. The evolution of public cloud infrastructure has not only made storing and processing network metadata viable but, critically, has made it possible to manage these complex workloads in real time.

Combine these factors with the latest advances in artificial intelligence and machine learning algorithms that can correlate these data sets at scale, and the potential becomes clear for security teams under increasing pressure to quickly identify and isolate confirmed instances of compromise in their networks.

It’s high time we stopped wondering if an attacker is hiding somewhere in the network — rather, we need to leverage all of the data and tools at our disposal to pinpoint these compromises in minutes, not months.

Related Content:

Check out The Edge, Dark Reading’s new section for features, threat data, and in-depth perspectives. Today’s featured story: “Keys to Hiring Cybersecurity Pros When Certification Can’t Help.”

Ricardo Villadiego is the founder and CEO of Lumu, a cybersecurity company focused on helping organizations measure compromise in real-time. Prior to Lumu, Ricardo founded Easy Solutions, a leading provider of fraud prevention solutions that was acquired by Cyxtera in 2017 as … View Full Bio

Article source: https://www.darkreading.com/threat-intelligence/how-network-metadata-can-transform-compromise-assessment/a/d-id/1337208?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

What Should I Do About Vulnerabilities Without Fixes?

With better tools that identify potential threats even before developers address them, a new problem has arisen.

Question: What should I do about vulnerabilities without fixes?

Tsvi Korren, field CTO at Aqua Security: Security vendors are getting better at identifying vulnerabilities and making the results available earlier in the software development cycle. Shifting left by providing vulnerability data to application developers and making them an active part of risk remediation is a good thing. However, this introduces a new challenge for security practitioners: what to do about vulnerabilities in open source components where a fix is not available or when a fixed version cannot be used due to software dependencies?

When vulnerabilities were only examined after deployment, you may have been at risk, but the fact that you didn’t know about them earlier at least meant you were not negligently introducing risk into production. Now you face a few difficult options when presented with a vulnerability for which no fix exists or for which a fix cannot be used:

  1. Stop the pipeline and potentially delay rollout of the application until a fix is available (or even bring down a deployed application).
  2. Task your own development team with creating a fix (assuming you have the code and the expertise) or finding a workaround.
  3. Move ahead accepting the risk, clearing it with appropriate compliance people, which certainly is not ideal.

Another option is to run the application in a cloud-native environment and closely control its runtime behavior. Since containers and functions are deterministic, it is possible to identify and stop execution of code that is not aligned with the workload’s intended purpose. By blocking access to specific users, commands, files, ports, or system calls, security can defang a vulnerability so that any attempt to exploit it is stopped or at least clearly identified. This ability bridges the gap, allowing application rollout to proceed until a permanent code fix becomes available.
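
The allowlist idea behind that approach can be sketched in a few lines. This is purely illustrative of the concept, not any vendor’s implementation: real enforcement happens at the kernel level (e.g. via seccomp or AppArmor profiles), and the event shapes and paths below are invented.

```python
# A deterministic workload declares what it is allowed to do; anything
# else is treated as a potential exploit attempt and blocked or flagged.
ALLOWED_EXECUTABLES = {"/usr/bin/python3", "/usr/sbin/nginx"}  # invented paths
ALLOWED_PORTS = {443}

def check_event(event):
    """Return True if a runtime event matches the workload's intended
    behavior, False if it should be blocked and flagged."""
    if event["type"] == "exec":
        return event["path"] in ALLOWED_EXECUTABLES
    if event["type"] == "listen":
        return event["port"] in ALLOWED_PORTS
    return False  # default-deny anything unrecognized

print(check_event({"type": "exec", "path": "/bin/sh"}))  # False: shell spawn blocked
```

Because the allowlist describes intended behavior rather than a specific CVE, it can neutralize exploitation of a vulnerability even before a patched component exists.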


The Edge is Dark Reading’s home for features, threat data and in-depth perspectives on cybersecurity. View Full Bio

Article source: https://www.darkreading.com/edge/theedge/what-should-i-do-about-vulnerabilities-without-fixes/b/d-id/1337274?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

NSO Group fires back at Facebook: You lied to the court, claims spyware slinger, and we’ve got the proof

Facebook has been accused of lying to a US court in its ongoing legal battle against government malware maker NSO Group.

A series of filings from NSO lawyers lay out the Israeli security company’s reasoning for its no-show in court on 2 March, including the accusation that Facebook never properly served its lawyers with legal papers, despite telling the court that it had.

The accusations were made in court documents [PDF] in which NSO has asked the court to vacate the earlier default judgement entered at the start of last week after the security shop’s lawyers failed to turn up at the California US District Court. NSO’s legal team now say the Israeli government had told Zuck Co’s lawyers that they had made a mistake with the necessary documents.

“Friday’s filing was necessary because Facebook lied to the court in its February 27 application for default, saying that service was complete under the treaty governing international service of judicial documents known as the Hague Convention,” NSO said of its request.

“In fact, Facebook and its lawyers had been told two days earlier (February 25) by the Government of Israel that service under the Hague Convention was not complete — a fact Facebook concealed from the court. Facebook’s underhanded tactics deceived the court into entering an improper default, and created a false narrative in the news media that unfairly described NSO Group as unresponsive to the case.”

In addition to throwing out the default judgement, NSO is asking the court to give it additional time (another 120 days) to respond to the suit.

Facebook did not respond to a request for comment on the accusations.

The Social Network is suing NSO Group over accusations the security company had helped governments hack a number of accounts and devices on Facebook’s WhatsApp messaging platform.

Facebook has alleged that NSO aided its government customers in hacking some 1,400 accounts including those of journalists and activists. Facebook claims NSO developed and equipped the customers with exploits for a remote code execution flaw in WhatsApp that was then used to put surveillance software on the targets’ mobile devices. ®

Article source: https://go.theregister.co.uk/feed/www.theregister.co.uk/2020/03/09/nso_facebook_lied/

AMD, boffins clash over chip data-leak claims: New side-channel holes in decade of cores, CPU maker disagrees

AMD processors sold between 2011 to 2019 are vulnerable to two side-channel attacks that can extract kernel data and secrets, according to a new research paper.

In a paper [PDF] titled, “Take A Way: Exploring the Security Implications of AMD’s Cache Way Predictors,” six boffins – Moritz Lipp, Vedad Hadžić, Michael Schwarz, and Daniel Gruss (Graz University of Technology), Clémentine Maurice (University of Rennes), and Arthur Perais (unaffiliated) – explain how they reverse-engineered AMD’s L1D cache way predictor to expose sensitive data in memory.

To save power when looking up a cache line in a set-associative cache, AMD’s CPUs rely on something called way prediction. The way predictor allows the CPU to predict the correct cache location required, rather than test all the possible cache locations, for a given memory address. This speeds up operations, though it can also add latency when misprediction occurs.

The cache location is, in part, determined by a hash function, undocumented by AMD, that hashes the virtual address of the memory load. By reverse engineering this hash function, the researchers were able to create cache collisions which present observable timing effects – increased access time or L1 cache misses – that allow covert kernel data exfiltration, cryptographic key recovery, and weakening ASLR defenses on a fully-patched Linux system, the hypervisor, or the JavaScript sandbox.

Timing attacks of this sort allow the attacker to infer protected data based on the time the system takes to respond to specific inputs.
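
The collision mechanism can be illustrated with a toy model. AMD’s actual hash function is undocumented; the researchers reverse-engineered it as an XOR fold of virtual-address bits, but the bit ranges and addresses below are invented purely to show how two distinct addresses can produce the same predictor tag.

```python
def toy_utag(vaddr):
    """XOR-fold two invented bit fields of a virtual address into a
    small 'tag', the way a way predictor might index its table.
    Illustrative only -- not AMD's real hash function."""
    return ((vaddr >> 12) & 0xFF) ^ ((vaddr >> 20) & 0xFF)

a = 0x00401000
b = a ^ (0x55 << 12) ^ (0x55 << 20)  # flip the same bits in both fields

# Distinct virtual addresses, yet identical tags: the predictor treats
# them as aliases, which is what makes the timing difference observable.
print(a != b, toy_utag(a) == toy_utag(b))  # True True
```

An attacker who can compute such collisions can deliberately evict the victim’s predictor entry and then time their own access, which is the essence of Collide+Probe.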

The two attacks are called Collide+Probe and Load+Reload, in reference to the operations involved. The former exploits cache tag collisions, while the latter exploits the way predictor’s behavior when virtual addresses are mapped to the same physical address.

“With Collide+Probe, an attacker can monitor a victim’s memory accesses without knowledge of physical addresses or shared memory when time-sharing a logical core,” the paper explains, noting that the technique has been demonstrated with a data transmission rate of up to 588.9 kB/s. “With Load+Reload, we exploit the way predictor to obtain highly-accurate memory-access traces of victims on the same physical core.”

For Collide+Probe, the attacker is assumed to be able to run unprivileged native code on the target machine that’s also on the same logical CPU core as the victim. It’s also assumed the victim’s code will respond to input from the attacker, such as a function call in a library or a system call.

For Load+Reload, the ability to run unprivileged native code on the target machine is also assumed, with the attacker and victim on the same physical but different logical CPU thread.

Local access is not a requirement for these attacks; the researchers demonstrated their techniques on sandboxed JavaScript and in virtualized cloud environments.

The boffins said that at least the following AMD chips, manufactured over the past decade from 2011 to 2019, have a way predictor that can be exploited:

  • AMD FX-4100 Bulldozer
  • AMD FX-8350 Piledriver
  • AMD A10-7870K Steamroller
  • AMD Ryzen Threadripper 1920X Zen
  • AMD Ryzen Threadripper 1950X Zen
  • AMD Ryzen Threadripper 1700X Zen
  • AMD Ryzen Threadripper 2970WX Zen+
  • AMD Ryzen 7 3700X Zen 2
  • AMD EPYC 7401p Zen
  • AMD EPYC 7571 Zen

“This is a software-only attack that only needs unprivileged code execution,” said Michael Schwarz, one of the paper’s co-authors, via Twitter. “Any application can do that, and one of the attacks (Collide+Probe) has also been demonstrated from JavaScript in a browser without requiring any user interaction.”

The researchers propose several mitigations: a mechanism to disable the cache way predictor if there are too many misses; using additional data when creating address hashes to make them more secure; clearing the way predictor when switching to another user-space application or returning from the kernel; and an optimized AES T-table implementation that prevents the attacker from monitoring cache tags.

In a response to the paper, AMD on Saturday suggested no additional actions need to be taken to prevent these attacks.

“We are aware of a new white paper that claims potential security exploits in AMD CPUs, whereby a malicious actor could manipulate a cache-related feature to potentially transmit user data in an unintended way,” the company said. “The researchers then pair this data path with known and mitigated software or speculative execution side channel vulnerabilities. AMD believes these are not new speculation-based attacks.”

Daniel Gruss, another one of the researchers, said via Twitter that this side channel has not been fixed. But he also expressed skepticism that the technique presents an imminent threat, noting that Meltdown, a far stronger attack, doesn’t appear to have been weaponized by anyone. ®

Article source: https://go.theregister.co.uk/feed/www.theregister.co.uk/2020/03/09/amd_sidechannel_leak_report/

Avast’s AntiTrack promised to protect your privacy. Instead, it opened you to miscreant-in-the-middle snooping

You’d think HTTPS certificate checking would be a cinch for a computer security toolkit – but not so for Avast’s AntiTrack privacy tool.

Web researcher David Eade found and reported CVE-2020-8987 to Avast: this is a trio of blunders that, when combined, can be exploited by a snooper to silently intercept and tamper with an AntiTrack user’s connections to even the most heavily secured websites.

This is because when using AntiTrack, your web connections are routed through the proxy software so that it can strip out tracking cookies and similar stuff, enhancing your privacy. However, when AntiTrack connects to websites on your behalf, it does not verify it’s actually talking to the legit sites. Thus, a miscreant-in-the-middle, between AntiTrack and the website you wish to visit, can redirect your webpage requests to a malicious server that masquerades as the real deal, and harvest your logins or otherwise snoop on you, and you’d never know.

The flaws affect both the Avast and AVG versions of AntiTrack, and punters are advised to update their software as a fix for both tools has been released.

Eade has been tracking the bug since August last year.

“The consequences are hard to overstate. A remote attacker running a malicious proxy could capture their victim’s HTTPS traffic and record credentials for later re-use,” he said. “If a site needs two factor authentication (such as a one-time password), then the attacker can still hijack a live session by cloning session cookies after the victim logs in.”

Eade said the three security holes were all related to how the Avast and AVG tools handle secured connections.

The first issue is due to AntiTrack not properly verifying HTTPS certificates, allowing an attacker to self-sign certs for fake sites. The second issue is due to AntiTrack forcibly downgrading browsers to TLS 1.0, and the third is due to the anti-tracking tool not honoring forward secrecy.
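
For contrast, the checks a TLS client or proxy is supposed to perform are readily available in standard libraries. The sketch below uses Python’s `ssl` module to show the two behaviors AntiTrack reportedly got wrong: the default context enforces certificate-chain and hostname verification, and setting a minimum version refuses any downgrade to TLS 1.0. The hostname is a placeholder.

```python
import socket
import ssl

def make_verifying_context():
    """Build a context that rejects self-signed or mismatched certs
    and refuses to negotiate anything older than TLS 1.2."""
    ctx = ssl.create_default_context()            # CERT_REQUIRED + hostname check
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # no silent downgrade to 1.0/1.1
    return ctx

def open_verified(host, port=443):
    """Open a TLS connection; raises on an invalid certificate
    instead of quietly accepting it."""
    sock = socket.create_connection((host, port), timeout=10)
    return make_verifying_context().wrap_socket(sock, server_hostname=host)

# Example (requires network access):
# with open_verified("example.com") as tls:
#     print(tls.version())
```

A proxy that terminates TLS on the user’s behalf has to perform exactly these checks itself, because the browser behind it can no longer do so.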

Avast has acknowledged the bug both in its own-branded AntiTrack and in the AVG version.

“Thanks to David reporting these issues to us, the issues have been fixed, through an update pushed to all AntiTrack users,” Avast said.

Separately, the Avast antivirus tool potentially has another vulnerability. This time, Googler Tavis Ormandy has found the antivirus suite running its JavaScript interpreter with system administrator-level privileges, which is like running around with a gun in your pocket and the safety off.

“Despite being highly privileged and processing untrusted input by design, it is un-sandboxed and has poor mitigation coverage,” Ormandy said of the process. “Any vulnerabilities in this process are critical, and easily accessible to remote attackers.” ®

Article source: https://go.theregister.co.uk/feed/www.theregister.co.uk/2020/03/10/avast_mitm_antitrack_bug/

How Microsoft Disabled Legacy Authentication Across the Company

The process was not smooth or straightforward, employees say in a discussion of challenges and lessons learned during the multi-year project.

As more organizations adopt modern authentication protocols, legacy authentication poses a growing risk to those who lag behind. The problem is, making a business-wide transition to modern authentication is no easy feat, as Microsoft employees learned when they tackled it.

“About half of a percent of the enterprise accounts in our system will be compromised every month,” Alex Weinert, director of identity strategy at Microsoft, said of its customer accounts. “Which is a really, really, really high number, if you think about it.” In a business of 10,000 users, for example, 50 of them will be compromised in a month if the business is average and doesn’t do anything additional, Weinert said in an RSA Conference talk on the topic last month.

More than 1.2 million Microsoft customer accounts were compromised in January 2020, Weinert said. Of those, more than 99% did not have multi-factor authentication (MFA) enabled. “Multi-factor authentication would have prevented the vast majority of those one million compromised accounts last month,” he explained.

About 40% of those January compromises, or 480,000 accounts, were due to password spray attacks and nearly all (99% of) password sprays leveraged legacy authentication protocols. The second most-common attack method was brute-forcing credentials across platforms. Nearly all (97% of) these “replay” attacks also use legacy authentication protocols, Weinert noted, and the probability of compromise jumped for users who relied on SMTP, IMAP, POP, and others.
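
A password spray has a distinctive shape in sign-in telemetry: one source tries a few common passwords against many accounts, rather than hammering one account. As a minimal sketch (the record format and IP addresses are invented for illustration), flagging sources that fail against many distinct accounts catches exactly that pattern:

```python
from collections import defaultdict

# Hypothetical failed-sign-in records: (source_ip, username).
def spray_suspects(failures, min_accounts=3):
    """Return source IPs whose failures span many distinct accounts --
    the signature of a password spray, as opposed to brute force."""
    accounts_per_ip = defaultdict(set)
    for ip, user in failures:
        accounts_per_ip[ip].add(user)
    return sorted(ip for ip, users in accounts_per_ip.items()
                  if len(users) >= min_accounts)

failures = [("203.0.113.9", u) for u in ("alice", "bob", "carol")] + \
           [("198.51.100.4", "dave")]
print(spray_suspects(failures))  # ['203.0.113.9']
```

Because legacy protocols don’t surface an MFA challenge, every one of those spray attempts gets a clean yes-or-no answer, which is why attackers prefer them.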

“We know about 60% of users [overall] will reuse passwords; it’s super common,” he continued, adding that “people do reuse their enterprise accounts in non-enterprise environments.”

“Legacy,” or “basic” authentication refers to older protocols like POP, SMTP, IMAP, and XML-Auth, which don’t allow for user interaction or MFA challenges, Weinert said. It is the predominant problem with deploying MFA and the preferred mechanism for attacking accounts. Attack tools are built on it; it works, and it’s easy, he said. But disabling basic authentication protocols can make a significant difference: controlling for other variables, Microsoft found a 67% reduction in compromise for tenants that turned off legacy protocols.

To help defend its own employees against attacks targeting these protocols, Microsoft has rolled out modern MFA options compatible with phone, cloud, and on-prem environments over the years. Still, while it invested in these tools, it “really didn’t pay attention to legacy authentication,” said Lee Walker, identity architect on Microsoft’s internal IT team. “We thought it would naturally go away.” Even so, many internal Microsoft employees continued to use legacy protocols. In 2018, company executives called for legacy authentication to be shut down across the organization.

Trial and (A Big) Error

Taking a broader look at Microsoft’s environment, the team saw a few instances of legacy authentication but assumed the project wouldn’t be intensive. It was primarily used in Azure Active Directory, in small tools people used to directly talk with Microsoft Graph and do basic information gathering in Azure, as well as in SharePoint, Skype for Business, and Exchange.

The team thought most of the upgrades would be for old Office 2010 or 2013 clients. “We knew those were using legacy authentication, but we knew the vast majority of people had been upgraded,” said Walker. They expected these Office clients to be people with older personal machines at home, and they’d simply need to help the users upgrade.

There are several tools available to block legacy protocols; Weinert and Walker demonstrated their process using one built into Azure Active Directory. It started out smoothly, they said. The IT and operations teams deployed legacy authentication disablement to 2,000 users in the organization and experienced minimal problems. “This gave us a lot of confidence that our deployment for legacy authentication blocking was going to proceed very quickly across Microsoft internally,” said Walker, noting they expected the process to take two months.

“It didn’t quite work out that way,” he added.

The team deployed this disablement policy across its 60,000-person sales force. They left their desks that day in October 2018 and soon started getting calls in the middle of the night: the TeleSales app, used to contact customers and take orders, wasn’t working among Australian users. “It’s a critical app for our sales force, and when we looked into this, we discovered there’s one account that was used to run the back end of all our TeleSales applications,” said Walker. This account, hidden in the data, was being blocked by the legacy authentication policy.

This policy caused the app to break, which took down the sales force for effectively a whole day, considering the time difference and the time it takes to escalate issues. “They could not make money for a day, and that was a big deal,” Walker noted.

Taking a New Approach

The team was told they couldn’t move forward with the policy until they were sure the incident wouldn’t happen again. “The reality is, we didn’t really know what we were doing,” said Weinert. They didn’t have the data they needed to show where legacy authentication was being used in their environment; more importantly, they didn’t have the insight to know what that data really meant. If they had, they would have seen the connection between the TeleSales app, the account behind it, and the hundreds of thousands of people who relied on it.

“We knew we needed more data, so we decided to keep a lot more data,” said Weinert. The team logged 90 days of sign-in history to identify specific apps using legacy authentication. This timeframe was large enough to give them visibility into apps used on a daily and weekly basis; they could also see financial apps used only once per quarter.
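
The heart of that 90-day analysis is a simple aggregation over sign-in records. The sketch below is illustrative only: the record fields and app names are invented, though Azure AD sign-in logs do expose a client-app field that distinguishes legacy protocols from modern authentication.

```python
from collections import Counter

LEGACY_PROTOCOLS = {"IMAP", "POP", "SMTP", "Exchange ActiveSync"}

def legacy_usage(sign_ins):
    """Count sign-ins per (app, protocol) pair that still rely on
    legacy authentication -- the list of owners to go talk to."""
    usage = Counter()
    for record in sign_ins:
        if record["client_app"] in LEGACY_PROTOCOLS:
            usage[(record["app"], record["client_app"])] += 1
    return usage

sign_ins = [
    {"app": "TeleSales", "client_app": "IMAP"},
    {"app": "TeleSales", "client_app": "IMAP"},
    {"app": "Outlook",   "client_app": "Modern Auth"},
]
print(legacy_usage(sign_ins))  # Counter({('TeleSales', 'IMAP'): 2})
```

It is precisely this kind of aggregation that would have surfaced the single service account behind the TeleSales app before the blocking policy took it down.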

They also decided to simulate the legacy authentication policy instead of enforcing it outright. “Report-only” mode gave the ability to deploy a simulated policy without blocking anything. As a result, users would see “we would have blocked this” instead of losing app functionality.

Then came the tedious part: the team had to track down individual owners of the apps relying on legacy authentication protocols, work with them to find the API that was prompting them for passwords, and find the modern equivalent of that API to fix it. By March 2019 the policy was enabled for 94% of users, but they still faced several exception requests per week.

“This was probably the biggest driver of work for our team,” said Walker. Turning off legacy authentication didn’t take much time; neither did collecting or analyzing data. Talking to app owners also wasn’t time-consuming, but individual requests for rarely used apps “took a lot of time.” It took about a year to run through exceptions and secure legacy authentication users.

“Human processes here are super important,” said Walker. He advised IT and security teams to start testing with a small group, preferably their own, to learn the response process before rolling out a policy across the organization. He also encouraged RSAC attendees to start the process of eliminating legacy authentication as soon as possible: Microsoft has seen a ~3,000% increase in attack rate on Microsoft products and services in the past three years. Adopting modern authentication protocols can help defend against password sprays, credential reuse, and other common attack techniques.

“Organizations moving to a more secure protocol are getting out of harm’s way and letting attackers harvest from those who haven’t,” he said.

Related Content:

Check out The Edge, Dark Reading’s new section for features, threat data, and in-depth perspectives. Today’s featured story: “Out at Sea, With No Way to Navigate: Admiral James Stavridis Talks Cybersecurity.”

Kelly Sheridan is the Staff Editor at Dark Reading, where she focuses on cybersecurity news and analysis. She is a business technology journalist who previously reported for InformationWeek, where she covered Microsoft, and Insurance Technology, where she covered financial … View Full Bio

Article source: https://www.darkreading.com/operations/how-microsoft-disabled-legacy-authentication-across-the-company/d/d-id/1337269?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

It’s not a breach… it’s just that someone else has your data

UK telephone, TV and internet provider Virgin Media has suffered a data breach.

Or not, depending on whom you ask.

TurgenSec, the company that alerted Virgin Media to the breached information – or, at least, to the inadvertently disclosed database – says that it “included personal information corresponding to approximately 900,000 UK residents.”

We’re not exactly sure where or how TurgenSec found the errant data, but it sounds as though this was either a cloud blunder, a marketing partner plunder, or both of those at once.

Cloud blunders are, unfortunately, all too common these days – typically what happens is that a company extracts a subset of information from a key corporate database, perhaps so that a research or marketing team can dig into it without affecting the one, true, central copy. In the pre-internet days, you often heard this referred to as a “channel-off”.

In the modern era, channelled-off data seems to leak out in two main ways:

  • The copied data gets uploaded to a cloud service that isn’t properly secured. Crooks regularly trawl the internet looking for files that aren’t supposed to be there – this process can be automated – and are quick to pounce if they find access control blunders that let them download data that should clearly be private.
  • The data gets sent to an outside company, e.g. for a marketing campaign, and it gets stolen from there. Data breaches from partner companies could happen for exactly the reason given above – poor cloud management practices – or for a variety of other reasons that the company responsible for the data can’t control directly.
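
The first failure mode is usually a matter of access policy, and it can be checked programmatically. The sketch below evaluates a reduced, hypothetical representation of an S3-style bucket ACL (the grant fields are trimmed and renamed for illustration; real checks would use the cloud provider’s API and also inspect bucket policies):

```python
# URIs S3 uses for its world-readable grantee groups.
PUBLIC_GRANTEES = {
    "http://acs.amazonaws.com/groups/global/AllUsers",
    "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",
}

def publicly_readable(acl_grants):
    """Return True if any grant hands READ (or worse) to the world."""
    return any(g.get("grantee_uri") in PUBLIC_GRANTEES and
               g.get("permission") in {"READ", "FULL_CONTROL"}
               for g in acl_grants)

grants = [{"grantee_uri": "http://acs.amazonaws.com/groups/global/AllUsers",
           "permission": "READ"}]
print(publicly_readable(grants))  # True
```

Running such a check routinely across every storage bucket is exactly the discipline that prevents channelled-off data from sitting in the open for crooks to trawl.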

We’re assuming, in Virgin Media’s case, that what happened was along the lines of the first cause above, given that the company insists that:

No, this was not a cyber-attack. […] No, our database was not hacked. […] Certain sources are referring to this as a data breach. The precise situation is that information stored on one of our databases has been accessed without permission. The incident did not occur due to a hack but as a result of the database being incorrectly configured.

Virgin Media hasn’t done itself any favours with this statement. What it seems to be saying is that, because the crooks merely wandered in uninvited, without even needing to bypass any security measures or exploit any unpatched security holes, this doesn’t count as a “hack” or a “breach”.

We don’t know about you, but to us, this sounds a bit like wrecking your car by driving into a ditch and then claiming that you “didn’t actually have a crash”; instead, you simply didn’t drive with sufficient care and attention to stay safely on the road.

What data went walkabout?

Whether you think it’s a breach or not, it’s certainly a pretty big leak, even though the 900,000 users impacted is well short of Virgin Media’s full customer list.

TurgenSec has published a list of the fieldnames (database columns) that appeared in the exposed data, although not every field contained data for every user listed.

These apparently include: name, email address, home address, phone number and date of birth.

TurgenSec is also claiming that some of the fields reveal “requests to block or unblock various pornographic, gore related and gambling websites,” although a report last Friday by the BBC suggests that this block/unblock data was present only for about 1,100 of the customers affected by the leak.

What to do

Virgin Media secured the errant database pretty quickly, so it’s no longer open for any more crooks to find and steal.

The company has also set about contacting customers whose Virgin Media accounts were affected, meaning there are probably millions of people in the UK who will be watching out for an email but ultimately won’t hear anything because they weren’t affected.

As we know, this is the sort of vacuum into which cybercriminals love to step – sending phishing scams that pretend to be security notifications.

Our recommendations, therefore, are as follows:

  • If you receive an email claiming to be from Virgin Media, ignore any contact details in that email. Use an existing account or your original contract to find an official phone number or website, and get in touch that way. It’s slightly less convenient if the email turns out to be genuine, but it makes it very much harder for the crooks to trick you into contacting them if it’s fake – which is the more likely assumption.
  • Read our article, What you sound like after a data breach. We wrote it a few years ago as a satirical piece, but there’s a lot in there you can learn from. As Mark Stockley put it back in 2015, “Hopefully you’ve never had anything stolen in a data breach, but if you have, I hope you’ve been spared the salted wound of the non-apology.”
  • Learn how to build a cybersecurity-aware culture in your own business. Sophos CISO Ross McKerchar has six tips to bolster the “human firewall” that makes it less likely you’ll let data leak out in the first place.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/QcX9dBG4bYc/

UK.gov is not sharing Brits’ medical data among different agencies… but it’s having a jolly good think about it

Who’d be a head of data policy for the British government? You spend all your time talking about data transparency, but it is so hard to be transparent.

Just ask Stephen Lorimer, head of public sector data at the Department for Digital, Culture, Media and Sport (DCMS). At an event run by think tank The Institute for Government last month, he was asked about proposals that could allow the sharing of medical and social care data across government bodies under the Digital Economy Act 2017.

His answer was very clear. “There are no moves right now to make medical records part of the Digital Economy Act within government,” he said. He was clear, that is, until a challenge from the floor pointed out that such a move was under consideration.


The Public Service Delivery Review Board is the body in charge of advising government organisations on sharing personal information for objectives set out in regulations and the Digital Economy Act 2017.

Minutes (PDF) for its meeting in July 2019 show that, while discussing ideas for its first report for ministers, it proposed “a work programme… which would include recommendations for proactively supporting expansion and uptake of the powers… [which]… could include ideas for progressing the necessary work with NHSX [the NHS digital agency] and other stakeholders to bring health and adult social care bodies within the scope of the PSD and other Digital Economy Act powers”.

Following the challenge, Lorimer took the opportunity to qualify his answer. “Those minutes are true: that they’ve been consulting on it and that’s something which was done during last year,” he said. He said the move was in response to a Public Accounts Committee report which voiced concerns that the law was preventing sharing of data across government.

But Lorimer went on to say that although it was being considered, “acting on that is still not something that is being done at the moment”.

If all this seems confusing and long-winded, that’s because it is. But it is important. The Public Service Delivery Review Board has suggested that it will ask government to extend the Digital Economy Act to include the sharing of medical data across government, something primary legislation currently prevents.

On the one hand, it might make sense for housing services to understand the medical needs of someone they are trying to support. On the other, patients might be very concerned about sharing medical details with a doctor if they think their records could be passed to the Department for Work and Pensions (DWP). Such fear could hinder proper medical treatment.

Sam Smith, coordinator at independent lobby group medConfidential, said Lorimer’s response undermines all statements from DCMS about its plans for data. “The first instinct was to say something utterly untrue, and when the public document published by his Department was referenced, the response was the exact opposite.”


He said that patients have the legal right to know how data about them is used. “If people believe that what they tell their GP could get passed to DWP or the Home Office, public health will be harmed and costs to the NHS will go up due to delayed care.”

DCMS is preparing a Framework for Data Processing by government departments as required under the Data Protection Act 2018. However, it is only consulting with government departments and the Information Commissioner’s Office, and not with the public.

In 2016, the government’s hated Care.data project was canned over concerns about sharing NHS patients’ sensitive medical information with commercial entities without explicit consent.

Last year a broad group of lobby organisations – including the Institute for Government, the Royal Statistical Society, the Open Data Institute and the Policy Institute at King’s College London – wrote to DCMS with concerns about its approach to the National Data Strategy. “Without major and sustained effort, the UK risks falling behind other countries over the next decade and never being able to catch up,” it said.

The Register has contacted the DCMS for a response, but we fear the government wants to be allowed to share our data much more than it wants to share information about data sharing with us. ®


Article source: https://go.theregister.co.uk/feed/www.theregister.co.uk/2020/03/09/uk_government_medical_data_sharing_plans/

UK Defence Committee probe into national security threat of Huawei sure to uncover lots of new and original insights

UK Parliament’s Defence Committee is to open an investigation into 5G and Huawei with a special focus on national security concerns.

The House of Commons committee, made up of MPs, wants to find out for itself whether or not Huawei poses a threat to national security, something that nobody has ever raised before and which is bound to uncover lots of new and original insights.

In a statement, the committee said: “Concerns have been raised in Parliament, relevant industries, academia and by the press regarding the use of equipment in 5G networks that has been supplied by foreign companies, focusing on Chinese telecoms supplier Huawei.”

Prime minister Boris Johnson’s official spokesman told the press today that the government’s ambition was to reduce Huawei’s involvement in Britain’s 5G networks, despite January’s cap on the Chinese firm’s involvement at 30 per cent. That cap will increase British mobile network operators’ costs quite a bit, or so they say.

Previous Parliamentary committee inquiries into 5G and Huawei ended in farce as competing groups of MPs came up with differing answers to the Huawei question. Telcos have been pretty consistent, in that they don’t care as long as they can crack on and build new networks with whatever’s cheapest and that they’d like to know before they start spending the readies.

The Defence Committee was recently reappointed following December’s general election. Its current chairman is Tobias Ellwood, a Conservative MP and former Army officer. It does not appear that any of Ellwood’s parliamentary colleagues on the committee have any special knowledge of national security matters.

The US reacted with predictable public outrage after January’s announcement but then largely returned to business as usual, although with whispered rumblings about reduced cooperation and intelligence sharing with the UK.

Anyone who cares strongly enough at this late stage can send emails to the committee as detailed on its webpage. ®


Article source: https://go.theregister.co.uk/feed/www.theregister.co.uk/2020/03/09/more_huawei_parliament_inquiries/

Months-long trial of alleged CIA Vault 7 exploit leaker ends with hung jury: Ex-sysadmin guilty of contempt, lying to FBI

The extraordinary trial of former CIA sysadmin Joshua Schulte, accused of leaking top-secret hacking tools to WikiLeaks, has ended in a mistrial.

In Manhattan court on Monday morning, jurors indicated to Judge Paul Crotty they had been unable to reach agreement on the eight most serious counts, which included illegal gathering and transmission of national defense information: charges that would have seen Schulte, 31, sent to jail for most of the rest of his life.

They did however find him guilty on two counts – contempt of court, and making false statements to the FBI – although he has already spent more time behind bars awaiting trial than he would be required to serve under those counts.

The two sides will meet later this month to decide what to do next. Schulte’s lawyer, Sabrina Shroff, has already asked for an extended deadline in order to file additional motions.


Some of those motions will ask for information from the prosecution that was kept from her during the trial, most controversially the case of “Michael,” a co-worker of Schulte who was put on administrative leave by the CIA when evidence emerged linking him to the theft of the Vault 7 hacking tools. Michael also refused to discuss the matter with the FBI.

The prosecution only informed Shroff that Michael had been suspended after he gave testimony in the courtroom – something she stressed heavily in her closing arguments, implying that there was a lot more going on behind the scenes than the jurors realized. It would appear that at least some jurors were persuaded by that line of argument, which is not surprising given the nature of the trial: a leak of classified exploit code and manuals from America’s top spy agency.

The mistrial will be a significant embarrassment for the US government, which spent years pulling the case together and most of the past four weeks walking the jury through what it said was a well-planned theft by Schulte of various software tools that can be used to snoop on a wide range of modern electronics, from smartphones to laptop computers. The government is expected to push for a retrial.

Evidence

At the center of the case is the extraordinary fact that the CIA had a hard time proving it was Schulte who stole the tools from a secure server in the heart of spies’ headquarters.

The agency produced a complicated forensic explanation for how it believes Schulte did it – he saved a backup to a thumb drive and then reverted the system to a previous state to cover his tracks – but it couldn’t hide the fact there was only circumstantial evidence against him, and so the prosecution spent a lot of time highlighting his behavior before and after the theft to fill the gaps.

The irony, of course, is that Schulte was hired for the very skills that he may have employed to hide the theft in the first place. It didn’t help that during the course of the trial, the CIA was found to have appalling security measures in place: multiple people used the same admin username and password to access the critical servers. Not only that, but the passwords used were weak – 123ABCdef and mysweetsummer being the main two – and on top of that, they were published on the department’s intranet.

Schulte’s lawyer successfully argued that the evidence against her client was not sufficient for them to say, beyond a reasonable doubt, that he was the person who stole the materials. There is no evidence that Schulte had the tools on him outside the work environment, and no evidence that he sent them to WikiLeaks.

The prosecution pointed out, however, that Schulte downloaded the very software that WikiLeaks recommends people use to send it files because, you guessed it, it deletes any traces of the file transfer from your machine.

Jail time

The strongest evidence against Schulte was his behavior in jail while awaiting trial: he was clearly being closely watched, and at one point his cell was raided and a contraband phone was seized along with a notebook, both of which made plain that he was trying to communicate confidential information to outside the jail.

The prosecution argued this was evidence that Schulte was willing to damage US interests to further his own goals. His defense argued he was simply trying to get word of his innocence out to the wider world after he’d been pulled into a government black hole.

The most compelling evidence against the CIA/FBI’s case was the fact that co-worker Michael had a screengrab of the very server the Vault 7 tools were stolen from, taken at the time they were allegedly being stolen. Even the government admits this was unusual.

Michael never mentioned the fact he was actively monitoring the server at the time, and the screengrab was found many months later in a forensic deep dive by the Feds. When asked about it, Michael refused to cooperate, and the next day the CIA suspended him.

That evidence raises all kinds of questions of what was really going on inside the Operational Support Branch (OSB) of the CIA: its elite exploit programming unit. Clearly those questions were sufficient for the jury to be unable to reach agreement on whether Schulte was guilty or not. ®


Article source: https://go.theregister.co.uk/feed/www.theregister.co.uk/2020/03/09/cia_hacking_trial_verdict/