
Assange fails to delay extradition hearing as date set for February

An emotional and clean-shaven Julian Assange has appeared in court to request more time and resources to prepare his defence against extradition to the US on espionage charges.

At the case management hearing at Westminster Magistrates’ Court, which checks on pre-trial progress, Assange said today he had been unable to properly prepare his defence.

The US is requesting his extradition to face 18 separate charges under the banner of espionage, related to helping Chelsea Manning exfiltrate classified information. If found guilty on all counts, the WikiLeaks founder could in theory face a cumulative 170-year prison sentence.

Mark Summers QC, acting for Assange, told the court the defence counsel needed three more months to prepare, but this was refused.

So the full hearing will go ahead starting on 25 February and will be heard at HM Prison Belmarsh. Newswire AP described Assange as looking healthy but thinner than at earlier hearings, and said he acknowledged the public gallery, which was packed with his supporters. He was elsewhere described as frail: he struggled to give his name and date of birth, and said he was struggling to think.

His defence still relies on legal rights granted to journalists, but his lawyers also want the court to rule on two new issues.

Firstly, they claim the charges against Assange are political in nature – political offences are specifically excluded under the 2003 Extradition Act, the law under which the extradition request is being heard.

His defence also wants the court to consider ongoing legal battles in Spain where charges have been brought against the security company responsible for monitoring Assange while he was in the Ecuadorean embassy.

UC Global SL – based in Jerez de la Frontera – is accused of spying on him, including snooping on conversations held with his legal representatives and handing the information it collected to US intelligence services.

Spanish police have seized documents and computer hardware from the firm and one of its directors was arrested and released on bail. The director has had his accounts frozen, his passport seized and must report to a local court every two weeks. More from El Pais’ English edition.

Defence lawyers also claim that a variety of issues are preventing Assange from properly preparing his defence, including a lack of access to necessary documents and inadequate care that they say is endangering his health.

Following the hearing, Assange was returned to Belmarsh. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/10/21/assange_fails_to_delay_extradition_hearing/

Glitching: The Hardware Attack That Can Disrupt Secure Software

Glitching (or fault-injection) attacks aren’t easy (yet). But get ready, because as the IoT grows, these attacks will be a big reason that hardware security should be part of your cybersecurity planning.

Modern computers expect a certain consistency in their operating environments. A nice, steady ticking of the electronic clock; smooth, consistent voltage to make everything run; and internal system temperatures that fall within a certain specified range. When their expectations aren’t met, weird things can happen.

If those “weird things” happen because of unanticipated power fluctuations, it can be annoying. If they happen because a malicious actor intentionally manipulated power or other environmental elements, they can be the beginning of a devastating attack.

Enter glitching.

Glitching attacks are defined as attacks that cause a hardware fault by manipulating a system’s environmental variables. When power delivery or clock signals are disrupted, or temperature sensors are pushed outside their specified range, the CPU and other processing components can skip instructions, temporarily stop executing programs, or behave in other ways that allow attackers to slip malicious instructions into the processing gaps.
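
To make the "skipped instructions" point concrete, here is a minimal C sketch (illustrative only, not drawn from any real product) of a firmware PIN check. In the first version, a single skipped branch grants access; the second applies two common fault-injection countermeasures, a redundant comparison and complementary verdict constants, so one glitched instruction is far less likely to flip the outcome.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Vulnerable: a single conditional guards the privileged path. A fault
 * that causes the CPU to skip the branch (or corrupt the comparison
 * result) drops execution straight through to "unlocked". */
static int check_pin_vulnerable(const uint8_t *pin, const uint8_t *ref, size_t n) {
    if (memcmp(pin, ref, n) != 0)
        return 0;   /* one skipped instruction bypasses this */
    return 1;       /* unlocked */
}

/* Hardened: the comparison is performed twice, and the verdict uses
 * complementary constants rather than 0/1, so a single skipped or
 * corrupted instruction is unlikely to turn "denied" into "granted". */
#define GRANTED 0xA5A5u
#define DENIED  0x5A5Au

static unsigned check_pin_hardened(const uint8_t *pin, const uint8_t *ref, size_t n) {
    volatile unsigned verdict = DENIED;
    if (memcmp(pin, ref, n) == 0) {
        if (memcmp(pin, ref, n) == 0) {   /* redundant second check */
            verdict = GRANTED;
        }
    }
    return (verdict == GRANTED) ? GRANTED : DENIED;
}

int main(void) {
    const uint8_t ref[4]   = {1, 2, 3, 4};
    const uint8_t guess[4] = {9, 9, 9, 9};
    printf("vulnerable says: %d\n", check_pin_vulnerable(guess, ref, sizeof ref));
    printf("hardened says:   0x%X\n", check_pin_hardened(guess, ref, sizeof ref));
    return 0;
}

Real secure-boot code typically goes further still, adding random delays so an attacker cannot predict exactly when the critical instruction executes.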

Glitching is most useful for systems that serve special purposes (like encryption), or those that are “headless” — IoT computers that don’t have a standard user interface that can be manipulated by normal malware or social engineering techniques.

It’s an outlier technique in the threat actor’s toolkit, though. Glitching generally requires intimate knowledge of the hardware and software of the specific system under attack and it requires physical access to that system. It is, though, something that security professionals should know about, especially if they have IoT systems under their care.

It should be noted that glitching attacks are neither easy nor simple to pull off (although researchers recently made it easier by releasing chip.fail, a toolkit to bring glitching “to the masses”). The goal in glitching isn’t simply to stop a system from running — that could be done by simply cutting power in most cases — but to gain access to the system’s resources or damage its ability to effectively complete its given task, when a purely software approach isn’t effective.

Timing’s Leading Edge
Many glitch attacks are based on the shape of a signal. The electrical signals that move through a computer system tend to have sharp rises and drops; on an oscilloscope, the image is a series of square waves. The processor knows to start a new instruction when it detects a sharp rise in voltage — the “leading edge” of the wave. In a presentation given at Black Hat 2015, Brett Giller, a computer security consultant at NCC Group, provided steps for implementing an electrical glitching attack.

In his presentation, Giller points out that each instruction takes a certain amount of time to execute; the execution time and the timing of those leading edges are in sync. If an attacker can inject a leading edge into the circuit so that it arrives too soon, then the processor can be tricked into executing a new instruction before the previous instruction has finished, or into skipping instructions altogether.

This kind of glitching can involve a power spike or manipulating the system’s clock by speeding it up (overclocking). Ricardo Gomez da Silva, a faculty member at Technische Universität Berlin’s Institut für Softwaretechnik und Theoretische Informatik, described these clock-glitching attacks and discussed how to protect against them in a paper published in 2014.

An attacker could gain access to the hardware and just inject stray signals to see what happens, but that’s unlikely to be productive. Instead, as Ziyad Alsheri pointed out in a presentation given at Northeastern University in the fall of 2017, the attacker needs to have intimate knowledge of the processor, the overall system, and the software in order to know precisely when to inject the spurious signal and what to do with the brief burst of resulting chaos.
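
In practice that search for the precise moment is automated. The sketch below shows the shape of a typical parameter sweep; the glitcher_* and target_* functions are hypothetical stand-ins for real hardware drivers (here stubbed with a simulation so the sketch runs), and the timing values are invented.

#include <stdbool.h>
#include <stdio.h>

/* Hypothetical glitch-rig interface. In a real setup these would drive an
 * FPGA or a crowbar MOSFET; here they are stubbed so the sketch runs. */
static unsigned g_offset_ns, g_width_ns;

static void glitcher_set_offset_ns(unsigned ns) { g_offset_ns = ns; }
static void glitcher_set_width_ns(unsigned ns)  { g_width_ns = ns; }
static void glitcher_arm(void)                  { /* fire on next trigger */ }
static void target_reset_and_run(void)          { /* reboot target device */ }

/* Simulation: pretend the vulnerable instruction executes 4,200ns after
 * reset and that a 50-150ns pulse landing on it skips the check. */
static bool target_check_success(void) {
    return g_offset_ns == 4200 && g_width_ns >= 50 && g_width_ns <= 150;
}

int main(void) {
    /* Sweep offset (how long after the reset trigger the pulse fires) and
     * width (how long power is disturbed). A hit means the target ran
     * past a check it should have failed. */
    for (unsigned offset = 0; offset < 10000; offset += 50) {
        for (unsigned width = 10; width <= 200; width += 10) {
            glitcher_set_offset_ns(offset);
            glitcher_set_width_ns(width);
            glitcher_arm();
            target_reset_and_run();
            if (target_check_success()) {
                printf("glitch hit: offset=%u ns, width=%u ns\n", offset, width);
                return 0;
            }
        }
    }
    puts("no successful glitch in swept range");
    return 1;
}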

Glitching the Fall
While instruction execution is triggered by the leading edge of a signal, there are some operations, such as writing data to a memory location, that can be triggered by the sharp voltage fall on the trailing edge of a wave.

A drop in the voltage supplied to the system can eliminate the sharp drop that triggers operations. In his Black Hat presentation, Giller said these “brown out” glitches can be responsible for data corruption and lost information, among other consequences. This sort of data corruption attack can be valuable when the system under attack is responsible for encryption or authentication. Disrupting the data in one part of the process can weaken the entire process to the point that the protection is ineffective.
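
The classic example of this class is the Bellcore attack on RSA-CRT signatures, described by Boneh, DeMillo, and Lipton in the late 1990s: a single fault in one half of the signing computation lets an attacker factor the modulus with one gcd. The toy C program below reproduces the algebra with deliberately tiny primes; it is a sketch for intuition, not an attack at real key sizes.

#include <stdint.h>
#include <stdio.h>

static uint64_t powmod(uint64_t b, uint64_t e, uint64_t m) {
    uint64_t r = 1; b %= m;
    while (e) {
        if (e & 1) r = (r * b) % m;   /* safe: m^2 fits in 64 bits here */
        b = (b * b) % m;
        e >>= 1;
    }
    return r;
}

static uint64_t gcd64(uint64_t a, uint64_t b) {
    while (b) { uint64_t t = a % b; a = b; b = t; }
    return a;
}

/* Modular inverse via extended Euclid (assumes gcd(a, m) == 1). */
static uint64_t invmod(uint64_t a, uint64_t m) {
    int64_t t = 0, newt = 1, r = (int64_t)m, newr = (int64_t)(a % m);
    while (newr) {
        int64_t q = r / newr, tmp;
        tmp = t - q * newt; t = newt; newt = tmp;
        tmp = r - q * newr; r = newr; newr = tmp;
    }
    return (uint64_t)(t < 0 ? t + (int64_t)m : t);
}

int main(void) {
    const uint64_t p = 10007, q = 10009, n = p * q, e = 65537;
    uint64_t d = invmod(e, (p - 1) * (q - 1));   /* private exponent */
    uint64_t m = 42;                             /* message to sign */

    /* RSA-CRT signing: compute the signature mod p and mod q, recombine. */
    uint64_t sp = powmod(m, d % (p - 1), p);
    uint64_t sq = powmod(m, d % (q - 1), q);
    uint64_t s  = sq + ((invmod(q, p) * ((sp + p - sq % p) % p)) % p) * q;

    /* A glitch corrupts the q-half: flip one bit of sq before recombination. */
    uint64_t sq_bad = sq ^ 1;
    uint64_t s_bad  = sq_bad + ((invmod(q, p) * ((sp + p - sq_bad % p) % p)) % p) * q;

    /* The attacker needs only n, e, m and the faulty signature: s_bad^e
     * still matches m mod p but not mod q, so the gcd exposes p. */
    uint64_t diff = (powmod(s_bad, e, n) + n - m % n) % n;
    printf("correct sig verifies: %s\n", powmod(s, e, n) == m ? "yes" : "no");
    printf("recovered factor: %llu (p = %llu)\n",
           (unsigned long long)gcd64(diff, n), (unsigned long long)p);
    return 0;
}

The standard countermeasure is equally instructive: the device verifies its own signature before releasing it, so a faulty result never leaves the chip.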

Outliers
By now, it should be obvious that there are easier ways to hack most systems. The descriptions given in academic papers and research notes show a process that involves a great deal of research and physical access in order to compromise a single system.

However, researchers Thomas Roth and Josh Datko made it simpler and less expensive at Black Hat 2019, when they presented “Chip.Fail,” research conducted with their partner Dmitry Nedospasov. Not only did they demonstrate their glitching (fault-injection) attacks on IoT processors, they did so using less than $100 of equipment. They released this toolkit and framework at the conference, so researchers can test chips’ vulnerability to these types of attacks.  

Nevertheless, glitching may never replace social engineering as a way into office productivity computers. So far, it is not even a huge factor in compromising embedded control systems in the real world.

Yet cybersecurity professionals should remain aware of its possibilities because academic research can become a real-world attack with the breakthrough of a single dedicated security research team.

Curtis Franklin Jr. is Senior Editor at Dark Reading, where he focuses on product and technology coverage for the publication. He also works on audio and video programming for Dark Reading and contributes to activities at Interop ITX, Black Hat, and INsecurity.

Article source: https://www.darkreading.com/edge/theedge/glitching-the-hardware-attack-that-can-disrupt-secure-software-/b/d-id/1336119?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Surviving Alert Fatigue: 7 Tools and Techniques

Experts discuss why security teams are increasingly overwhelmed with alerts and share tactics for lightening the load.

It’s an all-too-common problem for today’s security teams: alerts stream from a range of (sometimes misconfigured) tools and flood operations centers, forcing analysts to triage which ones deserve attention. Suffice it to say, major problems arise when critical alerts slip through the cracks and lead to a security incident.

“One of the biggest drivers of alert fatigue is the fact that people are unsure or unconfident about the configuration that they have or the assets they have,” says Dr. Richard Gold, head of security engineering at Digital Shadows. “What happens is you end up with a lot of alerts because people don’t understand the nature of the problem, and they don’t have time to.”

Dr. Anton Chuvakin, head of solution strategy at Chronicle Security, takes it a step further: Many businesses are overwhelmed by alerts because they have never needed to handle them.

“I think a lot of organizations, until very recently, still weren’t truly accepting of the fact they have to detect attacks and respond to incidents,” he explains. Now, those that never had a security operations center or security team are adopting threat detection and are underprepared.

The proliferation of security tools is also contributing to the alert fatigue challenge, Chuvakin notes. “Today we have a dramatically wider scope of where we are looking for threats,” he continues. “We have more stuff to monitor, and that leads alerts to increase as well.” The most obvious risk of alert overload, of course, is companies could miss the most damaging attacks.

Security staff tasked with processing an unmanageable number of alerts will ultimately suffer from burnout and poor morale, security experts agree. What’s more, overwhelmed employees may also be likely to simply shut off their tools.

It isn’t the technology’s fault, notes Chris Morales, head of security analytics at Vectra. “We don’t have a detection problem – we have a prioritization problem,” he explains. Any given person in a commercial security environment is tasked with multiple jobs: parsing data, writing scripts, knowing the ins and outs of cloud – and managing a range of tech in their environment.

“The amount of data being pushed through corporate networks today is unlike anything we could have imagined 10 years ago,” says Richard Henderson, head of global threat intelligence at Lastline. Organizations are struggling, and the onslaught of alerts is putting them at risk.

Here, security experts share their thoughts on the drivers and effects of alert fatigue, as well as the tools and techniques businesses can use to mitigate the problem.

Kelly Sheridan is the Staff Editor at Dark Reading, where she focuses on cybersecurity news and analysis. She is a business technology journalist who previously reported for InformationWeek, where she covered Microsoft, and Insurance Technology.

Article source: https://www.darkreading.com/edge/theedge/surviving-alert-fatigue-7-tools-and-techniques/b/d-id/1336128?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

SOC Operations: 6 Vital Lessons & Pitfalls

There is no one road to security operations success, but these guidelines will smooth your path.

Today’s security operations centers (SOCs) face a variety of challenges, ranging from organization and structure to technology and budgets. Third-party SOCs (such as those run by Arctic Wolf and other providers) are responsible for detecting and responding to threats, leaving the organizations that rely on them free to focus on improving internal security operations. Here are six vital lessons about SOC effectiveness that we have learned in our operational journey with customers.

Lesson #1: Locate and Retain High-Quality SOC Talent
Finding good SOC analysts is difficult in the best of times and is particularly challenging in the present growth economy, where talent is scarce. Organizations need smart people to understand the threat surface, interpret security telemetry, and find and analyze threats. The latest artificial intelligence (AI) and machine learning innovations will help these professionals operate more effectively. However, technology alone will never replace smart people who understand a company’s specific environment and threats. Organizations need to implement the right programs to locate, train, and retain those people.

Lesson #2: Improve Your SOC Incrementally
The “big bang” theory of ramping up SOC operations is fraught with risk and has a high probability of failure. Organizations need to take time to analyze what they do well, then build from there. Incremental improvement always wins out over grandiose projects. 

Lesson #3: Coordinate SOC and Network Operations
Integrating your SOC and network operations center (NOC) will greatly improve success across the board. A NOC manages, controls, and monitors networks for things like availability, backups, sufficient bandwidth, and troubleshooting network problems. A SOC provides incident prevention along with detection and response for security threats. The two functions can overlap: an event like a denial-of-service attack might manifest as a network outage but is in fact a security threat. While the two functions can be organizationally discrete, they need to coordinate to achieve an optimal outcome.

Lesson #4: Realistic Goals
It’s critical for organizations to be realistic about what they want to achieve and clear-eyed on how to achieve it. Your first step: Get executive support, and then determine how much the effort will cost. This exercise includes thinking through all of the pieces you’ll need to put in place to establish an effective SOC, including people, processes, and technology. You will also face “build versus buy” decisions that will require a process for evaluating the best approach for your organization’s specific goals.

Lesson #5: Staffing Delusions
Consider the security challenges that your business faces, and then staff at an appropriate level to address those challenges. Referring to your two or three security professionals as “my SOC” is not the optimal solution: a handful of people will struggle to provide the 24x7x365 coverage an effective operation requires. Furthermore, relying on alerts sent to phones during off hours is a recipe for failure when that middle-of-the-night alert beeps while someone is asleep. The “Gartner 2018 Market Guide for Managed Detection and Response Services” suggests that the minimum needed to provide 24x7x365 coverage is eight to 12 analysts. Consider what happens when an incident occurs while your staff is at home celebrating on New Year’s Eve and there is no eyes-on-glass coverage.

Lesson #6: The “AI Cure-All” Fallacy
AI will not solve all of a business’s security problems, and organizations cannot automate their way out of the security monitoring challenge. Maintaining a well-functioning SOC requires finding, training, and retaining experienced personnel who can leverage sophisticated tools with AI to identify the threats that matter by providing feedback from which automation can learn. The major challenge in finding and retaining these professionals is in providing them with a variety of interesting and challenging work.

Related Content:

Check out The Edge, Dark Reading’s new section for features, threat data, and in-depth perspectives. Today’s top story: “Surviving Alert Fatigue: 7 Tools and Techniques.”

Todd Thiemann is responsible for product marketing at Arctic Wolf Networks. He is an information security veteran with over a decade of experience across a range of subjects, including malware detection, SIEM, encryption, and key management.

Article source: https://www.darkreading.com/operations/soc-operations-6-vital-lessons-and-pitfalls-/a/d-id/1336076?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Trend Micro Buys Cloud Conformity to Fight Cloud Competition

The cloud security posture management startup was acquired for a reported $70 million.

Trend Micro has acquired cloud security posture management vendor Cloud Conformity for a reported $70m, furthering its competitive position in the cloud security market.

Cloud Conformity, founded in Australia in 2016, offers a platform that businesses use to maintain security, governance, and compliance in the public cloud. Its technology monitors activity in Amazon Web Services and Microsoft Azure and alerts users to potential issues. The vendor has raised a total of $3.2 million in two rounds of funding, according to Crunchbase.

The company will become part of Trend Micro’s cloud security division, expanding the range of cloud services Trend Micro already provides and building on its strategy of integrating cloud security without disrupting business processes, the company says. All of Cloud Conformity’s employees will join Trend Micro, which is making the startup’s technology immediately available to its customers.

Read more details here and here.

Article source: https://www.darkreading.com/cloud/trend-micro-buys-cloud-conformity-to-fight-cloud-competition/d/d-id/1336129?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Researchers Turn Alexa and Google Home Into Credential Thieves

Eight Amazon Alexa and Google Home apps were approved for official app stores even though their actual purposes were eavesdropping and phishing.

“Alexa, steal my passwords.” It’s not a phrase a user is likely to utter, but security researchers in Germany have shown that it’s possible for malicious apps — Alexa “skills” and Google Home “actions” — to launch phishing attacks on users, forward the compromised credentials to criminals, and do it all in apps approved for use by the voice-assistant giants.

Security Research Labs, a white-hat research organization, developed a total of eight apps – four each for Amazon Alexa and Google Home – that masqueraded as horoscope checkers or a random number generator. The apps triggered malicious actions on hearing action words like “stop,” and continued to operate after users thought they had closed them.

According to the researchers, both Amazon and Google removed the malicious apps when presented with evidence of their capabilities. Each of the companies also said they have adjusted practices and policies to prevent similar apps from being added to their stores in the future.

“At this point, consumers have devices that record audio, and often video, in their pockets and homes. We’re surrounded nearly 24/7 by devices with the capability to eavesdrop. It should be no surprise that such a broad target surface is attractive to attackers,” said Tim Erlin, vice president, product management and strategy at Tripwire, responding to the use of these voice assistants as an attack surface.

Read more here.

Article source: https://www.darkreading.com/cloud/researchers-turn-alexa-and-google-home-into-credential-thieves/d/d-id/1336131?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Mind your own business! CEOs who misuse data could end up in jail

CEOs who lie about misusing consumers’ data could face up to 20 years in jail under a new piece of US legislation proposed last week.

The Mind Your Own Business Act, authored by Senator Ron Wyden, would jail top executives for up to 20 years if their companies were found to have lied about misusing citizens’ information.

The legislation follows a draft version known as the Consumer Data Protection Act, released for consultation on 1 November 2018.

The bill requires companies to submit annual data protection reports confirming that they’ve complied with the regulations, and explaining any shortcomings. This applies to any companies holding data on more than 50m people, or over a million people if they make more than $1bn in revenue.
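
Expressed as code, that coverage test is a simple predicate. This sketch encodes the thresholds as reported here (the bill’s text may define the terms differently) to show who would have to file the annual reports:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Coverage test as described above: more than 50m people's data, or more
 * than 1m people's data combined with over $1bn in annual revenue. */
static bool act_applies(uint64_t people, uint64_t revenue_usd) {
    return people > 50000000ULL ||
           (people > 1000000ULL && revenue_usd > 1000000000ULL);
}

int main(void) {
    printf("%d\n", act_applies(60000000ULL, 0));            /* 1: big data holder */
    printf("%d\n", act_applies(2000000ULL, 2000000000ULL)); /* 1: smaller holder, $2bn revenue */
    printf("%d\n", act_applies(2000000ULL, 500000000ULL));  /* 0: below both bars */
    return 0;
}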

The CEO or chief privacy officer must personally certify that annual report. If they deliberately certify something that isn’t true, then the courts can fine them up to $5m, or a quarter of the largest payment they received from the company across the last three years. They can also face up to 20 years in prison.

Companies would have to describe to consumers what information they were collecting and what they were going to do with it. They would also have to provide a site that enables consumers to opt out of any personal data collection, either through a web form or an application programming interface (API) which would let them do this via a piece of software, like a mobile app.

These APIs would have to be standardised under the Act, presumably making it easier for developers to use them. That’s a measure that could make it easier in theory for developers to set up mass opt-out services targeting different platforms.
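
The bill doesn’t spell out what a standardised opt-out API would look like, but the shape is easy to imagine. Here is a hypothetical client sketch in C using libcurl; the endpoint, path, and JSON payload are invented purely for illustration.

#include <stdio.h>
#include <curl/curl.h>

/* Hypothetical client for a standardised opt-out API. Build with:
 * cc optout.c -lcurl */
int main(void) {
    curl_global_init(CURL_GLOBAL_DEFAULT);
    CURL *curl = curl_easy_init();
    if (!curl) return 1;

    struct curl_slist *hdrs = curl_slist_append(NULL, "Content-Type: application/json");
    curl_easy_setopt(curl, CURLOPT_URL, "https://api.example.com/v1/privacy/opt-out");
    curl_easy_setopt(curl, CURLOPT_HTTPHEADER, hdrs);
    /* Ask the platform to stop collecting personal data for this user. */
    curl_easy_setopt(curl, CURLOPT_POSTFIELDS,
                     "{\"user\":\"u-12345\",\"opt_out_of_collection\":true}");

    CURLcode rc = curl_easy_perform(curl);
    if (rc != CURLE_OK)
        fprintf(stderr, "opt-out request failed: %s\n", curl_easy_strerror(rc));

    curl_slist_free_all(hdrs);
    curl_easy_cleanup(curl);
    curl_global_cleanup();
    return rc == CURLE_OK ? 0 : 1;
}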

A company can make it a condition of its service that users don’t opt out of personal data collection, but only if it offers an alternative paid version that doesn’t monetise people’s data. That paid version can’t cost more than the company would have earned from the user’s personal data. Moreover, under the proposed law, companies must offer privacy-friendly free versions of the service to low-income Americans.

Consumers would have the right to demand details of any data held about them, where the company got it, and what it is being used for.

Many measures in this bill correlate closely with the EU’s General Data Protection Regulation (GDPR), especially the requirement for companies to conduct regular data privacy assessments on high-risk information systems (those containing sensitive information such as political views or sexual orientation). The bill also singles out automated decision-making systems that use AI to make decisions affecting consumers.

The bill, introduced on Thursday, would have to be referred to a committee as its next step. You can track its progress here.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/AriST-U1iaQ/

Samsung Galaxy S10 fingerprint reader beaten by $3 gel protector

The fingerprint reader on Samsung’s flagship S10 and Note10 smartphones can be spoofed with a $3 screen protector.

That’s according to a British woman who claimed that after fitting the screen protector she was able to unlock her S10 using any one of her fingerprints, including ones not enrolled in the phone’s authentication system.

Then she reportedly asked her husband to try the same thing, and his thumbprints worked too, as did the same trick on her sister’s Samsung. Obviously, something was up.

She called Samsung:

The man in customer services took control of the phone remotely and went into all the settings and finally admitted it looked like a security breach.

The company’s initial response:

We’re investigating this internally. We recommend all customers to use Samsung-authorised accessories, specifically designed for Samsung products.

Then, last week in comments to Reuters, Samsung admitted the problem was real and said it would release a software patch:

We are investigating this issue and will be deploying a software patch soon. We encourage any customers with questions or who need support downloading the latest software to contact us directly.

Screen protection

South Korean online bank KakaoBank has reportedly told its customers to stop using the S10 and Note10 fingerprint system until the issue is fixed.

The issue of the S10 and screen protectors was first noticed when the smartphone was launched in February 2019.

Unlike older designs, which use a dedicated sensor, the Qualcomm ultrasonic technology used by Samsung is embedded under the screen. It emits ultrasonic pulses and measures the sound waves reflected back by the ridges of a user’s finger to map the fingerprint.

It was noticed, however, that covering the screen with a protector could in some circumstances create a minute air gap that interferes with these sound waves – hence Samsung’s advice to use its own branded screen protectors, whose special adhesives avoid creating that gap.

What to do

If you own an S10 or Note 10, we’d recommend turning off fingerprint security and using a PIN until the promised patch becomes available.

It’s not clear whether that will arrive as an out-of-band patch or will be part of November’s Android security update.

It’s not the first time the S10’s fingerprint reader has been in the spotlight. In April we reported on an anonymous researcher who appeared to unlock a Samsung S10 using a 3D-printed fingerprint.

But it could be worse – as Naked Security reported in April, the Nokia 9 PureView’s fingerprint reader was fooled by… a chewing gum packet.

All of which tells us, more than ever, that one form of identification might not be enough.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/aBFtLqNUr_M/

Don’t look now, but Pixel 4’s Face Unlock works with eyes closed

Does it matter that Google’s Pixel 4 ‘Face Unlock’ works even if the owner has their eyes closed?

For those who’ve never encountered it, the Pixel’s proprietary Face Unlock works by enrolling a model of the user’s face, which is securely stored on a chip inside the phone.

It’s a rival to Apple’s Face ID, which appeared two years ago in the iPhone X. Google is so confident in the security this technology offers that it even ditched the fingerprint sensor alternative used on older products.

But a BBC reporter has discovered a potential issue – Face Unlock works even when the user’s eyes are closed, for example, when they’re asleep.

Google doesn’t have to confirm this because it’s already on the Pixel 4’s help pages:

Your phone can also be unlocked by someone else if it’s held up to your face, even if your eyes are closed. Keep your phone in a safe place, like your front pocket or handbag.

To spell it out, the risk here is that someone might get hold of a device and unlock it by holding the screen to the face of its sleeping or unconscious owner.

Now you see it

However, according to the BBC, images of the Pixel 4 that leaked before launch included a “require eyes to be open” setting in the setup menu, which had disappeared by the time the product was sent out for review.

It seems Google thought about adding this requirement but decided not to for reasons that aren’t clear.

It’s the sort of problem that might not be a problem at all, depending on your point of view.

Fix promised

Google told ZDNet that it plans to fix the issue discovered by the BBC within months, without being more specific. In the meantime, the company recommends using a PIN code or an unlock pattern.

Or, to put it another way, don’t use Face Unlock until the fix arrives if you’re worried about it being abused in limited circumstances.

But why have it at all, then? As well as keeping up with Apple, Google likely sees facial recognition as a potential second factor for authenticating transactions – something it would like people to use their phones to do.

Coincidentally, Samsung is having problems this week with its embedded fingerprint reader, which it turns out can be bypassed using a simple gel screen protector.

Biometric authentication is turning out to be a rocky road where big companies find themselves regularly tripping over small stones.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/AneJMxh1HJk/

Iran? More like Ivan: Brit and US spies say they can see through Turla hacking group’s facade

British and US spies have blamed Russian hacker group Turla for masquerading as Iranian hackers to launch recent attacks mostly on government systems in the Middle East.

The joint advisory comes from the UK’s National Cyber Security Centre (NCSC), part of GCHQ, and the US’s National Security Agency (NSA).

It warned that Turla adapted previously used Iranian tools, Neuron and Nautilus, and stole Iranian hackers’ infrastructure via a compromised account. The group then attempted to access government systems, military organisations and universities in 35 countries across the Middle East.

Paul Chichester, NCSC’s Director of Operations, said: “Identifying those responsible for attacks can be very difficult, but the weight of evidence points towards the Turla group being behind this campaign.

“We want to send a clear message that even when cyber actors seek to mask their identity, our capabilities will ultimately identify them.”

The hackers pinched the Iranian tools, then tested them against organisations they had already compromised with their own Snake toolkit before going after fresh victims. Using the Snake network of compromised machines, they scanned systems for a vulnerable ASPX shell.

Once one was found, they sent commands to the ASPX shell in encrypted HTTP cookie values, which would have required knowledge of the cryptographic keys.

Turla, also known as Waterbug and VENOMOUS BEAR, has a long and grimy history of espionage-related hacks dating back to accessing US military systems in 2014.

The group was also blamed for breaking into the Czech Republic’s Foreign Office email systems in 2017 and is assumed to be backed and controlled by the Russian government.

Costin Raiu, director of the Global Research and Analysis Team at Kaspersky, told The Register: “While we haven’t seen ourselves the incidents described in the NCSC advisory, there is no reason to doubt them, although multiple explanations could be possible. For instance, other Turla tools, such as LightNeuron, an Exchange backdoor, are quite similar to the ones from the advisory, yet way more sophisticated, relying on steganography to establish covert C&C mechanisms. We have not seen these tools in use by any other actors besides Turla.” ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/10/21/british_spies_finger_russia_for_faking_iranian_hack_profiles/