Russia’s infamous Turla hacking crew looks to be gearing up for a new offensive, according to researchers with ESET.
The European security firm said that the fingerprints of the state-backed crew have been found all over previously unseen malware samples collected from compromised government websites in Armenia.
Though only recently discovered, the attack may have been active for some time and appears to be highly focused. The two compromised government websites and another pair of poisoned civilian websites have been active since early 2019.
Part of the reason the attack may have gone unnoticed for so long is the discerning nature of the infections. In the watering-hole attacks, the compromised sites carefully collect information on each user and only attempt to place the malware on the systems of high-value users like government officials.
“If the visitor is deemed interesting, the C&C [command-and-control] server replies with a piece of JavaScript code that creates an IFrame,” explained ESET researcher Matthieu Faou.
“Data from ESET telemetry suggests that, for this campaign, only a very limited number of visitors were considered interesting by Turla’s operators.”
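The selective delivery ESET describes can be illustrated with a short sketch. This is hypothetical logic, not Turla's actual code: the IP prefixes, keywords, and URL below are invented placeholders showing how a watering-hole C&C server might fingerprint each visitor and serve iframe-injecting JavaScript only to the few deemed interesting.

```python
# Hypothetical illustration of selective watering-hole targeting.
# None of these values come from the real campaign.
INTERESTING_IP_PREFIXES = ("203.0.113.",)      # e.g. ranges tied to target orgs
INTERESTING_KEYWORDS = ("gov", "ministry")     # substrings in reverse-DNS names

def c2_response(fingerprint: dict) -> str:
    """Return iframe-injecting JavaScript for interesting visitors, else nothing."""
    ip = fingerprint.get("ip", "")
    host = fingerprint.get("reverse_dns", "").lower()
    interesting = ip.startswith(INTERESTING_IP_PREFIXES) or any(
        kw in host for kw in INTERESTING_KEYWORDS
    )
    if not interesting:
        return ""  # most visitors get nothing, which keeps the campaign quiet
    # Second stage: JavaScript that creates a hidden IFrame pointing at the
    # fake Flash update page (placeholder URL).
    return (
        "var f = document.createElement('iframe');"
        "f.style.display = 'none';"
        "f.src = 'https://example.invalid/flash-update';"
        "document.body.appendChild(f);"
    )
```

Serving nothing to the vast majority of visitors is what lets such campaigns run for a year or more without detection, as only the handful of targeted machines ever see malicious content.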
Once the target is singled out, the infection attempt itself is rather unremarkable. The trojan is delivered as a fake Flash Player update, a common but tried-and-true method of getting malware up and running on targeted PCs.
“A fake Adobe Flash update pop-up window warning to the user is displayed in order to trick them into downloading a malicious Flash installer,” said Faou.
“The compromise attempt relies solely on this social engineering trick.”
While a small, localised attack against an Eastern European government isn’t particularly earth-shattering news, given Turla’s history and reputation, it could be a sign of larger operations to come. The group has in the past targeted bigger fish, such as the US and Czech Republic, with similar operations.
The Turla crew, believed to be connected to Russian military and intelligence operations, has been active for more than a decade, carrying out targeted malware attacks and network intrusions.
The group is particularly well-versed in social engineering and manipulation, relying mostly on tricks such as the mentioned watering-hole attacks and fake installers, rather than complex technical operations.
Indeed, last year the crew was found trying to throw investigators off its trail by disguising one of its intelligence operations as an Iranian hacking campaign. ®
Threat intelligence needs the problem solvers, the curious ones, the mission seekers, the analytical minds, the defenders, and the fierce — whatever their gender.
“If you could go back in time, what advice would you pass on to your younger self?”
When a friend recently asked me this, for a brief moment the question gave me pause. It prompted me to reflect back on my first steps fighting cybercrime and a feeling that followed me at the start of my career and morphed over time. I remember thinking: Will I fit in?
Straight out of college, I took a job matching “technical wizards” to their dream jobs, a process that jump-started my intrigue with technology and ultimately led me to my own dream: becoming a threat intelligence analyst.
I had no prior background in cybersecurity — or technology, for that matter. My only experience with information technology was losing my college essays to computer or user error. But I was on a mission: I experimented with Web development, tinkered with database administration, and, fast forward a couple of years, I had a master’s degree in IT and landed a job with a security company on its vulnerability database team.
As I was about to embark on my first steps in a male-dominated industry, I also realized that in taking this job I would be the only woman on the team.
Would I fit in?
I realize today the palpable impact that lack of female representation in the threat intelligence community had on me 20 years ago — it hardwired me into questioning myself. At the time, I could count on one hand the number of women in the field; the industry’s existing homogeneity had seemingly introduced a sliver of doubt to my sense of belonging.
This is why gender representation matters. While I’ve been very fortunate to have incredible colleagues and be part of teams throughout my career that have supported me and empowered me, I didn’t really look like anyone in my environment.
Since then, the industry has made great strides to become more open, more inclusive, and more diverse. Just within IBM, formidable female leaders are leading in security and threat intelligence. Twenty years ago, the absence of female figures in the industry to relate to, model after, or look up to would confine one’s breadth of aspirations — what career paths she could follow and how far she could go. Today, women represent 24% of the security workforce, which is light years ahead of where we started.
There are visibly more women in the field, a greater number of women in leadership, and various industry initiatives committed to attracting, empowering, and mentoring talent to enter this fascinating field. For example, IBM Security founded a program called #CyberDay4Girls, where the company partners with local middle schools and hosts workshops to get middle-school girls interested in cybersecurity — a program that has expanded into nine countries.
While the future is looking brighter, there is still a ways to go, which is why representation matters now more than ever. For the industry to become more diverse and more inclusive, it’s essential to inspire our future cyber fighters and tomorrow’s potential leaders. In the presence of diverse references and female trailblazers, when young girls face the inevitable question, “What do I want to be when I grow up?” they can more easily envision themselves in this field and realize this too can be their calling. This too might be their dream.
Now in 2020, I reflect back on that feeling, that uncertainty my younger self had about fitting in. I now know I already had what I needed to fit in; I just didn’t know it then. The traits that led me to this field were the ones that helped me thrive and excel in my career. They were the nontechnical skills — the attributes that fueled my fire, passion, and persistence to break into this industry when I was starting out.
Threat intelligence needs the problem solvers, the curious ones, the mission seekers, the analytical minds, the defenders, and the fierce — it’s not binary.
So, to answer my friend’s question, if I could go back in time I actually wouldn’t offer my younger self advice. I’d let her experience it all — the good, the great, the bad, the difficult. Because that’s the way you learn, evolve, and grow. However, as she prepared to take that first courageous step into threat intelligence and question whether she’d fit in, I’d whisper in her ear “Why wouldn’t you?”
Michelle Alvarez is the manager of the Threat Intelligence Production Team with IBM X-Force Incident Response and Intelligence Services (IRIS). She brings nearly 20 years of industry experience to her role, specializing in threat research and communication.
Microsoft has bragged of downing a nine million-strong Russian botnet responsible for vast quantities of email spam.
The Necurs botnet, responsible over the years for quite a considerable volume of spam – as well as being hired out to crims pushing malware payloads such as the infamous Locky ransomware and Dridex malware – was downed by Microsoft and its industry chums following a US court order allowing the private sector companies to go in hard and heavy on the botnet.
Redmond’s Tom Burt said in a blog post: “Necurs is believed to be operated by criminals based in Russia and has also been used for a wide range of crimes including pump-and-dump stock scams, fake pharmaceutical spam email and ‘Russian dating’ scams.”
Microsoft researchers figured out how an algorithm that generated new, unique domains for Necurs’ infrastructure operated, and were able to correctly predict six million domain names the algorithm would generate over the following 25 months, the company said. These domains were then reported to registrars so they could be promptly blocked.
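The technique Microsoft used relies on a property of every domain generation algorithm (DGA): given the same seed and date, defender and botnet compute identical domains. Below is a minimal, invented DGA sketch illustrating that idea; it is not Necurs' real algorithm, and the seed and TLD are placeholders.

```python
# Minimal DGA sketch (hypothetical, not Necurs' actual algorithm).
import hashlib
from datetime import date, timedelta

def dga_domains(seed: str, day: date, count: int = 5) -> list:
    """Deterministically derive `count` domains for a given day."""
    domains = []
    for i in range(count):
        data = f"{seed}-{day.isoformat()}-{i}".encode()
        digest = hashlib.md5(data).hexdigest()
        # Map the hex digest to letters to form a plausible-looking label.
        label = "".join(chr(ord("a") + int(c, 16) % 26) for c in digest[:12])
        domains.append(label + ".example")
    return domains

# A defender who has reverse-engineered the algorithm can enumerate the
# domains the botnet will use on future days and block them in advance:
upcoming = [d for n in range(3)
            for d in dga_domains("botnet-seed", date(2020, 3, 10) + timedelta(n))]
```

Because the output is fully determined by the seed and the date, pre-computing months of future domains, as Microsoft did, is a matter of running the recovered algorithm forward in time.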
“By taking control of existing websites and inhibiting the ability to register new ones, we have significantly disrupted the botnet,” beamed Burt. “Interestingly, it seems the criminals behind Necurs sell or rent access to the infected computer devices to other cybercriminals as part of a botnet-for-hire service.”
He added: “For this disruption, we are working with ISPs, domain registries, government CERTs and law enforcement in Mexico, Colombia, Taiwan, India, Japan, France, Spain, Poland and Romania, among others.”
A rapid transition to remote work puts pressure on security teams to understand and address a wave of potential security risks.
Many companies, concerned for employees’ health amid the rapid spread of coronavirus, have begun encouraging them to work from home. The shift, rightly done to protect people from infection, could also potentially expose organizations to cyberattack if precautions aren’t taken.
Businesses ranging from tech giants to startups are clearing their offices in an effort to stop the spread of disease without interrupting day-to-day operations. Microsoft, Alphabet, Facebook, and Apple have all urged employees to work from home if they can. Several tech firms, including Google and Cisco, have begun to offer their collaboration tools for free as companies around the world quickly implement work-from-home policies and conferences are cancelled.
“The unfortunate spread of COVID-19 is forcing many employees around the world to work remotely,” says Bret Hartman, vice president and CTO of Cisco Security Business Group. “While necessary, this new level of workplace flexibility is putting a sudden strain on IT and security teams, specifically around the capacity of existing protections in place given the surge in demand.” More than 30% of global enterprises have asked Cisco to help scale remote work, he says, and the company is seeing spikes in time spent in Webex across Japan, Singapore, and South Korea.
Security execs now have the issue at top of mind as companies move in the same direction, says Craig LaCava, global executive services director at Optiv Security. “Most CISOs are thinking about it, are being diverted to calls with executives briefing them about it, and just getting ready for worst-case scenarios,” he says. The problem, LaCava adds, is not everyone has the right devices, processes, and infrastructure in place to support a fully remote workforce.
Remote work fundamentally changes the dynamic, especially for teams accustomed to working side-by-side every day. People forced to change their behaviors may experience loss in productivity, communication challenges, and other unexpected roadblocks as they shift from corporate offices to home offices. An unexpected environmental change can drive security risk.
A Rapidly Growing Attack Surface

Darren Murph, head of remote at GitLab, calls this trend “crisis-driven work-from-home,” which he says is “vastly different” from an intentional approach to remote work. Employees are now being thrust into remote work without preparation, warnings, or documented processes to guide them. “Not everyone is going to adapt to remote as second nature,” he explains.
Experts agree the attack surface will grow as more organizations encourage work-from-home policies. As workers start to connect from living rooms and coffee shops, they could be using personal smartphones, laptops, and tablets to send business data over unsecured networks. Those who prefer their home PCs might transfer critical data to them without considering the risk; those who visit other workspaces for a change in scenery may leave their devices unattended.
“More homes are becoming connected, and consumer IoT devices such as lightbulbs, refrigerators, Peloton bikes, and even Roombas are created without security in mind,” explains Armis CISO Curtis Simpson. “Putting corporate assets on the same Wi-Fi networks as these devices creates a new entry point for attackers to reach corporate targets.” Companies, which can’t control their employees’ home networks, are unprepared for these external challenges.
More than half (52%) of respondents to the “Cisco 2020 CISO Benchmark Report” said mobile devices are “very” or “extremely” challenging to defend. A Duo Security report found 45% of requests to access protected apps come from outside the business. “Organizations with increasingly remote workforces must support different types of users, including contractors, third-party vendors, and remote workers who connect to their corporate network,” says Cisco’s Hartman.
As employees bring corporate devices onto unsecured networks, they also face an increase in phishing attacks as cybercriminals bait them with coronavirus-related malware. Malware families, including Emotet and multiple RAT variants, are being sent with virus-themed lures.
What Security Teams Can Expect

A key challenge for IT and security teams is providing and protecting devices for employees to take home. Drex DeFord, strategic executive for CI Security and former CIO for Scripps Health and Seattle Children’s Hospital, strongly encourages taking the time to ensure devices are properly configured. “In a crisis we have a tendency to take shortcuts,” he says. Security pros who rush to get devices set up and deployed “may lay land mines [they] may step on later.” It’s often simple misconfigurations that accidentally leave data exposed on the Internet, he adds.
“The big message for senior healthcare executives, and executives in general, is just to watch your team closely, and when it comes to IT, everything is connected to everything, including all your partners and third-party vendors,” DeFord says.
Infosec teams can expect additional challenges when employees neglect office habits outside of the workplace, says Mark Loveless, senior security engineer with GitLab, which has a remote workforce. Security basics, like using a locking screensaver or not writing down passwords, are “muscle memory” at work but may not feel as important when employees get home.
“At home there is a tendency to let one’s guard down as people feel safer in their own homes, so any bad computer security habits from home might translate into insecure actions with work tasks,” Loveless explains. “The biggest challenge is to remind and positively reinforce those good security habits while at home.” Most bad habits and the problems they introduce at home are not major, he notes, but a lot of them can add up and expand the attack surface.
Employees working from home may not have the same firewalls, network-based intrusion detection, and other office defenses they have at work, Loveless adds. Security teams can expect they may access risky websites from their work devices, adding more attack vectors.
CISOs should assume identities will be targeted at a higher rate than usual by attackers who know their activities will be hidden in a spike of remote traffic, Armis’ Simpson adds. Employees may also lose their credentials or accidentally share them on public Wi-Fi. If an attacker has them and logs into a business app, it will be difficult for security teams to determine inappropriate access.
“If an office is shut and there’s a state of emergency, what’s normal is now out the window … the SIEM might be seeing all sorts of things,” Optiv Security’s LaCava says. “How do I tell what’s normal and what’s not when nothing is normal?”
Steps You Can Take Right Now

GitLab’s Murph and Loveless both agree documentation is critical. “It’s essential to have a single source of truth,” Murph explains. A distributed security team will spend their days implementing access requests and addressing alerts. If they don’t have access to the same documentation on how they should address a situation, there’s no guarantee the organization is secure. Murph also recommends a public security channel where remote infosec employees can communicate live.
“We document everything,” Loveless says. GitLab’s company handbook is public, as are its security policies, and it encourages active updates to improve security and productivity. Loveless also advises security teams to set up training materials designed for security and remote workers so employees know what to do and what to expect if they experience a security incident. If they do, employees should know to immediately share any security threats and concerns.
“Create a structure for people to report when things go wrong,” CI Security’s DeFord advises.
If your organization doesn’t already use multifactor authentication (MFA), now is the time to start, Simpson says. MFA should be enforced for privileged users accessing sensitive Internet-facing business services, including HR platforms, code repositories, remote access interfaces and solutions, and Internet- and software-as-a-service admin interfaces. Those who don’t already use MFA should prioritize its implementation among the highest risk users, not deploy for everyone at once.
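The second factor in most MFA deployments is a time-based one-time password (TOTP, RFC 6238). The standard-library sketch below shows the mechanics: a shared secret plus the current time window yields a short code that an attacker with a stolen password alone cannot produce. This is an illustration only; production systems should use a vetted MFA provider or library rather than hand-rolled code.

```python
# TOTP (RFC 6238) sketch using only the Python standard library.
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, at=None, digits=6, step=30):
    """Compute the time-based one-time password for a base32-encoded secret."""
    key = base64.b32decode(secret_b32)
    counter = int((time.time() if at is None else at) // step)
    msg = struct.pack(">Q", counter)                     # 8-byte big-endian counter
    mac = hmac.new(key, msg, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                              # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

Because the code changes every 30 seconds and derives from a secret never sent over the wire, credentials phished or leaked on public Wi-Fi are not enough on their own to log in to an MFA-protected service.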
Behavioral analytics tools for detecting suspicious activity should be optimized for admins and those who handle critical data. Organizations may also want to consider requiring remote staff to access legacy apps and services through a virtual desktop environment. Simpson advises testing the virtual desktop environment to ensure the user experience is as needed.
Businesses new to remote work should strategize how they will communicate, whether about security or any other topic. “Technology aside, it’s the people elements that’s really important,” says Adam Holtby, senior analyst for workplace mobility at Omdia. This demands a conscious effort for managers, who will need to ensure communications channels are in place for remote employees to connect. “Make sure people are still social, still in touch with one another,” he adds.
Kelly Sheridan is the Staff Editor at Dark Reading, where she focuses on cybersecurity news and analysis. She is a business technology journalist who previously reported for InformationWeek, where she covered Microsoft, and Insurance Technology.
A patch for the flaw is not yet available, but there are no known exploits — so far.
Among the more critical vulnerabilities that Microsoft disclosed yesterday was one that ironically was not included in its scheduled Patch Tuesday update and for which a patch is still not available.
The vulnerability exists in Microsoft’s Server Message Block (SMB) protocol (SMBv3) and has prompted some concern about threat actors potentially using it to launch “wormable” exploits of the WannaCry variety.
The flaw is remotely executable. It allows attackers to gain complete control of vulnerable systems and execute arbitrary code on them within the context of the application, according to Fortinet, one of those that warned of the issue Tuesday.
A Microsoft advisory described the vulnerability as being of critical severity and impacting multiple versions of Windows 10 and Windows Server. “To exploit the vulnerability against an SMB Client, an unauthenticated attacker would need to configure a malicious SMBv3 Server and convince a user to connect to it,” Microsoft said.
Since no patch is currently available for the flaw, Microsoft is recommending organizations disable SMBv3 compression so unauthenticated attackers are prevented from exploiting the vulnerability. However, that particular workaround does not protect SMB clients against exploitation. For that Microsoft is recommending organizations block TCP port 445 at the enterprise firewall.
“Blocking this port at the network perimeter firewall will help protect systems that are behind that firewall from attempts to exploit this vulnerability,” the company said, though they would still remain vulnerable to attacks from inside the perimeter.
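Organizations can quickly audit whether a host exposes TCP port 445 with a simple connectivity check; a sketch follows. This probes only reachability, not the vulnerability itself, and should be run only against hosts you are authorized to test.

```python
# Quick check for whether a host exposes a TCP port (445 = SMB by default).
import socket

def port_open(host, port=445, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

Running this from outside the perimeter against externally facing addresses verifies whether the firewall rule Microsoft recommends is actually in effect.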
Microsoft is urging all organizations to install updates for the vulnerability as soon as possible after they become available, even those organizations that have implemented the recommended workarounds.
No exploit for the vulnerability is known to be currently available. Even so, organizations with exposed SMB services — typically port 445 — are at immediate risk, says Jonathan Knudsen, senior security strategist at Synopsys.
The SMB protocol allows Windows systems to share files and printers, for example. Organizations often leave the service enabled on Internet-connected systems, giving attackers a potential entryway to their networks. In recent years, attackers have used exploits like the NSA-developed EternalBlue to spread malware from one vulnerable system to the next in a very effective fashion.
“To mitigate this risk, they should either disable the service altogether or follow Microsoft’s advice to disable compression until a fix is available,” Knudsen says. “Client computers will be vulnerable until a fix is available, so concerned organizations should curtail or discontinue their use of SMB until that point.”
Unexpected Disclosure

Microsoft declined to comment to Dark Reading on why the vulnerability was not disclosed with all the other bugs in the Patch Tuesday update or to provide any other details besides what’s contained in the security advisory.
Some, though, suggest Microsoft might have been forced to issue the advisory after a couple of security vendors — Cisco Talos and Fortinet — inadvertently disclosed details of the flaw this week. According to Duo Security, which is also part of Cisco, Microsoft shares information about its security updates with antivirus companies, hardware vendors, and other trusted third parties.
It’s possible that Cisco Talos and Fortinet had information about the SMBv3 issue and released it thinking it would be part of the Patch Tuesday release, the vendor said in a blog. “While Cisco Talos and Fortinet have updated their advisories to remove references to the vulnerability, enough people saw the descriptions,” Duo said. According to Duo, the two vendors identified the vulnerability as CVE-2020-0796 though Microsoft itself did not refer to a CVE identifier in its security advisory.
A Fortinet brief described the vulnerability as a buffer overflow issue in SMB server. “The vulnerability is due to an error when the vulnerable software handles a maliciously crafted compressed data packet,” the security vendor said in urging organizations to apply Microsoft’s update as soon as it becomes available.
“Ideally, a coordinated disclosure timeline would have researchers disclosing the vulnerability to the vendor, the vendor creating and publishing a fix, and then a coordinated public disclosure of the vulnerability,” Knudsen says. “For whatever reason, that process appears to have gone awry in this case.”
Thomas Hatch, CTO and co-founder at SaltStack, says news of the latest flaw highlights the need for organizations to properly secure SMB services. “SMB, like many such services, should never be exposed to the outside Internet. This is typically how these types of vulnerabilities get exploited,” he says.
Also, given the prevalence of SMB, if an exploit is made public, it could prove to be a large issue for companies to deal with, cautions Charles Ragland, security engineer at Digital Shadows. In addition to Microsoft’s recommended actions, organizations should follow security best practices.
“Disable unnecessary services, block ports at the firewall, and ensure that host based measures are in place to prevent users from accessing/modifying security controls,” Ragland says.
Jai Vijayan is a seasoned technology reporter with over 20 years of experience in IT trade journalism. He was most recently a Senior Editor at Computerworld, where he covered information security and data privacy issues for the publication.
To get back up and running quickly, and because it’s cheaper, city and county governments often pay the ransom, especially if insurance companies are footing the bill. The result: More ransomware.
Small and local governments have not only continued to pay ransoms to the criminals behind ransomware, but they have been doing so at an accelerating pace, according to a new report by consulting firm Deloitte.
In 2019, more than 163 ransomware attacks targeted local and county governments, with at least $1.8 million paid to the cybercriminals behind the attacks and tens of millions of dollars in recovery costs, according to data compiled by the Deloitte Center for Government Insight. In 2018, there were only 55 publicly reported attacks and less than $60,000 in ransom. In fact, local governments are seeing an increasing number of attacks at the same time attackers are also demanding higher ransoms — an average of 10 times higher than what they demand from private-sector companies.
Three unique facets of local governments are likely driving the increase in ransomware cases: The organizations tend to have insurance, they leave gaps in their networks and system security, and they need to maintain critical services. The result is a feedback loop, says Srini Subramanian, state and local government sector leader for Deloitte.
“The more they are paying out, the more money criminals are demanding,” he says. “The criminals like targeting governments because they pay. And cyber insurance is paying because it is the fastest way to recovery, and it is likely the most cost-effective way as well.”
Local governments became a favored target of ransomware in 2019. In August, local and county government organizations in Texas were disrupted by destructive attacks all at nearly the same time and with a variety of consequences — some towns lost the ability to accept payments, while others had emergency services disrupted. Major cities, such as Baltimore and Atlanta, suffered attacks as well.
Just this week, Durham, NC, acknowledged fighting a ransomware attack that compromised a system after employees clicked on phishing e-mails. The attack appears to have affected 1,000 systems at the county government offices; the IT staff reportedly plans to reimage the systems.
Because local governments have tight budgets and lack the ability to attract cybersecurity professionals, they are an easy target, says Cesar Cerrudo, chief technology officer of cybersecurity services provider IOActive.
“It’s easier mostly because they have poor backup practices, in large part due to the lack of budgets and skills,” he says. “Local governments also need the systems for normal operations, and when they can’t restore from a backup, then they only have two options: continue with a nonfunctioning system or pay the ransom.”
The critical nature of many government systems means that failure to recover quickly can result in significant costs. The city of Baltimore, for example, decided not to pay a ransom of $76,000. It was the right moral choice but one with a significant cost, says Deloitte’s Subramanian. Recovering from the incident cost the city more than $18 million.
Both private- and public-sector organizations should improve their system architectures to make fast recovery more likely, educate the workforce to improve cyber hygiene, and practice response drills and what-if scenarios to ensure proper insurance coverage, he says.
“You need to be able to go to your business leaders and with confidence say, ‘We will be able to restore within 24 hours, or 36 hours, or 48 hours,'” Subramanian says. “When people are able to have a response plan and test it, then we can stop paying.”
As the Durham, NC, incident has shown, phishing remains a significant vector for ransomware, making employee security education important. In 2019, 56% of public-sector organizations saw an increase in phishing attacks that included a malicious link or attachment, according to security firm Mimecast.
The federal government could help local agencies with cybersecurity education, best practices, and incident response expertise, says IOActive’s Cerrudo.
“The private sector has been investing in cybersecurity for a long time and continues to mature, while local governments haven’t invested much,” he says. “Knowing this, cybercriminals are turning their weapons and targeting local governments because they are easier and juicier targets.”
While local governments that pay a ransom can recover fairly quickly, the model is not sustainable. Eventually, serial local-government victims will no longer be acceptable risks for insurance companies, Subramanian says.
“It is going to happen sooner or later, and when it does, it remains to be seen if they are insurable after the second time or third time that they pay ransom,” he says. “The question is that, as the guardians of taxpayer money, what should the governments be doing today?”
Veteran technology journalist of more than 20 years. Former research engineer. Written for more than two dozen publications, including CNET News.com, Dark Reading, MIT’s Technology Review, Popular Science, and Wired News. Five awards for journalism.
The federal commission outlined more than 60 recommendations to remedy major security problems.
A new report released today from the federal Cyberspace Solarium Commission opens with a dire warning: “Our country is at risk, not only from a catastrophic cyberattack but from millions of daily intrusions disrupting everything from financial transactions to the inner workings of our electoral system.”
The report from the congressional commission chaired by Sen. Angus King and Rep. Mike Gallagher – based on a yearlong study – states that the country is “dangerously insecure in cyber.”
For the US public sector, the major threats are attacks on elections and other democratic institutions, espionage against both the military and its suppliers, targeting civilian agencies for espionage, and the loss of US leadership in key technology R&D, according to the commission.
Primary threats against the US private sector are cybercrime and malware, intellectual property theft, and risks to critical infrastructure. To protect against both public and private threats, the report proposes a three-level defense in depth encompassing six pillars of action.
The three layers will be familiar to those who know broad military strategy: Shape behavior, deny benefits, and impose costs. To put it in very simple terms, the US strategy should be to make it more difficult and less profitable to attack its resources.
The tactics to enable those strategies are divided into six pillars: Reform the US Government’s Structure and Organization for Cyberspace, Strengthen Norms and Non-military Tools, Promote National Resilience, Reshape the Cyber Ecosystem toward Greater Security, Operationalize Cybersecurity Collaboration with the Private Sector, and Preserve and Employ the Military Instrument of Power. Within each of the pillars are very specific recommendations for legislation and action, more than 60 in all, including 48 recommendations for legislation.
Dark Reading’s Quick Hits delivers a brief synopsis and summary of the significance of breaking news events. For more information from the original source of the news item, please follow the link provided in this article.
Microsoft fixed bugs across a range of products on March’s Patch Tuesday, releasing patches for 115 distinct CVEs, with 26 rated critical.
All of the critical bugs related to remote code execution (RCE), and all of them stemmed from flaws in memory management.
The critical bug that cropped up in the most CVEs was in ChakraCore, the scripting engine that handles just-in-time compilation for Microsoft’s browsers. It’s a bug in the scripting engine’s object memory management that could corrupt memory in a way that lets an attacker execute arbitrary code in the context of the current user.
An attacker might exploit this bug by persuading the victim to visit a website, which could be a third-party site containing user-generated content like a blog comment or forum post. The attacker could also send them an ActiveX control in an Office document that uses the scripting engine. These bugs affected ChakraCore across 12 CVEs, which between them impacted Microsoft Edge and IE 11.
Microsoft detailed a similar object memory handling bug in Edge itself (CVE-2020-0816), along with four other similar CVEs in various areas of Internet Explorer 11 that included a bug in its VBScript engine.
The company also reported critical bugs in several versions of Windows. A flaw in the Windows Graphics Device Interface lets an attacker take control of the system with full user rights, and a memory corruption bug in Windows Media Foundation – a COM-based multimedia framework and infrastructure platform for digital media in Windows – is exploitable via a malicious document or web page. The latter bug affects versions of Windows from 1607 through to Windows Server 2019 and Server Core.
Another object memory management bug (CVE-2020-0852), this time in Microsoft Word, is exploitable via a malicious Word file, or by having the victim visit a website. It allows the attacker to run code as the user, and it’s also exploitable via the Outlook preview pane, Microsoft warned.
A final critical flaw (CVE-2020-0905) affects users of Dynamics NAV from 2013 up, along with Dynamics 365 BC On-Premise and Business Central 2019. This one allows attackers to execute code on the victim’s server by connecting a malicious Dynamics Business Central client.
Critical unpatched bug
One thing that wasn’t fixed in the collection of patches was a critical bug in Microsoft SMB servers that is triggered by a maliciously crafted data packet. News of the flaw appeared on some vendor pages on Tuesday, with the ID CVE-2020-0796, before being swiftly removed.
Microsoft has since issued an advisory for this flaw that says it affects Windows 10 release 1909 and Windows Server Core. There is no patch yet, but admins can disable SMBv3 compression and block port 445 on enterprise firewalls, Microsoft said.
Latest Naked Security podcast
LISTEN NOW
Click-and-drag on the soundwaves below to skip to any point in the podcast. You can also listen directly on Soundcloud.
We’re not sure quite how dangerous this problem is likely to be in real life, but it has the most piratical name for a bug that we’ve seen in quite some time, me hearties.
TRRespass is how it’s known (rrrroll those Rs if you can!) – or plain old CVE-2020-10255 to the landlubber types amongst us.
Trespass is the legal name for the offence of going onto or into someone else’s property when you aren’t supposed to.
And TRR is short for Target Row Refresh, a high-level term used to describe a series of hardware protections that the makers of memory chips (RAM) have been using in recent years to protect against rowhammering.
So TRRespass is a series of cybersecurity tricks involving rowhammering to fiddle with data in RAM that you’re not supposed to, despite the presence of low-level protections that are supposed to keep you out.
Rowhammering is a dramatically but aptly named problem whereby RAM storage cells – usually constructed as a grid of minuscule electrical capacitors in a silicon chip – are so tiny these days that they can be influenced by their neighbours or near neighbours.
It’s a bit like writing the address on an envelope in which you’ve sealed a letter – a ghostly impression of the words in the address is impinged onto the paper inside the envelope.
With a bit of care, you might figure out a way to write on the envelope in such a way that you alter the appearance of parts of the letter inside, making it hard to read, or even permanently altering critical parts (obscuring the decimal points in a list of numbers, for example).
The difference with rowhammering, however, is that you don’t need to write onto the envelope to impinge on the letter within – just reading it over and over again is enough.
In a rowhammering attack, then, the idea is to be able to modify RAM that you aren’t supposed to access at all (so you are writing to it, albeit in a somewhat haphazard way), merely by reading from RAM that you are allowed to look at, which means that write-protection alone isn’t enough to prevent the attack.
One row at a time
To avoid the otherwise enormous number of individual control connections that would be needed, most RAM chips don’t let you read just one bit at a time.
Instead, the cells storing the individual bits are arranged in a series of rows that can only be read out one full row at a time.
Imagine a four-by-four grid of cells, with columns labelled A to D and rows numbered 0 to 3. To read cell C3, for example, you would tell the row-selection chip to apply power along row wire 3, which would discharge the capacitors A3, B3, C3 and D3 down column wires A, B, C and D, allowing their values to be determined. (Bits without any charge will read out as 0; bits that were storing a charge as 1.)
You’ll therefore get the value of four bits, even if you only need to know one of them.
Incidentally, reading out a row essentially wipes its value by discharging it, so immediately after any read, the row is refreshed by saving the extracted data back into it, where it’s ready to be accessed again.
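The read-then-restore behaviour described above can be sketched in a few lines of code. This is a purely illustrative toy model (the `DramBank` class and its layout are invented for this sketch, not real DRAM controller logic): reads work on whole rows, a read wipes the row, and the controller immediately writes the data back.

```python
# Toy model of DRAM row readout: reading a row discharges its cells
# (a destructive read), so the controller immediately restores the data.
# Illustrative only - real silicon does not work like a Python list.

class DramBank:
    def __init__(self, rows, cols):
        # Each cell holds 0 or 1; a full row is the smallest readable unit.
        self.cells = [[0] * cols for _ in range(rows)]

    def write_row(self, row, bits):
        self.cells[row] = list(bits)

    def read_row(self, row):
        # Discharging the capacitors yields the stored bits...
        bits = self.cells[row]
        # ...and wipes the row in the process.
        self.cells[row] = [0] * len(bits)
        # The controller refreshes the row straight away, so the data
        # survives and the row is ready to be read again.
        self.write_row(row, bits)
        return bits

bank = DramBank(rows=4, cols=4)
bank.write_row(3, [1, 0, 1, 1])
print(bank.read_row(3))   # you get the whole row, even if you only
print(bank.read_row(3))   # wanted one bit; the refresh preserves it
```

The same restore step is what the periodic 64ms refresh relies on: rewriting a row is just a redundant read.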
Also, because the charge in any cell leaks away over time anyway, every row needs regularly refreshing whether it is used or not.
The RAM circuitry does this automatically, by default every 64 milliseconds (that’s about 16 times a second, or just under 1,000 times a minute).
That’s why this sort of memory chip is known as DRAM, short for dynamic RAM, because it won’t keep its value without regular external help.
(SRAM, or static RAM, holds its value as long as it’s connected to a power supply; Flash RAM will hold its value indefinitely, even when the power is turned off.)
Exploiting the refresh
One problem with this 64ms refresh cycle is that, if a RAM row loses its charge or otherwise gets corrupted between two cycles, the corruption won’t be noticed – the “recharge” will kick in and refresh the value using the incorrect bits.
And that’s where rowhammering comes in.
In 64ms you can trigger an enormous number of memory reads along one memory row, and this may generate enough electromagnetic interference to flip some of the stored values in the rows on either side of it.
The general rule is that the more you hammer and the longer the cell has been leaking away its charge, the more likely you are to get a bitflip event.
You can even do what’s called double-sided rowhammering, assuming you can work out what memory addresses in your program are stored in which physical regions of the chip, and hammer away by provoking lots of electrical activity on both sides of your targeted row at the same time.
Think of it as if you were listening to a lecture on your headphones: if attackers could add a heap of audio noise into your left ear, you’d find it hard to hear what the lecturer was saying, and might even misunderstand some words; if they could add interference into both ears at the same time, you’d hear even less, and misunderstand even more.
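The hammering-versus-flips relationship can be modelled crudely in software. In this sketch, every activation of an aggressor row gives each bit in the physically adjacent rows a tiny chance of flipping; the `FLIP_PROB` value and the whole statistical model are invented for illustration, since real flip rates depend heavily on the specific chip.

```python
import random

# Toy model of rowhammer-style disturbance. Each activation of an
# aggressor row gives every bit in the adjacent rows a tiny chance of
# flipping. FLIP_PROB is an invented number, purely for illustration.
FLIP_PROB = 0.0001

def hammer(rows, aggressors, activations, rng):
    """Activate the aggressor rows repeatedly; return how many bit-flip
    events the neighbouring (victim) rows suffered."""
    flips = 0
    for _ in range(activations):
        for agg in aggressors:
            for victim in (agg - 1, agg + 1):
                if 0 <= victim < len(rows):
                    for i in range(len(rows[victim])):
                        if rng.random() < FLIP_PROB:
                            rows[victim][i] ^= 1   # disturbance error
                            flips += 1
    return flips

# Double-sided hammering (aggressors on both sides of row 3) disturbs
# the victim row twice per round, so flips tend to accumulate faster.
blank = [[0] * 64 for _ in range(8)]
single = hammer([r[:] for r in blank], aggressors=[3], activations=5000,
                rng=random.Random(1))
double = hammer([r[:] for r in blank], aggressors=[2, 4], activations=5000,
                rng=random.Random(2))
print(f"single-sided flips: {single}, double-sided flips: {double}")
```

The model captures the general rule from the text: more hammering means more chances for a flip, and aggressors on both sides of a row double the disturbance it receives per round.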
Reducing the risk
Numerous ways have emerged, in recent years, to reduce the risk of rowhammering, and to make real-world memory-bodging attacks harder to pull off.
Anti-rowhammering techniques include:
Increasing the DRAM refresh rate. The longer a bit goes unrecharged, the more likely it is to flip due to on-chip interference. But recharging the cells in a DRAM row is done by reading their bit values out redundantly, thus forcing a refresh. The time spent refreshing the entire chip is therefore a period during which regular software can’t use it, so that increasing the refresh rate reduces performance.
Preventing unprivileged software from flushing cached data. If you read the same memory location over and over again, the processor is supposed to remember recently used values in an internal area of super-fast memory called a cache. This naturally reduces the risk of rowhammering, because repeatedly reading the same memory values doesn’t actually cause the chip itself to be accessed at all. So, blocking unauthorised programs from executing the clflush CPU instruction prevents them from bypassing the cache and getting direct access to the DRAM chip.
Reducing the accuracy of some system timers. Rowhammering attacks were invented that would run inside a browser, and could therefore be launched by JavaScript served up directly from a website. But these attacks required very accurate timekeeping, so browser makers deliberately added random inaccuracies to JavaScript timing functions to thwart these tricks. The timers remained accurate enough for games and other popular browser-based apps, but not quite precise enough for rowhammering attackers.
A Target Row Refresh (TRR) system in the chip itself. TRR is a simple idea: instead of ramping up the refresh rate of memory rows for the entire chip, the hardware tries to identify rows that are being accessed excessively, and quietly performs an early refresh on any nearby rows to reduce the chance of them suffering deliberately contrived bit-flips.
In other words, TRR pretty much does what the name suggests: if a DRAM memory row appears to be the target of a rowhammer attack, intervene automatically to refresh it earlier than usual.
That way, you don’t need to ramp up the DRAM refresh rate for every row, all the time, just in case a rowhammer happens to one row, some of the time.
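The TRR idea can be sketched as a simple counter-based monitor. Note that this is a minimal illustration of the concept only: as the researchers point out, vendors haven't documented their actual TRR implementations, so the `TrrMonitor` class, the threshold, and the counter-per-row design here are all assumptions made for the sketch.

```python
# Minimal sketch of the Target Row Refresh idea: count activations per
# row within a refresh window, and when a row looks like it is being
# hammered, give its neighbours an early refresh. The threshold and the
# counter scheme are invented - real vendor TRR logic is undocumented.

HAMMER_THRESHOLD = 1000   # activations per window that look suspicious

class TrrMonitor:
    def __init__(self, num_rows):
        self.num_rows = num_rows
        self.counts = [0] * num_rows
        self.early_refreshes = []   # log of proactively refreshed rows

    def activate(self, row):
        self.counts[row] += 1
        if self.counts[row] == HAMMER_THRESHOLD:
            # Row looks like an aggressor: refresh its neighbours now
            # rather than waiting for the normal 64ms cycle.
            for victim in (row - 1, row + 1):
                if 0 <= victim < self.num_rows:
                    self.early_refreshes.append(victim)

    def window_elapsed(self):
        # Normal periodic refresh: counters reset for the next window.
        self.counts = [0] * self.num_rows

trr = TrrMonitor(num_rows=8)
for _ in range(HAMMER_THRESHOLD):
    trr.activate(3)
print(trr.early_refreshes)   # prints [2, 4]: both neighbours of row 3
```

The TRRespass bypasses work by spreading activations across many aggressor rows so that no single counter trips a scheme like this one.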
So, the authors of the TRRespass paper set out to measure the effectiveness of the TRR mitigations in 42 different DRAM chips manufactured in the past five years.
They wanted to find out:
How different vendors actually implement TRR. (There’s no standard technique, and most of those used have not been officially documented by the chip vendor.)
How various TRR implementations might be tricked and bypassed by an attacker.
How effective rowhammering attacks might be these days, even with TRR in many chips.
We’ll leave you to work through the details of the report, if you wish to do so, though be warned that it’s quite heavy going – there’s a lot of jargon, some of which doesn’t get explained for quite a while, and the content and point-making is rather repetitive (perhaps a side-effect of having eight authors from three different organisations).
Nevertheless, the researchers found that they were able to provoke unauthorised and probably exploitable memory modifications on 13 of the 42 chips they tested, despite the presence of hardware-based TRR protections.
Fortunately, they didn’t find any common form of attack that worked against every vendor’s chip – each vulnerable chip typically needed a different pattern of memory accesses unleashed at a different rate.
Even though you can’t change the memory chips in your servers or laptops every few days, this suggests that any successful attack would require the crooks to get in and carry out a fair bit of “hardware reconnaissance and research” on your network first…
…in which case, they probably don’t need to use rowhammering, because they’ve already got a dangerous foothold in your network.
It also suggests that, in the event of attacks being seen in the wild, changes to various hardware settings in your own systems (admittedly with a possible drop in performance) might be an effective way to frustrate the crooks.
What to do?
Fortunately, rowhammering doesn’t seem to have become a practical problem in real-life attacks, even though it’s widely known and has been extensively researched.
So there’s no need to stop using your existing laptops, servers and mobile phones until memory manufacturers solve the problem entirely.
But at least part of the issue is down to the race to squeeze more and more performance out of the hardware we’ve already got, because faster processors mean we can hammer memory rows more rapidly than ever, while higher-capacity RAM modules give us more rows to hammer at any time.
As we said last time we reported on rowhammering:
[Whenever] you add features and performance – whether that’s [ramping up memory and processing power], building GPUs into mobile phone chips, or adding fancy graphics programming libraries into browsers – you run the risk of reducing security at the same time.
If that happens, IT’S OK TO BACK OFF A BIT, deliberately reducing performance to raise security back to acceptable levels.
Sometimes, if we may reduce our advice to just seven words, it’s OK to step off the treadmill.
This week we talk about why Let’s Encrypt might have to celebrate its billionth certificate twice, wonder if James Bond could hack Siri with ultrasound, and make backups surprisingly interesting.
I’m joined by Sophos experts Greg ‘Fido’ Iddon and Peter Mackenzie.