
Overall Volume of Thanksgiving Weekend Malware Attacks Lower This Year

But ransomware attacks go through the roof, new threat data from SonicWall shows.

Somewhat unexpectedly, the overall volume of malware attacks leading up to and through this year’s Thanksgiving holiday weekend was actually substantially lower than for the same days last year.

Security vendor SonicWall, which tracks threat data on a continuous basis, says its customers encountered a total of 91 million attacks overall in the days preceding Thanksgiving and those immediately after: Black Friday, Small Business Saturday, and Cyber Monday.

The number represented a 34% decrease, or a third fewer attacks, compared with the same period in 2017. The decline was especially sharp on Cyber Monday, which by all early accounts was record-breaking both in terms of the number of online shoppers and sales.

Compared with the 22.6 million attacks that SonicWall customers encountered on Cyber Monday 2017, this year the number of malware attacks was 47% lower, at just under 12 million.

Each of the other days starting from Black Friday through Cyber Monday recorded smaller but still significant declines in malware attack volume. Malware attacks on Black Friday — when online purchases topped a record-breaking $6 billion — were 40% lower than in 2017.

SonicWall says one reason could simply be that cybercrooks are becoming more focused in their attacks. Instead of hitting consumers with an overly broad range of malware, they are narrowing their focus to the most profitable types of attacks.

One data point to support this theory was the sharp increase in ransomware attacks over the online holiday shopping days. SonicWall says it recorded over 889,900 ransomware attacks on customers in the period between Nov. 19 and Nov. 27, a 432% increase over the 167,380 it recorded in 2017. The increase was especially dramatic on Black Friday (about 4,100 in 2017 versus over 113,300 this year) and Small Business Saturday (10,170 versus 103,600).

Phishing and cryptojacking attacks also increased this year compared with a year ago, SonicWall says.

Significantly, despite the decline in malware attacks during the Thanksgiving shopping season, malware attacks for 2018 as a whole are substantially higher than in 2017. Through the end of October 2018, SonicWall says, the total number of malware attacks was 44% higher than at the same point in 2017.


Jai Vijayan is a seasoned technology reporter with over 20 years of experience in IT trade journalism. He was most recently a Senior Editor at Computerworld, where he covered information security and data privacy issues for the publication. Over the course of his 20-year … View Full Bio

Article source: https://www.darkreading.com/attacks-breaches/overall-volume-of-thanksgiving-weekend-malware-attacks-lower-this-year-/d/d-id/1333373?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

MITRE Changes the Game in Security Product Testing

Nonprofit has published its first-ever evaluation of popular endpoint security tools – measured against its ATT&CK model.

There were no grades, scores, or rankings, but today’s official release by MITRE of the results from its tests of several major endpoint security products could signal a major shift in the testing arena.

MITRE, a nonprofit funded by the US federal government, in its inaugural commercial tests pitted each product against the well-documented attack methods and techniques used by the Chinese nation-state hacking group APT3, aka Gothic Panda, drawn from MITRE’s widely touted – and open – ATT&CK model.

Endpoint detection and response (EDR) vendors Carbon Black, CrowdStrike, CounterTack, Endgame, Microsoft, RSA, and SentinelOne played blue team with their products against a red team of experts from MITRE. Unlike traditional third-party product testing in security, the ATT&CK (Adversarial Tactics, Techniques, and Common Knowledge) approach uses open standards and methods, and the vendors themselves perform live defenses with their products.

The testing operates in a collaborative manner. “They invited the vendors in to help them drive the tool and show how they find [attacks],” says Mark Dufresne, vice president of research for Endgame. “MITRE was sitting right there, and the product wasn’t just chucked over the wall” and tested, as happens in many other third-party tests.

“It was a collaborative and conversational, versus transactional, model,” he says.

MITRE tracks and documents how the tools are tuned and configured, and how they do or don’t detect an offensive move by its “APT3” red team, for example. The results get published on MITRE’s website for anyone to see and study.

“We really want this to be a collaborative process with the vendors; we want them to be part of the process,” says Frank Duff, MITRE’s lead engineer for the evaluations program. The goal is both to improve the products as well as share the evaluations publicly so organizations running those tools or shopping for them can get an in-depth look at their capabilities, according to Duff.

MITRE chose APT3’s methods of attack, which include credential harvesting and employing legitimate tools used by enterprises to mask their activity. Each step of the attack is documented, such as how the tool reacted to the attacker using PowerShell to mask privilege escalation. ATT&CK is based on a repository of adversary tactics and techniques and is aimed at helping organizations find holes in their defenses. For security vendors, ATT&CK testing helps spot holes or weaknesses in their products against known attack methods.
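For readers who want to picture how that documentation works, the following is a minimal Python sketch, not MITRE’s actual test harness, of recording how a product responded to each emulated red-team step, keyed by ATT&CK technique ID. The technique numbers reflect the 2018-era matrix (T1086 was then PowerShell, T1003 Credential Dumping); the detection-category labels are illustrative, not the official evaluation vocabulary.

# A minimal sketch (not MITRE's tooling) of recording how a product responded
# to each red-team step, keyed by ATT&CK technique ID. Detection category
# names here are illustrative.
from dataclasses import dataclass

@dataclass
class StepResult:
    technique_id: str      # e.g. "T1086" (PowerShell in the 2018-era matrix)
    technique_name: str
    procedure: str         # what the red team actually did
    detection: str         # e.g. "none", "telemetry", "alert"

results = [
    StepResult("T1086", "PowerShell",
               "encoded PowerShell used to stage privilege escalation", "telemetry"),
    StepResult("T1003", "Credential Dumping",
               "credential harvesting from LSASS memory", "alert"),
]

# Summarize coverage the way a published evaluation lets a buyer do:
detected = sum(1 for r in results if r.detection != "none")
print(f"{detected}/{len(results)} emulated APT3 steps produced some detection")
for r in results:
    print(f"{r.technique_id:<6} {r.technique_name:<20} -> {r.detection}")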

The collaborative and open testing setup represents a departure from traditional third-party testing. Vendors and labs traditionally have had an uneasy and sometimes contentious relationship over control of the testing process and parameters. Longtime friction in the security product test space erupted into an ugly legal spat in September, when testing firm NSS Labs filed an antitrust lawsuit against cybersecurity vendors CrowdStrike, ESET, and Symantec, as well as the nonprofit Anti-Malware Testing Standards Organization (AMTSO), over a vendor-backed testing protocol.

The suit claims the three security vendors and AMTSO, of which they and other endpoint security vendors are members, unfairly allow their products to be tested only by organizations that comply with AMTSO’s testing protocol standard.

“The whole testing landscape is a real mess,” Endgame’s Dufresne says. “[But] as a vendor, it’s important to be there.”  

NSS Labs, which is a member of AMTSO, was one of a minority of members that voted against the standard earlier this year; the majority of members support it and plan to adopt it. “Our fundamental focus is if a product is good enough to sell, it’s good enough to test. We shouldn’t have to comply to a standard on what and how we can test,” said Jason Brvenik, chief technology officer at NSS Labs, in an interview with Dark Reading after the suit was filed.

Traditional third-party tests, such as those conducted by NSS Labs, AV-Test, and AV-Comparatives, focus mainly on file-based malware, Endgame’s Dufresne explains, looking at whether the security product blocks specific malware. “We do participate in those … but they truly miss a huge swath of overall attacker activity.”

MITRE’s Duff says there are different security product tests for different purposes. ATT&CK is all about openness and providing context to the evaluation, he says. “All different testing services have their own purpose, value, and approaches. This is our approach, and we are hoping it resonates to the public.”

Even so, malware-based testing isn’t likely to go away. “I think some buyers like to see a number” like those tests provide, Endgame’s Dufresne says.

Greg Sim, CEO of Glasswall Solutions, says third-party testing was overdue for a change. Even when they garner high scores from the antimalware testing labs, some products continue to fail in real-world attacks, he says. “I think there’s going to be a different model,” he says, noting that his firm has run tests with MITRE, which it considers an example of a reputable third party for testing.

Another security vendor executive who requested anonymity says MITRE’s entry into the testing arena came just at the right time. “Emulating tradecraft of a known adversary, nation/state – for us on the vendor side, we’re saying, ‘Hallelujah, they are doing it the right way,'” he says.

MITRE’s new testing service represents new territory for the nonprofit, but that doesn’t mean its federal government work will subside. Vendors pay a fee, which MITRE would not disclose. “MITRE historically has focused on doing testing of solutions for US government customers or sponsors. That role is not going anywhere,” Duff notes.

MITRE hasn’t yet set a timeframe for its next series of tests, he adds, but the team will pick another APT group to emulate. 

“We are just trying to get at the ground truth on these tools,” Duff says.


Kelly Jackson Higgins is Executive Editor at DarkReading.com. She is an award-winning veteran technology and business journalist with more than two decades of experience in reporting and editing for various publications, including Network Computing, Secure Enterprise … View Full Bio

Article source: https://www.darkreading.com/endpoint/mitre-changes-the-game-in-security-product-testing/d/d-id/1333374?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Big Blue shoos Db2 blues before rogue staff turn the screws in hijack ruse (translation: patch your IBM databases)

IBM is advising folks this week to check if they should update their Db2 database installations following the discovery of a potentially serious security vulnerability.

Big Blue says that the flaw, designated CVE-2018-1897, is an elevation-of-privilege flaw that, if exploited, would allow a logged-in attacker to execute code and commands as an admin. That’s bad news if you have rogue staff, or someone or some malware has been able to get a foothold in your enterprise.

The vulnerability lies in db2pdcfg, a configuration tool that allows administrators to troubleshoot performance problems with the database. If a hacker was able to send the tool a specially crafted command, a buffer overflow would be triggered, potentially leaving the door open for arbitrary code execution.

IBM has issued fixpacks for the Windows, Linux, and Solaris versions of Db2. Depending on the version being run, the updates will be known as V9.7 FP11, V10.1 FP6, or V10.5 FP10.
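Admins unsure which build they are running can check with IBM’s standard db2level command. Below is a rough sketch of automating that check against the fixpack levels named above; the output format it parses and the comparison logic are assumptions, so treat it as a starting point rather than an official IBM check.

# Rough sketch: run IBM's db2level command and flag builds older than the
# fixed releases named above. Output parsing is an assumption; verify it
# against your own db2level output before relying on it.
import re
import subprocess

FIXED = {"9.7": (9, 7, 0, 11), "10.1": (10, 1, 0, 6), "10.5": (10, 5, 0, 10)}

def installed_version() -> tuple:
    out = subprocess.run(["db2level"], capture_output=True, text=True).stdout
    match = re.search(r"DB2 v(\d+)\.(\d+)\.(\d+)\.(\d+)", out)  # assumed format
    if not match:
        raise RuntimeError("could not parse db2level output")
    return tuple(int(part) for part in match.groups())

if __name__ == "__main__":
    ver = installed_version()
    branch = f"{ver[0]}.{ver[1]}"
    fixed = FIXED.get(branch)
    if fixed and ver < fixed:
        print(f"Db2 {'.'.join(map(str, ver))} predates the fix pack; plan an update.")
    else:
        print("Build is at or beyond the listed fix packs (or branch not listed).")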

Discovery and private disclosure of the flaw was credited to researcher Eddie Zhu of Beijing DBSec Technology Co.

Word of the patch comes one day after IBM pushed out a patch for a separate security vulnerability in the AIX and Linux version of Db2.

That hole, CVE-2018-1723, describes a data disclosure flaw in the Spectrum Scale storage system used by Db2 that would potentially allow an unprivileged user with access to a single node to view files they would not normally have access to.

The vulnerability requires login access to the node, helping to reduce its scope and potential for attack.

IBM says the vulnerability is only present on versions 10.5 and 11.1 of Db2 for AIX and Linux that are also running pureScale. Admins with version 11.1.1 and 11.1.4 can obtain the needed patches for both versions from Big Blue’s Fix Central. Those running version 10.5 will need to get a separate eFix package from IBM tech support. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/11/29/ibm_db2_security_bugs/

GCHQ pushes for ‘virtual crocodile clips’ on chat apps – the ability to silently slip into private encrypted comms

Analysis Britain’s surveillance nerve-center GCHQ is trying a different tack in its effort to introduce backdoors into encrypted apps: reasonableness.

In an essay by the technical director of the spy agency’s National Cyber Security Centre, Ian Levy, and technical director for cryptanalysis at GCHQ, Crispin Robinson, the authors go out of their way to acknowledge public concerns over government access to personal communication.

They also promise a return to a time when the authorities use their exceptional powers only in limited cases, with a degree of accountability written into spying programs, and they promise a more open discussion about what spy agencies are allowed to do and how they do it.

But the demand for backdoors is still there, this time couched in terms of “virtual crocodile clips” on modern telephone lines, namely the encrypted chat and call apps that have become ubiquitous on smartphones.

“For over 100 years, the basic concept of voice intercept hasn’t changed much: crocodile clips on telephone lines,” the authors note. “Sure, it’s evolved from real crocodile clips in early systems through to virtual crocodile clips in today’s digital exchanges that copy the call data. But the basic concept has remained the same. Many of the early digital exchanges enacted lawful intercept through the use of conference calling functionality.”

Strong end-to-end encryption has largely killed off the conference-call approach, but Levy and Robinson note that it is still theoretically possible for companies to silently grant access to the authorities.

“It’s relatively easy for a service provider to silently add a law enforcement participant to a group chat or call,” they argue. “The service provider usually controls the identity system and so really decides who’s who and which devices are involved – they’re usually involved in introducing the parties to a chat or call.”

Extra end-run

Such an approach would retain strong end-to-end encryption but introduce “an extra ‘end’ on this particular communication,” they argue. And it would be “no more intrusive than the virtual crocodile clips that our democratically elected representatives and judiciary authorize today in traditional voice intercept solutions and certainly doesn’t give any government power they shouldn’t have.”

In effect, the super-snoops are proposing that they be allowed to subvert a cornerstone of encrypted apps – public key verification – to eavesdrop on conversations, and that the companies that develop the apps turn a blind eye to it. Rather than crack or weaken the underlying cryptography, the spies want to warp the software and user interfaces wrapped around it to let them silently eavesdrop on conversations.

The spy agencies would be allowed to order a company to silently add government snoops to conversations, presumably turning off any notifications that alert users to the fact that a new person has been added to the chat, or an existing one changed. And the companies would in turn refrain from improving their current systems, or making public key verification more visible and user friendly.

The key thing here, no pun intended, is that agents would be added to a chat just like any other conversation partner, with the correct public-private key exchanges, except there would be no notification and no way to spot or inspect the spies’ public keys.
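To make the mechanics concrete, here is a toy sketch of fan-out encryption to a membership list, using the PyNaCl library rather than any real messenger’s protocol (production apps encrypt per-message keys through ratchets, not whole messages like this, and the names below are invented). The point it illustrates is simply that whoever controls the membership list can add a silent extra recipient.

# A minimal sketch (not any real messaging protocol) of how fan-out encryption
# to a list of member public keys gives whoever controls the membership list
# the power to add a silent extra "end". Uses PyNaCl (pip install pynacl).
from nacl.public import PrivateKey, SealedBox

# Each legitimate member generates a keypair; only the public half is shared.
alice, bob = PrivateKey.generate(), PrivateKey.generate()
group_members = {"alice": alice.public_key, "bob": bob.public_key}

# The service provider controls the identity/membership system. A coerced
# provider could quietly append one more public key...
ghost = PrivateKey.generate()
group_members["bob "] = ghost.public_key   # hypothetical: looks like a duplicate entry

def send_to_group(plaintext: bytes, members: dict) -> dict:
    # Encrypt the message separately to every listed member's public key.
    return {name: SealedBox(pub).encrypt(plaintext) for name, pub in members.items()}

ciphertexts = send_to_group(b"meet at noon", group_members)

# The "ghost" participant decrypts its copy like any other member would.
print(SealedBox(ghost).decrypt(ciphertexts["bob "]))  # b'meet at noon'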

To GCHQ’s mind, this is a perfect solution: it doesn’t require app developers to scale back security on their existing software; it only requires them to not continue to improve their systems. And, because the tapping would be at the vendor level, it would be hard for hackers and other malicious actors to exploit the same approach.

Plus, they reason, it would be hard to scale up to mass-surveillance levels, and it wouldn’t undermine encryption. It’s a win-win… for the security services, anyway.

On the surface at least, this seems like a reasonable compromise that app developers could get behind. People who build their own chat software from source code they can inspect won’t be too bothered by this approach, either.

The truth is that while there is no shortage of fierce privacy advocates who insist that the government should never be granted access to any private conversation, those in positions of power who receive classified briefings about how such technologies are misused are open to granting reasonable access to the communications of dangerous people.

But scratch the surface, and the same GCHQ – the one that wants access to as much information as possible and which is wholly opposed to any form of real accountability – lurks beneath.

Open and honest?

While advocating for “open and honest conversations between experts that can inform the public debate about what’s right,” the authors completely ignore the fact that the current crop of encrypted apps exists because of Edward Snowden’s revelations that the security services had massively abused their powers, creating a body of secret and highly questionable law that gave them access to pretty much everyone’s communications.

Instead, every reference to the fact that no one trusts the spy agencies to do what they say they will do is painted as the public being hoodwinked.

“The public has been convinced that a solution in this case is impossible,” the authors argue, “so we need to explain why we’re not proposing magic.”


Later: “Much of the public narrative on this topic talks about security as a binary property; something is either secure or it’s not. This isn’t true – every real system is a set of design trade-offs.”

Also: “The public will also want to know how these systems are used, as it has been convinced that governments want access to every single one of these encrypted things.”

The truth is that the spy agencies – GCHQ and the NSA in particular – have been dragged kicking and screaming to this point. Even after the scale of their misuse of systems and laws was exposed, they continued – and continue – to fight tooth-and-nail any effort to scale back their programs, reveal how they work, or add real accountability to their systems.

It is worth noting that just this week a number of organizations wrote to the US Department of Justice urging it not to authorize the UK authorities’ access to American corporate data because current UK law doesn’t adhere to human rights obligations and commitments.

That data sharing would happen under the CLOUD Act – the same legislation that the GCHQ essay holds up as a great example of how to introduce global accountability in data sharing.

And then there’s the fact that the European Court of Human Rights has heavily criticized the UK’s approach, in particular the lack of decent oversight when it comes to bulk interception of communications.


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/11/29/gchq_encrypted_apps/

Dunkin’ Donuts Serves Up Data Breach Alert

Forces potentially affected DD Perks customers to reset their passwords after learning of unauthorized access to their personal data.

Dunkin’ Donuts has alerted DD Perks account holders to a security incident after learning an unauthorized party accessed some of their usernames and passwords, NBC News reports.

DD Perks is a rewards program that lets Dunkin’ customers purchase food and beverages for pickup and receive free drinks via rewards points and on their birthdays. On Oct. 31, a security vendor detected a third party accessing users’ accounts. It believes these actors stole usernames and passwords from other companies and used them to attempt DD Perks logins.

Information exposed varies from user to user, depending on what was in their accounts. Dunkin’ reports third parties may have been able to access first and last names, email addresses (which are used as usernames), the 16-digit DD Perks account numbers, and DD Perks QR codes.

Dunkin’ reports its security vendor successfully blocked most of the attempted logins, but it is possible some accounts were accessed. It has launched an internal investigation and forced all potentially affected DD Perks users to reset their passwords and log back in with new ones. It has also taken steps to replace any stored DD Perks cards with new account numbers while retaining the cards’ values. Law enforcement is helping identify the parties responsible.

Users are advised to create unique passwords for their DD Perks accounts, as well as all online accounts, and to never use the same password twice.
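For users (or developers building signup flows) who want to go one step beyond “don’t reuse passwords,” checking a candidate password against known breach corpora is straightforward. The sketch below uses the public Pwned Passwords k-anonymity range API, which only ever sees the first five characters of the password’s SHA-1 hash; it is a generic illustration, not something Dunkin’ itself offers.

# Check how often a password appears in known breach dumps via the public
# Pwned Passwords range API (only a 5-character hash prefix is sent).
# Requires the requests package.
import hashlib
import requests

def times_pwned(password: str) -> int:
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    resp = requests.get(f"https://api.pwnedpasswords.com/range/{prefix}", timeout=10)
    resp.raise_for_status()
    for line in resp.text.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0

if __name__ == "__main__":
    # A commonly reused password like this scores very high; a unique one scores 0.
    print(times_pwned("Password123"))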

Read more details here.


Dark Reading’s Quick Hits delivers a brief synopsis and summary of the significance of breaking news events. For more information from the original source of the news item, please follow the link provided in this article. View Full Bio

Article source: https://www.darkreading.com/threat-intelligence/dunkin-donuts-serves-up-data-breach-alert/d/d-id/1333367?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Establishing True Trust in a Zero-Trust World

Our goal should not be to merely accept zero trust but gain the visibility required to establish real trust.

The term “zero trust” was coined by Forrester in 2010. The concept was also central to the BeyondCorp architecture that Google was designing around the same time. Traditionally, companies assumed their corporate networks were secure. Google provocatively stated that the corporate network was no more secure than the public Internet and that every organization needed a security architecture that did not take trust for granted. Forrester described it less as myth-busting about network security and more as a necessary framework for data and computing outside the perimeter.

Whether corporate networks are secure or not, it is true that the traditional arbiters of trust — next-gen firewalls, VPNs, web gateways, network access control, network data loss prevention, locked-down PCs — have minimal value outside the perimeter. This is a growing issue because all new enterprise application innovations happen in the cloud, not on-premises, so a company that cannot compute outside the perimeter will rapidly get left behind.

Every company must find its answer to the zero-trust problem.

What Is Zero Trust, Really?
Trust is based on visibility. If I can see where my data is going and assess the corresponding risk, then I can make an appropriate decision about whether to allow access to my data in that environment. If I have zero visibility, however, I must assume zero trust. I cannot trust what I cannot see.

Because traditional security solutions provide minimal visibility outside the perimeter, organizations have a rapidly growing blind spot as data spreads across an information fabric that spans mobile endpoints and cloud services.

Our goal should not be to merely accept zero trust but to gain the visibility required to be able to establish trust in what otherwise would be a zero-trust world. Without trust, you cannot enable your users. Without enablement, they cannot do their jobs. The challenge is to enable them with the services they need without putting your business data at risk.

Every company must implement a new model of trust.

Is User Trust Enough?
Outside the perimeter, there is one element of trust that traditional security infrastructure can still (mostly) validate: user trust. I can usually establish whether users are who they say they are. But is that enough? No.

User trust is an essential element of the modern trust model. It is necessary, but not sufficient. The reason is that a trusted user in an untrusted environment should not have access to company data. Context matters.

Here’s an example: Let’s say I owe you $1,000. We can decide where to meet so I can give you that money. We can meet at my home or we can meet on a street corner in a dangerous part of town. You, the person standing across from me, are still the same, trusted individual. But my willingness to hand you that money should absolutely be different in those two environments. In one, the transaction will be successful. In the other, you’ll likely get mugged within a block. User trust is not enough. Context is critical to establish trust in a zero-trust world.

3 Steps to Get Started
Risk and trust balance each other. Don’t assume that more risk means less access, because the outcome will be that your users won’t be able to do their jobs. The more risk that exists in an environment, the harder you must work to establish enough trust to justify access to corporate data.

Like almost everything else in security, starting with basic hygiene and establishing a foundational process and architecture are the most important steps:

Step 1: Start with the user.
Technology is secondary. First, understand the environment in which business users want to do their work, not the environment in which you want them to do their work. Otherwise, you will end up establishing trust in an environment that no one is using, while the real work and actual data flows are outside your vision, completely unprotected.

Step 2: Respect the edge.
Mobile devices and apps have become a primary means for employees to consume data and access business services. That means data will be resident on a constantly growing number of mobile devices. Organizations must establish a data boundary on the device that prevents business apps from leaking data to consumer apps while also protecting the privacy of personal information.

Step 3: Assume constant change.
Think of it as a “dynamic-trust” world instead of a “zero-trust” world. Context is dynamic in modern computing. Change is the nature of both mobile and cloud: Devices move across networks and locations; new apps are downloaded; and configurations are modified. The key is to establish an automated and tiered compliance model that monitors for contextual changes and then automatically takes appropriate actions, such as notifying the user, asking for a second factor, expanding or blocking access, and provisioning or retiring apps.

Establishing True Trust
Your goal is to protect data across an increasingly fragmented information fabric outside the comfort zone of traditional security approaches. The modern access decision requires constant assessment because context is constantly changing. The path forward is moving to this dynamic model of modern security versus the static “I’m in, you’re out” model of the traditional firewall.

True trust is the combination of user trust with contextual trust: OS, device, app, network, time, location. Establishing true trust in a zero-trust world as the centerpiece of an automated compliance model gives users the freedom they need to get on with their work without losing company data.


Ojas Rege is Chief Strategy Officer at MobileIron. His perspective on enterprise mobility has been covered by Bloomberg, CIO Magazine, Financial Times, Forbes, Reuters, and many other publications. He coined the term “Mobile First” on TechCrunch in 2007, one week after the … View Full Bio

Article source: https://www.darkreading.com/network-and-perimeter-security/establishing-true-trust-in-a-zero-trust-world/a/d-id/1333353?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Dell Forces Password Reset for Online Customers Following Data Breach

Move prompts questions about scope of intrusion and strength of company’s password hashing.

Dell has reset passwords for all customers of its online store following a data breach in which the names, email addresses, and hashed passwords belonging to an unknown number of people may have been exposed.

In an advisory this week, the computer maker said it had detected unauthorized activity on its network Nov. 9 involving an attempt to steal customer data. While there’s no conclusive evidence the attackers succeeded, it is possible that at least some information was removed from the Dell network, according to the company.

To mitigate risk, the vendor has implemented a mandatory password reset for all registered Dell.com users. The next time these users attempt to log in to their Dell accounts, they will be prompted to change their passwords. The hardware maker is also asking users to reset passwords on any other accounts protected by the same password they had used for Dell.com.

On a newly established customer update website, Dell described the incident and the password reset as primarily impacting customers of Dell.com, Premier, Global Portal, and support.dell.com.

The attackers do not appear to have targeted credit card and other sensitive customer information. The incident did not impact any Dell products or services either, the hardware maker said.

Dell’s statement provided no information on how many users of its online site might have been impacted by the incident. But based on Dell’s actions, the number could be quite large, says Ilia Kolochenko, CEO of High-Tech Bridge.

“Usually, a mass password reset is a hallmark of a data breach impacting all customers,” Kolochenko says. “If it’s not the case, it should be clearly explained and emphasized.” Leaving customers in the dark about any breach involving personal data is never a good idea, he says.

The mass password reset could also be an indication that Dell is not fully confident about the resilience of the hashed passwords against brute-force cracking attempts.

Typically, password hashing should make passwords unusable to criminals. But a lot depends on the strength of the algorithm that is used for the hashing, says Jarrod Overson, director of engineering at Shape Security. “Without details, it’s safer to err on the side of caution, which, in this case, is that the hashes were generated with an algorithm that is quickly crackable, like MD5,” Overson says. With such hashing, a hacker could use a free, open source tool like Hashcat to automate the testing of common or generated passwords, he says.

Ideally, organizations should consider storing password hashes generated by an algorithm such as bcrypt, which is generally considered to be very resistant to brute-force hacking, Overson notes.
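The practical difference is easy to demonstrate. The snippet below contrasts a fast, unsalted digest with a bcrypt hash using the widely available Python bcrypt package; it is a generic illustration and says nothing about how Dell actually hashes its passwords.

# Why slow, salted hashes resist offline cracking better than fast digests.
# Uses the bcrypt package (pip install bcrypt); the MD5 line is only the
# "what not to do" baseline.
import hashlib
import bcrypt

password = b"correct horse battery staple"

# Fast, unsalted digest: billions of guesses per second on commodity GPUs.
weak = hashlib.md5(password).hexdigest()

# bcrypt: per-password random salt plus a tunable work factor (cost).
strong = bcrypt.hashpw(password, bcrypt.gensalt(rounds=12))

print(weak)
print(strong)
print(bcrypt.checkpw(password, strong))  # True: verify without storing plaintext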

Email account data and passwords have become increasingly hot commodities in underground markets. Because many users tend to use the same password across multiple accounts, attackers have been increasingly using breached username and password pairs to try and break into as many accounts as they can.

Credential-stuffing attacks, where criminals automatically enter large volumes of leaked credentials into e-commerce and other websites, have become increasingly common in recent years. In fact, underground chatter related to compromised accounts increased 150% year over year on Black Friday and Cyber Monday, according to new holiday shopping season cyberthreat stats from Cyberint. Chatter about attack tools, predominantly for credential stuffing, increased 20%, according to the vendor.

The trend has focused growing attention on the need for strong password and user authentication measures. Hashing and encryption are increasingly seen as basic steps to ensuring password integrity in the event of a data breach.

Since Dell has said the passwords for Dell.com users were hashed, it is likely the company is merely being extra careful in resetting them anyway.

“It’s a matter of how sophisticated their hashing technique was and how unusable the passwords could be for cybercriminals,” says George Wrenn, CEO and founder of CyberSaint. Regardless, the fact that Dell pushed for a password reset would ideally block that risk. “It is clear that Dell is aggressively dealing with the incident.”


Jai Vijayan is a seasoned technology reporter with over 20 years of experience in IT trade journalism. He was most recently a Senior Editor at Computerworld, where he covered information security and data privacy issues for the publication. Over the course of his 20-year … View Full Bio

Article source: https://www.darkreading.com/attacks-breaches/dell-forces-password-reset-for-online-customers-following-data-breach/d/d-id/1333369?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Anti-Botnet Guide Aims to Tackle Automated Threats

The international guide is intended to help organizations defend their networks and systems from automated and distributed attacks.

The Council to Secure the Digital Economy (CSDE) and Consumer Technology Association (CTA) today announced the International Anti-Botnet Guide, a new publication intended to help organizations block botnets and other automated, distributed cyberattacks.

USTelecom and the Information Technology Industry Council (ITI) were also involved in building the guide, which is the product of nine months of collaboration. The guide gives IT stakeholders basic and advanced practices to reference when defending against bots. These aren’t mandates or requirements, the guide points out. IT and security leaders can use them according to the circumstances, processes, and teams specific to their organizations.

No single stakeholder controls the connected economy, where bots have been both damaging and expensive. As the number of people, businesses, and devices grows, so does the potential for botnets to drive phishing, ransomware, distributed denial-of-service (DDoS) attacks, and other digital threats. With the Internet of Things (IoT) poised to reach 20 billion devices by 2020, the global cost of cybercrime could reach trillions of dollars, researchers state in their report. Botnets are a driver of these losses.

“The botnet threat is more severe today than at any previous point in history,” researchers point out, referring to threats ranging from the Storm Worm botnet of 2007 to the 2016 Mirai botnet that gained access to nearly 400,000 devices, including video cameras and recorders. While most botnets don’t quite reach this scale, smaller attacks can disable websites and services, spread disinformation on social networks, and distribute ransomware.

“A host of bad actors are exploiting a target-rich attack surface,” said Robert Mayer, senior vice president of cybersecurity at USTelecom, at an event held for the report today. Two elements are needed to “address this plague,” he added: government and industry players working together, and all ecosystem stakeholders adopting measures to make the Internet resilient.

It’s a threat that poses myriad challenges throughout the IT ecosystem. Report writers argue infrastructure providers could do more to protect customers, and smaller providers need guidance and resources. Increased software security drives bad actors to build more complex exploits. Many connected devices aren’t built, configured, or installed with security in mind.

“There is no higher cause we all share than to address the challenges of our digital economy,” said Jonathan Spalter, president and CEO at USTelecom. “We understand this is a shared responsibility across our industries … a compliance-led regulatory model is not going to get us closer to the security that we all seek. This is proof of concept that industry … is ready to lead.”

Dean Garfield, president and CEO of ITI, emphasizes the need to get everyone on the same page sans regulation.

“The threat is asymmetric,” he says of botnets, which are constantly evolving. “If you define a solution that’s fixed in time, it’s unlikely to be as flexible and fluid as the threat.”

The botnet mitigation guide breaks its practices down into five types of provider, supplier, and user stakeholders in these categories: infrastructure, software development, devices and device systems, home and small business systems installation, and enterprises.

As an example of the guidance provided in the report, consider its subsection on botnet risk and mitigation among cloud and hosting providers, as part of its infrastructure section: “Because cloud networks are decentralized, they can typically withstand the disruption of numerous network components,” experts explain. “This architectural feature makes the cloud more resilient to highly distributed botnets and provides additional mitigation capabilities.”

Cloud services offer an added layer of security outside the ISP’s infrastructure, they continue, and this protection is increasingly handy as the scale of botnet attacks continues to escalate.

Overall, for infrastructure providers planning to defend against bots, the guide advises first identifying which assets need to be defended and the potential vulnerabilities leaving them exposed. Companies should stay up to date on exploits for each flaw they identify. As for advanced practices, they add, infrastructure providers with access to more resources may have security researchers on hand to analyze heuristics and behaviors to detect malware.

There are additional baseline and advanced practices for signature analysis, heuristic analysis, behavioral analysis, packet sampling, and honeypots under the “Detect Malicious Traffic and Vulnerabilities” section for infrastructure providers, as well as similar levels of guidance for mitigating against distributed threats with filtering, traffic shaping, blackholing, sinkholing, scrubbing, and BGP flowspec. Stakeholders across categories can find similar detailed guidance.
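To give a flavor of the simplest of those mitigations, the toy resolver below shows the sinkholing idea: lookups for known command-and-control domains are answered with an address the defender controls instead of the real one, so infected hosts phone home to the defender. The domain names and the sinkhole address are placeholders invented for the example.

# Toy illustration of DNS sinkholing: known botnet C2 domains resolve to a
# controlled sinkhole address instead of their real one. Domains and the
# sinkhole IP are made up for the example.
SINKHOLE_IP = "192.0.2.1"                      # TEST-NET address, placeholder only
KNOWN_C2_DOMAINS = {"evil-botnet.example", "update-check.example"}

def resolve(domain: str, real_lookup) -> str:
    """Return the sinkhole address for known-bad domains, else resolve normally."""
    if domain.lower().rstrip(".") in KNOWN_C2_DOMAINS:
        return SINKHOLE_IP
    return real_lookup(domain)

# Usage with a stand-in resolver:
print(resolve("evil-botnet.example", lambda d: "203.0.113.7"))   # -> 192.0.2.1
print(resolve("dark-reading.example", lambda d: "203.0.113.7"))  # -> 203.0.113.7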


Kelly Sheridan is the Staff Editor at Dark Reading, where she focuses on cybersecurity news and analysis. She is a business technology journalist who previously reported for InformationWeek, where she covered Microsoft, and Insurance Technology, where she covered financial … View Full Bio

Article source: https://www.darkreading.com/threat-intelligence/anti-botnet-guide-aims-to-tackle-automated-threats/d/d-id/1333371?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Google’s “deceitful” location tracking is against the law, say 7 EU groups

The row over Google’s location tracking has spread to Europe.

Consumer organizations from across the region said this week that they will complain about Google’s location tracking activities to their data protection authorities, alleging that it is breaching the General Data Protection Regulation (GDPR).

BEUC, an umbrella group of 43 European consumer organizations, said that consumer groups in Norway, the Netherlands, Greece, the Czech Republic, Slovenia, Poland, and Sweden will all file complaints.

They’re basing their gripes on a report from the Norwegian Consumer Council (Forbrukerrådet) called Every Step You Take that explains what Google is doing and why they think it might be flouting Europe’s privacy laws.

Monique Goyens, Director General of The European Consumer Organisation, summed up the complaints in a statement on the BEUC site:

Google’s data hunger is notorious but the scale with which it deceives its users to track and monetise their every move is breathtaking. Google is not respecting fundamental GDPR principles, such as the obligation to use data in a lawful, fair and transparent manner.

The report takes a deep dive into Google’s location tracking activities. The company tracks you in two ways, according to the research: Location History and Web & App Activity.

Alongside basic data such as where you went and what mode of transport you took to get there, Location History also stores other data in the background, such as barometric pressure, nearby Wi-Fi hotspots and even your battery level. Google says that this is a voluntary, opt-in feature.

Web & App Activity is enabled by default, and tracks what you do on Google sites, apps and services. This includes searches and location, so if you googled “Krispy Kreme” in downtown San Francisco without deliberately turning off that setting, the Googleplex would know of your doughnut hankerings. All this data is used for advertising.

Closing the loop

Aside from knowing more about how Google stalks you, the report is a compelling read for anyone interested in what companies can do with this data. It describes a process called “closing the loop”, in which companies can combine location data with other information such as browsing history, social network activity and shopping habits. If a company can connect that data with your location, it can deduce things about you, such as whether an ad worked on you or not, and whether it should show you more of them.

Then, there’s the data about the places you visit. If you spend an hour at a health clinic or regularly frequent a bar, then companies can make assumptions about you and add those to your profile.

You can find these location settings in your phone and make sure they’re off if you look hard enough, so what is Forbrukerrådet’s beef with the search and advertising giant? It’s the “look hard enough” part that has the Council’s hackles up. It thinks the company is being deliberately circumspect with users.

“Deceptive design”

The report, subtitled ‘How deceptive design lets Google track users 24/7’, accuses the company of hiding default settings from users setting up their Google accounts, requiring that they click “more options” to find them. Because Web & App Activity is turned on by default, this means they may not be aware it is tracking their location, the report points out.

The report draws particular attention to the differences between Android and iOS…

 

… and on Android the setup process also uses blue buttons that look similar but display different options, such as ‘Next’ and ‘Turn on’, the report warns. This means that users must be very attentive to avoid accidentally turning on Location History during the setup process, it adds.

Forbrukerrådet believes that this may violate consent provisions under GDPR by persuading users to enable Location History without being aware of what it entails. It also argues that Google’s repeated nudging of Android users to turn on Location History when using other Google services amounts to pressuring the user, which it says could contravene consent rules under GDPR.

For example, turning on Google Assistant also prompts Android to ask you to turn on Location History, and Google doesn’t make it clear that Google Assistant will still work if you turn it off again.

The company also hides the negative implications of location tracking behind other links, while making the positive benefits more obvious, the report complains. And when users do click through to find out more, Google downplays the implications of the location tracking.

This may contravene GDPR if it can be shown that Google is hiding or obscuring the true nature of its location data usage sufficiently to violate GDPR’s rules around “specific and informed” consent, the report asserts.

The Web & App Activity tracking is turned on by default, so under GDPR rules Google can’t claim that the user has given consent, the report says. Instead, Google would have to rely on legitimate interest as its justification for gathering that data. However, the report points out that legitimate interests must be transparent and real. Hiding this information behind extra clicks could put Google in violation, it warns.

This isn’t the first tussle that BEUC and Forbrukerrådet have had with Silicon Valley. In June, Forbrukerrådet published a report on what it calls Dark Patterns, describing how it thinks tech firms mislead us with interface design choices. A lot of the same claims crop up in this report.

For its part, Google is already facing a class action lawsuit over its location tracking activities in the US.

Users can check what data Google has collected by going to https://myaccount.google.com, and clicking My Activity. You can also delete all or certain data and, by going to Activity Controls, turn on or off particular tracking services.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/xNmjtMCqB0I/

Massage app exposes users

A popular massage-booking app has spilled the beans on 309,000 customer profiles, including comments from their masseurs or masseuses on how creepy their customers are.

The app’s wide-open, no-password-required database was discovered by researcher Oliver Hough, who tipped off TechCrunch.

Hough said in a tweet on Tuesday that the breach was caused by a failure to implement security measures that should have been easy-peasy, and that the failing could lead to “some serious blackmail.”

TechCrunch reports that Urban left the database for a Google-hosted Elasticsearch instance – that’s an enterprise search tool – online without a password, “allowing anyone to read hundreds of thousands of customer and staff records.”

Anyone who knew where to look could access, edit or delete the database.
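The reason that matters is that Elasticsearch exposes a plain REST API: if the port is reachable and no authentication layer sits in front of it, a couple of HTTP requests are enough to enumerate and read everything. The sketch below shows the shape of such a query against a hypothetical host; do not point it at systems you do not own.

# Sketch of why a passwordless Elasticsearch endpoint is so dangerous: its
# REST API answers anyone who can reach the port. The host is a placeholder.
import requests

ES = "http://elasticsearch.example.com:9200"   # hypothetical exposed instance

# List indices, then pull back documents, with no credentials at all.
print(requests.get(f"{ES}/_cat/indices?v", timeout=10).text)
hits = requests.get(f"{ES}/_search", params={"q": "complaint", "size": 10},
                    timeout=10).json()
for doc in hits.get("hits", {}).get("hits", []):
    print(doc.get("_source"))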

The makers of the app, which was previously known as Urban Massage but is now going by simply “Urban,” confirmed the breach on Tuesday. In its FAQ, Urban said that customers’ names, email addresses and phone numbers were exposed, as well as, potentially, their postcodes if they placed a booking on the platform. Urban says it’s going to contact those whose information it thinks was exposed.

The good news: no payment card details were exposed or accessed. Urban said that it doesn’t store such information.

The other good news: this wasn’t an attack. Rather, it was a vulnerability exposed by a security researcher searching with Shodan: a search engine for exposed devices and databases.

The bad news: Urban didn’t mention the other bits that were exposed – and they could be deeply embarrassing to anybody who isn’t proud of being outed as a chronic appointment canceller or who asks for a happy ending. From Zack Whittaker’s write-up on TechCrunch:

Among the records included thousands of complaints from workers about their clients. The records included specific complaints – from account blocks for fraudulent behavior, abuse of the referral system and persistent cancelers. But, many records also included allegations of sexual misconduct by client – such as asking for ‘massage in genital area’ and requesting ‘sexual services from therapist.’ Others were marked as ‘dangerous,’ while others were blocked due to ‘police enquiries.’ Each complaint included a customer’s personally identifiable information – including their name, address and postcode and phone number.

The exposed database may have been open for at least a few weeks before Urban pulled it offline, which it did after TechCrunch contacted it.

Urban CEO Jack Tang said that he had informed the UK’s Information Commissioner’s Office (ICO) about the breach. As of Wednesday, the ICO hadn’t determined whether it was going to investigate.

Urban’s statement:

We immediately closed the potential vulnerability and have taken all appropriate action, including by notifying users and the ICO.

The researcher has now confirmed to us that he did not copy or retain any data and that he did not pass anything to anyone else other than the journalist. That was the only access we are aware of.

We would like to apologize to anyone potentially affected and continue to investigate this matter as a priority.

TechCrunch contacted several randomly chosen users whose information had been exposed. One user who requested anonymity said that the breach was a “huge violation” of her privacy.

Speaking of huge, this could potentially lead to a huge fine for Urban: the company could face penalties of up to 4% of its global annual revenue if it’s found to have breached GDPR rules.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/sKAdwKEm7xM/