STE WILLIAMS

Hands off! Arm pitches tamper-resistant Cortex-M35P CPU cores

Arm has released a new processor core design for Cortex-M-powered system-on-chips that will try to stop physical tampering and side-channel attacks by hackers.

The microcontroller-grade Cortex-M35P CPU cores are aimed at embedded IoT devices that operate in public places or other areas where there is a risk someone will either crack open the device or get close enough to perform a proximity-based attack. Think smart meters or connected street lights in a major city.

Rather than worry about network-based or remote side-channel attacks (that is what the Platform Security Architecture is for), Arm says the M35P has been designed to ward off actual hands-on attempts to compromise a device by fiddling with the processor itself.

These physical attacks [PDF] on Arm chips include techniques such as recording electromagnetic radiation to spot when information is being transmitted or even cracking open the housing on the chip to manipulate the silicon itself.

How common are such attacks? Not particularly, admits Asaf Shen, Arm’s VP of marketing for security IP. When they do occur, though, they are potentially devastating, and the barrier to entry is falling, he said.

“Success attacking one device can easily turn into a large-scale attack,” explained Shen. “If one smart streetlight can be hacked, it can provide a window for potentially an entire city’s smart grid to be attacked.”

Because these sorts of flaws would, by their nature, be impossible for vendors to patch (they are etched into the bare metal), Arm is taking it upon itself, as the designer of the processor cores, to beef up security.

Among the measures Arm is taking with the M35P is an attempt to control electrical current leakage and electromagnetic radiation. The SoftBank-owned Brit biz says it has engineered the blueprints to minimize both leakage and EM output, particularly while performing tasks such as transmitting security keys.

Those measures, Arm hopes, will combine with other technologies such as PSA and TrustZone to close off side-channel attacks and attempts to get directly into the hardware itself.

At the same time, Shen noted, the security measures on the M35P will not fully prevent physical attacks. Once a hostile party is able to crack open a chip and manipulate it, a compromise is going to happen sooner or later. Rather, Arm wants to make such a compromise more trouble than it is worth.

“Nothing out there is 100 per cent bulletproof, at the end of the day everything and anything can be compromised,” Shen admitted. “The goal here is to make the attack uneconomical.” ®

Sponsored:
Minds Mastering Machines – Call for papers now open

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/05/02/hands_off_arm_pitches_tamperresistant_m35p_chips/

Vlad that’s over: Remote code flaws in Schneider Electric apps whacked

Infosec researchers at Tenable Security have unearthed a remote code execution flaw in critical infrastructure software made by energy management multinational Schneider Electric.

The vulnerability could have allowed miscreants to control underlying critical infrastructure systems, researchers said.

The apps affected – used widely in oil and gas, water and other critical infrastructure facilities – were InduSoft Web Studio and InTouch Machine Edition.

If exploited, attackers would have been able to move laterally through the network, exposing additional systems to attack. The worst-case scenario would have been the crippling of power plant operations.

InduSoft is an automation tool for developing human-machine interfaces (HMIs) and SCADA systems and InTouch is used to develop apps that connect automation systems and interfaces for browsers, smartphones and tablets.

Both contained a buffer overflow vulnerability that allowed an attacker to mount a denial-of-service attack or potentially execute arbitrary code, said Tenable.

Tom Parsons, head of Tenable in Ireland, said there were no known instances of the flaws being exploited, adding the firm worked with Schneider over three months to resolve the issue. Schneider has since released patches for both affected systems (PDF).

“The recent statement from Homeland Security and NCSC points towards hostile states having an interest in critical infrastructure,” he said.

Last month the UK’s National Cyber Security Centre (NCSC) and the US Federal Bureau of Investigation warned that Russian state-sponsored hackers are targeting network infrastructure. The joint Technical Alert described a global assault on routers, switches, firewalls and network intrusion detection hardware.

The US Department of Homeland Security and the FBI have also warned that Russia is hacking into American nuclear facilities and other infrastructure.

Meanwhile, the UK government is waving a stick at infrastructure firms, warning they could face fines of up to £17m if their cybersecurity is found to be inadequate.

Dave Cole, chief product officer at Tenable, said the flaw “is particularly concerning because of the potential access it grants cybercriminals looking to do serious damage to mission-critical systems that quite literally power our communities”.

The Register has contacted Schneider for further comment.

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/05/02/security_firm_uncovers_zeroday_exploit_in_critical_infrastructure_software/

Breaches Drive Consumer Stress over Cybersecurity

As major data breaches make headlines, consumers are increasingly worried about cyberattacks, password management, and data security.

A few years ago, many people didn’t talk about cybersecurity or even pay much attention to it. These days, it’s a growing source of stress among consumers, who rely on several devices and businesses to protect their data and regularly read reports about major companies getting breached.

More than 80% of Americans, and 72% of Canadians, admit they’ve experienced stress due to news of data breaches, according to a new report on consumer levels of “cyber stress” conducted by Opinion Matters and sponsored by Kaspersky Lab. Researchers polled 2,515 Internet users over the age of 16 to gauge the effects of digital security on their stress levels.

It’s the first time this study has been conducted; however, it follows a gradual and definitive shift in consumer awareness driven by more-frequent reports of cybercrime. The Identity Theft Resource Center says 1,579 data breaches were reported in 2017, marking a 45% increase from the previous year and the highest number since it started tracking this information.

People are thinking more critically about their data and what they can do to protect it. Three-quarters say protecting their devices from cybercrime has caused them stress.

“When cyberattacks and breaches started becoming a regular occurrence in the news, it seemed to be a wake-up call for many consumers to realize that a cybersecurity issue could personally impact them,” says Brian Anderson, vice president at Kaspersky Lab North America.

The turning point, he notes, was the spike in attacks hitting companies people knew and frequented. When large breaches hit Target, Home Depot, and eBay in 2013 and 2014, it made the consequences of poor data security more tangible for consumers affected.

However, the most common sources of stress are not massive one-time breaches, says Anderson. It’s the idea that people have to protect their information all the time because these events could happen at any moment. They understand the real-life effects of a data breach, such as identity theft and monetary loss, and have to protect more devices and accounts.

Respondents ages 25 to 34 were most likely to have had a security issue — virus, ransomware, malicious links, or emails — in the last five years. Nearly 60% of this age group reported one of these problems, compared with 46% of respondents overall. Nearly half of those ages 16 to 24 feel stress over password management, compared with one-quarter of participants older than 55.

“These [younger] age groups faced a comparable amount of cybersecurity issues, but because young people are often managing more passwords and more devices than older generations, cybersecurity is causing them a greater amount of stress,” Anderson explains.

Consumers who have experienced cyberattacks are more likely to worry about them. Of those who reported an issue, one-third agreed they find it stressful to protect all of their devices.

There is a silver lining to the stress. As people become more aware of the need for security, they’re adopting tools like password management software to keep track of their data. Anderson says consumers are becoming more aware of where they share their data, as well as the applications and services that may have more access than is necessary.

Respondents are least willing to share information with social networks (33%), mobile payment services (29%), banking apps (25%), and messaging apps (17%), but they’re also hesitant to share with friends and family. Nearly half (49%) would trust a partner with a username and password; the same amount would share answers for security questions.

That said, greater stress doesn’t necessarily mean people are taking more precautions. Fourteen percent of Americans and 6% of Canadians admit they have experienced four or more cybersecurity issues in the last five years. While they could blame bad luck, researchers say this level of frequency may also indicate failure to adopt security measures.

Kelly Sheridan is the Staff Editor at Dark Reading, where she focuses on cybersecurity news and analysis. She is a business technology journalist who previously reported for InformationWeek, where she covered Microsoft, and Insurance Technology, where she covered financial …

Article source: https://www.darkreading.com/endpoint/breaches-drive-consumer-stress-over-cybersecurity/d/d-id/1331693?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Spring Clean Your Security Systems: 6 Places to Start

The sun is shining and you have an extra kick in your step. Why not use that newfound energy to take care of those bothersome security tasks you’ve put off all winter?

While most people traditionally spend the spring deep-scrubbing their bathrooms, cleaning out their garage, and dumping their hoarded detritus, the melodious chirps of colorful birds and a touch of vitamin D shining down on our pale faces are also good signals for security pros to update and renew their company’s information security systems. Here are six places to start:

1. Problematic Patching
If I have to remind you to use spring cleaning for your normal patching, you’re doing it wrong. Most infosec professionals already have a regular monthly patch cycle for normal desktops and servers, but every network has a few problematic servers or devices that do not get patched regularly. Perhaps these are one-off legacy servers running old operating systems for a custom application or a collection of set-and-forget Internet of Things (IoT) devices that aren’t updated regularly.

Whatever they are, now is a time to take care of them. Check the firmware updates on all hardware devices and bring them up to date. If you have any embarrassingly old servers hanging around, take the time to consider a plan to remove them and replace the old custom apps on them. As always, vulnerability and patch management software make this job easier, but don’t forget that these tools don’t always know about your IoT devices.

2. Password Pruning
If you follow password best practices — long random passwords, with different passwords for each application or system — you probably don’t have to change your passwords all that often. On the other hand, digital spring cleaning is still a good time to consider your passwords and those of other users at your company.

Most security pros probably already have a password manager because there is no other good way for a human to remember hundreds of long, complex passwords. If that’s the case, good news! Changing your passwords is simple. Most of these managers have a feature that will automatically change all the passwords it can at once. If you still use a single password for all of your logins, or rotate between a few different ones, you should change them and consider setting up a password manager. Now that you’ve cleaned up your act, consider spearheading an annual company-wide password update initiative or some form of regular password training at your organization each spring.

3. Pare Down Privileges
Network admins and IT workers should already have a formal system in place for adding accounts and privileges for new employees and, more importantly, a formal HR process for removing all those accounts when they leave. Nonetheless, spring cleaning is a great time to audit these accounts and remove any that are unnecessary.

For example, perhaps you set up a temporary account giving a consultant some privileged access but forgot to remove it. Perhaps an employee with job-related privileges on one set of systems moved to a new role and doesn’t need those privileges any longer. These represent potential weak spots in your organization’s security posture if left unaddressed. Whatever the case, use this time to examine your accounts and individual privileges to make sure you adhere to the principle of least privilege.

4. Dispensable Data
In the buzzword age of big data, businesses feel a need to gather and store every piece of data that could possibly be important, hoping that a data scientist might find a way to correlate it and extract value. But data can also be a liability, especially when it technically belongs to someone else.

Every security-conscious company should have gone through at least one data audit to identify the most important data they need to secure. Spring cleaning is a great opportunity to refresh that audit, with an eye focused on dumping any extraneous junk you don’t really need and that could expose you to extra liability.

5. Awareness
When was your last phishing training? If it’s been more than a year, that’s too long. Maybe it’s time for a refresher course focusing on the latest threat trends. While your employees know about phishing, do they know all the subtleties of modern spearphishing emails? Maybe they know file attachments are bad, but do they still trust Word documents too much? Spring is a perfect time for a quick corporate security awareness session.

6. Perished Policies
Many organizations treat firewalls, next-generation firewalls, and unified threat management (UTM) tools like set-and-forget devices. They establish enough policies to get their business working, and then they don’t look at the systems again for months or years. This can cause problems because your network is more dynamic than you suspect and because the threat landscape constantly evolves. As attack methods change, you can and should tweak your security policies in new ways to increase protections.

Besides that, many administrators add temporary policies for legitimate reasons but then forget to remove them. For instance, a contractor needs to transfer files regularly with a remote cohort at his headquarters. To make things easy, IT spins up a temporary FTP server and punches a hole in their firewall to let the contractors reach it remotely. A month later, when the job is done, the administrator has forgotten about the FTP server and policy. Six months later, the forgotten server hasn’t been patched and a hacker leverages a new exploit on it to gain remote access to the entire virtual infrastructure. Not good.

These human errors are why you should add policy purging to your digital spring cleaning task list. The good news is many firewalls and UTMs have features that will show you which policies you use the most and which have remained unused for weeks or months. These sorts of features can help you quickly eradicate any unnecessary gaps in your security.

In short, the sun’s shining and giving you an extra spring in your step. Use that newfound energy to perform these six tasks, and any other small security chores you’ve put off for too long. By next winter, I’m certain you’ll be happy you did!

Corey Nachreiner regularly contributes to security publications and speaks internationally at leading industry trade shows like RSA. He has written thousands of security alerts and educational articles and is the primary contributor to the WatchGuard Security Center blog, …

Article source: https://www.darkreading.com/endpoint/spring-clean-your-security-systems-6-places-to-start/a/d-id/1331663?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Automation Exacerbates Cybersecurity Skills Gap

Three out of four security pros say the more automated AI products they bring in, the harder it is to find trained staff to run the tools.

As the security industry grapples with the consequences of a constrained supply of experienced cybersecurity talent, many pundits have lauded automation as a way out. But a new survey out today shows that many security professionals are experiencing the opposite effect. The more artificial intelligence (AI)- and machine learning-powered tools they bring in, the more they need experienced staff to deal with those tools. 

Conducted by Ponemon Institute on behalf of DomainTools, the study queried over 600 US cybersecurity professionals on the effects of automation on their staffing situations. The results run counter to the general belief that automation will ameliorate the cybersecurity skills gap.

According to the study, 75% of organizations report that their security team is currently understaffed and the same proportion say they have difficulty attracting qualified candidates. Over four in 10 organizations report that the difficulties they’ve faced with recruiting and retaining employees have led to increased investment in cybersecurity automation tools. However, 76% of respondents report that machine learning and AI tools and services aggravate the problem because they increase the need for more highly skilled IT security staff. And only 15% of organizations report that AI is a dependable and trusted security tool for their organization.

This jibes with what a lot of experienced security practitioners have to say about automation. 

“It is very tempting to think that automation will fix a lot of cybersecurity issues. However, automation mechanisms are worthless without a staff which can smartly leverage them and implement them,” says Frank Downs, senior manager of Cyber Information Security Practices at ISACA. “An organization can purchase the most incredible intrusion detection/prevention system in the world. However, if they don’t have the staff to configure, implement, and manage it — it might as well stay uninstalled.” 

That’s not to say that there’s no value in automation; it’s just that the same principle of “GIGO” applies to cybersecurity automation as it does to any other technical system.

“Automation really helps make the people on the team more effective. There’s no substitute for human flexibility and intuition, so automation lets you take repetitive tasks off the table and enables people do more interesting work,” explains Todd Inskeep, principal for Booz Allen Hamilton and advisory board member for RSA Conference. “That’s important, but one of the first things I learned about computers — ‘GIGO,’ or ‘garbage in, garbage out’ — still applies with automation and machine intelligence.” 

The other issue is that automation tends to follow a maturity path where the most automated systems are never fully up to date with the timeliest threat trends. As a result, there always need to be experienced humans who are adaptable enough to deal with the unknown threats of tomorrow, says Lucas Moody, CISO for Palo Alto Networks.

“If you break it down, automation is about taking care of yesterday’s problems. We are automating what we’ve mastered and what we understand well,” says Moody. “In order to tackle tomorrow’s challenges, we need to hire professionals who are strategic, creative, and adaptable. We’re really looking for those individuals who thrive on change and problem-solving.” 

Ericka Chickowski specializes in coverage of information technology and business innovation. She has focused on information security for the better part of a decade and regularly writes about the security industry as a contributor to Dark Reading.

Article source: https://www.darkreading.com/careers-and-people/automation-exacerbates-cybersecurity-skills-gap/d/d-id/1331697?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Firefox isn’t adding ads, it’s ‘sponsored content’

You know that whole “nothing is free” thing about online content? About how the big internet companies may look like they’re giving away content – we’re not making out checks to Google or Facebook, after all – but they’re making plenty of moolah from advertising and selling our data? We are, as it’s often said, the product.

Mozilla says bunk to all that. We can expect to see sponsored links in the open-source Firefox browser as of next week, but we don’t have to sell our souls to get Mozilla the extra revenue it needs to survive. Mozilla says all the analytics will happen on the client side, keeping users’ data private.

That’s according to a post written by Nate Weiner, the founder and CEO of Pocket. Mozilla acquired the company, which makes the underlying technology, early last year.

Weiner said that we need high-quality content. At its best, the internet makes us laugh, it makes us think, it teaches us, and it brings us new perspectives. Content producers, for better or worse, depend on advertising to bring it to us. But the model just isn’t working anymore, he said, what with user privacy withering away and junk clickbait squeezing out the good stuff:

Unfortunately, today, this advertising model is broken. It doesn’t respect user privacy, it’s not transparent, and it lacks control, all the while starting to move us toward low quality, clickbait content.

Of course, anybody who’s watched the ongoing brawl between content producers, adblockers, adblocker blockers and adblock blocker blockers knows that it’s not pretty out there.

We’ve seen some interesting approaches to trying to fix this mess. The Brave browser is one such. Launched in 2016, the idea is to pay Bitcoin to users who agree to view “clean” ads or to pay sites in exchange for having their ads blocked.

Another approach is Google Chrome’s ad filter that aims to mop up the worst offenders, leaving you with just ads that don’t make you feel like throwing your laptop out of the window.

Using Pocket, Mozilla set out a few months ago to see if it could get personalized, high-quality experiences to users without those users having to fork over their data and privacy. Weiner said the experiment was a success:

We’ve come to accept a premise around advertising today that users need to trade their privacy and data in exchange for personalized, high quality experiences. Our experiments over the last few months have proved that this isn’t true.

Since the start of the year, Mozilla’s been showing some Firefox users links to recommended content on its New Tab page. Some of the recommendations are sponsored, with content producers paying to be included in the list of recommendations. The links were made available in the Nightly and Beta releases as of Monday. In Firefox 60, due to ship on 9 May, the feature will be available to all Firefox users in the US.

You can check out Mozilla’s activity stream on GitHub for details on how the company is protecting privacy while still delivering personalized content.

Mozilla swears it’s not sending users’ specific browsing behavior, or the sites they visit, to any Mozilla server. Rather, data is sent to its servers in the form of discrete HTTPS pings, or messages, whenever users do something on the Activity Stream about:home or about:newtab pages.

We try to minimize the amount and frequency of pings by batching them together. Pings are sent in JSON serialized format.

At Mozilla, we take your privacy very seriously. The Activity Stream page will never send any data that could personally identify you. We do not transmit what you are browsing, searches you perform or any private settings. Activity Stream does not set or send cookies, and uses Transport Layer Security to securely transmit data to Mozilla servers.

Mozilla says that data collected from Activity Stream is retained on Mozilla’s secured servers for 30 days before being rolled up into an anonymous, aggregated format. After that, the raw data will be permanently deleted. Mozilla says it never shares data with any third party. If that’s still too much for some users, it is possible to opt out of Firefox’s data collection altogether.
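The batching approach Mozilla describes can be sketched in a few lines. This is a minimal illustration of the general idea only, not Firefox's actual telemetry code, and the event fields used here are hypothetical:

```python
import json

def batch_pings(events, batch_size=10):
    """Group client-side events into batched JSON pings.

    Instead of sending one ping per interaction, events are grouped
    and serialized together, reducing how often pings go out. The
    event dictionaries are hypothetical, not Firefox's real schema.
    """
    batches = []
    for i in range(0, len(events), batch_size):
        batches.append(json.dumps({"events": events[i:i + batch_size]}))
    return batches

# Example: 25 anonymous click events become 3 serialized pings.
pings = batch_pings([{"event": "CLICK", "source": "TOP_STORIES"}] * 25)
print(len(pings))  # → 3
```

Each ping here is a JSON string ready to be sent over HTTPS, matching the "batching them together ... serialized format" description in the quote above.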


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/VJhSSYZhWKU/

North Korea’s antivirus software whitelisted mystery malware

North Korea’s very own antivirus software has been revealed to be based on a 10-year-old application made by Trend Micro, but with added nasties.

So says Check Point, which was sent a copy of the “SiliVaccine” application and after analysis declared it contained “large chunks of 10+-year-old antivirus engine code belonging to Trend Micro”.

Trend Micro has confirmed that analysis.

Intriguingly, Check Point alleges that SiliVaccine has whitelisted one virus signature that Trend Micro’s products could detect. Just why North Korea’s government wants software that won’t spot some viruses is not hard to guess: a totalitarian dictatorship can only sustain itself with pervasive surveillance and leaving a backdoor that allows viruses in would facilitate just that.

Check Point’s analysis of SiliVaccine found some other oddities, such as the use of Themida and Unopix, “packing” tools commonly used to make reverse engineering difficult. As SiliVaccine has no known legal competitors in the hermit kingdom, the need for such precautions is not obvious. There’s also a home-brew encryption scheme based on SHA1 to protect virus signatures, but with an easy-to-find and simple key that translates from Korean as “Pattern encryption”.

Much of the tool’s code is convoluted; there’s a feature that lists the names of malicious files for no apparent reason, and a driver named to suggest one function but which instead does another. And does it badly, at that.

Check Point received the software from freelance journalist Martyn Williams, who sent what was billed as an installer but was actually a self-extracting WinRAR file. Such files are .exes, and unpack their contents without requiring an extraction program. The file containing SiliVaccine offered an installer for the application plus a patch that turned out to be an installer for the JAKU malware.

While Check Point notes that “attribution is always a difficult task in cyber security” and therefore won’t pin the application’s oddities on North Korea’s government, its researchers did feel safe saying: “What is clear, however, are the shady practices and questionable goals of SiliVaccine’s creators and backers.” ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/05/02/north_korea_silivaccine_av_software_analysis/

AWS sends noise to Signal: You can’t use our servers to beat censors

Amazon has followed Google’s example by lowering the boom on a practice called “domain fronting” that organisations like Signal use to get around government censorship.

As defined by Amazon Web Services, “Domain Fronting is when a non-standard client makes a TLS/SSL connection to a certain name, but then makes a HTTPS request for an unrelated name. For example, the TLS connection may connect to ‘www.example.com’ but then issue a request for ‘www.example.org’.”

Doing so means that if an application like Signal is blocked by government edict, domain fronting makes its traffic appear to originate somewhere legitimate.
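AWS's definition splits neatly into two halves: a name in the cleartext TLS handshake and a different name in the encrypted HTTP request. A minimal sketch of that split, using AWS's own placeholder hostnames (this is an illustration of the concept, not Signal's implementation):

```python
def fronted_request(front_host, real_host, path="/"):
    """Return (sni_name, request_bytes) for a domain-fronted fetch.

    sni_name is what the TLS client would send in the cleartext SNI
    field (all a network observer sees); request_bytes is the HTTP
    request, encrypted in transit, whose Host header names the
    unrelated real target served by the same CDN.
    """
    request = (
        f"GET {path} HTTP/1.1\r\n"
        f"Host: {real_host}\r\n"
        f"Connection: close\r\n\r\n"
    ).encode()
    return front_host, request

# The censor sees example.com in the handshake;
# the CDN routes the request to example.org.
sni, req = fronted_request("www.example.com", "www.example.org")
print(sni)                           # → www.example.com
print(req.decode().splitlines()[1])  # → Host: www.example.org
```

In Python's standard library, `sni` would be passed as `server_hostname` to `ssl.SSLContext.wrap_socket`, which is what places it in the clear on the wire.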

“The idea behind domain fronting was that to block a single site, you’d have to block the rest of the internet as well. In the end, the rest of the internet didn’t like that plan,” wrote Signal founder Moxie Marlinspike, who took the time to explain the concept after his company became the first high-profile target of the policy forbidding domain fronting that AWS announced last week.

Marlinspike also yesterday revealed Amazon’s threat to kick Signal off AWS.

AWS’ post said it doesn’t like domain fronting because it is a domain-impersonation technique with possibly nefarious uses, and therefore a security risk.

In the message sent to Signal, the domain in question is owned by Amazon – Souq.com, a storefront for geographies such as the United Arab Emirates, Egypt, Saudi Arabia, and Kuwait.

The message quotes from the AWS CloudFront terms of service: “You must own or have all necessary rights to use any domain name or SSL certificate that you use in conjunction with Amazon CloudFront”.

Signal had used Google App Engine as its domain front, but the ad giant prohibited the practice in early April. That decision led Signal to repeat the approach on AWS.

Marlinspike’s post explains that merely trying to establish an encrypted connection is enough to draw a censor’s beady eye: “a TLS handshake fully exposes the target hostname in plaintext, since the hostname is included in the SNI header in the clear. This remains the case even in TLS 1.3, and it gives a censor all they need.”

In response to AWS’s accusation, Marlinspike said Signal isn’t impersonating anybody: “Although our interpretation is ultimately not the one that matters, we don’t believe that we are violating the terms they describe: Our CloudFront distribution isn’t using the SSL certificate of any domain but our own,” and “We aren’t falsifying the origin of traffic when our clients connect to CloudFront.”

He didn’t explain what options remain for Signal, but Marlinspike warned that even if a workaround is possible, it won’t happen fast, because Signal has only a small team.

For now, countries that want Signal blocked have their wish. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/05/02/aws_bans_domain_fronting/

Scammers using Google Maps to skirt link-shortener crackdown

Scam sites have been abusing a little-known feature on Google Maps to redirect users to dodgy websites.

This is according to security company Sophos, which says a number of shady pages are being peddled to users via obfuscated Maps links.

The scammers are using the Maps API as a de facto link-shortening service, hiding their pages as redirects within Maps links.

The reason for this is Google’s recent efforts to get rid of its Goo.gl URL-shortening service. The link-shortening site is a favorite for scammers looking to hide the actual address of pages.

“Of course Google doesn’t stand for iffy links,” Sophos says, “so spammy Goo.gl URLs are almost as easy to report as they are to create.”

Without Goo.gl to pick on, scammers are now abusing a loophole in the Maps API that allows redirects to be put into Google Maps URLs. This lets the attackers chain links to their scam pages within a link to Google Maps, essentially creating a more trustworthy-looking URL that users are more likely to follow.

The trick also has the benefit of being harder to catch and shut down than links made with the well-policed Goo.gl service. Because it rides on Google Maps, there’s no reporting structure in place to get the scam links taken down, and the scammers don’t have to go through a Google-owned interface or API to create them.
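Sophos doesn't publish the exact Maps parameter being abused, so the query-parameter name below is hypothetical; but the underlying pattern, a trusted domain whose query string smuggles a second, full URL, is straightforward to screen for with the standard library:

```python
from urllib.parse import urlparse, parse_qs

def embedded_urls(url):
    """Return any full URLs smuggled inside a link's query parameters,
    the open-redirect pattern the scammers rely on."""
    found = []
    for values in parse_qs(urlparse(url).query).values():
        for v in values:
            if urlparse(v).scheme in ("http", "https"):
                found.append(v)
    return found

# 'continue' is a hypothetical parameter name, used only for illustration.
link = "https://www.google.com/maps/place?continue=https://scam.example/landing"
print(embedded_urls(link))  # ['https://scam.example/landing']
```

A mail gateway or link scanner applying this check would flag the Maps link above even though its hostname is google.com.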

This isn’t the first time Google’s URL-managers have been found to be open for abuse. In 2016, researchers disclosed that flaws in Goo.gl, among other link-shorteners, could be exploited to track users and harvest personal information. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/05/01/google_maps_url/

Inside the Two Types of Account Takeover Attacks

There are two types of automated threats that leverage stolen user credentials to target login pages in account takeover attacks.

If your website has a login page, chances are good that it’s targeted in account takeover attacks. Nearly all (96%) websites with login pages see bad-bot traffic and are hit with account takeover attempts, report researchers from the Distil Networks Research Lab.

Login pages are among the most abused Web pages, a finding from the 2017 Bad Bot report that prompted the research team to analyze the anatomy of account takeover attacks in greater depth. They studied data from 600 domains with login pages and pulled a smaller subset of 100 Web pages, those with the largest data sets of bad bot traffic, for closer study.

Account takeover attempts are intended to test credentials for validity. If they’re legitimate, attackers sell the usernames and passwords on the Dark Web or gain account access to pilfer personal or financial information and sell that instead. Alternatively, they could use the account to transfer money, purchase goods or services, or spread disinformation campaigns.

There are two types of account takeover attempts, and they occur at about the same frequency, researchers report. Half are volumetric, meaning the bot floods the login page with credentials in an attempt to verify them as quickly as possible. These “credential-stuffing” attacks are easy to identify because they’re accompanied by a spike in activity: the average credential-stuffing attack involves 35,000 to 50,000 requests and a 500% to 5,000% increase in login page traffic.
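Those numbers suggest a crude but serviceable tripwire: compare current login-page traffic against a rolling baseline and flag anything in the reported band. A minimal sketch, with the 500% threshold taken from the article and everything else (function and variable names, rates) illustrative:

```python
def is_volumetric_spike(baseline_rpm, current_rpm, threshold_pct=500):
    """Flag a credential-stuffing burst: traffic at least `threshold_pct`
    percent above baseline (Distil reports 500%-5,000% jumps)."""
    increase_pct = (current_rpm - baseline_rpm) / baseline_rpm * 100
    return increase_pct >= threshold_pct

# A login page averaging 1,000 requests/min that jumps to 11,000 is a
# 1,000% increase, squarely inside the reported band.
print(is_volumetric_spike(1_000, 11_000))  # True
print(is_volumetric_spike(1_000, 1_500))   # False (only a 50% rise)
```

In practice the baseline would come from a rolling average of legitimate traffic, and failed-login counts are a better signal than raw request volume.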

While they’re easier for businesses to detect, volumetric attempts offer perpetrators immediate gratification. “If you suddenly see a massive increase in failed logins, it’s indicative of a volumetric attack,” says Edward Roberts, Distil’s director of product marketing. “If they can do that quickly and check with various sites across the Web, there’s a benefit.”

Volumetric attempts increase 300% after data breaches, when large numbers of credentials are made readily available to attackers. Bot operators assume two things: that recently stolen credentials will still be active, and that people reuse their credentials across several websites. Researchers detected 17 volumetric credential-stuffing attacks per day. The average number of attacks per organization is two to three per month; however, some sites are hit with as many as 10.

The other half of account takeover attempts are “low and slow,” otherwise known as “credential cracking and stuffing” attacks. In these, bad bots deliver login requests consistently, 24/7. They’re slower-paced, and because of that tougher for businesses to pick up on.

Volumetric attacks usually occur within a set time frame. Low and slow attempts rely on an ongoing stream of malicious requests. They have no distinct beginning or end.
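Catching the low-and-slow variant therefore means watching failure counts over a long horizon rather than looking for spikes. A minimal sketch, with the 24-hour window, the failure threshold, and all names chosen for illustration:

```python
from collections import defaultdict

class SlowDripMonitor:
    """Flag sources that fail logins steadily over a long horizon,
    even though they never produce a traffic spike.
    Thresholds here are illustrative, not Distil's."""

    def __init__(self, window_hours=24, max_failures=20):
        self.window = window_hours * 3600          # seconds
        self.max_failures = max_failures
        self.failures = defaultdict(list)          # source -> [timestamps]

    def record_failure(self, source, ts):
        """Record one failed login; return True if the source is now flagged."""
        self.failures[source].append(ts)
        # Drop events that have aged out of the window.
        self.failures[source] = [t for t in self.failures[source]
                                 if ts - t <= self.window]
        return len(self.failures[source]) > self.max_failures

# One failure every 20 minutes never looks like a spike, but 72 of them
# inside a day crosses the drip threshold.
mon = SlowDripMonitor()
flagged = any(mon.record_failure("198.51.100.7", m * 1200) for m in range(72))
print(flagged)  # True
```

A real deployment would key on device fingerprint rather than bare IP, since sophisticated operators rotate addresses.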

Sites with login pages are likely to get hit, says Roberts, who notes airlines, e-commerce sites, and financial companies are often targeted. Attempts range from simple ones, which generate significant traffic per device, to sophisticated attacks with as few as two requests per device.

“In terms of the difference here, it’s about how they are trying to hide themselves and how they’re trying to be evasive,” he explains. As they get more sophisticated, attackers make a greater effort to fly under organizations’ radar.

Businesses can mitigate simple account takeover attempts by blocking IP addresses, IP organizations, or traffic from specific countries. Moderate attempts can be mitigated by blocking the device fingerprint. Sophisticated attacks are best addressed by using deep interrogation to verify the legitimacy of each request, researchers report.

Researchers noticed a few key trends in how attackers plan account takeover attempts. These attacks usually happen at a predetermined frequency; for example, sites hit with attacks on a Wednesday will also experience their next attack on a Wednesday. Attackers often conduct a “test round” to gauge the effectiveness of their bots ahead of a large attack. Attempts were also more likely to happen on Fridays and Saturdays, when most security teams are offline.

Kelly Sheridan is the Staff Editor at Dark Reading, where she focuses on cybersecurity news and analysis. She is a business technology journalist who previously reported for InformationWeek, where she covered Microsoft, and Insurance Technology, where she covered financial …

Article source: https://www.darkreading.com/vulnerabilities---threats/inside-the-two-types-of-account-takeover-attacks/d/d-id/1331690?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple