STE WILLIAMS

Listening Watch sounds out security idea with websites that listen

Mobile authenticator apps are a great way to improve password security. If only they didn’t slow you down by making you type in those darn numerical codes. Surely, in 2018, there must be a better way?

Two researchers at the University of Alabama at Birmingham think they may have an answer, but it needs a pair of halfway-decent speakers, a phone, and a smartwatch.

Listening Watch, a project based on earlier work by researchers Prakash Shrestha and Nitesh Saxena, uses the power of sound to log you into your favourite websites. There’s a paper describing the concept here.

When logging into websites, two-factor authentication (2FA) offers an extra layer of protection over and above passwords because it checks an additional asset that the user owns before granting access. In some cases, this asset is a separate hardware token. In others, it’s a commonly-owned device like a smart phone. 
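The codes those authenticator apps make you type are typically time-based one-time passwords (TOTP, RFC 6238). As a point of comparison for the friction Listening Watch aims to remove, here is a minimal sketch of the verification maths, using only Python's standard library:

```python
import hashlib
import hmac
import struct

def totp(secret: bytes, for_time: int, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 TOTP: HMAC-SHA1 over the time-step counter, dynamically truncated."""
    counter = struct.pack(">Q", for_time // step)   # 8-byte big-endian time-step counter
    digest = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890", T = 59
totp(b"12345678901234567890", 59)  # → "287082"
```

The site verifies the code the user typed by recomputing it for the current (and usually the adjacent) time window with the shared secret provisioned at enrolment.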

Attackers are always looking for ways to break 2FA. For example, RSA had to replace most of its SecurID tokens in 2011 after someone stole the codes used to initialise each one. NIST deprecated SMS as a 2FA mechanism in June last year after intruders were found stealing people’s phone numbers and using them for false authentication.

2FA is also finicky to use – it adds an extra step to logging in – which is annoying for users. A 2016 study showed that 28% of users don’t use 2FA at all, and six in ten of those who do use it only do so because someone makes them.

An easier way?

Using the Listening Watch method, a user trying to authenticate to a website enters their username and password as usual. The site then sets up two separate conversations. One is with the browser and the other with the user’s smartphone, which is linked to the smartwatch or fitness wearable on their wrist.

The website sends the browser an audio signal with computer-generated speech reciting a random code. At the same time, it also sends a message to the user’s phone to make the wearable device record whatever it hears.

The browser plays the audio aloud, records its own audio and sends it back to the website, which forwards it to the phone. The smartwatch records whatever it hears at the same time, and also sends it to the phone. If the user is wearing the watch, then both the browser and the watch should have heard the same thing.

After receiving both the audio from the website and the audio from the smartwatch, the phone uses speech recognition to extract the spoken code from each. If the codes match, it also compares the two audio signals to see if they are similar. If they are, then the user is close to the browser and the phone tells the site to accept their login request. Otherwise, the login is rejected.
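The phone-side decision described above can be sketched roughly as follows. This is a toy Python illustration, not the researchers' code: it assumes speech recognition has already produced the two codes, and the zero-lag correlation measure and 0.8 threshold are stand-in assumptions rather than the paper's actual similarity test.

```python
import math

def similarity(a, b):
    """Normalized correlation at zero lag between two equal-length audio buffers."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def accept_login(browser_code, watch_code, browser_audio, watch_audio, threshold=0.8):
    """Phone-side decision: the transcribed codes must match AND the raw
    recordings must be similar enough to show both devices heard the same room."""
    if browser_code != watch_code:
        return False  # speech recognition extracted different codes
    return similarity(browser_audio, watch_audio) >= threshold
```

If either check fails – the watch heard a different code, or the waveforms don't line up – the phone tells the site to reject the login.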

This research is a complete redesign of an earlier project called Sound-Proof, which used a similar concept but relied on ambient sound instead of spoken codes. That made it vulnerable to remote attackers who could predict and replicate those sounds, while an attacker close to the user could simply record the same ambient sounds to defeat the system.

The Listening Watch system’s spoken code stops remote attackers because the sounds are unique to the local environment and entirely unpredictable, said the researchers. And because a wearable device typically has a relatively low-resolution, limited-range microphone, its recordings are hard for others to eavesdrop on. There is, however, a danger that smartwatch microphones will become more powerful, which would open the process up to local attack, they admitted.

So, this would be a low-friction 2FA mechanism for websites – just so long as you weren’t listening through headphones when trying to access your bitcoin exchange. It’s a tantalizing idea, but for the time being at least, security-conscious web users will likely still be playing hunt-and-peck with authenticator codes.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/5ea824_SYSo/

Google created “unnecessary risk” for Fortnite users, claims Epic boss

The nay-sayers were right – releasing the Android version of the mega-successful game Fortnite in a way that bypassed Google’s Play Store was a security risk after all.

Publisher Epic Games opened invitations to download the beta version of Fortnite from its website on August 9. Just days later, a Google security researcher identified only as ‘Edward’ published news of a vulnerability in its installer that could make possible what has recently been dubbed the ‘Man-in-the-disk’ (MITD) attack.

This was bad for several reasons, the first being that anyone exploiting it could too easily substitute their malware for the Fortnite Android Package (APK) file on Samsung devices (the game’s exclusive launch partner), without the user being any the wiser.

An alarming possibility, of course, which is why Epic fixed the game by changing the downloader’s storage location from a public to a private area within a day of being told about it, on August 16.

But Epic faced a second problem – Google said it would make the flaw public a week later, on August 23, as mandated by its famously tough disclosure policy.

Epic wasn’t happy, claiming this didn’t allow enough time for all of its Samsung launch and beta users to receive an update.

Tweeted Epic’s CEO and founder, Tim Sweeney, the day after Google made the flaw public:

We asked Google to hold the disclosure until the update was more widely installed. They refused, creating an unnecessary risk for Android users in order to score cheap PR points.

Of course, if Epic had made Fortnite for Android available through the Play Store instead of offering it as a sideloaded app pointing at Epic’s servers, perhaps the vulnerability wouldn’t have existed in the first place.

Finding a flaw in a game looks bad enough but finding a gaping flaw in the software designed to download that game from outside the Play Store looks even worse, even if the flaw was easily fixed.

Fortnite users couldn’t care less where they get the Android app from, but Epic – and Google – do.

As was widely debated in the weeks leading up to the app’s release, hosting Fortnite for Android on Google Play would mean handing over as much as 30% of the proceeds for the privilege.

Given that Fortnite for Apple’s iOS has reportedly been making $27 million per month, hosting the APK on Epic’s servers looked like a great way to cut out the middleman.

Cynics will point out that Google’s Android business model depends in part at least on taking that cut, and losing the biggest games phenomenon of the moment to a direct download was never going to go down well.

However true that might be, sideloading comes with big risks, particularly on versions prior to Android 8.0 (Oreo), which allow downloads from ‘unknown sources’ on a global rather than app-by-app basis.

Malicious apps can exploit this setting to install themselves, including completely fake Android Fortnite apps of the sort found circulating earlier this summer.

Whether Google’s decision to disclose the flaw after a week was justified or not, it’s hard to argue the case that Epic’s distribution model is good for the long-term security of its users.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/FO-4KpMiZ9g/

Tumblr outlaws creepshots and deepfake porn

Back in March, Motherboard found nearly 70 Tumblr blogs dedicated to sharing creepshots: the still shots and videos taken by mouth-breathers who follow women around, generally in public, to grab suggestive images… including those the creeps get when they hide, say, under a clothing rack in a department store, sticking out a camera-holding hand to get upskirt shots of unsuspecting women or girls.

Tumblr had, as Motherboard put it in its headline, a “massive creepshot problem.”

The anonymous tipster who first alerted Motherboard to the swarm of creepshot sharers/uploaders/traders said it was “only the tip of the iceberg”:

There are probably hundreds of these accounts filming in high schools, college campuses, in malls, and on the streets. And Tumblr seems to not care at all about the problem.

Indeed, at the time, Motherboard noted that one of the most popular creepshot Tumblrs had some 11,000 followers, with one of its posts racking up over 53,000 interactions – a count that includes reblogs, where the video or picture appears on a user’s own Tumblr and hence spreads even further. From the looks of it, the people taking the images didn’t refrain from stalking the underaged: some of the images appeared to be of teenagers or even younger girls.

Well, here it is, five months later, and something must have elbowed Tumblr into caring.

On Monday, Tumblr announced that it’s going to explicitly ban non-consensual creepshots, as well as deepfakes: artificial intelligence- (AI-) generated videos in which people’s faces are grafted into porn videos, or in which they appear to utter things they’d never say in public.

The new Community Guidelines will go into effect on 10 September.

On that date, Tumblr says it will strip out any ambiguity from the “zero-tolerance policy on non-consensual sexual images” that was already in place – a policy that didn’t manage to staunch the flood of creepshots.

The simple statement it’s adding (in bold) to its existing policy on harassment, in order “to remove any uncertainty”:

Harassment. Don’t engage in targeted abuse or harassment. Don’t engage in the unwanted sexualization or sexual harassment of others.

As technology comes up with new ways to cast the unwilling into porn videos, so too has Tumblr had to come up with explicit verbiage for its policy. Thus, the term deepfakes has also made its debut in the community guidelines, the company said:

Posting sexually explicit photos of people without their consent was never allowed on Tumblr, but with the invention of deepfakes and the proliferation of non-consensual creepshots, we are updating our Community Guidelines to more clearly address new technologies that can be used to humiliate and threaten other people.

Tumblr is also striking 41 words from its hate speech guidelines: words that had set the bar a little high, stipulating that a post needed to be “especially heinous” to merit reporting. The words that are coming out:

[DELETED: If you encounter negative speech that doesn’t rise to the level of violence or threats of violence, we encourage you to dismantle negative speech through argument rather than censorship. That said, if you encounter anything especially heinous, tell us about it.]

The platform is also outlawing the posting of “gore” that could inspire copycat violence, as well as other content that glorifies or incites violence or its perpetrators:

Not all violence is motivated by racial or ethnic hatred, but the glorification of mass murders like Columbine, Sandy Hook, and Parkland could inspire copycat violence. With that in mind, we’re revising the Community Guidelines on violent content by adding new language to specifically ban the glorification of violent acts or the perpetrators of those acts.

As of Tuesday, there were no creepshot blogs to be found on Tumblr. The only remnant of content of this ilk seems to be allusions to Predditors: the vigilantes who took it upon themselves to dox the creepshotters who were uploading non-consensual images of women to Reddit.

Although discussions about the appropriateness of the Predditors’ actions can still be found on Tumblr, the Predditor blog itself was accidentally shut down, brought back to life in 2012, and has since been snuffed out again.

Given that Tumblr has apparently seen the error of its creepier ways – at one point, one of its blogs featured a creepshot how-to – and that Reddit banned creepshots, we can assume, or at least hope, that users won’t perceive a need for another Predditors-like blog.

That might be pie-in-the-sky thinking, given the way technology keeps coming up with new ways to harass people, a la deepfakes. But here’s hoping that Tumblr manages to make it back to the Shangri-La it says it started out as: the place where people could “take one person’s idea, build on it, and share it as something new” that “transformed Tumblr from a simple blogging site into a place where people were talking, exploring, learning, and growing through reblog chains.”


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/fHxxI9ODyVg/

If you have to simulate a phishing attack on your org, at least try to get something useful from it

Just when it looked as if the US Democratic National Committee (DNC) had finally got one over on the phishing hackers that had been owning it since 2016, the triumph was torn away by a moment of rebellious fakery.

On August 20, DNC security partner Lookout’s machine-learning system spotted a site impersonating the DNC VoteBuilder portal, a red flag that a phishing campaign designed to nab login credentials was afoot.

It hadn’t been up for long, perhaps 30 minutes at most, which meant they’d probably caught it before much or any damage had been done.

With the FBI on the case, and Russia as usual the number-one suspect, politicians railed against the Republicans’ refusal to fund voter protection, with New York Rep. Carolyn B. Maloney tweeting: “Our intel community warned us about this, and now it’s happening. This isn’t ‘fake news’ – it’s a REAL attack on our democracy. We need to act.”

Democracy was under attack all right – by the Michigan Democratic Party, who’d helpfully decided to do a spot of red teaming without telling anyone.

“Cybersecurity experts agree this kind of testing is critical to protecting an organization’s infrastructure, and we will continue to work with our partners, including the DNC, to protect our systems and our democracy,” said an unrepentant Michigan party chair Brandon Dillon.

It was a toss-up as to who was more embarrassed – the DNC for not being able to tell a fake from the real thing or the Michigan Democrats for not realising that confusing its own side wouldn’t be well received.

Where’s the lesson?

Nobody doubts that phishing is ubiquitous. The question is whether and how employees can be trained to resist these attacks by prepping them on common phishing techniques and formats. What’s become abundantly clear is that it should be done by experts who’ve thought through the pitfalls.

For sure, blind phishing simulations – tests where important people are not in on the test – tend to end badly. In 2014, a US Army commander (reported here – subscription required) thought it would be a good idea to send a small group of colleagues a bogus warning that their federal Thrift Savings 401(k) retirement plans had been hacked and they needed to log into their accounts.

Spooked, the recipients forwarded the test email to thousands of others, who flooded the plan’s call centre with enquiries. Ironically, many employees weren’t taken in by the phishing test but thought it would be helpful to tell those who might be.

The secret of blind phishing simulation, then, is good blind phishing simulation, which means following a few rules. The first of these is that running the test should generate useful data, both for the testers and for the people being tested.

Simply proving that some people fall for phishing attacks is an empty discovery because that much is known. The point of simulations should be to reduce the likelihood of this in a way that can be measured over time, which involves giving feedback to targets so they can improve.

A second rule is not to test the wrong part of the system. It’s not clear whether the DNC incidents had got as far as sending fake phishing emails, but that would have been impossible to ascertain once the phishing domains connected to the ruse had been taken offline.
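That first rule – producing measurable data – amounts to little more than tracking per-round click rates so a trend is visible. A minimal sketch (the tuple record format here is hypothetical, not any product's schema):

```python
from collections import defaultdict

def click_rates(results):
    """results: iterable of (round_id, user, clicked) records from a campaign.
    Returns the fraction of targets who clicked, per simulation round, so
    improvement (or the lack of it) is measurable over time."""
    clicks = defaultdict(int)
    totals = defaultdict(int)
    for round_id, _user, clicked in results:
        totals[round_id] += 1
        clicks[round_id] += int(clicked)
    return {r: clicks[r] / totals[r] for r in totals}
```

Feeding each round's rate back to the tested groups, rather than just to management, is what turns the exercise into training instead of a gotcha.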

Now it’s dark

There’s also the contentious possibility that anti-phishing simulation doesn’t work anyway. More than one survey over the years has confirmed that even expert users aware of phishing tricks can find some attacks impossible to spot. That turns phishing simulation into a version of phishing itself – a percentages game in which everyone is susceptible to some extent.

In fairness to the companies that sell anti-phishing systems, none would claim they are enough on their own. Simulation is just another potentially useful layer, to be deployed alongside other protections designed to detect phishing attacks before and just after they reach users’ inboxes.

What has the DNC simulation-gone-awry taught us? That some in the organisation doubted its security enough to feel the need to prove the point. It’s the sort of complicated human problem no amount of tech will ever solve. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/08/29/dnc_red_teaming/

We’re all sick of Fortnite, but the flaw found in its downloader is the latest way to attack Android

A newfound way to hack Android using a technique dubbed “Man-in-the-Disk” is central to the recent security flap about Fortnite on the mobile platform.

Man-in-the-Disk can circumvent sandboxes and infect a smartphone or tablet using shared external storage through a seemingly harmless Android application.

Sandboxing isolates applications from each other. The idea is that even if a malicious application found its way on to an Android device, it wouldn’t be able to steal data associated with other apps.

Check Point researcher Slava Makkaveev explained, during a presentation at the DEF CON hacking jamboree in Las Vegas, how an application with no particularly dangerous or suspicious permissions can escape the sandbox.

The technique – named after the well-known Man-in-the-Middle type of attack – works by abusing calls to read or write to external storage, a routine function of mobile applications.

External storage is also often used to temporarily store data downloaded from the internet. An application may use the area to store supplementary modules that it installs to expand its functionality, such as additional content or updates.


The problem is that any application with read/write access to the external storage can gain access to the files and modify them, adding something malicious. Google has already warned app developers to be wary of malfeasance in this area.

Makkaveev discovered that not all app developers – not even Google’s own, or certain smartphone manufacturers – follow the advice. He demonstrated exploitation of the vulnerability in Google Translate, Yandex.Translate, Google Voice Typing, and Google Text-to-Speech, as well as in system applications from LG and in the Xiaomi browser.

He warned that vulnerable apps are likely numerous, an observation evidenced by events over the last few days.

Google researchers recently discovered that the same Man-in-the-Disk attack can be applied to the Android version of the popular game Fortnite. To download the game, users need to install a helper app first. This, in turn, is supposed to download the game files.

But by using the Man-in-the-Disk attack, a crook can trick the helper into installing a malicious application.

Fortnite’s developer, Epic Games, is aware of this vulnerability and has already issued a new version of the installer. Players should be using version 2.1.0 to stay safe. If you have Fortnite already installed, remove it then reinstall from scratch using the patched version of the software.
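The underlying problem is a time-of-check/time-of-use window on shared storage. The general defence – and reportedly the shape of Epic's fix – is to keep downloads in app-private storage and verify integrity before install. A sketch of the hash-check half, written in Python purely for illustration (Epic's actual installer is an Android app and its internals are not public):

```python
import hashlib
import pathlib

def safe_to_install(apk_path: pathlib.Path, expected_sha256: str) -> bool:
    """Verify a downloaded APK against a digest pinned by the downloader
    *immediately before* handing it to the package installer. Combined with
    downloading into app-private storage (which other apps cannot write to),
    this closes the window in which a Man-in-the-Disk attacker can swap the file."""
    actual = hashlib.sha256(apk_path.read_bytes()).hexdigest()
    return actual == expected_sha256
```

If another app swaps the file after the download completes, the digest no longer matches and the install is refused.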

Epic Games is none too pleased that Google went public with the exposure of Fortnite to this class of vulnerability, as previously reported. Kaspersky Lab CTO Nikita Shvetsov noted on Monday that the flaw stemmed from the same “Man-in-the-Disk” attack some Google apps were revealed as being vulnerable to earlier this month.

Kaspersky Lab’s explanation of the Man-in-the-Disk vulnerability – and how consumers can minimise their exposure to the problem – can be found here. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/08/29/android_external_storage_man_in_the_disk/

Intel Management Engine JTAG flaw proof-of-concept published

The security researchers who found a way to compromise Intel’s Management Engine last year have just released proof-of-concept exploit code for the now-patched vulnerability.

Mark Ermolov and Maxim Goryachy at Positive Technologies have published a detailed walkthrough for accessing an Intel Management Engine (IME) feature known as Joint Test Action Group (JTAG), which provides debugging access to the processor via USB. The PoC incorporates the work of Dmitry Sklyarov, another researcher from the company.

The PoC code doesn’t represent a significant security threat to Intel systems, given that there’s a patch and the requirements for exploitation include physical access via USB. It’s mainly a matter of academic interest to security researchers, though it also serves as a reminder that the IME expands the hardware attack surface.

The IME is a microcontroller designed to work with the Platform Controller Hub chip, alongside integrated peripherals. Running its own MINIX-based microkernel, it oversees much of the data moving between the processor and external devices, and its access to processor data makes it an appealing target.

The disclosure of a vulnerability last year in Intel’s Active Management Technology, a firmware application that runs on the IME, amplified longstanding concerns that Intel’s chip management tech could serve as a backdoor into Intel systems.

In May last year, the Electronic Frontier Foundation asked Intel to provide a way to disable the IME. In August, Positive Technologies revealed that Intel already offered a kill switch to customers with high security requirements.


In September, the researchers let slip that they would be demonstrating an additional IME bug at Black Hat Europe come December. That turned out to be the JTAG exploit.

Intel issued a patch for the JTAG vulnerability (INTEL-SA-00086) last November and updated its fix in February 2018. The flaw allowed the PoC code to activate JTAG for the IME core, thereby letting the attacker run unsigned code. The PoC was developed on a Gigabyte Brix GP-BPCE-3350C, which is a Celeron-based compact PC.

Ermolov and Goryachy recommend that those interested in testing the code do so on a similar box, but note that it should work on other Intel Apollo Lake-based PCs.

Either way, TXE firmware version 3.0.1.1107 is required. So too is a utility called Intel TXE System Tools. Intel doesn’t make its ME/TXE/SPS System Tools available to end users but some of its OEM partners include them with software and driver updates. A special USB 3.0 debugging connector is also necessary, though those who enjoy hacking hardware can make their own by isolating the D+, D-, and Vcc contacts on a USB 3.0 Type A Male to Type A Male cable.

In other words, the exploitation process is rather involved and not for the faint of heart. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/08/29/intel_jtag_flaw/

Voting machine maker claims vote machine hack-fests a ‘green light’ for foreign hackers

Voting machine maker ESS says it did not cooperate with the Voting Village at hacking conference DEF CON because it worried the event posed a national security risk.

This is according to a letter the biz sent to four US senators in response to inquiries about why the manufacturer was dismissive of the show’s village and its warnings of wobbly security in systems that officials use to record, tally, and report votes.

Among the vendors singled out was ESS, sparking Senators Kamala Harris (D-CA), Mark Warner (D-VA), Susan Collins (R-ME) and James Lankford (R-OK) to express concern that ESS wasn’t serious about security.

“We are disheartened that ESS chose to dismiss these demonstrations as unrealistic and that your company is not supportive of independent testing,” the senators wrote in their letter [PDF].

“We believe that independent testing is one of the most effective ways to understand and address potential cybersecurity risks.”

Nothing to see here, move along

Earlier this week, ESS provided the senate with a response letter [PDF] arguing that, while it is happy to work with outside researchers, it feels the DEF CON competition was doing more harm than good.

“All informed observers and participants in protecting America agree that our nation’s critical infrastructure is under attack by nation-states, cybercriminals, and professional and amateur hackers. That’s why forums open to anonymous hackers must be viewed with caution, as they may be a green light for foreign intelligence operatives who attend for purposes of corporate and international espionage,” ESS CEO Tom Burt wrote.

“We believe that exposing technology in these kinds of environments makes hacking elections easier, not harder, and we suspect that our adversaries are paying very close attention.”

Security researchers, however, aren’t buying it. Among those to blast the manufacturer’s response was Voting Village cofounder and University of Pennsylvania professor Matt Blaze, who issued a scathing rebuttal.

Rob Joyce, the former head of the NSA’s elite Tailored Access Operations hacking squad (and noted Christmas light enthusiast) backed Blaze, and expressed support for the hackers whose loyalty was questioned by ESS.

The exchange threatens to overshadow a larger security effort ESS kicked off last week to improve its hardware and system security as well as its reputation in the infosec space by working better with government cybersecurity agencies and private research operations.

This embarrassing exchange is, to say the least, particularly bad timing for the vendor. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/08/28/voting_machine_hacking/

Why Security Needs a Software-Defined Perimeter

Most security teams today still don’t know whether a user at the end of a remote connection is a hacker, spy, fraudster — or even a dog. An SDP can change that.

In 1993, Peter Steiner published a now famous cartoon in which one dog tells another that on the Internet no one knows you’re a dog. Twenty-five years later, that adage is still true. But it’s even worse than that: Many security experts often don’t know whether a “user” is a hacker, spy, or fraudster, either. The ability to verify the human being (or dog) on the other end of a remote connection is critical to security. It’s about time we get it right with user-centric dynamic access controls.


We often hear that the problem of verifying users stems from the original purpose and design of the Internet. The Internet protocols used to send and receive messages do not require users to identify themselves. But the problem also stems from the design of legacy security. Specifically, the tools we use do not focus on the user. For example, firewalls and network access controls are built around network addresses and ports. Even complex next-generation firewalls are built around protocols, not users.

In order to verify the dog — er, user — on the end of a remote connection, we need a security solution that is built around the user. That is, one that makes it impossible for a hacker to impersonate anyone other than the authorized user. We need a software-defined perimeter (SDP). SDP, also known as a “Black Cloud,” is a security approach that evolved from the work done (see pages 28–30 in this PDF) at the Defense Information Systems Agency under the Global Information Grid Black Core Network initiative around 2007.

A software-defined perimeter focuses on user context, not credentials, to grant access to corporate assets. To stop network attacks on application infrastructure, in 2013 the Cloud Security Alliance formed the SDP Workgroup, which developed a clean-sheet approach that combines device authentication, identity-based access, and dynamically provisioned connectivity. The group noted that “While the security components in SDP are common place, the integration of the three components is fairly novel. More importantly, the SDP security model has been shown to stop all forms of network attacks including DDoS, Man-in-the-Middle, Server Query (OWASP10) as well as Advanced Persistent Threat (APT).”

An SDP requires users to present multiple authentication variables, such as time, place, and machine health and configuration to confirm whether users are who they say they are, and whether or not they should be trusted. This context enables organizations to identify an illegitimate user even if that person is in possession of legitimate user credentials. 

Access controls also need to be dynamic to account for risk and privilege escalation. Users interact with systems and applications in real time. Throughout any given session, they can perform any number of transactions of varying risk levels. For example, a user may check email several times, print a confidential document, and update the corporate blog. An SDP continuously monitors context when changes occur related to the user’s behavior or the environment. The system can manage access based on location, time, security posture, and custom attributes.

An SDP also needs to be scriptable so that it can check more than the information on the device. It needs to be able to reach out to collect and analyze other sources of data to provide context and help authorize users. This ensures that even if a legitimate user is attempting to access resources with a new or different device, that enough information can be gathered to authenticate the user and permit access.
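Put concretely, an SDP access decision looks less like a password check and more like an evaluation of context. A toy Python sketch – the attributes, countries, and working-hours rule here are illustrative assumptions, not any vendor's policy schema:

```python
from dataclasses import dataclass

@dataclass
class Context:
    user: str
    device_patched: bool   # posture check collected from the endpoint
    country: str           # derived from network location
    hour: int              # local hour of the access attempt
    mfa_passed: bool

def grant_access(ctx: Context, allowed_countries=frozenset({"US", "GB"})) -> bool:
    """Toy SDP-style decision: identity alone is never sufficient; device
    posture, location, and time all feed the verdict, and any failure denies."""
    if not ctx.mfa_passed or not ctx.device_patched:
        return False
    if ctx.country not in allowed_countries:
        return False
    return 6 <= ctx.hour <= 22   # example working-hours window
```

Because the decision is re-evaluated continuously, stolen credentials alone – the wrong device, the wrong country, the wrong time – no longer open the door.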

Once a user is properly authenticated — that is, we can determine with confidence that the user is Joe Smith from accounting and not a fraudster or a dog — then the SDP creates a secure, encrypted tunnel between the user and the resource(s) to protect the communications channel. In addition, the rest of the network is rendered invisible. By hiding network resources, the SDP reduces the attack surface and eliminates any possibility of the user scanning and moving laterally across the network.

Finally, because of the complexity and size of today’s IT environments, an SDP needs to be scalable and highly dependable. It should be built like the cloud to enable massive scalability, and be distributed and resilient.

The key to securing access is making sure our adversaries can’t simply steal credentials to gain access. We need an SDP to separate authorized users from hackers (and other bad dogs) on the Internet. Access control is a longstanding and essential security function, yet many enterprises struggle to get it right – but they don’t have to. All of these characteristics are available in SDP architecture today.


Black Hat Europe returns to London Dec. 3-6, 2018, with hands-on technical Trainings, cutting-edge Briefings, Arsenal open-source tool demonstrations, top-tier security solutions and service providers in the Business Hall. Click for information on the conference and to register.

Leo Taddeo is responsible for oversight of Cyxtera’s global security operations, investigations and intelligence programs, crisis management, and business continuity processes.

Article source: https://www.darkreading.com/why-security-needs-a-software-defined-perimeter/a/d-id/1332666?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

PCI SSC Releases New Security Tools for Small Businesses

Tool intended to help small businesses understand their risk and how well it is being addressed.

Big data breaches make big news, but small businesses account for 61% of breached organizations, according to Verizon’s “2018 Data Breach Investigations Report.” Now, the Payment Card Industry Security Standards Council (PCI SSC) has released a tool to help them understand their risk and how well it is being addressed.

The PCI Data Security Essential Evaluation Tool provides an online service and set of evaluation forms for small businesses to use in the initial assessment of their payment security situation.

PCI also launched an updated version of the PCI Data Security Essentials Resources for Small Merchants to help smaller businesses understand how to protect themselves and their customers in the face of evolving threats. In addition, the PCI SSC Merchant Resource Page has added six documents, including two new papers, on how to set up systems and deploy infrastructure to safeguard payments and private information.

Read here, here, and here for more.


Dark Reading’s Quick Hits delivers a brief synopsis and summary of the significance of breaking news events. For more information from the original source of the news item, please follow the link provided in this article. View Full Bio

Article source: https://www.darkreading.com/application-security/pci-ssc-releases-new-security-tools-for-small-businesses/d/d-id/1332685?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Fileless Attacks Jump 94% in First Half of 2018

While ransomware is still popular, fileless and PowerShell attacks are the threats to watch this year.

A snapshot of the threat landscape from the first half of 2018 shows fileless and PowerShell attacks are the ones to worry about this year, security analysts report.

Endpoint security firm SentinelOne today published its “H1 2018 Enterprise Risk Index Report,” which shows fileless attacks rose by 94% between January and June. PowerShell attacks spiked from 2.5 attacks per 1,000 endpoints in May 2018 to 5.2 attacks per 1,000 endpoints in June. Ransomware remains popular, ranging from 5.6 to 14.4 attacks per 1,000 endpoints.
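The month-over-month PowerShell figure follows directly from the per-endpoint rates the report cites; a quick arithmetic check:

```python
# Month-over-month change in PowerShell attack rates, using the report's
# figures (attacks per 1,000 endpoints, May vs. June 2018).
may_rate, june_rate = 2.5, 5.2
increase = (june_rate - may_rate) / may_rate * 100
print(f"PowerShell attacks rose {increase:.0f}% from May to June")  # 108%
```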

The rise of fileless and PowerShell attacks wasn’t a major surprise to Aviram Shmueli, SentinelOne’s director of product management and leader of this report, who says the two are especially appealing to threat actors who want to fly under their targets’ radar.

“Fileless attacks are more sophisticated; they don’t leave any trace,” he explains, adding that these types of threats are increasingly common in the endpoint data the firm analyzes. The PowerShell spike “tells us PowerShell [attacks] are here to stay,” Shmueli says. Fileless, lateral movement, and document attacks made up 20% of all attacks analyzed in the report.

Fileless and PowerShell attacks are powerful, and attackers can leverage either to do significant damage, he says. It’s worth noting the two can overlap: PowerShell lets actors access internal components and can be fileless in nature, enabling them to evade detection.
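Defenders commonly hunt for exactly this overlap by flagging PowerShell invocations that carry fileless tradecraft, such as base64-encoded command blobs or in-memory download-and-execute one-liners. A rough illustration in Python — the indicator list is a simplified assumption for demonstration, not a complete detection rule:

```python
import re

# Common red flags in PowerShell command lines associated with fileless
# tradecraft. This list is illustrative, not exhaustive.
SUSPICIOUS_PATTERNS = [
    r"-enc(odedcommand)?\b",  # base64-encoded payload passed on the CLI
    r"downloadstring",        # in-memory download cradle
    r"\biex\b",               # Invoke-Expression: run a string as code
    r"-nop\b",                # -NoProfile, common in attack one-liners
    r"hidden",                # -WindowStyle Hidden
]

def looks_suspicious(cmdline: str) -> bool:
    """Return True if a PowerShell command line matches any red-flag pattern."""
    lowered = cmdline.lower()
    return any(re.search(p, lowered) for p in SUSPICIOUS_PATTERNS)

print(looks_suspicious(
    "powershell -nop -w hidden -enc SQBFAFgA..."))        # True
print(looks_suspicious("powershell Get-ChildItem C:\\"))  # False
```

Real endpoint products do this far more robustly (script block logging, AMSI inspection), but the sketch shows why command-line telemetry alone can surface PowerShell abuse even when no file ever touches disk.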

This report isn’t the only one pointing to the popularity of PowerShell among cyberattackers. Earlier this year, McAfee published findings indicating a 267% spike in fileless malware samples leveraging PowerShell in the fourth quarter of 2017 alone, compared with the same time period one year prior.

When it comes to which threat is easier to execute, Shmueli says PowerShell might be the more accessible option. “PowerShell is already installed on Windows systems, so attackers can rely on the fact that it’s already there without installing anything else,” he adds. “This is why we believe attackers choose to use this vector so frequently.”

However, the simultaneous increase in fileless attacks indicates threat actors are becoming more sophisticated and turning toward advanced forms of cybercrime. It’s becoming less difficult for them to create payloads that won’t get caught, complicating defense for targets.

Researchers investigated the drivers behind the recent spikes in fileless and PowerShell attacks but couldn’t point to a specific reason driving the increase. This seems to be more of an overall growing trend than a spike related to a particular campaign.

While all three threat vectors are popular, Shmueli says companies should be most cautious about fileless campaigns. The 2018 spike was preceded by an incremental rise in fileless malware that is likely to continue.

“I believe that we will see a growing number of fileless attacks and PowerShell attacks,” he adds. As for ransomware, he’s not so sure. “The trend is not so constant,” he admits. SentinelOne will continue to release these types of reports on a quarterly basis.


Kelly Sheridan is the Staff Editor at Dark Reading, where she focuses on cybersecurity news and analysis. She is a business technology journalist who previously reported for InformationWeek, where she covered Microsoft, and Insurance Technology, where she covered financial … View Full Bio

Article source: https://www.darkreading.com/endpoint/fileless-attacks-jump-94--in-first-half-of-2018/d/d-id/1332686?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple