
Proving the Value of Security Awareness with Metrics that ‘Deserve More’

Without metrics that matter to the business, awareness programs will continue to be the bastard child of security.

Awareness programs are a staple of any good security program, yet they rarely get the resources that they require to accomplish their goal. The reason seems to be, “You get the budget that you deserve, not the budget that you need.”

Awareness programs generally run phishing campaigns and push out computer-based training (CBT). Phishing simulations offer some potential demonstration of value, and CBT proves that people viewed the training; there may be some collateral benefit as well. However, the only tangible metrics most awareness programs provide are proof of compliance and, potentially, evidence that people become less susceptible to simulated phishing.

To “deserve more” you need to determine metrics that show true business value. To that end, I’ve broken awareness-relevant metrics into four categories: Compliance, Engagement, Tangible Return on Investment (ROI), and Intangible Benefits.

Compliance Metrics
Compliance metrics involve ensuring that an organization satisfies third-party compliance requirements, which, in general, means ensuring that employees complete awareness training. At best, these metrics justify the minimal budget required to deliver that training.

Engagement Metrics
This is the category of metrics I most commonly see awareness practitioners use. These metrics cover the general use and acceptance of the awareness program's components. Completion metrics, which overlap with compliance metrics, are one form. Sometimes, people are surveyed about how well they liked the materials. Metrics may also track voluntary engagement with supplemental materials, such as when a user voluntarily takes extra awareness training, attends an event, etc.

There are two caveats with engagement metrics: they don't indicate effectiveness, and the more time people spend engaging with awareness materials, the less time they spend performing their jobs.

Tangible Behavioral Improvement and ROI
These are the metrics that show employees are actually changing their security behaviors. In this case, you need to measure actual security behaviors, or indicators of those behaviors. It is irrelevant whether people can recite the criteria for a good password; do they actually use a secure password? It doesn't matter whether people know they should secure sensitive information; do they leave sensitive information vulnerable when they step away from their desks?

Simulated phishing attacks do not demonstrate behavioral change. While simulated phishing attacks could indicate how people might respond to real attacks, it is too easy to manipulate a phishing simulation to return any level of response that you want. A decline in malware incidents on the network, on the other hand, is a relevant metric.

Ideally, you want to attach a financial value to behavioral changes. For example, if malware incidents decrease by 25% and there is an average cost associated with a malware incident, you can calculate an estimated ROI. Likewise, if there are fewer incidents involving compromised credentials, you can calculate an estimated ROI. To do so, you need to identify metrics related to potential awareness failings and track them over time.
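The arithmetic above can be sketched in a few lines. The incident counts, average cost, and program budget below are hypothetical placeholders for illustration, not figures from the article:

```python
# Hedged sketch: estimating awareness-program ROI from a reduction in
# security incidents. All numbers here are hypothetical examples.

def estimated_roi(incidents_before, incidents_after, avg_incident_cost, program_cost):
    """Return (savings, roi_ratio) for a measured drop in incidents."""
    avoided = incidents_before - incidents_after
    savings = avoided * avg_incident_cost
    roi = (savings - program_cost) / program_cost  # standard ROI formula
    return savings, roi

# Example: malware incidents drop 25% (40 -> 30) at a $12,000 average cost
# per incident, against a $60,000 annual awareness budget.
savings, roi = estimated_roi(40, 30, 12_000, 60_000)
print(savings, roi)  # 120000 1.0
```

The same calculation works for any incident category you can tie to awareness failings, such as compromised-credential incidents, as long as you track the counts before and after the program.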

Intangible Benefits
Clearly, you want to demonstrate a financial return; however, there are other potential points of value to an organization. For example, if an awareness program creates goodwill toward the security department, users may be more likely to report incidents and cooperate with other efforts. If users believe the awareness program generates value for them, there may be better employee retention. And if employees are less susceptible to personal attacks, they may spend less time at work mitigating stolen identities.

“Deserve More”
While compliance requirements mean there will always be an awareness program, it is up to the security awareness practitioner to demonstrate that their efforts deserve more than the minimum funding required to achieve compliance. The way to do this is to focus on the periodic collection of metrics that demonstrate an ROI well beyond standard engagement metrics. This is how you deserve more.


Ira Winkler is president of Secure Mentem and author of Advanced Persistent Security.

Article source: https://www.darkreading.com/perimeter/proving-the-value-of-security-awareness-with-metrics-that-deserve-more-/a/d-id/1334739?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Baltimore Email, Other Systems Still Offline from May 7 Ransomware Attack

The city’s mayor says there’s no ‘exact timeline on when all systems will be restored.’

The city of Baltimore’s email system remains down today as it continues its recovery from a massive ransomware attack on May 7 that is under investigation by the FBI. 

Baltimore suffered an attack from the so-called Robbinhood ransomware variant but vowed not to pay the ransom, which has not been made public. As of today, the city was unable to send or receive email messages, and Baltimore Mayor Bernard C. “Jack” Young said in a statement on Friday that it’s unclear just when all of the city’s systems would be available. 

“I am not able to provide you with an exact timeline on when all systems will be restored. Like any large enterprise, we have thousands of systems and applications. Our focus is getting critical services back online, and doing so in a manner that ensures we keep security as one of our top priorities throughout this process. You may see partial services beginning to restore within a matter of weeks, while some of our more intricate systems may take months in the recovery process,” he said.

Some systems are being rebuilt, he said. “We are well into the restorative process, and as I’ve indicated, are cooperating with the FBI on their investigation. Due to that investigation, we are not able to share information about the attack.”

Researchers at Armor, who have studied the attack, confirmed that as of this posting, no monies had been paid to the Bitcoin wallet address used in the city’s ransom note or to the wallet assigned to the City of Greenville, N.C., which was also hit by the same ransomware earlier.

Read more here

Dark Reading’s Quick Hits delivers a brief synopsis and summary of the significance of breaking news events. For more information from the original source of the news item, please follow the link provided in this article.

Article source: https://www.darkreading.com/attacks-breaches/baltimore-email-other-systems-still-offline-from-may-7-ransomware-attack/d/d-id/1334782?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Most hackers for hire are scammers, research shows

Hackers for hire are a bunch of swindlers, according to research published last week by Google and academics from the University of California, San Diego.

The researchers were specifically interested in a segment of black-market services known as hackers for hire: the crooks you send in when you lack the hacking skills to do the job yourself and the morals that whisper in your ear that this is not a nice, or legal, thing to do.

Such services offer targeted attacks that remain a potent threat, the researchers said, due to the fact that they’re so tailored. Think of spearphishing or whaling attacks that are so convincing because they get all the details right, such as forging company invoices or setting up copycat log-in sites that steal account credentials.

That kind of thing takes effort. Fortunately, most hackers for hire aren’t up to the task, to say the least. Many were outright scams – not too surprising – and some wouldn’t even take on the job if it involved attacking Gmail. For those services that did agree to take on the challenge of hacking Gmail accounts, the cost ballooned over the course of two years, from $123 to $384 – with a peak of $461 in February 2018.

Yahoo hacking prices have tracked Google’s, while Facebook and Instagram hacking prices have actually fallen to a current average of $307.

The researchers hypothesize that the price differences for hacking the various email providers and the change in pricing are likely driven by what they call both operational and economic factors: namely, Google and Yahoo have gotten better at protecting email accounts, while prices have increased as the market for a specific service shrinks:

Prices will naturally increase as the market for a specific service shrinks (reducing the ability to amortize sunk costs on back-end infrastructure for evading platform defenses) and also as specific services introduce more, or more effective, protection mechanisms that need to be bypassed (increasing the transactional cost for each hacking attempt).

Overall, hackers for hire are pleasingly incompetent… or frauds

What’s keeping people’s accounts secure is surely aggravating the weasels who want to pay somebody to take them over: namely, the hijacking ecosystem is “far from mature,” the researchers concluded.

They tested it out by setting up bogus online buyer personas with which to approach 27 hacking-for-hire services. The researchers tasked those services with compromising particular victim accounts.

Those supposed “victims” were actually honeypot Gmail accounts operated in coordination with Google.

Only five of the services they contacted delivered on their promise to attack the supposed victims. The rest were scammers, demurred when it came to attacking Gmail accounts, or had lousy customer service, they said:

Just five of the services we contacted delivered on their promise to attack our victim personas. The others declined, saying they could not cover Gmail, or were outright scams. We frequently encountered poor customer service, slow responses, and inaccurate advertisements for pricing.

The other good news: U2F (Universal 2nd Factor) security keys are working, the researchers said:

Further, the current techniques for bypassing 2FA can be mitigated with the adoption of U2F security keys.

… we would be remiss were we not to mention that Google last week got U2F egg on its face when it had to recall its Titan Bluetooth U2F keys after finding a security flaw.

Google has argued that Titan keys are still more secure than relying on just a password for access, and true, an attacker has to be within about 10 meters, has to launch their attack just as you press the button on your Titan key… and needs to know your username and password in advance.

So we’ll grant the researchers that point.

Sum it all up, and the researchers don’t think the hackers-for-hire market is a large-scale threat at this point:

We surmise from our findings, including evidence about the volume of real targets, that the commercial account hijacking market remains quite small and niche. With prices commonly in excess of $300, it does not yet threaten to make targeted attacks a mass market threat.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/njQb43tYQKM/

Don’t break Windows 10 by deleting SID, Microsoft warns

Microsoft has issued a warning to users and admins not to delete a particular sub-type of Windows account security identifier (SID), in case they inadvertently break applications.

It’s not clear what prompted Microsoft to issue the caution for a type of SID that has been part of its OS since Windows 8 and Windows Server 2012, but the implication is that a lack of awareness has been causing support problems.

A bit like the Unix UID, SIDs are a fundamental part of the Windows system for identifying users, accounts, and groups, and for deciding whether one has permission to access the other.

If a Windows user (Alice, let’s say) sets up an account on her computer in her name, Windows identifies the account using a unique SID. Alice can change her account name as often as she wants (to AliceB or even Jeff), but the underlying SID that identifies it to Windows will always stay the same.

The 2012 overhaul expanded SIDs to cover things like file access, drive locations, access to certificates, cameras, removable storage, etc. Each one became a ‘capability’ that a user or application could have, or not have, the rights to access.

According to Microsoft, Windows 10 1809 can use more than 300 of these, one of the most commonly encountered of which looks like this:

S-1-15-3-1024-1065365936-1281604716-3511738428-1654721687-432734479-3232135806-4053264122-3456934681

It’s not hard to see why this might confuse anyone who delves into their Registry using the editor (Start > Run > regedt32.exe), where it appears as ‘account unknown’ with full read access.

After some research, it seems that this might be something Windows itself needs in order to restart after a reboot – a sort of global SID.

That means that anyone who deletes it without understanding its purpose could break Windows itself. As Microsoft’s warning states:

DO NOT DELETE capability SIDS from either the Registry or file system permissions. Removing a capability SID from file system permissions or registry permissions may cause a feature or application to function incorrectly. After you remove a capability SID, you cannot use the UI to add it back.

A further search reveals users asking support forums for advice on this SID, unaware that it is legitimate, plus examples where admins have deleted it and lived to regret the decision.

‘Unfriendly’ names

So how do admins resolve which of these are legitimate SIDs and which might be suspicious?

Microsoft admits that capability SIDs are not ‘friendly’ (i.e. easy to understand), so using these on their own won’t be much help. It even notes:

By design, a capability SID does not resolve to a friendly name.

The answer is that all capability SIDs should appear in the Registry. Open the editor (Start > Run > regedt32.exe) and navigate to the following registry entry:

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\SecurityManager\CapabilityClasses\AllCachedCapabilities

If it doesn’t appear in this list then it warrants further investigation, bearing in mind that it might still be a legitimate third-party capability.
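As a rough illustration, a capability SID can be recognized textually by the shape shown earlier: an S-1-15-3 prefix followed by numeric components. The sketch below is an assumption-laden pattern check, not Microsoft's resolution logic, and it is no substitute for checking the AllCachedCapabilities registry list:

```python
import re

# Hedged sketch: a purely textual check for the capability-SID shape shown
# above (S-1-15-3 followed by numeric sub-authorities). It only mimics the
# prefix pattern; the registry list remains the authoritative reference.

CAPABILITY_SID_RE = re.compile(r"^S-1-15-3(-\d+)+$")

def looks_like_capability_sid(sid: str) -> bool:
    """Return True if the string matches the capability-SID prefix pattern."""
    return bool(CAPABILITY_SID_RE.match(sid))

sid = ("S-1-15-3-1024-1065365936-1281604716-3511738428-1654721687-"
      "432734479-3232135806-4053264122-3456934681")
print(looks_like_capability_sid(sid))             # True
print(looks_like_capability_sid("S-1-5-32-544"))  # False (builtin Administrators)
```

A check like this can help triage long SID strings in permissions dialogs before cross-referencing them against the registry entry above.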

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/pXhiEXNV17Y/

Some Androids don’t call 911 when you tell them to call an ambulance

Somebody’s not breathing. You panic, you grab your phone, and you call for an ambulance.

Or do you?

Unfortunately, if you’re using an Android phone, you might not be. You could instead be calling for, say, medical transportation that isn’t authorized to respond to emergencies.

As the Idaho Statesman reported recently, Android users who use voice commands may tell their smartphones to “call an ambulance,” but that phrase doesn’t trigger all Androids to dial 911, the US emergency number. The newspaper didn’t specify which Android models fail to dial 911.

Tell Siri, however, to call an ambulance, and the voice assistant will dial 911. That’s a relief. But when some Android phones are given that voice command, they instead pull up a list of ambulance companies. Alternatively, they may respond with a Google search that returns, say, a blog post on when it’s appropriate to call an ambulance, the Statesman reports.

Dispatchers for Injury Care EMS – a Boise, Idaho-based company that transports patients in its ambulances, including, for example, from hospitals to nursing homes – told the news outlet that they’ve been getting a steady trickle of calls that were meant to go to 911.

The reason for that may well be that Injury Care EMS is the first company that appears in a Google list of ambulance companies in the Boise area. Injury Care EMS owner Dr. Richard Radnovich and his dispatchers told the Statesman that they’re getting the misplaced calls several times a week.

Rich Wright, an EMT student and the community liaison for Injury Care, told the Statesman that one such recent call was from a mother whose son drank too much. She was trying to get paramedics to help him out, he said:

It was a mom who was panicked, and she was trying to do the best she could to get an ambulance to her son, and we just happened to be the company that her phone had dialed.

Dispatchers are telling such callers that they need to hang up and dial 911, but even the few seconds it takes to tell them that, and for the callers to hang up and call the right number, eats up precious time during an emergency. It takes up even more time if the caller is confused and the dispatcher needs to explain it more thoroughly.

Life-saver Siri

We’ve seen multiple instances of Siri being used to call emergency services and then being credited for saving people’s lives, all because precious time was saved when getting medical attention to people in need.

There was one such case in 2017, when a 4-year-old saved his mother’s life by telling Siri to please dial 999 – the British emergency services number – to “save Mummy’s life.”

A year before that, an Australian mother, rushing to the nursery when a baby monitor showed her 1-year-old had stopped breathing, dropped her phone while she was turning on the light. She still managed to tell Siri to call for help while she performed CPR. Both she and her husband credited the few precious seconds that Siri gave them for potentially making all the difference.

The outcome of that particular story is one of the upsides of the fact that then-recent iPhones picked up the ability to always be listening for commands. That feature came about in iOS 9, when Apple enabled activation of the built-in personal assistant at the sound of your voice, rather than waiting for you to hold down the Home button.

A question of public safety

Those are some of the ways in which Siri has been credited with saving lives. Google’s voice assistant? Not so much. At least, it hasn’t featured in headlines about saving mummies or babies, though that certainly doesn’t mean it hasn’t happened.

At any rate, Radnovich reached out to the Statesman because he sees the issue as a question of public safety. He also reached out to Google, but neither he nor the newspaper got much satisfaction out of the company.

From an email sent by a Google spokeswoman to the Statesman:

The supported query for the Google Assistant is ‘Hey Google, call 911.’ This will trigger the Assistant to call 911. Asking the Assistant to ‘call an ambulance’ is not currently supported and we don’t encourage use of that voice command.

OK… so, can’t Google just, like, rewrite the code so that the “call an ambulance” voice command triggers a call to 911, as Wright suggests?

Sorry, Google, but your failure to do so does not compute.

Android users, we can’t tell you which models call 911 when you ask for an ambulance and which don’t. So in lieu of Google changing things around so that the voice command triggers a 911 call, please try to remember that in the US, it’s safest to dial 911, or to tell the Google voice assistant to dial 911, not to ask for an ambulance.

Heaven knows what you’ll get if you don’t.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/Ec7_HgYAkrU/

Cache of 49 million Instagram records found online

A security researcher has discovered a massive cache of data for millions of Instagram accounts, publicly accessible for everyone to see. The cache included sensitive information that would be useful to cyberstalkers, among others.

A security researcher calling themselves anurag sen on Twitter discovered the database, hosted on Amazon Web Services. It contained over 49 million records when discovered and was still growing before it was deleted.

The Instagram data included user bios, profile pictures, follower counts, and locations – information that is already viewable online. What’s more puzzling is that it also contained the email address and telephone number used to set up each account, according to TechCrunch, which broke the story.

Reporters identified the owner of the database as Mumbai-based social media company Chtrbox. It pays social media influencers to publish sponsored content through their accounts. The database has since disappeared from Amazon.

Response from Chtrbox

Chtrbox took issue with press coverage of the leaked records, sending Naked Security the following statement:

The reports on a leak of private data are inaccurate. A particular database for limited influencers was inadvertently exposed for approximately 72 hours. This database did not include any sensitive personal data and only contained information available from the public domain, or self reported by influencers.

We would also like to affirm that no personal data has been sourced through unethical means by Chtrbox. Our database is for internal research use only, we have never sold individual data or our database, and we have never purchased hacked-data resulting from social media platform breaches. Our use of our database is limited to help our team connect with the right influencers to support influencers to monetize their online presence, and help brands create great content.

How might someone compile a massive database of Instagram information?

The company wouldn’t answer any more questions, so it’s difficult to know for sure. User names, profile shots, and follower numbers are publicly available and could be gathered by screen scraping. Screen scrapers use automated scripts to visit websites and copy the information they find there.

Companies use scraped data for all kinds of purposes, such as price comparison and sentiment analysis. Many publishers consider scraping malicious and try to block it, because the scrapers are using their proprietary data and draining their server resources.
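To make "screen scraping" concrete, here is a minimal sketch using Python's standard-library HTML parser on an invented snippet. The markup and class names are hypothetical, real sites differ, and many forbid scraping in their terms of service:

```python
from html.parser import HTMLParser

# Hedged sketch of what a screen scraper does: pull structured fields out of
# rendered HTML. The snippet and class names below are invented for
# illustration only.

SAMPLE_HTML = """
<div class="profile">
  <span class="username">jane_doe</span>
  <span class="followers">12840</span>
</div>
"""

class ProfileScraper(HTMLParser):
    """Collect the text of elements whose class names we care about."""

    def __init__(self):
        super().__init__()
        self._field = None   # class name of the element we are inside
        self.data = {}       # scraped field -> value

    def handle_starttag(self, tag, attrs):
        cls = dict(attrs).get("class")
        if cls in ("username", "followers"):
            self._field = cls

    def handle_data(self, data):
        if self._field and data.strip():
            self.data[self._field] = data.strip()
            self._field = None

scraper = ProfileScraper()
scraper.feed(SAMPLE_HTML)
print(scraper.data)  # {'username': 'jane_doe', 'followers': '12840'}
```

A real scraper would fetch pages over HTTP and loop over many profiles, which is exactly the automated, high-volume behavior publishers try to detect and block.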

We’ve seen people scraping Instagram before. Redditors attempted to archive every image from the site that they could, for kicks.

But it can get you into trouble. Authorities in Nova Scotia, Canada, arrested a 19-year-old for scraping around 7,000 freedom-of-information releases from a public website there, calling him a hacker. They subsequently dropped the charges.

What isn’t typically public is the phone number and email address used to create an account, which TechCrunch says were included with some records. Facebook used to make this information available via the Instagram API, even for accounts that didn’t publicly list it. It had to turn off that feature in September 2017 after it found people downloading celebrity contact details.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/MpQmM_HnwlM/

What You Need to Know About Zero Trust Security

The zero trust model might be the answer to a world in which perimeters are made to be breached. Is it right for your organization?

If your network has a perimeter, it will someday be breached. That’s both the lesson the “real world” works so hard to teach and the premise behind a key security model: zero trust. 

“Don’t trust, and verify” might be a nutshell description of the zero trust model — “don’t trust” because no user or endpoint within the network is considered intrinsically secure, and “verify” because each user and endpoint accessing any resources of the network must authenticate and be verified at every point, not just at the perimeter or large network segment boundaries.

This often-repeated authentication throughout the network and application infrastructure relies on the concept of “microsegmentation,” in which boundaries are defined around individual applications and logical network segments. This kind of frequent checkpoint can go a long way toward putting an end to lateral infection in malware outbreaks, and it doesn’t have to be as cumbersome for users as it sounds — as long as technology is used to deal with some of the logins and authentications along the way.
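The "verify at every point" idea can be sketched as a default-deny check that runs on every resource access, rather than once at the perimeter. The policy table, user names, and segment names below are hypothetical:

```python
# Hedged sketch of zero trust's "verify everywhere" idea: every access to a
# microsegment re-checks identity and policy, trusting nothing about where
# the request came from. Policy data and names are invented for illustration.

POLICY = {
    # (user, segment) pairs that are explicitly allowed; anything else is denied.
    ("alice", "payroll-app"): True,
    ("alice", "hr-db"): True,
    ("bob", "payroll-app"): True,
}

def verify(user: str, token_valid: bool, segment: str) -> bool:
    """Authenticate, then authorize, on EVERY request, not just at the edge."""
    if not token_valid:                        # authentication check
        return False
    return POLICY.get((user, segment), False)  # default-deny authorization

print(verify("alice", True, "hr-db"))          # True: allowed by policy
print(verify("bob", True, "hr-db"))            # False: no policy entry
print(verify("alice", False, "payroll-app"))   # False: invalid token
```

In practice the policy lookup is handled by an identity provider and per-segment gateways, so users are not prompted for credentials at every hop even though each request is verified.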

While the concept behind the zero trust model is simple, implementation can be anything but. Before a company decides to invest in the technology and processes, it should understand what is involved in the model and its application. Dark Reading has identified seven issues to resolve before launching into a zero trust environment. 

If you have helped your organization move to a zero trust environment, we’d like to hear about your experience. Do you agree with our list? Did you find other issues more important? Let us know in the comments, below.


Curtis Franklin Jr. is Senior Editor at Dark Reading. In this role he focuses on product and technology coverage for the publication. In addition he works on audio and video programming for Dark Reading and contributes to activities at Interop ITX, Black Hat, INsecurity, and …

Article source: https://www.darkreading.com/vulnerabilities---threats/what-you-need-to-know-about-zero-trust-security/d/d-id/1334751?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Learn to Hack Non-Competes & Sell 0-Days at Black Hat USA

Plus, hear from key figures about the history and the enduring influence of The Cult of the Dead Cow this August in Las Vegas.

The cybersecurity community is larger and more vibrant than ever, and Black Hat USA is the place to be if you want to learn all about it while you’re right in the thick of it.

There are some great opportunities among the crop of new Briefings recently confirmed for the Black Hat USA 2019 community track, which aims to provide a forum for idea sharing and discussion on relevant issues.

Selling 0-Days to Governments and Offensive Security Companies promises to give you a fascinating look inside a community which few understand: those in the business of selling 0-day vulnerabilities. In this 50-minute Briefing you’ll learn about a vulnerability brokerage company called Q-recon and get an inside look at how this market works, from the perspectives of researchers, brokers, and clients. You’ll also walk away with a better understanding of who’s selling 0-days and to whom, what the process is like, and how to sell 0-days yourself!

Hacking Your Non-Compete is a must-see if you’ve ever contemplated leaving one company to join another. In the course of this 50-minute Briefing you’ll get a practical rundown of the working details of non-compete agreements, operating agreements between tech company founders, and what to do when something goes wrong.

You’ll study real cases involving competing with a former employer, learn about soliciting work from a current client at your new company, and find out how to protect intellectual property you bring to a new employer. You’ll also hear from a computer forensics investigator on where people typically go wrong when transitioning from one employer to the next, and from a technology and intellectual property attorney on the real legal outcomes of those cases.

Making Big Things Better the Dead Cow Way is a unique opportunity for Black Hat USA attendees to hear, firsthand, how The Cult of the Dead Cow shaped the culture and community of the entire security industry. In this session, three key figures from the 35-year-old group’s history — Mudge Zatko, Chris Rioux, and Deth Vegetable — will discuss the cDc’s evolution from teenage misfits into industry leaders, its many contributions, and the enduring lessons for other hackers out to make a difference.

They will be questioned by Joseph Menn, author of “Cult of the Dead Cow: How the Original Hacking Supergroup Might Just Save the World,” which will be published June 4. Notably, cDc Minister of Propaganda Deth Veggie will be appearing for the first time under his real name to discuss the group’s formative years, how he engaged with the media for fame and infamy, and the current presidential bid of group alumnus Beto O’Rourke.

For more information about these Briefings and many more check out the Black Hat USA Briefings page, which is regularly updated with new content as we get closer to the event.

Article source: https://www.darkreading.com/black-hat/learn-to-hack-non-competes-and-sell-0-days-at-black-hat-usa/d/d-id/1334776?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Consumer IoT Devices Are Compromising Enterprise Networks

While IoT devices continue to multiply, the latest studies show a dangerous lack of visibility into those connected to enterprise networks.

Consumer-grade Internet of Things (IoT) devices continue to multiply within enterprise networks, according to a new report out today that shows these IoT devices open up organizations to a wide range of attacks. 

With data pulled from more than 1,000 enterprise organizations running one or more IoT devices on their networks, the “2019 IoT Threats Report” study was conducted by researchers at Zscaler ThreatLabZ. Their goal was to survey the IoT attack surface within typical enterprises by looking at IoT device footprints over a one-month period. The study found that the organizations examined were running 270 different IoT device profiles from 153 different IoT manufacturers. All told, these devices generated 56 million device transactions over the course of a single month.

For the most part, all of that IoT data is flying around in the clear. Researchers found that 91.5% of IoT transactions are conducted over a plaintext channel, and that a scant 18% of the IoT devices observed use SSL exclusively to communicate in enterprise settings.

That low level of encryption should come as no surprise, considering how many consumer-class devices were represented in the mix of IoT devices found in these business environments. Zscaler reports that the top four IoT devices most often seen in the study were set-top boxes, smart TVs, smart watches, and media players. The study shows that in some ways, the IoT phenomenon is just another cycle of the BYOD challenges that security teams were first forced to face a decade ago during the early days of the smartphone boom.

“Many of the devices are employee-owned, and this is just one of the reasons they are a security concern,” the report explained.

One of the other big concerns is the high use of default and hard-coded passwords present in IoT devices — a favorite weakness among the most common malware families targeting IoT devices, which included Mirai, Gafgyt, and Hakai. The report said Zscaler blocked about 6,000 malicious transactions on devices during the study period.

“Often, the IoT malware payloads contain a list of known default username/password names, which, among other things, enables one infected IoT device to infect another,” the report noted. It explained that Mirai, in particular, also favored leveraging vulnerabilities in IoT management frameworks that could help attackers achieve remote code execution.
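The defensive flip side of those default-credential lists can be sketched as a simple audit of your own device inventory against known factory defaults. The credential list and device names below are illustrative, not drawn from the report:

```python
# Hedged sketch: flag devices in an inventory that still use factory-default
# credentials, the same kind of list IoT malware ships with. The defaults
# and the inventory below are invented examples.

KNOWN_DEFAULTS = {("admin", "admin"), ("root", "root"), ("admin", "1234")}

def flag_default_creds(devices):
    """Return names of devices whose configured login is a known default."""
    return [name for name, user, pw in devices if (user, pw) in KNOWN_DEFAULTS]

inventory = [
    ("lobby-tv", "admin", "admin"),
    ("cam-03", "ops", "S3cure!pass"),
    ("settop-1", "root", "root"),
]
print(flag_default_creds(inventory))  # ['lobby-tv', 'settop-1']
```

Real IoT security tools do essentially this at network scale, which is why changing default passwords remains one of the cheapest mitigations against families like Mirai.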

Similar to those heady early days of smartphone proliferation, enterprises report extremely low visibility into IoT device prevalence and activity on their networks. A study released by the Ponemon Institute earlier this month showed that only 5% of organizations say they keep an inventory of all managed IoT devices. What’s more, more than half of organizations do not classify risk from IoT devices based on their functionality or the type of data the devices process or have access to. Much of this lack of governance boils down to visibility gaps: the Ponemon report found that 49% of enterprises do not regularly scan for IoT devices in the workplace, and only 8% say they can scan for IoT devices in real time.

The good news in all of this is that many enterprises are well aware of this IoT security visibility gap and are working toward a solution. A study released yesterday by IDG and Pulse Secure showed that 46% of enterprises say that enhancing IoT discovery, isolation, and access control is a top IT priority for 2019.


Ericka Chickowski specializes in coverage of information technology and business innovation. She has focused on information security for the better part of a decade and regularly writes about the security industry as a contributor to Dark Reading.

Article source: https://www.darkreading.com/iot/consumer-iot-devices-are-compromising-enterprise-networks/d/d-id/1334777?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Bug-hunter reveals another ‘make me admin’ Windows 10 zero-day – and vows: ‘There’s more where that came from’

A bug-hunter who previously disclosed Windows security flaws has publicly revealed another zero-day vulnerability in Microsoft’s latest operating systems.

The discovered hole can be exploited by malware and rogue logged-in users to gain system-level privileges on Windows 10 and recent Server releases, allowing them to gain full control of the machine. No patch exists for this bug, details and exploit code for which were shared online on Tuesday for anyone to use and abuse.

The flaw was uncovered, and revealed on Microsoft-owned GitHub, funnily enough, by a pseudonymous netizen going by the handle SandboxEscaper. She has previously dropped Windows zero-days that can be exploited to delete or tamper with operating system components, elevate local privileges, and so on.

This latest one works by abusing Windows’ schtasks tool, designed to run programs at scheduled times, along with quirks in the operating system.

It appears the exploit code imports a legacy job file into the Windows Task Scheduler using schtasks, creating a new task, and then deletes that new task’s file from the Windows folder. Next, it creates a hard filesystem link pointing from where the new task’s file was created to pci.sys, one of Windows’ kernel-level driver files, and then runs the same schtasks command again. This clobbers pci.sys’s access permissions so that it can be modified and overwritten by the user, thus opening the door to privileged code execution.

The exploit, as implemented, appears to need a valid username and password combination on the machine to proceed. It can be tweaked and rebuilt from its source code to target system files other than pci.sys.
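The hard-link step matters because a hard link is a second directory name for the same underlying file, so a permission change made through one name applies to the other. A benign sketch of that semantic using POSIX calls (the exploit abuses the Windows equivalent, CreateHardLink, against pci.sys):

```python
import os
import stat
import tempfile

# Hedged sketch of hard-link semantics: two names, one file, shared
# permissions. This is a harmless demonstration with throwaway files,
# not exploit code; the Windows bug abuses the same property.

with tempfile.TemporaryDirectory() as d:
    target = os.path.join(d, "target.sys")   # stand-in for a protected file
    link = os.path.join(d, "task-file")      # stand-in for the task file path
    with open(target, "w") as f:
        f.write("driver bytes")

    os.link(target, link)                    # hard link: same inode, two names
    assert os.stat(target).st_ino == os.stat(link).st_ino

    os.chmod(link, 0o666)                    # change permissions via the link...
    mode = stat.S_IMODE(os.stat(target).st_mode)
    print(oct(mode))  # 0o666 -- ...and the target's permissions changed too
```

In the Windows case, schtasks rewrites the permissions of the file the task name points at; because that name is now a hard link to pci.sys, the driver file itself ends up writable by the user.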

Will Dormann, a vulnerability analyst at the CERT Coordination Center, part of the US government-funded Software Engineering Institute, confirmed the exploit works against fully patched, up-to-date 32- and 64-bit versions of Windows 10, as well as Windows Server 2016 and 2019. Windows 8 and 7 appear unaffected by the ‘sploit as it currently stands.

Here’s a video of the proof-of-concept attack in action:

To be generous to Microsoft, privilege escalation flaws are a dime a dozen in Windows: the software giant patches them every month in its operating system.

And it will likely be patching more still – SandboxEscaper apparently has more zero-days up her sleeve aside from this latest vulnerability: “I have four more unpatched bugs where that one came from. Three LPEs [local privilege escalations], all gaining code exec as system, not lame delete bugs or whatever, and one sandbox escape,” she boasted on Tuesday.

She’s also rather peeved at the West and society in general, and hopes to sell some of her exploits to non-Western miscreants, though she did not specify a currency: “If any non-western people want to buy LPEs, let me know. (Windows LPE only, not doing any other research nor interested in doing so). Won’t sell for less then 60k for an LPE. I don’t owe society a single thing. Just want to get rich and give you fucktards in the West the middle finger.”

The researcher earlier waxed lyrical about exploring trails in northern England while complaining that “human society deeply disgusts me.”

A spokesperson for Microsoft could not be reached for comment. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/05/22/windows_zero_day/