
New Mirai Variants Leverage Open Source Project

Aboriginal Linux gives Mirai new cross-platform capabilities – including Android.

Mirai, the IoT botnet responsible for enormous DDoS attacks in 2016, has continued to evolve: it’s now leveraging an open-source project named Aboriginal Linux to make cross-compiling the malicious code easier, more effective, and less prone to error.

Since the Mirai source code was released publicly, the malware has quickly evolved in a number of different directions. The newest variant – discovered by Symantec researchers – includes builds for multiple platforms, including Android, all created using Aboriginal Linux.
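Aboriginal Linux's appeal to malware authors is prosaic: the project ships ready-made cross-compiler toolchains, so a single build machine can emit binaries for many CPU architectures. As a rough sketch of the technique (the toolchain paths, source file, and architecture list below are illustrative, not taken from the actual Mirai samples), building one codebase for several targets can be as simple as:

   $ # one source tree, many targets (all names here are hypothetical)
   $ for ARCH in armv5l armv6l mips x86_64; do
   >     ./cross-compiler-$ARCH/bin/$ARCH-cc -static -o bot.$ARCH bot.c
   > done

Static linking is the key detail: each binary carries its own C library, so it runs on stripped-down IoT firmware with no dependencies to satisfy.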

According to Symantec, the Linux tool makes the versions robust and operational “on a wide variety of devices ranging from routers, IP cameras, connected devices, and even Android devices.”

For more, read here.


Article source: https://www.darkreading.com/attacks-breaches/new-mirai-variants-leverage-open-source-project/d/d-id/1332648?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Deepfakes porn service: Don’t worry, we’ll only use “consenting adults”

At the beginning of the year, much of the internet shuddered at the then-new concept of deepfakes: artificial intelligence (AI)-generated videos that stitched people’s heads – mostly famous ones, with an odd proclivity toward actor Nicolas Cage – into porn videos, or that showed them uttering things they’d never say in public.

Platforms banned them: Reddit, Twitter, Pornhub, and Gfycat, to name a few.

At least one blackmailer stole images and cast the unwilling in nonconsensual, fabricated adult videos before being arrested for it.

The US Defense Advanced Research Projects Agency (DARPA) focused its AI forgery detection work on the issue, citing the potential use of fake images by the country’s adversaries in propaganda or misinformation campaigns. One security researcher brought up the potential for police bodycam footage to be tampered with.

Then again, on the more gleeful, “ka-CHING!!!” side of the deepfakes coin, we have Naughty America.

The porn company last week launched a service that lets customers pay to customize their own deepfakes.

TL;DR: yes, that means you can buy your head stitched onto a porn actor’s body. Or, for that matter, maybe that of your cat, if the cat gives consent. Or, then again, maybe your head stitched onto the bodies of both/all parties in a given video, as suggested by one of our Naked Security staffers who prefers to stay anonymous … while Mark Stockley suggested that next year’s headline will be “Woman sued by cats.”

As well as letting customers swap faces with performers, deepfakery can also change the background of any given video. Of course, that background swapping means that Naughty America can put your favorite porn star anywhere, according to CEO Andreas Hronopoulos:

I can put people in your bedroom.

The Verge reports that simple edits to porn videos will cost just a few hundred dollars, while longer, more complicated changes could run into the thousands.

According to Variety, Naughty America has a script that will ask customers for footage of themselves. The script includes specific instructions for the facial expressions necessary for getting an optimal likeness. (Here’s a hint: don’t forget to blink. DARPA found that the lack of blinking, at least at this stage of the technology’s evolution, is a giveaway.)

What’s less clear is how the company plans to ensure that the video footage submitted by customers comes from consenting adults.

Variety reports that Naughty America will use its compliance department to make sure that “any and all footage used comes from consenting adults.”

Well, that’s pretty darn vague. It ignores the problem of how to determine whether the submitted content has been acquired with the consent of the party depicted. How will Naughty America figure out whether a given clip has been spearphished out of a victim, a la Celebgate?

Let’s hope that it doesn’t boil down to “because the customer said it was them.” Let’s pray that they know full well that on the internet, we are all dogs until proven otherwise.

Let’s also hope that the legal department of a porn company knows, by now, how to properly vet the age of a customer. Kids have enough to worry about when it comes to nonconsensual porn and the extortion demands that come with it – please, Naughty America, don’t give the ranks of underage tormentors another tool to make their demands even more threatening.

The Verge sent more questions about all this on over to Naughty America, but the company hadn’t responded as of Tuesday.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/HcqAQI5scbo/

Using smart meter data constitutes a search, but court allows it anyway

US cities using smart meters narrowly escaped a legal problem this month when a court decided that the benefits of these IoT devices outweighed the privacy issues created by collecting detailed home energy data.

The US Court of Appeals for the Seventh Circuit, a federal court that oversees appeals for Illinois, Indiana and Wisconsin, ruled against a privacy advocacy group called Naperville Smart Meter Awareness (NSMA). The group had sued the Illinois City of Naperville over its smart meter program, arguing that the devices give up too much information about residents.

Utilities have been busy installing smart meters across the US for several years now. The US Energy Information Administration counted 70,833,000 smart meters in the US in 2016, 88% of which were residential installations.

While traditional electricity meters gather energy consumption data in a single figure on a monthly basis, smart meters are far more granular. They collect thousands of readings per month, which provide a clearer profile of when and how residents are using electricity. This lets utilities manage their energy flow and improve grid reliability.

However, this increased visibility brings its own concerns for privacy-conscious consumers. Each home appliance has its own energy load signature, which creates a unique fingerprint and enables whoever collects the data to infer what people are doing at home.

The data collection creates a privacy issue, said the NSMA. Naperville’s meters read energy usage every 15 minutes – roughly 2,900 readings a month – and the city stores the data for up to three years. Privacy advocates argued that people with access to that information could tell when a home is vacant, when people eat and sleep, and what appliances are in the home. It might even be possible to check electric vehicle charging times to infer people’s travel patterns, the group said.

These concerns led the NSMA to claim that smart meter data collection violated the Fourth Amendment of the US Constitution, which prohibits unreasonable searches and seizures.

That data collection isn’t something that people can opt out of. In its decision, filed last week, the court said:

While some cities have allowed residents to decide whether to adopt smart meters, Naperville’s residents have little choice. If they want electricity in their homes, they must buy it from the city’s public utility. And they cannot opt out of the smart-meter program.

It’s a search, but is it fair?

The court asked two questions in assessing NSMA’s appeal: is the collection of data from smart meters a search, and is it reasonable?

It ruled that using technology to peek in detail at goings-on inside people’s homes every 15 minutes does constitute a search, but decided that the search was reasonable.

US law decides whether a search is reasonable by balancing its intrusion on Fourth Amendment rights against how much it serves the government’s interests, the court said. It decided that smart meter data collection is not very intrusive because the data isn’t used to prosecute people.

Conversely, because the collection of smart meter data allows utilities to reduce costs, provide cheaper power to consumers, encourage energy efficiency and increase grid stability, it is very much in the government’s interest and can be allowed.

The court had a caveat, though, and warned the City that the balance could change. It concluded:

We caution, however, that our holding depends on the particular circumstances of this case. Were a city to collect the data at shorter intervals, our conclusion could change. Likewise, our conclusion might change if the data was more easily accessible to law enforcement or other city officials outside the utility.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/1zetgRtCPGw/

Babysitting app suffers ‘temporary data breach’ of 93,000 users

Babysitting-booking app Sitter “temporarily” exposed the personal data of 93,000 account holders, according to a researcher who recently discovered the trove of data using the Shodan Internet of Things (IoT) search engine.

In a LinkedIn post, Bob Diachenko explains how, on August 13, he found the 2GB MongoDB database, which contained phone numbers, addresses, transaction details, phone book contacts, partial credit card numbers, and encrypted account passwords.

Other information included in-app chat and notification history, plus details of which users needed a babysitter at what time and at which address.

Shodan indexed the database a day before Diachenko noticed it, which suggests a short period of exposure – although it’s possible it was left in an unsecured state for longer.
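Shodan finds databases like this because an unauthenticated MongoDB listening on a public interface answers anyone who connects to its default port (27017). As an illustrative check against your own infrastructure (the host address below is a placeholder), the stock mongo shell will happily enumerate databases on such a server:

   $ # with no authentication configured, any stranger can do this
   $ mongo --host 203.0.113.7 --eval 'db.adminCommand({listDatabases: 1})'

If that command returns a database list from across the open internet, you have, in effect, already been breached.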

The positive news: when told of the breach, Sitter reacted quickly and took the database offline. The alternative view is that if the exposure hadn’t been noticed by chance, the data might still be up there, vulnerable to theft or ransom.

According to Sitter:

Sitter has already notified all of its users and partners of the temporary data breach you identified that resulted in the last week in the course of development of certain product enhancements. The security vulnerability was immediately re-secured. Sitter prides itself on trust, openness, and transparency with its users and is committed to maintaining a secure environment for its users.

Sitter can console itself that it’s not alone. Earlier this month, the same researcher discovered another MongoDB database, this time exposing the personal data of 2.3 million Mexican patients from the state of Michoacán.

Before that, in 2017, an attacker started ransoming an astonishing 28,000 unsecured MongoDB databases, receiving Bitcoin payments from at least 20 of the victims.

That too was only noticed when researcher Victor Gevers joined the dots while reporting exposed databases to their owners.

There’s no evidence that anyone other than Diachenko accessed the data in the Sitter incident, so it would seem the company may have got lucky this time.

Once the cybercriminals notice, no breach ever remains “temporary”.



Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/uf4TgA5hdzE/

Facebook’s rating you on how trustworthy you are

Are you trustworthy, or are you just another fake news boil that needs to be lanced?

Facebook is figuring that out, having spent the past year developing a previously unreported trustworthiness rating system for its users. We Facebook users are being assigned a trustworthiness rating between zero and one, according to a Washington Post interview with Tessa Lyons, the product manager in charge of fighting misinformation.

This is part of the ongoing battle Silicon Valley is waging with those who’ve been tinkering with social media platforms, from the Russian actors who littered Twitter with propaganda and let loose armies of automated accounts on both it and Facebook, to the fake-news pushers on both ends of the political spectrum.

Facebook has had a bumpy time of it when it comes to fake news.

In April, Facebook started putting some context around the sources of news stories. That goes for all news sources: the outlets with good reputations, the junk factories, and the junk-churning bot armies making money from it all.

You might also recall that in March 2017, Facebook started slapping “disputed” flags on what its panel of fact-checkers deemed fishy news.

As it happened, these flags just made things worse. They did nothing to stop the spread of fake news, instead only causing traffic to some disputed stories to skyrocket as a backlash to what some groups saw as an attempt to bury “the truth”.

Last month, Facebook threw in the towel on the notion that it’s going to get rid of misinformation. The thinking: it might be raw sewage, but hey, even raw sewage has a right to flow, right? Instead, it says that it’s demoting it: punishment that extends to Pages and domains that repeatedly share bogus news.

It all came to a head at a press event that was supposed to be feel-good PR: a notion that CNN reporter Oliver Darcy skewered by grilling Facebook Head of News Feed John Hegeman about its decision to allow Alex Jones’ conspiracy news site InfoWars on its platform.

How, Darcy asked, can the company claim to be serious about tackling the problem of misinformation online while simultaneously allowing InfoWars to maintain a page with nearly one million followers on its website?

Hegeman’s reply: the company…

…does not take down false news.

But that doesn’t mean that social media platforms aren’t working to analyze account behavior to spot bad actors. As the Washington Post points out, Twitter now factors in the behavior of other accounts in a person’s network when judging whether that person’s tweets should be spread.

Thus, it shouldn’t come as a surprise to learn that Facebook is doing something similar. But just exactly how it’s doing it is – again, no surprise – a mystery. Like all of Facebook’s algorithms – say, the ones that gauge how likely it is we’ll buy stuff, or the ones that try to figure out if we’re using a false identity – the user-trustworthiness one is as opaque as chocolate pudding.

The lack of transparency into how Facebook is judging us doesn’t make it easy for Facebook fact checkers to do their job. One of those fact checkers is First Draft, a research lab within the Harvard Kennedy School that focuses on the impact of misinformation.

Director Claire Wardle told the Washington Post that even though this lack of clarity is tough to deal with, it’s easy to see why Facebook needs to keep its technology close to the vest, given that it can be used to game the platform’s systems:

Not knowing how [Facebook is] judging us is what makes us uncomfortable. But the irony is that they can’t tell us how they are judging us – because if they do, the algorithms that they built will be gamed.

A case in point is the controversy over conservative conspiracy theorist Alex Jones and his InfoWars site, which ultimately wound up with both being banned from Facebook and other social media sites earlier in the month.

It was no clear-cut victory over misinformation, the way that Facebook executives saw it. Rather, they suspected that mass reporting of Jones’s content was part of an effort to game Facebook’s systems.

Lyons told the Washington Post that if people were to report only the posts that were false, her job would be easy. The truth is far more complicated, though. She said that soon after Facebook gave users the ability to report posts they considered to be false, in 2015, she realized that people were flagging posts simply because they didn’t agree with the content.

Those reported posts get forwarded to Facebook’s third-party fact checkers. To use their time efficiently, Lyons’s team needed to figure out whether those who were flagging posts were themselves trustworthy.

One signal that Facebook uses to assess that trustworthiness is how people interact with articles, Lyons said:

For example, if someone previously gave us feedback that an article was false and the article was confirmed false by a fact-checker, then we might weight that person’s future false-news feedback more than someone who indiscriminately provides false-news feedback on lots of articles, including ones that end up being rated as true.

As far as the other signals that Facebook uses to rate us go, the company’s not saying. But fuzzy as the inner workings may be, it’s probably safe to assume that the overall trustworthiness of Facebook content is better off with us being rated than not. Think about it: do you trust the input of outraged grumps who stab at the “report” button just because they don’t agree with something?

No, me neither. And if it takes a trustworthiness rating to demote that kind of behavior, that seems like a fair trade-off.

At least it’s not a trustworthiness score that’s being assigned to us publicly, as if we were products, like the Peeple people proposed a few years back.

Sure, we’re products as far as Facebook is concerned. But at least our score isn’t being stamped on our rumps!


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/eYGtwjIK4Ss/

Wickr Adds New Censorship Circumvention Feature to its Encrypted App

Secure Open Access addresses void created by Google, Amazon decision to disallow domain fronting, company says.

Wickr has added a new Secure Open Access capability to its instant messaging app, which the company says enables encrypted communications that are far more resilient to Internet traffic restrictions and censorship attempts than typical domain-fronting approaches.

The new feature is based on the open source Psiphon Internet censorship circumvention tool developed by the University of Toronto’s Citizen Lab for users of Windows and mobile devices. It uses domain fronting as just one of multiple techniques, including SSH and VPN technology, for directing encrypted traffic around blocking attempts.

“Think of it as a ‘smart VPN’ that relies on an agile and smart access engine that optimizes the Wickr app,” says Michael Hull, president of Psiphon. “In a nutshell, [Secure Open Access] enables Wickr users anywhere in the world — whether business teams or individuals — to stay connected and end-to-end secure.”

Wickr sees Secure Open Access as filling a void that Amazon and Google created earlier this year when they stopped supporting domain fronting on their platforms.

Domain fronting is a technique for hiding traffic to a specific host and service by forwarding it through a proxy domain belonging to Google, Amazon, or another large ISP or content distribution network. Encrypted communications apps like Signal and Telegram, and services like The Onion Router (Tor) that are banned in certain countries, have used Google.com as a domain front for routing traffic to their servers.

A message sent via Signal would look like regular HTTPS traffic to Google.com; the actual destination travelled in the HTTP Host header, inside the encrypted payload, and was therefore invisible to a censor. To block the traffic, a censor would have had to block all traffic to Google.com.
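The shape of the trick is easy to sketch with curl – illustrative only, since the backend name below is hypothetical and, as discussed next, the big providers no longer honour fronted requests:

   $ # The TLS handshake (and SNI) a censor sees names www.google.com;
   $ # the Host header travels only inside the encrypted channel.
   $ curl https://www.google.com/ -H "Host: hidden-backend.example.com"

Google’s and Amazon’s change was, in effect, to stop routing requests whose inner Host header doesn’t match the domain on the outside.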

Many security researchers and privacy rights advocates have touted domain fronting as giving people — especially in oppressed societies — a way to access blocked apps and services. So Google’s and Amazon’s decision to disallow their domains from being used for domain fronting was widely considered as a major setback for Internet privacy and free speech.

The problem with traditional domain fronting is that it typically relies on the infrastructure of a single cloud provider — like a Google or an Amazon — to hide traffic, Hull says.  “This practice inevitably faced restrictions as it gained popularity simply because it put providers’ customers at risk of losing service [and] connectivity as a result,” he says.

Wickr’s Secure Open Access is built to be adaptive and resilient to emerging traffic restrictions, according to the company. Instead of relying on a single cloud provider’s infrastructure, Secure Open Access uses thousands of servers worldwide to enable uninterrupted, end-to-end encrypted messaging, calling, and file and screen sharing.

When a user launches Secure Open Access on his or her mobile device, the client initiates connections with up to 10 different servers simultaneously. The servers are chosen at random from a cached list of servers, using a mix of different protocols, according to Wickr.

The goal behind making multiple simultaneous connections is to minimize wait times in case certain servers or protocols are blocked. Wickr Secure Open Access is also designed to pick the closest data center and to prefer lower-latency direct connections over domain-fronted ones, to speed communications.

“To accomplish what Wickr Secure Open Access does, a user would have to run a few dozens of VPNs,” says Chris Lalonde, Wickr’s chief operating officer. Users would need to test how the VPNs work in a particular location before launching a secure communication application.

“With Wickr, they can now stay connected and continue to do work on any network, all in one app, on any device, by just enabling Secure Open Access feature,” he says.

Lalonde says Wickr’s new feature can help teams and organizations operating in any part of the world to communicate and collaborate securely without fear of interruption. “This capability is designed to mirror today’s global workforce that is traveling, collaborating across different geographies, and needs to protect business IP, sensitive data, and critical enterprise deals from countless threats and data breaches.”



Article source: https://www.darkreading.com/vulnerabilities---threats/wickr-adds-new-censorship-circumvention-feature-to-its-encrypted-app/d/d-id/1332623?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

How an uploaded image could take over your website, and how to stop it

Vulnerability hunter Tavis Ormandy just reported a series of security problems in an application called Ghostscript.

Ormandy works for Google’s Project Zero – he literally finds bugs for a living – and his work is both well known and widely respected…

…but who or what is Ghostscript, and why would someone as skilled as Ormandy feel the need to dig into it?

Well, for many people, Ghostscript is software they’ve never heard of, but probably use or rely upon regularly without even realising it.

Ghostscript is a free, open source implementation of Adobe PostScript, a programming language and ecosystem that powers many printers, and that is the technical underpinning to almost every PDF file out there.

Indeed, if you open a PDF file, or generate one, you’re almost certainly firing up a PostScript runtime environment to execute a PostScript program that describes the document.

Many open source toolkits – image editors, document creators, illustration packages, PDF viewers and more – rely on Ghostscript to do the heavy lifting of text and graphics rendering.

In other words, whether you know it or not, you probably used Ghostscript recently, if not locally on your own computer then remotely on someone else’s servers when you used a cloud service.

Remote code execution and data leakage bugs in Ghostscript are therefore worth knowing about, even if they don’t put you in immediate danger and you have to rely on someone else to sort them out.

Ormandy’s vulnerabilities cover a range of security holes, including the ability to:

  • Run shell commands of your choice.
  • Create files in directories you’re not supposed to access.
  • Delete files even if you only have read permission.
  • Extract data from files for which you have no permission.

Ormandy’s bugs require you to feed an installed Ghostscript engine (known in the trade as an interpreter, because it interprets and makes sense of PostScript code) with a malicious input file of your own devising, so at first sight the bugs don’t really sound too bad.

After all, if you can download an arbitrary file and then get it launched in a fashion of your own choosing, you could already run malware in a wide range of languages.

On a Mac, for example, you could choose by default from preinstalled versions of Perl, Python, Ruby, PHP, Tcl, Bash, AppleScript and more; on Windows you could run code in Visual Basic Script, JavaScript or PowerShell, right off the bat….

…or you could just download or build a compiled malware binary and run it directly.

Nevertheless, even if it’s your own computer and you’re allowed to install new programs, your regular account is supposed to be blocked from reading or deleting files belonging to other users, so bugs of this sort represent Elevation of Privilege (EoP) vulnerabilities.

The bigger problem

But there’s a bigger problem here, caused by the way Ghostscript is often used – it’s not usually treated as a programming language like Perl or Python, but as a mechanism for converting graphics or printing documents.

If someone sends you a document file for printing, or an image file for conversion, you’re supposed to be able to feed it without too much worry into the relevant rendering app for processing – and that app might be Ghostscript.

Ghostscript even has a special setting, activated with the command line option -dSAFER, that acts as a kind of sandbox to limit the damage that a rogue image or document can do, but Ormandy’s latest vulnerabilities can wriggle out from under the SAFER restrictions.
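For context, -dSAFER is just a switch on the Ghostscript command line. A typical invocation for rendering untrusted PostScript to a PNG (the file names here are placeholders) looks like this:

   $ # -dSAFER is supposed to stop the script touching files it shouldn't
   $ gs -dSAFER -sDEVICE=png16m -o output.png input.ps

Ormandy’s findings show that, for now, that switch can’t be relied on to contain hostile input.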

And many servers, notably those running blogs and forums, include and use Ghostscript – whether the company running those services realises it or not.

For example, many blogging and social networking sites allow you to upload your own images, and will automatically rework the files you upload so that they fit the formatting and rendering requirements of the site.

These reformatting changes typically include converting the file to a chosen file format, scaling it to a standard set of sizes, applying colour filters, adding copyright or other messages, compressing it to save space, and more. (The jargon word for this step is transcoding.)

Very often, a ubiquitous command line tool called ImageMagick is used for this purpose, meaning that the ImageMagick utility is regularly expected to ingest and safely process untrusted files uploaded from outside…

…and ImageMagick, by default, uses Ghostscript in its transcoding process if it encounters PostScript content.

Therefore an exploitable security hole in Ghostscript often means a hole in ImageMagick, and a hole in ImageMagick may very well mean an externally exposed hole in your content management system, blogging server or online forum.

What to do?

Ghostscript, it seems, is still working through Ormandy’s latest bugs, so there isn’t an update available yet.

But one of the most likely ways you’d be affected by one of these bugs, at least by remote content uploaded to attack you on purpose, is via ImageMagick.

You can therefore use ImageMagick’s configuration settings to help limit your exposure.

For example, you can turn off ImageMagick’s ability to process PostScript files (and the similar, related, file format known as Encapsulated PostScript) altogether, unless you’re certain you need to handle files of that type.

By default, ImageMagick will render PS and EPS files if presented with PostScript data, but you can change that through policy.xml, one of ImageMagick’s configuration files.

To find out where your policy.xml lives, try this:

   $ identify -list configure | grep CONFIGURE_PATH
   CONFIGURE_PATH /usr/local/etc/ImageMagick-7/
   $

To find out what file formats your ImageMagick supports that you can selectively turn off:

   $ identify -list format
     Format  Mode  Description
     -----------------------------------------------
        3FR  r--   Hasselblad CFV/H3D39II
        3G2  r--   Media Container
        3GP  r--   Media Container
       . . . .
        EPS  rw-   Encapsulated PostScript
       EPS2* -w-   Level II Encapsulated PostScript
       EPS3* -w+   Level III Encapsulated PostScript
       EPSF  rw-   Encapsulated PostScript
       . . . .
        PDF  rw+   Portable Document Format
       PDFA  rw+   Portable Document Archive Format
       . . . .
         PS  rw+   PostScript
        PS2* -w+   Level II PostScript
        PS3* -w+   Level III PostScript
       . . . . 
     YCbCrA* rw+   Raw Y, Cb, Cr, and alpha samples
        YUV* rw-   CCIR 601 4:1:1 or 4:2:2
       . . . .
     * native blob support
     r read support
     w write support
     + support for multiple images
   $

To turn off handling of individual formats, add one or more lines like this to policy.xml:

   <policy domain="coder" rights="none" pattern="{EPS,PS,PDF,PDFA}" />

(Above, we just picked all the apparently PostScript or PDF-related formats that our installation listed as having “read support”.)

A better solution, using an opt-in approach rather than opt-out to reduce your attack surface as much as possible, is something like this:

   <policy domain="coder" rights="none" pattern="*" />
   <policy domain="coder" rights="read|write" pattern="{GIF,JPEG,PNG}" />

The transcoder identification pattern "*" on the first line is a wildcard, meaning it matches anything and therefore turns off everything.

The second line then explicitly turns back on a minimum set of transcoders, based on the file formats you need to support for untrusted uploads.
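Once you’ve edited policy.xml, it’s worth checking that the policy is actually biting. With the restrictions above in place, feeding ImageMagick a PostScript file (a placeholder name below – the exact error text varies between versions) should produce a refusal rather than a rendered image:

   $ convert test.ps test.png
   convert: attempt to perform an operation not allowed by the
   security policy `PS' ...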

What next?

If you’re using Linux, or a package manager such as MacPorts, Homebrew or Fink on a Mac, your system should alert you when an official Ghostscript update is available, so keep your eyes open.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/OzGf5rn8yYM/

When something’s weird in your ImageMagick upload, who ya gonna call? Ghostbusters!

GhostScript’s security sandbox is so weak, website admins, developers, and users should block ImageMagick and other tools from using the software altogether.

Returning from a sabbatical, noted Google Project Zero researcher Tavis Ormandy this week emitted new ways to execute arbitrary code on vulnerable servers and similar machines that process incoming files using GhostScript. He said exploitation is possible against pic wrangling toolkit ImageMagick, file viewer Evince, image editor Gimp, and other programs that pass data to GhostScript to process.

Earlier this week, Ormandy tweeted the bottom line: “This is your annual reminder to disable all the GhostScript coders in policy.xml.” He was referring back to an ImageMagick-related vuln disclosed in April 2017, and the policy.xml configuration file for controlling access to ImageMagick handlers.

Essentially, if you run a website, or app or some other online service, that uses ImageMagick to process user-submitted pictures – such as photos to turn into account profile pics – you should update your policy.xml to block any GhostScript code from running. What can happen is, a miscreant can upload a file – say, myface.png – and embed some PostScript in it that, when passed to a GhostScript handler via ImageMagick, triggers the execution of a smuggled-in command. That would allow scumbags to submit files to run malicious commands and hijack your computer systems.

Blame GhostScript

Crucially, the problem is not within ImageMagick, Evince, GIMP, etc – it’s in the GhostScript backend they rely on. Right now, the best way to protect yourself is to disable any GhostScript handlers.

This CERT advisory also summarizes the issue: “Ghostscript contains multiple -dSAFER sandbox bypass vulnerabilities, which may allow a remote, unauthenticated attacker to execute arbitrary commands on a vulnerable system.”

Ormandy explained the technical details in this mailing list post, which leads with: “I strongly suggest that distributions start disabling PS, EPS, PDF and XPS coders in policy.xml by default.”

According to the Googler, an attacker can get past the sandbox by abusing a broken error handler, /invalidaccess; a bug in setcolor; a type confusion problem because LockDistillerParams isn’t type-checked; and .tempfile permissions were broken at some point, so an attacker can create files at an arbitrary location.

And those were just what Ormandy found with a manual scan of the code. He’s writing a fuzzer for Ghostscript, which should provide plenty of entertainment in the future.

“IMHO, -dSAFER is a fragile security boundary at the moment, and executing untrusted postscript should be discouraged, at least by default”, he added.

The CERT advisory provides an example of how to block GhostScript processing, assuming a Red Hat system. The sysadmin should add the following lines to the policymap section of /etc/ImageMagick/policy.xml:

<policy domain="coder" rights="none" pattern="PS" />
<policy domain="coder" rights="none" pattern="EPS" />
<policy domain="coder" rights="none" pattern="PDF" />
<policy domain="coder" rights="none" pattern="XPS" />

Get busy, sysadmins. As far as we’re aware, no patches are available short of altering your policy.xml file, or removing any code that passes untrusted data to GhostScript. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/08/23/imagemagick_ghostscript/

Whoa, is it Patch Tuesday already? No, just an unexpected critical Photoshop fix

Adobe says there’s a critical flaw in its Photoshop Creative Cloud software for Windows and macOS that can be exploited by malicious files to hijack systems.

The unscheduled update fixes up critical memory corruption bugs discovered by Fortinet’s Kushal Arvind Shah, both of which leave vulnerable systems open to remote code execution (RCE).

Opening a specially crafted file in a vulnerable Photoshop version will trigger the execution of code smuggled into the image. The RCE is “in the context of the current user.” However, as general-purpose workstation software, Photoshop CC is likely to be installed on lots of machines where nobody’s bothered to minimize user privileges.

The affected versions are Photoshop CC 2018 19.1.5 and earlier, and Photoshop CC 2017 18.1.5 and earlier, on Windows and macOS. The updated versions are Photoshop CC 2018 19.1.6 and Photoshop CC 2017 18.1.6. Make sure you run the usual software update routine to pick up the security fixes.

Full details are yet to be released for the two vulnerabilities, given the Common Vulnerabilities and Exposures assignments CVE-2018-12810 and CVE-2018-12811.

The two bugs weren’t in this week’s Patch Tuesday cycle, which covered Flash, Acrobat Reader, Experience Manager, and a separate privilege escalation flaw in Creative Cloud.

In spite of the “critical” rating, Adobe gave the patch its lowest priority, since Photoshop is “a product that has historically not been a target for attackers.”

El Reg reckons “your discretion” would just as well be treated as “do it now.” ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/08/23/adobe_photoshop_critical_patch/

If it doesn’t need to be connected, don’t: Nurse prescribes meds for sickly hospital infosec

BSides Manchester A children’s nurse prescribed hospitals ways to improve their computer security at the BSides conference in Manchester, England, earlier this month.

Jelena Milosevic developed an interest in cybersecurity over the past four years while working as an on-call nurse in several hospitals across the Netherlands, where she said digital security practices were generally poor.

Security and privacy have become increasingly important for hospitals and clinics. Aging systems host troves of personal, medical, and financial information that could easily be monetized in the wrong hands. Obsolete platforms such as Windows XP – used to manage blood fridges and similar tech – as well as the introduction of Internet-of-Things gadgets threaten to expose healthcare facilities to hackers and malware.

Milosevic said hospitals might be more inclined than other organizations to succumb to ransomware, and possibly pay up, due to poor backup practices and the cost of reassembling records.

She added that the full consequences of the WannaCry ransomware outbreak are unknown. This software nasty hit the UK’s National Health Service particularly hard last year, and similar strains of malware, such as Orangeworm, have posed problems for hospitals in Europe.

WannaCry was a wakeup call for health institutions in the UK and beyond. Since the infection, most hospital websites have moved from HTTP to the more secure HTTPS, according to Milosevic – a move that wouldn’t have halted the virus’s spread but is indicative of IT staff taking security more seriously.

[Figure: basic security of hospital websites, 2017 – a graph comparing Dutch and American hospital website security (source: Jelena Milosevic)]
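That particular box is easy to check from the outside: a site that has made the move should redirect plain HTTP to HTTPS. A one-liner does it (the hostname and output below are illustrative):

   $ curl -sI http://hospital.example/ | grep -iE '^(HTTP|Location)'
   HTTP/1.1 301 Moved Permanently
   Location: https://hospital.example/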

Hospitals are being given mixed messages about the security risk posed by internet-connected or network-connected medical kit. Manufacturers tell healthcare pros the equipment should always be connected to some backend, contrary to the advice of security clearing house ICS-CERT and others.

Milosevic criticized hardware makers for shipping IoT healthcare tech with “no patch, no update, no antivirus and no proxy” – in other words, chronically insecure. “Don’t put it on the internet if it doesn’t need to be on the internet,” she said, adding that there was often no medical need for such devices to be connected to the ’net 24/7.

Four in five healthcare institutions have no one responsible for security, she claimed. “The IT department isn’t the security department, but that’s what doctors and nurses think,” Milosevic said. She added that information security in hospitals should be offered through an independent department. Once established, this should offer training to other hospital units and departments.

Security needs to be built from the ground up and supplemented with awareness programmes, she said. Milosevic also argued that in much the same way a doctor needs to know how a body works, medical pros should also know how their computer gear works.

“Healthcare without [basic] security is like surgery without sterile instruments,” Milosevic said.

A video recording of Milosevic’s presentation is available on YouTube.

Milosevic has worked for various hospitals in the Netherlands since 1995 and before that spent 10 years in the intensive care unit at the University Children’s Hospital in Belgrade. For the past four years she has been a member of I Am The Cavalry and Women in Cybersecurity, both community-based infosec advocacy organizations. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/08/23/hospital_infosec_bsides/