
How many ways can a PDF mess up your PC? 47 in this Adobe update alone

Adobe has posted security updates for Acrobat, Reader, and Photoshop, many of them critical fixes.

The developer says the Acrobat and Reader update will address a total of 47 CVE-listed vulnerabilities, including two dozen remote code execution flaws in the PDF readers. Adobe notes that none of the bugs are being actively targeted yet.


Of those 47 CVE entries, 13 are for use-after-free remote code execution bugs, while another seven allow remote code execution via heap overflow errors. The remaining remote code execution vulnerabilities are a double free error (CVE-2018-4990), an out-of-bounds write error (CVE-2018-4950), a type confusion error (CVE-2018-4953), and an untrusted pointer dereference (CVE-2018-4987).

Another 19 of the patched flaws are information disclosure bugs via out-of-bounds read errors, while two more (CVE-2018-4994, CVE-2018-4979) are security bypass vulnerabilities. The final two information disclosure flaws stem from NTLM SSO hash theft (CVE-2018-4993) and a memory corruption error (CVE-2018-4965).

For Acrobat and Reader DC, the updated version is 2018.011.20040, while Acrobat 2017 and Acrobat Reader DC 2017 are patched in version 2017.011.30080. The “Classic 2015” versions of Acrobat Reader DC and Acrobat DC are patched in update 2015.006.30418.

Photoshop, meanwhile, has been updated to patch CVE-2018-4946, a remote code execution flaw due to an out-of-bounds write error. Discovery of the flaw was credited to researcher Giwan Go, who reported it via Trend Micro’s Zero Day Initiative.

Those running Photoshop CC 2018 (both the Mac and Windows versions) will want to install version 19.1.4, while those using Photoshop CC 2017 will want to download the 18.1.4 release.

This latest batch of Adobe updates comes less than a week after the vendor kicked out a scheduled set of Flash fixes to coincide with Patch Tuesday. That release also included fixes for Creative Cloud on both Mac and Windows. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/05/14/adobe_critical_fixes/

Why Enterprises Can’t Ignore Third-Party IoT-Related Risks

There’s a major disconnect between Internet of Things governance and risk management, according to a new report. Follow these five steps to address the risks.

The Internet of Things (IoT) is one of the greatest technological advancements of the last decade, so it’s no wonder that more than 8.4 billion IoT devices are already in use today and that the market is expected to grow to 20.4 billion devices by 2020.

According to a new report by the Ponemon Institute and Shared Assessments, “The Internet of Things (IoT): A New Era of Third Party Risk,” the average workplace has approximately 16,000 IoT devices connected to its network. Given the prevalence of IoT adoption, it makes sense that IoT presents a major threat vector for hackers, who have discovered new entry points for cyberattacks. Basically, any device with an Internet connection is subject to being compromised and can become a back door for attackers to access enterprise networks and steal sensitive data.

Unfortunately, many IoT devices run on firmware that is often difficult to patch and update, and some come with default passwords that are easy to crack. We’ve already seen plenty of distributed denial-of-service (DDoS) attacks through IoT devices, including the Mirai botnet and Brickerbot, IoT ransomware, malware, and more. Over the past two years, baby monitors, robots, smart TVs and refrigerators, Nest thermostats, and even connected cars have made headlines for being hacked.

Many enterprises are finally realizing the growing attack surface that IoT devices bring to the workplace, and some are beginning to monitor for these endpoints. But what happens when an IoT device that’s connected to a corporate network by a third party suddenly becomes compromised? Is that enterprise monitoring its third parties for IoT risks? Is there a policy in place to handle risky third-party IoT devices? According to this new research, many enterprises are ill prepared for this uphill IoT risk management battle.

Shared Assessments commissioned Ponemon to survey 605 individuals who participate in corporate governance and/or risk oversight activities and are familiar with the use of IoT devices in their organization. The study found that while there have been some advances in third-party risk focused on IoT devices and applications since 2017, risk management in this area is still at a relatively low level of maturity. It revealed that almost all respondents (97%) believe their organization will suffer from a catastrophic IoT-related security event in the next two years, yet many aren’t properly assessing for third-party IoT risks and many don’t have an accurate inventory of IoT devices or applications.

The report underscores three major disconnects when it comes to third-party risk management practices, including:

The awareness of IoT risks is increasing as IoT adoption grows: With an increasing reliance on IoT devices in the workplace, organizations are realizing the magnitude of what an attack related to an unsecured IoT device could do to their business. Eighty-one percent of survey respondents say that a data breach caused by an unsecured IoT device is likely to occur in the next 24 months, and 60% are concerned the IoT ecosystem is vulnerable to a ransomware attack. However, only 28% say they currently include IoT-related risk as part of their third-party due diligence.

IoT risk management practices are uneven: The average number of IoT devices in the workplace is expected to grow from 15,875 to 24,762 over the next two years, so it’s not surprising that only 45% of respondents believe it’s possible to keep an inventory of such devices, while only 19% inventory at least 50% of their IoT devices. A large majority, 88%, cite lack of centralized control as a primary reason for the difficulty of completing and maintaining a full inventory. Even though 60% of respondents say their organization has a third-party risk management program in place, less than half of organizations (46%) say they have a policy in place to disable a risky IoT device within their own organization.

The gap between internal and third-party IoT monitoring is substantial: Almost half of all organizations say they are actively monitoring for IoT device risks within their workplace, but more concerning is that only 29% are actively monitoring for third-party IoT device risks. A quarter of respondents admit they are unsure if their organization was affected by a cyberattack involving an IoT device, while 35% said they don’t know if it would be possible to detect a third-party data breach. Shockingly, only 9% of respondents say they are fully aware of all of their physical objects connected to the Internet.

The bottom line is that more focus is being given to internal workplace IoT device risks than to risks posed by third parties. Many companies have fallen behind on the basics such as assigning accountability and inventory management, and there are uncertainties around who is responsible for managing and mitigating third-party risks. There’s also an over-reliance on third-party contracts and policies for IoT risk management.

To more effectively address IoT risks and improve third-party risk management programs, companies should take the following proactive steps:

  1. Update asset management processes and inventory systems to include IoT devices and applications, and understand the security characteristics of all inventoried devices. When devices are found to have inadequate IoT security controls, replace them.
  2. Identify and assign accountability for approval, monitoring, use, and deployment of IoT devices and applications within your organization.
  3. Ensure that IoT devices, applications and metrics are included, monitored, and reported as part of your third-party risk management program.
  4. Verify that specific third-party IoT related controls included in contract clauses, policies, and procedures can be operationalized and monitored for adherence and compliance.
  5. Collaborate with industry peers, colleagues, and experts to identify successful approaches, techniques, solutions, and standards to monitor and mitigate third-party IoT device and application risks.


Charlie Miller is senior vice president with the Santa Fe Group, where his key responsibilities include managing and expanding the Collaborative Onsite Assessments Program and facilitating regulatory, partner and association relationships. Charlie has vast industry experience.

Article source: https://www.darkreading.com/endpoint/why-enterprises-cant-ignore-third-party-iot-related-risks/a/d-id/1331703?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Chili’s Suffers Data Breach

The restaurant believes malware was used to collect payment card data including names and credit or debit card numbers.

Chili’s Grill &amp; Bar, a restaurant brand owned by Dallas-based Brinker International, said some of its restaurants were hit by a cyberattack that may have resulted in the compromise of customers’ payment card data. It believes the incident was limited to March and April 2018.

The company first learned of the compromise on May 11, 2018, and launched an investigation to learn more. Based on the details so far, it seems malware was used to collect credit and debit card numbers, as well as the cardholders’ names, from payment systems used for in-restaurant purchases. Chili’s doesn’t collect Social Security numbers, full birthdays, or federal or state identification numbers, so none of that type of information was affected.

Officials report they have contacted both law enforcement and third-party forensic experts as part of the investigation. Chili’s says it is working to provide fraud resolution and credit monitoring services for affected customers and will share more information as it becomes available. In the meantime, it has provided guidance on its website for those who may have been affected.

Read more details here.


Dark Reading’s Quick Hits delivers a brief synopsis and summary of the significance of breaking news events. For more information from the original source of the news item, please follow the link provided in this article.

Article source: https://www.darkreading.com/threat-intelligence/chilis-suffers-data-breach/d/d-id/1331792?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Facebook Suspends 200 Apps

Thousands of apps have been investigated as Facebook determines which had access to large amounts of user data before its 2014 policy changes.

Facebook is following through on a massive app investigation and audit promised by CEO Mark Zuckerberg back in March following the Cambridge Analytica scandal. In an update posted today, Facebook said it has investigated thousands of apps and suspended “around 200” while it inspects them. 

The company is taking a closer look at apps that had access to large amounts of information prior to policy changes it made in 2014. That year, Facebook implemented restrictions to limit the amount of data apps could access. Before 2014, apps didn’t need to request permission to collect data on users’ friends. After 2014, friends had to consent for their data to be collected.

These limitations prevent the sort of extensive data collection performed by apps like the personality quiz created by Aleksandr Kogan, who shared his trove of information on millions of Facebook users with Cambridge Analytica. Kogan’s quiz was created before 2014, so his app – and others created in the same timeframe – could gather data on millions of people without their knowledge.

In the first phase of its investigation, Facebook is reviewing all apps that had this level of data access. The second phase will involve interviews and “requests for information,” including inquiries about the app and its data access, as well as potential audits and on-site inspections.

Read more here.


Article source: https://www.darkreading.com/threat-intelligence/facebook-suspends-200-apps/d/d-id/1331794?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

2 million lines of source code left exposed by phone company EE

EE, which at 30 million customers is the UK’s largest mobile network, was formerly known as Everything Everywhere.

Unfortunately, the name has proved prescient: it reportedly did, in fact, leave everything for anyone anywhere to find, by failing to secure a critical code repository so that anyone could log in with the default username and password. As in, “admin” was both the username and the password for getting into the downloadable portal software, according to a security researcher with the Twitter handle “Six”.
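The root failure here is the oldest one in the book: a service deployed with its factory-default login still active. As a minimal sketch of the kind of hygiene check an operations team might run against its own admin portals (the credential list here is purely illustrative; real checks should draw on a maintained list such as SecLists):

```python
# A few factory-default username/password pairs that routinely turn up
# on exposed admin portals. Illustrative only, not exhaustive.
DEFAULT_CREDS = {
    ("admin", "admin"),
    ("admin", "password"),
    ("root", "root"),
}

def uses_default_creds(username: str, password: str) -> bool:
    """Flag a service whose login still matches a known factory default."""
    return (username, password) in DEFAULT_CREDS
```

In EE’s case, such a check would have flagged the SonarQube portal immediately.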

As first reported by ZDNet, on Thursday, Six tweeted a screen capture that he said shows (redacted) access keys to authorize EE’s employee tool. “You trust these guys with your credit card details, while they do not care about security, or customer privacy,” Six said.

The researcher said that after waiting “many many weeks” for a reply from the company, he decided to publicly disclose the vulnerability. His motive was reportedly to “educate the wider masses about security, and how overlooked it is across the industries.”

The code repository contained two million lines of the source code behind EE’s systems, including systems that contained employee data.

Six said that he had discovered a SonarQube portal on an EE subdomain. SonarQube is an open-source platform for continuous code inspection that performs automatic reviews, and EE uses it to seek out vulnerabilities across its website and customer portal.

The security researcher said that this type of default-password glitch could allow malicious hackers to comb through the code to identify vulnerabilities. But as Six points out, why even bother? Anybody could simply view what should have been private: namely, EE’s Amazon Web Services (AWS) keys, application programming interface (API) keys, and more.

That rates a negative 1 for not changing the default password, Six decreed, but a negative 2 for whoever allowed this code to get to production with a total of 167 vulnerabilities.

Unfortunately, leaving iffy source code open for modification like this means that unless a dodgy change gets picked up in code review, it becomes part of the official, production-level program.

What’s the likelihood that source code attacks would have resulted? Well, this isn’t the first time that people have wanted answers on that question. In 2013, one of Adobe’s network security breaches reportedly included the theft of 40GB of its source code.

At the time, Naked Security’s Paul Ducklin had some thoughts on that. Yes, it brings risk, but developing attacks from source code is arduous – you have to walk, step by step, through a program to figure out what it’s doing.

At any rate, ZDNet quoted an EE spokesperson who said that “No customer data is, or has been, at risk.” Its code goes through the SonarQube quality check, after which it “goes through further checks, processes, and review from our security team before being published,” the spokesperson said.

This development code does not contain any information pertaining to our production infrastructure or production API credentials as these are maintained in separate secure systems and details are changed by a separate team.

ZDNet’s Zack Whittaker said that Six shared several screenshots taken from within the portal. He also noted that ZDNet itself couldn’t independently verify that “admin” was the portal’s login credentials – at least, not legally, given that it would require actually logging in.

An EE spokesperson later told ZDNet that the company had changed the password and that the service was pulled offline while the company investigates.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/vUYIaivj5SQ/

Remote code execution bug found in GPON routers, but how bad is it really?

An anonymous researcher, via vpnMentor, recently disclosed two vulnerabilities in several older models of Dasan-made GPON routers. The first is an authentication bypass, which can be used to trigger the second vulnerability, which allows remote code execution (RCE).

The first vulnerability can be triggered simply by appending the string ?images/ to a URL ending in .html or /GponForm/, which allows the attacker to bypass the authentication process, and from there, trigger the remote code execution.
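To make the mechanics concrete, here is a sketch of how such a crafted URL is assembled; the host and page names are placeholders, and obviously you should not probe devices you don’t own:

```python
def gpon_bypass_url(host: str, page: str = "menu.html") -> str:
    """Build the crafted URL described above: appending '?images/' to a
    path ending in .html causes vulnerable GPON firmware to skip its
    authentication check."""
    return f"http://{host}/{page}?images/"
```

Because the router’s URL filter only inspects the end of the string, the trailing `?images/` makes the request look like a fetch for a static image, which is exempt from authentication.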

These vulnerabilities proved to be a tempting target for attackers who would love nothing better than to take control of these vulnerable routers and add them to their botnets.

In fact, within a day of the disclosure, there were reports of the vulnerabilities being exploited in the wild. Just a few weeks later, it looks like at least five botnets, including Mirai, are working to take advantage of these bugs, according to researchers at Netlab 360.

Just how big of an impact might these vulnerabilities have? It’s the topic of debate between the researcher who found the vulnerability and Dasan, which sold the routers to ISPs in several countries.

In a blog post, the researcher states that the vulnerability is present in all GPON routers they tested, potentially resulting in “an entire network compromise.” By citing a simple Shodan search for GPON devices, they determine that over a million devices are potentially affected.

But Dasan doesn’t agree with the researcher’s findings. In an official statement, Dasan says the vulnerability is present in only two series of routers released nine years ago which, given their age, are no longer supported. Dasan’s own estimates put the number of affected devices under 240,000 – a far cry from the researcher’s estimate of more than a million.

Regardless of which number is more accurate, the nine-year-old routers are likely toiling away in a dusty corner somewhere, and unlikely to be patched until they completely stop working and get replaced.

A quick search on Shodan reveals that these devices are primarily in use in Mexico, Kazakhstan, and Vietnam, with several thousand devices active in Russia and Nigeria as well. It’s unsurprising then that one of the botnets took advantage of IPs in Vietnam to propagate. According to Netlab 360, right now the botnets are merely working to gain territory and are not actively planting malicious code, but that could always change.

Dasan says that anyone who has an affected GPON router has been contacted and informed of the flaws.

It will be up to the discretion of each customer to decide how to address the condition for their deployed equipment.

In case the choice isn’t to bin the router and get a new one, there is a patch available, but there’s a big catch: it’s not from Dasan, as the company hasn’t released its own patch and hasn’t indicated if it ever will. The patch was made by the researcher who disclosed the bug in the first place.

There are risks in using a third-party patch, and each user will have to balance those against the costs and risks of not patching at all, attempting to quarantine the device, or simply replacing it with something newer and easier to update.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/Em3n8dHsoPE/

Is Google’s Duplex AI helpful or plain creepy?

Last week, Google CEO Sundar Pichai used the company’s annual I/O event to demo an experimental new feature of Google Assistant.

It consisted of two ordinary-sounding one-minute voice conversations, one to book a hair appointment, the other to make a restaurant reservation.

The unusual aspect of those conversations – which Google said were not staged – is that in both the caller was a computer powered by its Duplex AI technology capable of talking and responding to human beings on the other end using natural language.

The clever (or creepy) bit is that had Pichai not told audience members about the AI, they would have been unlikely to detect it.

Computer-generated voice systems are supposed to be stilted, synthesised, and limited in their responses, but this one sounded convincingly human in every way right down to its reassuringly disfluent use of “mhmm” and “um” as part of its chatter.

Duplex is robust enough that Google will start offering it to a small number of Assistant users on Android this summer, who will be able to use it to make simple reservations like the ones in the demo.

As I/O attendees applauded, and online watchers wondered aloud whether Duplex might be good enough to pass the famous Turing test, the doubters offered a less optimistic assessment of Google’s cleverness.

Might criminals use voice AI to deceive people? What are the implications of people delegating social interaction to machines? Will it put millions of service industry workers out of a job?

Then there are the nuanced ethical issues Google faces from day one, such as: do people have a right to know they are talking to a machine?

This struck many as a big tech firm doing something because it could, said one New York Times writer who described the demo as “horrifying”:

Silicon Valley is ethically lost, rudderless and has not learned a thing.

Stung, Google clarified:

We are designing this feature with disclosure built-in, and we’ll make sure the system is appropriately identified. What we showed at I/O was an early technology demo, and we look forward to incorporating feedback as we develop this into a product.

But if people talking to Duplex will be told they are talking to a machine, why make it sound so convincingly human?

Technically, Duplex is a combination of systems including automatic speech recognition (ASR) and neural network ‘deep learning’ whose capabilities have surged in the last five years on the back of processing improvements and the high salaries offered to PhDs.

For now, the technology can only be used for a narrow set of tasks, but inevitably this will expand quickly, which in turn will lead to calls for more rules and regulation.

Google can probably cope with this, but what will be more difficult will be changing how people see AI, especially where it is being used to automate social interactions that express deeper meanings that are slow to evolve.

As with the ‘Turk’, a famous 18th-century chess-playing automaton which turned out to conceal a human operator inside its cabinet, it’s as if AI makes us feel like we are being deceived.

Google’s Duplex is no clever trick, but on some unconscious level, the pattern has been set – people feel compelled to look under the table or behind the curtain to find something that reminds them of themselves.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/nthSRGobDUs/

Warehouse full of digital copiers yields truckloads of secrets

Photocopiers: Ever wonder what the person before you copied? And no, we’re not talking about body parts.

We’re talking birth certificates, bank records, driver licenses, customers’ credit cards, income tax forms and other valuable personal information: a jackpot for identity thieves.

It’s unfortunately not hard at all to get your hands on whatever the person before you copied: as the Federal Trade Commission (FTC) has been warning us for years, as at least one copier vendor has underlined by trotting out Christian Slater for a Mr. Robot-style scary-hacker video, and as copier-focused security outfits keep reminding us, most machines made since 2002 have hard drives.

Those hard drives contain copies of everything the machines print, copy, scan, fax or email. Unfortunately, few businesses take advantage of data security features the manufacturers may offer – such as encryption or file wiping – either as standard, or at an additional cost for an add-on kit.

That was made clear in February, when CBS Evening News visited a New Jersey warehouse full of used copy machines.

As the FTC has advised, digital copiers are often leased, returned, and then leased again or sold. Apparently, not many businesses bother to wipe their hard drives when they acquire or return the machines.

CBS’s Armen Keteyian and John Juntunen – who runs a company called Digital Copier Security that sells software to scrub data off copier hard drives – picked up four machines at about $300 each.

Keteyian:

Almost every one of them holds a secret.

The contents of the hard drives are one thing. Some of the machines were passed along with documents still on their copier glass: no forensics software required.

Within 30 minutes, the hard drives were pulled. In less than 12 hours, a free forensic program downloaded tens of thousands of documents.

The results included:

  • Detailed domestic violence complaints and a list of wanted sex offenders from Buffalo, New York, police department’s copier.
  • A list of targets in a major drug raid from a second machine from the Buffalo Police: this one from the Narcotics Unit.
  • A New York construction company’s machine yielded design plans for a building near Ground Zero in Manhattan; 95 pages of pay stubs with names, addresses and taxpayer IDs; and $40,000 in copied checks.
  • 300 pages of individual medical records from a machine that had belonged to Affinity Health Plan – a New York insurance company – which emerged when the investigators hit “print.”

Keteyian, writing about the medical records:

They included everything from drug prescriptions, to blood test results, to a cancer diagnosis.

As Keteyian notes, that’s a “potentially serious breach of federal privacy law”: specifically, a violation (or, one assumes, 300 violations?) of the Health Insurance Portability and Accountability Act (HIPAA).

The Buffalo Police Department and the New York construction company declined to comment on the story. Affinity Health Plan issued a statement that said, in part:

We are taking the necessary steps to ensure that none of our customers’ personal information remains on other previously leased copiers, and that no personal information will be released inadvertently in the future.

Of course, copiers can be dangerous in other ways besides storing sensitive materials on their hard drives. We’ve seen…

The FTC has a slew of tips on how to ensure that photocopiers’ hard drives don’t give away your secrets.

The TL;DR: treat your copier as you would a computer – because that’s what it is. Typically, it’s an internet-enabled, network-trusted computer, with all the inherent dangers that entails.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/RVvF1FLqAXs/

Nest turns up the temperature on password reusers

Google’s Nest division of smart-home gadgets recently notified some users about a data breach that involved their credentials. For that, it deserves a pat on the back.

In a security notice sent to one user and published by the Internet Society, Nest told the user to change their password and turn on two-step verification (2SV), also known as multi-factor or two-factor authentication (MFA or 2FA).

Whether you call it MFA, 2FA or 2SV, it’s an increasingly common security procedure that aims to protect your online accounts against password-stealing cybercrooks.

So why do we want to pat Nest on the back? Because the breach wasn’t a matter of Nest’s own password database getting breached or, say, from an employee being careless.

Rather, Nest spotted the password because it cropped up in a list of breached credentials, meaning two things: 1) the users whom Nest emailed have been reusing passwords, and 2) Nest’s been proactively keeping an eye out to protect them from their own password foibles.

As Online Trust Alliance Director Jeff Wilbur said in an Internet Society post on Thursday, it’s not clear how Nest figured out that the password had been compromised. Maybe Nest was alerted by security researcher Troy Hunt’s recently updated Pwned Passwords service (part of his “have i been pwned?” site)?

The service lets you enter a password to see if it matches more than half a billion passwords that have been compromised in data breaches. A hashed version of the full list of passwords can also be downloaded to do local or batch processing, Wilbur noted.
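A nice property of the Pwned Passwords range API is that you never send the full password, or even its full hash: you submit only the first five characters of the SHA-1 digest, receive every breached suffix in that range, and finish the comparison locally. A sketch of the client-side split (the network call itself is omitted here):

```python
import hashlib

def pwned_range_query(password: str) -> tuple[str, str]:
    """Split the uppercase SHA-1 hex digest into the 5-character prefix
    sent to the Pwned Passwords range API and the 35-character suffix
    that is matched locally against the API's response."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]
```

The server thus learns only that your password’s hash begins with one of 16^5 possible prefixes, a scheme known as k-anonymity.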

If we said it once, we’ve reused our don’t-reuse-passwords advice a thousand times. We’re not apologizing, though, since password reuse really is such an atrocious idea.

We know that cybercrooks use breached credentials to see if they work on a variety of third-party sites, be it Facebook, Netflix or many others – including online banking sites.

That, in fact, is why both Facebook and Netflix prowl the internet looking for your username/password combos to show up in troves of leaked credentials.

If those services do find customer credentials that match breached logins, they force users to change those reused passwords.

People are often discomfited by the notion: the thinking goes, how do services such as Facebook know enough about our passwords to know we’re reusing them?

The answer lies in comparing hashed passwords instead of comparing them in their plain-text form. As Facebook security engineer Chris Long explained in an official blog post back in 2014, what Facebook looks for are stolen credentials posted on the public “paste” sites. Once found, those stolen credentials are then run through the same code Facebook uses to check people’s passwords when they log in. If the hashed values match, bingo – you’ve found a password reuser, without having to look at the actual, plain-text password.
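In outline the technique works because hashing is deterministic: run the leaked plaintext through the same hash function used at login, and equal digests mean equal passwords. A minimal sketch of the idea (unsalted SHA-256 for illustration only; a production system like Facebook’s uses its own salted, deliberately slow hashing code, which is why it must re-run each leaked credential per user):

```python
import hashlib

def h(password: str) -> str:
    # Unsalted SHA-256 purely for illustration; real login systems use
    # salted, slow hashes such as bcrypt, scrypt, or Argon2.
    return hashlib.sha256(password.encode("utf-8")).hexdigest()

# Hashes of credentials scraped from a public paste site; the plaintext
# can be discarded once hashed.
leaked_hashes = {h(p) for p in ["123456", "letmein", "hunter2"]}

def password_was_leaked(stored_hash: str) -> bool:
    """True if a user's stored hash matches one computed from a leak."""
    return stored_hash in leaked_hashes
```

If the lookup hits, the service knows the user reused a breached password without ever seeing it in plain text.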

This is all according to what the National Institute of Standards and Technology (NIST) recommends in its Digital Identity Guidelines, published a year ago: the guidelines recommend that user passwords be compared against lists of known breached passwords so that users can be encouraged to create unique passwords not already known to bad actors.

Nice work, Nest! Wilbur said that the Internet Society gives Nest a thumbs-up for demonstrating best practices for how any organization providing online accounts should look out for its users.

As Wilbur noted, earlier this month, Twitter also suggested that all users reset their passwords, given that it had made a serious security mistake: namely, it had been storing unencrypted copies of passwords… as in, plaintext passwords, saved unhashed to internal log files.

Gentle recommendations are one option. So too is locking users in a closet when you find that they’ve reused passwords/emails as Facebook did when it found matching credentials used on Adobe.

Want to come out of the closet? Switch to a unique set of credentials – one unique, strong set for every site, every service!

Whether you’re a Nest user or not, make sure your family, your friends, your colleagues and anybody else you can think of are choosing strong passwords, at least 12 characters long, that mix letters, numbers and special characters.
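If you’d rather not dream those passwords up yourself, a few lines of code will do it. This sketch uses Python’s secrets module and simply retries until every character class is present (the particular special-character set is an arbitrary choice):

```python
import secrets
import string

SPECIALS = "!@#$%^&*"
ALPHABET = string.ascii_letters + string.digits + SPECIALS

def strong_password(length: int = 16) -> str:
    """Generate a random password containing at least one lowercase
    letter, uppercase letter, digit, and special character."""
    if length < 4:
        raise ValueError("length must be at least 4")
    while True:
        pw = "".join(secrets.choice(ALPHABET) for _ in range(length))
        if (any(c.islower() for c in pw) and any(c.isupper() for c in pw)
                and any(c.isdigit() for c in pw)
                and any(c in SPECIALS for c in pw)):
            return pw
```

Better still, let a password manager generate and remember a unique credential for every site.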

Here’s how!


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/38tyLNZYyfg/

Have you updated your Electron app? We hope so. There was a bad code-injection bug in it

Electron – the widely used desktop application framework behind popular programs such as Slack, Atom, and Visual Studio Code – suffered from a security vulnerability that potentially allows miscreants to execute evil code on victims’ computers.

That means applications relying on Electron may need updating. If you use an Electron-based program – there’s a list here – you should follow best practices and make sure you’re running the latest release of the software. And app developers should ensure their software is patched, or at least not vulnerable, and available to download.

The programming blunder was highlighted and described in detail this month by Trustwave’s Brendan Scarvell. In short: the bug, CVE-2018-1000136, can be exploited to inject arbitrary code into an application via Node.js.

An app developer only needed to be a little careless, and accept the default settings, and their application would be vulnerable. The issue was fixed in late March by the Electron team.

Scarvell noted that the framework is used by “Slack, Discord, Signal, Atom, Visual Studio Code, and Github Desktop,” among others, although the Signal team told us that Signal for Desktop was not affected by the above flaw. Similarly, other apps may not be vulnerable.

Scarvell set out the conditions for an app to be at risk: it’s built on a vulnerable version of Electron (before 1.7.13, 1.8.4, or 2.0.0-beta.4), and the developer hasn’t manually done one of the following:

  • Declared webviewTag: false in its webPreferences;
  • Enabled the nativeWindowOpen option in its webPreferences; or
  • Intercepted new-window events and overridden event.newGuest without using the supplied options tag.

So, what’s going on here? Setting nodeIntegration: false in an app’s webPreferences is supposed to prevent software built on Electron from gaining access to Node.js – and node integration is switched off by default.

The nodeIntegration: false setting also saves the developer the effort of sanitising user inputs which, if they were handled by Node.js, would enable cross-site-scripting attacks.

As Scarvell explained, the vulnerability he found allowed an attacker to change the nodeIntegration setting to “true”.

The issue is in the handling of another tag, WebView, which allows a developer to “embed content, such as web pages, into your Electron application and run it as a separate process,” in combination with how Electron handles new browser windows.

An attacker, he wrote, could use the window.open command, which creates a new browser window, to pass a WebView tag that re-enabled nodeIntegration (that is, set it to “true”).

Electron provided a patch for CVE-2018-1000136 in versions 1.7.13, 1.8.4, and 2.0.0-beta.4 here, along with mitigation instructions if, for some reason, a developer can’t upgrade. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/05/14/electron_xss_vulnerability_cve_2018_1000136/