STE WILLIAMS

Julian Assange Arrested in London

The WikiLeaks founder, who was taken from the Ecuadorian Embassy by British police, has been convicted of skipping bail in 2012.

Seven years after entering the Ecuadorian embassy in London, WikiLeaks founder Julian Assange was taken out of the building by British police, after which he was convicted of skipping bail and scheduled for sentencing at a later date.

But a bail violation is not Assange’s most critical legal issue. The US has requested his extradition from Britain to face conspiracy charges in connection with Chelsea Manning’s 2010 classified data leak. Because an extradition treaty exists between the US and Britain, it is generally assumed that Assange will ultimately be transferred to the US for questioning and trial.

In revoking his asylum at the embassy, Ecuador also suspended the Ecuadorian citizenship granted Assange under the previous administration. The government of Ecuador accused Assange and others at WikiLeaks of collaborating in attempts to destabilize the government.

Read more here.

Join Dark Reading LIVE for two cybersecurity summits at Interop 2019. Learn from the industry’s most knowledgeable IT security experts. Check out the Interop agenda here.

Dark Reading’s Quick Hits delivers a brief synopsis and summary of the significance of breaking news events. For more information from the original source of the news item, please follow the link provided in this article.

Article source: https://www.darkreading.com/careers-and-people/julian-assange-arrested-in-london/d/d-id/1334405?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Toddler locks father out of iPad for 25.5 MILLION minutes, or until 2067

Last week a father thought he’d been permanently locked out of his Apple iPad after his young son repeatedly entered an incorrect passcode.

‘Permanently’ in this context means 25.5 million minutes (or 25,536,442), equivalent to over 48 years. That’s the wait time that confronted journalist Evan Osnos last week when he looked at the iPad screen after recovering it from the youngster’s grasp.
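As a sanity check on that figure, here is a quick back-of-the-envelope conversion (our own arithmetic, not anything published by Apple):

```python
# Convert the iPad's reported lockout of 25,536,442 minutes into years.
lockout_minutes = 25_536_442
minutes_per_year = 60 * 24 * 365.25   # using the 365.25-day Julian year
years = lockout_minutes / minutes_per_year
print(round(years, 1))                # roughly 48.6 years, i.e. until 2067
```

Which lines up neatly with both the "over 48 years" and the "until 2067" in the headline.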

Naturally, he turned in his hour of need to the world’s biggest tech support system, Twitter:

But how does such a thing happen? The short answer is not easily.

A lot of stories mention that Osnos’s son entered an incorrect passcode 10 times without mentioning how hard it is to do that in a short space of time.

It’s common knowledge that if you get the code wrong five times, you’re locked out for one minute – and that could have happened in seconds.

However, entering a sixth incorrect code delays the next guess for five minutes, a seventh to 15 minutes, and so on until at the tenth attempt, the device is disabled.

Alternatively, if this option has been set by the user, at the tenth incorrect attempt it will automatically erase the device.

But the timeouts between guesses after number five mean that it should take three hours to enter those 10 incorrect guesses, which Apple reasonably assumes is beyond the patience of most toddlers.

If there’s a figure which should impress us it’s not the 48-year timeout so much as the three hours of persistence it took to get to that stage.

In fact, this is a common problem that crops up regularly on Apple support forums. In 2018, a mother in China also reportedly complained about the same 48-year wait after an identical intervention by a child.

Run a web search and you encounter variations on the same story stretching back to the days of early iPods and iPhones.

What seems to have elevated this story to media attention was that the victim was a journalist with lots of Twitter followers who’d never heard of the problem.

What to do

The good news is that the screen message about being locked out for decades is wrong – there is a way to recover the device – although the order of button presses required varies slightly depending on which version of iOS or iPhone/iPad is involved.

As Osnos tweeted:

Apple explains the procedure on its support site, which requires connecting to the user’s iTunes account to initiate a restore. This assumes users will have automatic backup to iCloud turned on – Settings > [your name] > iCloud > iCloud Backup.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/Crh8J-yKJNI/

App could have let attackers locate and take control of users’ cars

A smartphone app used to control vehicles across North America left them wide open to attackers, it was revealed on Monday. The MyCar application, from Canada-based AutoMobility Distribution, allowed anyone who knew about the vulnerability to control, monitor, and access vehicles from an unauthorized device, experts said.

MyCar is an app available on both iOS and Android devices that serves the aftermarket telematics market. Users can install connected devices into their cars, turning them into IoT devices that they can control via a cellular connection. According to its website, the MyCar app lets users control their cars remotely from anywhere by communicating with one of these devices via AutoMobility Distribution’s servers.

Users can remotely start their car, lock and unlock vehicles, or locate them. Other features include getting the temperature and vehicle battery levels, and sharing your vehicle with other users or even transferring it to a new owner.

The company sells the app under a service plan. Users get the smartphone app, the hardware device to install in their car, and service for a set period of one or three years.

It all sounds very convenient, especially when you want a nice warm car waiting for you on those cold winter mornings. Unfortunately, according to a vulnerability note issued by Carnegie Mellon University’s Software Engineering Institute, the app also enabled attackers to take control of your car.

AutoMobility Distribution’s developers apparently wanted a way to let users access functions in the car without worrying about usernames and passwords, so they committed a cardinal software development sin: They hard-coded administrator credentials directly into the app.

The vulnerability could lead to some serious consequences for users, according to the SEI CERT note, because an attacker could extract the credentials from the source code and use them to communicate with the server to compromise a user’s vehicle:

A remote un-authenticated attacker may be able to send commands to and retrieve data from a target MyCar unit. This may allow the attacker to learn the location of a target, or gain unauthorized physical access to a vehicle.
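To see why hard-coded credentials are such a cardinal sin, consider this hypothetical sketch of how easily they can be pulled out of an app package. Every byte string, field name, and credential below is invented for illustration; none of it comes from the real MyCar app:

```python
# Hypothetical illustration: hard-coded credentials baked into an app binary
# can be recovered with a trivial printable-strings scan, much like the
# classic Unix `strings` tool.
import re

# A pretend slice of an app binary with embedded admin credentials.
app_binary = b"\x00\x01junk\x00admin_user=svc_admin\x00admin_pass=Hunter2!\x00more"

def find_credentials(blob):
    """Return printable key=value runs that look like embedded secrets."""
    runs = re.findall(rb"[ -~]{4,}", blob)   # runs of 4+ printable ASCII bytes
    return [s.decode() for s in runs
            if b"=" in s and (b"pass" in s or b"user" in s)]

print(find_credentials(app_binary))
# -> ['admin_user=svc_admin', 'admin_pass=Hunter2!']
```

Once recovered, those credentials work from any device, which is exactly what makes shipping them inside the app so dangerous.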

The vulnerability was first reported by a cybersecurity researcher identified as JMaaxz, who also discovered August smart locks leaking their firmware keys in 2016. In late March, he tweeted:

Then, he tweeted again as the vulnerability went public:

AutoMobility Distribution told us that it was made aware of the issue in January, adding:

Since then, all the resources at our disposal have been used to promptly address the situation, and we have fully resolved the issue. During this vulnerability period, no actual incident or issue with compromised privacy or functionality has been reported to us or detected by our systems.

Luckily, the danger is over. SEI CERT explained that AutoMobility has updated its app to remove the hardcoded credentials, and has revoked the admin credentials in older versions of the app. Other, rebranded versions of the app sold as Carlink, Linkr, Visions MyCar, and MyCar Kia have also been fixed, it added.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/wnbd1N7l-vs/

Ban the use of ‘dark patterns’ by tech companies, say US lawmakers

Lawmakers are getting wise to online companies’ manipulative user interface design practices. Congressional leaders in the US unveiled a new law this week to ban the use of ‘dark patterns’ by large online players.

What are these dark patterns? Senator Mark Warner, one of the Act’s sponsors, describes them as design choices based on psychological research. They are…

…frequently used by social media platforms to mislead consumers into agreeing to settings and practices advantageous to the company.

Warner’s Deceptive Experiences To Online Users Reduction (DETOUR) Act makes it illegal for online companies with over 100 million users to design interfaces that aim at:

Obscuring, subverting, or impairing user autonomy, decision-making, or choice to obtain consent or user data.

What kinds of techniques are we talking about, and what decisions do they coerce users into making?

The website darkpatterns.org, created by user experience consultant Harry Brignull, calls out several kinds of manipulative user interface behaviours with some delightful names.

These include confirmshaming. This guilts the user into opting into something. You’ll have seen this on some passive-aggressive websites that try to make you sign up for mailing lists. Instead of just offering a ‘No’ option, they’ll say something like “no, I don’t want to stay abreast of current industry trends”.

Other examples include Privacy Zuckering, which tricks users into publicly sharing more information about themselves than they wanted to. Guess who it’s named after?

Another, the Roach Motel, is an interface that makes it easy for you to sign up for something, but buries the option to leave on an obscure part of the site, or makes you speak to a human operator.

These design techniques can also steer users into giving up their privacy rights, which is something that regulators in Europe have been upset about. Last June, the Norwegian Consumer Council published a report called Deceived by Design. It attacked Facebook and Google for manipulating users to give up the privacy options granted to them by GDPR.

The proposed legislation would make large online companies tell users at least once every 90 days if they are experimenting with interfaces designed to promote engagement or “product conversion”, which typically means encouraging a purchase.

Any such experiments would also have to be approved by an independent review board registered with the Federal Trade Commission.

The Act also takes a stab at preventing self-regulation by imposing strict rules around the formation of professional standards bodies. The worry here seems to be that large online companies could otherwise create a professional standards body themselves. They could then use it to create their own guidelines for user interface design.

The legislation forces any such professional standards body created by industry to have at least one director representing the users, rather than the online companies that created it. It would also need explicit rules to prevent manipulative interface design.

One notable inclusion in the Act protects children. Under the legislation, it would be unlawful to craft user interfaces…

…with the purpose or substantial effect of cultivating compulsive usage, including video auto-play functions initiated without the consent of a user.

These rules represent a firm push back against manipulative practices by large online companies to steer their users down certain paths. If the Act passes into law, it’ll be interesting to see how forcefully it’s applied.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/vpPhJ3k1jys/

Serious Security: How web forms can steal your bandwidth and harm your brand

Spamming is a word we all know and an activity we all loathe – it’s when crooks blast out unwanted emails for products we don’t want at a price we won’t pay from suppliers we’ll never trust.

And the word spam has given us related terms such as SPIM for spam via instant messaging; SPIT for spam via internet telephony – robocalls and fake tech support scams, for example; and SPEWS, which is our tongue-in-cheek name for spam via electronic web submissions.

SPEWS have typically gone two main ways:

  • Crooks use bulk HTTP posting tools to fill out online comment forms on forums and blogs. The idea is to sneak past spam filters or harried moderators to get free ads, promotional guff and bogus endorsements posted and publicly visible, at least until they’re reported and removed.
  • Crooks use reporting or contact forms to send phishy messages into your organisation. The idea is to trick the form processing system into generating an internal email from content that came from outside, thereby sidestepping some or all of the spam filtering that external emails would usually undergo.

Cybersecurity researchers at Russian outfit Dr.Web recently reminded us all of a third way that crooks can use SPEWS to do their dirty work.

They noticed spamtrap emails that actually came from genuine corporate senders, but with poisoned web links in the greeting part.

Instead of saying, Hi, Mr Ducklin, as you might expect from a genuine email from a trustworthy brand, they said something more along the lines of Hi, MONEY FOR YOU! [weblink here], but with a legitimate-looking sender.

Indeed, digging into the emails showed not only that the sender was legitimate but also that the email did originate from a server you’d expect – there was no sender spoofing going on.

(Spoofing is where the crooks deliberately put a bogus name in the From: field, so at first glance the email seems to come from somewhere you trust.)

How it works

What the crooks are doing is subscribing to official corporate mailing lists but putting in other people’s email addresses so that the victims receive a signup message, even though they didn’t sign up themselves.

Ironically, the crooks are abusing a built-in mailing list safety feature – one that’s been de rigueur in most of the world for some time, if not actually required by law – that sends a one-off confirmation email before actually activating a mailing list subscription.

This safety feature is often referred to as double opt-in – you won’t get any email until you put in your address (opt-in #1), and then you won’t get anything but a confirmation message until you reply to or click a link in that message (opt-in #2).

Double opt-in is meant to stop other people signing you up, either through accident or malevolence, but it does mean that anyone with access to the sign-up form can get a legitimate company to send you a one-shot email from one of its legitimate servers.

To a crook, that feels like a challenge, not merely an observation – a genuine email server that can be automatically or semi-automatically triggered to send a message to someone else’s email address.
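The double opt-in flow described above can be sketched in a few lines; the state names and functions here are invented purely for illustration:

```python
# Minimal sketch of double opt-in: one confirmation email on signup,
# nothing further until the recipient actively confirms.
subscriptions = {}  # email -> "pending" or "active"

def request_signup(email):
    subscriptions[email] = "pending"
    # Opt-in #1: the only email sent at this stage is the one-off confirmation.
    return f"confirmation sent to {email}"

def confirm(email):
    if subscriptions.get(email) == "pending":
        subscriptions[email] = "active"   # opt-in #2: link in the email clicked
        return True
    return False                          # unsolicited confirms are ignored
```

The catch the crooks exploit is that first step: anyone can type anyone else’s address into the form, and the "one-off confirmation" still gets sent from the company’s legitimate server.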

In many cases, signup emails are dull and unexciting – they don’t need to be inviting or attractive, after all, because they’re meant to be simple confirmations of a choice you have already made.

But some organisations can’t resist giving the glitzy marketing treatment even to their mailing list confirmations, filling them with logos, clickable links, tempting offers and all the other COOL THINGS YOU WILL ENJOY as long as you actually do complete your signup.

There’s something circular and unappealing about this approach – given that the company isn’t supposed to email you marketing material until you sign up and opt in, emailing you marketing material as part of the opt-in process seems to be putting the cart before the horse, or at least the “in” before the “opt”.

Even though marketing to you as part of getting approval to market to you is annoying, receiving one glamorous and groovy but potentially unwanted email that you weren’t expecting probably isn’t a big deal.

After all, ignoring the email automatically means that you’ll get no more of them.

What is a big deal is that Dr.Web noticed that several major brands and services were incautious about how much information from the signup form itself they trustingly copied into the signup email and ‘reflected’ back to the supplied email address.

For example, instead of signing me up as Duck and getting an email pushed out to me with a greeting Hi, Duck

…the crooks might be able to sign up as Duck! GET RICH QUICK [link] and trigger an email to me that said, Hi, Duck! GET RICH QUICK [clickable link].

The spammy part of the confirmation email would end up wrapped into a visually appealing, on-brand, professionally produced HTML page, giving it cultural credibility it didn’t deserve.

Worse, the email itself would pass all anti-spam sender checks such as SPF, DKIM and DMARC, because it really, genuinely, actually came from the right server, giving it a quantifiable technical credibility it didn’t deserve.

What to do?

When re-using untrusted data submitted from outside, be careful not to ‘reflect’ or pass on any of that data in the body of any web page you return, or any email you generate.

Otherwise you open up your website or email server to reflection attacks, where you send out dodgy content that I get to choose.

If I give my name as Paul (or some crook gives my name for me), then sending me an email with the text Hello, Paul is mostly harmless, albeit presumptuous given that you actually have no idea who I am.

But if a crook gives my name as Why not try this fantastic website [insert web hyperlink here], then sending me an email with that text in it is dangerous, because it could lead to a phishing site, to malware, to inappropriate content, and so on.

So, follow our advice:

  • Validate your input. If it came from outside, you can’t trust it, so check it. And even if it came from inside, check it anyway to filter out inappropriate, irrelevant or unwanted data as soon as you can.
  • Keep confirmation emails simple. They aren’t a marketing opportunity – by definition they are almost exactly the opposite! – so don’t overdo it. The less you send and the simpler you keep it, the lower the chance that anyone can abuse your system to spew out messages that look like something they aren’t.
  • Don’t re-use anything from the input form except the raw signup email address. There’s no point in addressing me as Dear Paul Ducklin in an email that’s supposed to assume it might not be me at all. You don’t know me, so be honest and just use the word you instead.
  • Pass autogenerated emails like this through your regular spam filter if you can. Don’t exonerate web form submissions from spam filtering just because they were generated on a supposedly secure server managed by IT.
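The first and third points above can be sketched together in a few lines. Everything here – the function name, the regex, the message wording – is our own invented illustration, and the regex is deliberately simple rather than a full RFC 5322 address parser:

```python
# Minimal sketch of the advice above: validate the address, never reuse the
# user-supplied display name, and keep the confirmation message plain.
import re

EMAIL_RE = re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}")

def build_confirmation(form):
    address = form.get("email", "").strip()
    if not EMAIL_RE.fullmatch(address):
        return None  # reject outright rather than reflect anything back
    # form.get("name") is deliberately never touched: nothing
    # attacker-controlled except the bare address reaches the message.
    return ("To: " + address + "\n\n"
            "Hi,\n\n"
            "Someone (hopefully you) asked to add this address to our list.\n"
            "Click the confirmation link to opt in, or simply ignore this\n"
            "email and you will hear nothing further from us.")
```

A crook submitting a "name" of GET RICH QUICK [link] gets nothing reflected back, because the name field never reaches the outgoing email at all.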

As so often in cybersecurity, less is more!


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/eOFdmqns0Pg/

Patch blues-day: Microsoft yanks code after some PCs are rendered super secure (and unbootable) following update

A bunch of PCs running the wares of Sophos or Avast have been freezing or failing to start following the installation of patches emitted by Microsoft on 9 April.

The afflicted are those running Windows 8.1, 7, Server 2008 R2 and Server 2012. Avast for Business and Cloudcare have been hit by the problem, as have PCs running Endpoint Protection managed by Sophos Central or Sophos Enterprise Console (SEC).

Microsoft said this morning that it had “temporarily blocked devices from receiving this update if the Sophos Endpoint is installed”, a move which, sadly, had come a bit late for those afflicted.

While Microsoft has yet to mention it, Avast has published an advisory to the effect that it is researching the problem, which it reckoned was mainly hitting Windows 7 users.

For its part, Sophos strongly advises users not to take the update, and if an unfortunate user has downloaded the thing, to uninstall it without hesitation.

According to Sophos, Windows users hit by the problem must boot their machines into safe mode, disable Sophos, reboot and uninstall the borked update. Then the antivirus service can be reenabled.

However, one Reg reader told us that in his case he was forced to boot from a Windows 7 recovery disk and rename the Sophos program folder to stop the service from vomiting over the operating system on startup. He went on to tell us that his office IT staff had been “inundated” as users found themselves at the sharp, pointy end of the mess.

It’s all a bit unfortunate, since the patches include security fixes that administrators should really install sooner rather than later. And yes, both the security-only updates and monthly roll-ups are affected.

But, alas, the now legendary Microsoft quality control has struck once again.

Back in January the software giant was forced to remove an Office 2010 update after some versions of Excel fell over. In March, Windows 10 gamers were instructed to uninstall KB4482887 after the patch left some games unplayable.

And we’ll draw a discreet veil over the Windows 10 October 2018 Update debacle.

Still, on the plus side, Windows 10 will soon be able to have a go at uninstalling broken updates itself. The older operating systems will, alas, remain subject to the firehose of whiffy code sprayed from deep within the bowels of Redmond.

As with most updates these days, it’s best to try them on a test machine before allowing them anywhere near production. However, some users do not have that luxury.

And of course Windows 10, to which Microsoft would dearly like users to upgrade, is not affected by the borkening.

Thanks to all the Reg readers who got in touch about this. We feel your pain. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/04/11/microsoft_sophos/

Uncle Sam charges Julian Assange with conspiracy to commit computer intrusion

One-time Aussie cupboard-dweller Julian Assange has been charged with conspiracy to commit computer intrusion by the US government.

Shortly after his arrest in London today – which followed the Ecuadorian embassy handing him over to British police – a US indictment dated March 2018 was unsealed.

It charges Assange for his part in a computer-hacking conspiracy from 2010, when hundreds of thousands of secret US cables, war reports and briefs were released after being leaked by US Army intelligence analyst Chelsea Manning.

The US Department of Justice alleged that Assange had conspired with Manning – who had top secret security clearance – to break into Pentagon computers and snag the document cache.

The indictment (PDF) was made in the Eastern District of Virginia.

It alleges that Manning and Assange had multiple conversations about getting the files, with Assange helping her hack a password stored on Department of Defense computers, in a “password-cracking agreement”, and discussed measures to conceal Manning as the source.

The indictment said that Manning downloaded four nearly complete databases, and the vast majority of the documents contained were then released on Assange’s WikiLeaks website.

These contained, according to the indictment, about 90,000 Afghanistan war-related significant activity reports, 400,000 Iraq war-related significant activity reports, 800 Guantanamo Bay detainee assessment briefs and 250,000 US Department of State cables.

The government said Manning told Assange she was “throwing everything” at getting a set of documents, but that was “all I really have got left”. To which Assange allegedly replied: “Curious eyes never run dry in my experience.” Manning was then said to have used a Department of Defense computer to download the State department cables.

According to the indictment, Assange faces a maximum of five years in prison if convicted of the charge.

Assange was arrested in London this morning for a breach of his UK bail conditions, and then later further arrested on behalf of the US after the Met received an extradition request.

He is currently in court in Westminster, central London, where District Judge Michael Snow has reportedly found him guilty of failing to surrender on 29 June 2012, and has sent him to the Crown Court for sentencing.

He could face up to 12 months in prison. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/04/11/assange_indicted_for_conspiracy_to_commit_computer_hacking/

When Your Sandbox Fails

The sandbox is an important piece of the security stack, but an organization’s entire strategy shouldn’t rely on its ability to detect every threat. Here’s why.

Working in cybersecurity is like fighting crime in Gotham City. You spend your day squaring off against faceless villains with names like WannaCry, Petya, and Red October, who are constantly coming up with new tactics, technology, and gadgets to get the upper hand. Then, after a good, hard fight, you think you’ve won the day, only to see old adversaries pop up a few days or even years later — stronger, smarter, and a lot more sophisticated.

For example, an old nemesis returned earlier this year with a new trick up its sleeve. The Emotet banking Trojan, initially introduced in 2014, reappeared on our radar screen, this time with an interesting twist. This new version was an XML document with a .doc extension, allowing it to potentially evade detection, because most sandboxes classify a file by its true type. Even though the true file type is XML, the endpoint opens it in Word.
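One simple defender-side response to this trick is to flag files whose extension and leading bytes disagree. This is a hypothetical sketch of the idea, not a description of any real product’s detection logic:

```python
# Flag a file claiming to be a legacy Word document (.doc) whose content
# actually begins with an XML prolog -- the mismatch this Emotet variant
# exploited. Genuine legacy .doc files start with the OLE2 magic bytes
# D0 CF 11 E0, not "<?xml".
def extension_content_mismatch(filename, head):
    if filename.lower().endswith(".doc"):
        return head.lstrip().startswith(b"<?xml")
    return False
```

A check like this costs almost nothing and catches the disguise before the file ever reaches Word.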

Once open in Word, the macro within the XML file spawns a PowerShell script that calls out to a second-stage URL to download the Emotet payload. The payload then enumerates a list of installed apps and checks disk volumes to determine whether it is in a sandbox. If it is, it stops execution and shuts down. In addition, Emotet has long sleep and delay mechanisms to hinder dynamic analysis techniques, which are used by sandboxes to detect malicious activity. Genius!

Other recent threats have used similar tactics to avoid detection by a sandbox. Bebloh, a generic banking Trojan first detected in 2009, recently re-emerged as a variant targeting Japanese users. This specific variant is delivered via webmail as an Excel attachment that includes a macro, which spawns a silent command shell. Interestingly, this variant of Bebloh checks the locale and country settings at each stage of execution.

At first, the macro stops execution and quits the Excel application if the locale setting does not match Japanese. Once the command shell is activated, a PowerShell script is spawned to fetch remote content from a URL pattern that looks like a RAR file but is actually another PowerShell script that contains an embedded base64-encoded and encrypted DLL. The key used to decrypt this DLL is generated based on the country code from the culture set in the operating system. Finally, the decrypted DLL is reflectively injected into memory by another process using PowerShell, and the entry point of the DLL is called to start the malware.

The upshot is that the location settings in a sandbox would have to be set to JP (the code for Japan) throughout the entire environment to detect this infection chain — a highly unlikely configuration scenario. Bebloh checks for system uptime and physical system characteristics, and stops execution if it detects it is in a sandboxed environment.

Phishing is another area where sandboxes fail, because detection is dependent on a file exhibiting malicious behavior. Attackers can leverage a simple PDF file containing a single link to a malicious sign-in form to avoid detection. Documents with a single Uniform Resource Identifier have a very low footprint for sandboxes to detect, and the short TTL domain leaves little evidence for post-event analysis or threat intelligence services.

Emotet, Bebloh, and PDF phishing attacks are worrisome for one very good reason. They use sophisticated — ingenious, really — techniques to avoid detection in a sandbox environment. Sandboxing has traditionally been used as a tried-and-true method for protecting users from web-based threats by quarantining malicious content before it reaches a user’s device. In the past, this has been enough. Attacks have been detected and then placed into a sandbox environment, where they can be walled off from the network and analyzed for future remediations. Up until now, this strategy has worked well.

However, sandboxing relies on detection. If a threat is able to mask itself, shut itself down, or evade detection in some way, it pretty much has free rein to infect users’ devices, enabling it to eventually make its way into the network and critical business systems. And that’s a problem. In a detect-and-respond cybersecurity strategy, once a threat gets past the front gates, it’s game over.

This evolution of threat tactics and technology is nothing new. Malware and other web-based attacks are constantly evolving to counter traditional cybersecurity solutions. It seems that for every step forward we make as an industry, threat actors have a countermeasure in hand almost immediately — making cybersecurity a constant back and forth on the front lines.

Network separation and web isolation are two alternatives to a cybersecurity strategy based solely on detection. These solutions simply remove any connection between users’ machines and the public internet. Network separation prevents users from accessing the public Internet on any computer connected to the corporate network — often requiring users to rely on two computers. Web isolation allows web browsing but moves the fetch and execute commands off of endpoints and onto a remote isolation server on-site or in the cloud. Rather than trying to detect whether content is safe or risky, network separation and web isolation assume everything is risky and never allow the user to connect directly to the web. (In full disclosure, my company, Menlo Security, along with others in the industry, markets web isolation technology.)

The sandbox is still an important piece of the security stack, but an organization’s entire strategy shouldn’t be reliant on its ability to detect every threat. Even Batman needs to accept that some attacks are a given and that the best security strategy is to contain the threat, away from the citizens of Gotham, in such a way that they don’t even know there was an attack!


Kowsik Guruswamy is CTO of Menlo Security. Previously, he was co-founder and CTO at Mu Dynamics, which pioneered a new way to analyze networked products for security vulnerabilities. Before Mu, he was a distinguished engineer at Juniper Networks. Kowsik joined Juniper …

Article source: https://www.darkreading.com/vulnerabilities---threats/when-your-sandbox-fails/a/d-id/1334342?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Here’s the Microsoft April Patch Tuesday roundup

Microsoft and Adobe have released their April Patch Tuesday updates, which this month comprise a relatively modest 74 CVE vulnerabilities, 15 of which are rated ‘critical’.

But there’s still plenty to worry about, which is why a good place to start is with the two zero-day vulnerabilities Microsoft says are being actively exploited.

Zero-days

These are CVE-2019-0803 and CVE-2019-0859, both identical-looking elevation of privileges (EoP) issues in the same Win32k component.

Microsoft offers little detail about the reported exploitation, but both would still require local access, which earns them a designation of ‘important’ rather than critical.

That hints that they are probably being chained with other vulnerabilities, known or unknown, which is why patching them should be a top priority.

Criticals and beyond

The 14 Microsoft flaws marked critical – often a euphemism for remote code execution (RCE) – include six in the Edge browser’s Chakra Scripting Engine, which these days seems to generate a lot of patching work.

Add to this three more RCEs in Microsoft XML – CVE-2019-0791, CVE-2019-0792, and CVE-2019-0793 – and the threat posed by attackers who can lure victims to malicious websites through vulnerable browser components is underscored.

Others to patch include CVE-2019-0853, a critical RCE in the way the Windows Graphics Device Interface (GDI) handles objects in memory. Ditto CVE-2019-0824, CVE-2019-0825, and CVE-2019-0827, a hat-trick of important-rated flaws affecting the Microsoft Office Access Connectivity Engine, and CVE-2019-0856, an issue in the Windows Remote Registry Service.

We can be less worried about the half dozen flaws in Internet Explorer’s VBScript, a deprecated component that is still in Windows 10, although this should be blocked by default on this version of Windows.

SophosLabs RCE

One flaw is being fixed thanks to Yaniv Frank of the SophosLabs Offensive Research Team (ORT), namely CVE-2019-0845. While fiddly to exploit, it’s an issue in the IOleCvt ActiveX control which could lead to an RCE.

Shockwave no more

After a quiet March, Adobe’s update hits users with a more normal load of updating work, including 21 CVEs – 11 of which are critical fixes for Adobe Reader. There are two vulnerabilities in Flash Player, one of which, CVE-2019-7096, is marked critical.

For anyone who’s forgotten, this month also marks the end of Shockwave Player. The last patched version will be 12.3.5.205 as outlined in APSB19-20. From now on, the only people receiving updates will be licensed enterprises.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/oNgIfiw2evE/

As you wrap up this month’s patch installs, don’t forget these four Intel fixes

Intel has posted another round of firmware updates with fixes for four CVE-listed vulnerabilities.

Chipzilla’s April patch load includes fixes for a pair of bugs considered by Intel to be high security risks, as well as a speculative execution bug reported by university researchers last month.

CVE-2018-18094 is an escalation of privilege flaw in the Intel Media SDK installer. An attacker with code already running on the vulnerable machine could exploit the flaw to gain higher access privileges without user interaction. Intel credited its own team with discovering the vulnerability.

The second high-risk vulnerability is CVE-2019-0163, a bug in the Intel NUC firmware for Broadwell U i5 vPro (before version MYBDWi5v.86A). Intel says that an input validation flaw can allow an attacker already on the system to raise privileges, crash the PC, or even extract confidential information from a vulnerable board. This too was found and reported by Chipzilla’s own security team.


CVE-2019-0162 is a side-channel information disclosure bug in Intel Virtual Memory Mapping, better known by its marketing handle, “Spoiler”. As the name would suggest, the flaw would potentially let an attacker with local access suss out memory addresses of things like passwords and security keys.

As this is a side-channel hardware flaw, there was no single fix released. Rather, Intel is directing users and admins to its best practices for handling side channel vulnerabilities. Credit on the discovery was given to the Worcester Polytechnic Institute team of Saad Islam, Ahmad Moghimi, Berk Gulmezoglu, and Berk Sunar, and the University of Lübeck team of Ida Bruhns, Moritz Krebbel, and Thomas Eisenbarth.

CVE-2019-0158 is an elevation of privilege flaw found in the Graphics Performance Analyzer for Linux, a tool that allows game devs to test and fine-tune their graphics-heavy creations on Intel hardware.

While that bug also allows an attacker to achieve a high level of access on a vulnerable box, Intel is making this a “medium” risk level because a successful exploit requires duping the user into opening an attack file (thus keeping its CVSS score down). Intel researcher Michael Henry got the shout-out on this one. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/04/11/intel_april_patch/