STE WILLIAMS

RSA coughs to critical-rated bug in its authentication SDK

RSA developers and admins have been given two critical-level authentication bugs to patch.

For sysadmins, the first issue affects RSA’s software providing web-based authentication for Apache. CVE-2017-14377 is an authentication bypass caused by an “input validation flaw in RSA Authentication Agent for Web for Apache Web Server”.

If the authentication agent is configured to use UDP there’s no problem, but if it’s using TCP, a remote and unauthenticated attacker can send a crafted packet that triggers a validation error, gaining access to resources on the target.
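The class of bug at work can be illustrated with a short sketch (hypothetical message format and token check, nothing to do with RSA's actual protocol): an authentication service receiving untrusted TCP input must validate every field before acting on it, and fail closed when validation cannot complete.

```python
def authenticate_packet(payload: bytes) -> bool:
    """Fail-closed validation of an untrusted TCP payload (illustrative only)."""
    # Reject anything that is not structurally plausible before parsing further.
    if len(payload) < 8:
        return False
    token_len = payload[0]
    # A crafted length field must never let us read past the buffer.
    if token_len == 0 or 1 + token_len > len(payload):
        return False
    token = payload[1:1 + token_len]
    # Placeholder check: a real agent would verify the token cryptographically.
    return token == b"secret!"

# A malformed packet with an oversized length field is rejected, not trusted.
crafted = bytes([255]) + b"AAAA1234"
valid = bytes([7]) + b"secret!"
```

The key property is that every exit path other than a fully validated success returns a denial, so a validation error cannot be converted into access.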

RSA has released a patch here.

The other critical-rated bug is in the RSA Authentication Agent SDK for C, meaning it would be inherited by other systems built with the SDK.

Versions 8.5 and 8.7 of the SDK had an error handling flaw, CVE-2017-14378, affecting TCP asynchronous mode implementations, in which “return codes from the API/SDK are not handled properly by the application”.

If an attacker triggered the error handling flaw, they could bypass authentication restrictions on the target system.
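The mistake described is easy to reproduce in any language. A sketch (hypothetical status codes, not RSA's actual API): an SDK call returns a status code, and an application that only checks for the explicit denial value, rather than requiring the explicit success value, can fall through to an authenticated state on an unexpected error.

```python
from enum import IntEnum

class AuthStatus(IntEnum):          # hypothetical status codes, for illustration
    OK = 0
    DENIED = 1
    TIMEOUT = 2
    INTERNAL_ERROR = 3

def broken_check(status: AuthStatus) -> bool:
    # Bug class: treats anything that isn't an explicit denial as success,
    # so an unexpected error code grants access.
    return status != AuthStatus.DENIED

def correct_check(status: AuthStatus) -> bool:
    # Fix: only the explicit success code grants access; every other return
    # value, including codes added in later SDK versions, is a failure.
    return status == AuthStatus.OK
```

The difference only shows up when the SDK returns a code the application author never anticipated, which is exactly the condition an attacker tries to trigger.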

The fix for the C version of the SDK is here, and the bug isn’t present in the Java version of the SDK. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/12/03/rsa_auhentication_bugs/

Dirty COW redux: Linux devs patch botched patch for 2016 mess

Linus Torvalds last week rushed a patch into the Linux kernel, after researchers discovered the patch for 2016’s Dirty COW bug had a bug of its own.

Dirty COW is a privilege escalation vulnerability in Linux’s “copy-on-write” mechanism, first documented in October 2016 and affecting both Linux and Android systems.

As The Register wrote at the time, the problem means “programs can set up a race condition to tamper with what should be a read-only root-owned executable mapped into memory. The changes are then committed to storage, allowing a non-privileged user to alter root-owned files and setuid executables – and at this point, it’s game over.”
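The mechanics can be sketched in miniature (a toy model, nothing like real kernel code; the real vulnerability wins a race to hit the write path after the copy decision is made, which is modelled here with a flag rather than an actual race): copy-on-write means a writer should get a private copy of a shared page, and the bug class arises when the write can be tricked into skipping the copy.

```python
class CowPage:
    """Toy copy-on-write page: readers share data; writers should get a copy."""
    def __init__(self, data: bytes):
        self.shared = bytearray(data)   # the root-owned original

    def write(self, offset: int, value: int, skip_copy: bool = False) -> bytearray:
        if skip_copy:
            # The bug class: the write lands on the shared original, so every
            # other user of the page sees the tampered bytes.
            self.shared[offset] = value
            return self.shared
        # Correct behaviour: mutate a private copy; the original is untouched.
        private = bytearray(self.shared)
        private[offset] = value
        return private

page = CowPage(b"root-owned")
```

In the correct path the "attacker" only ever dirties their own copy; in the buggy path the supposedly read-only original is modified in place.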

It was patched promptly, but last week, this post at the OSS-Sec mailing list explained the slip-up in the patch. Discovered by researchers from Bindecy, “Huge Dirty Cow” is discussed in detail here.

“In the ‘Dirty COW’ vulnerability patch (CVE-2016-5195), can_follow_write_pmd() was changed to take into account the new FOLL_COW flag (8310d48b125d ‘mm/huge_memory.c: respect FOLL_FORCE/FOLL_COW for thp’).”

Bindecy’s Eylon Ben Yaakov and Daniel Shapiro found a slip-up in the use of pmd_mkdirty() in the touch_pmd() function, the post said.

What’s that mean? get_user_pages() can reach touch_pmd(), “which makes writing on read-only transparent huge pages possible”, and from there Yaakov and Shapiro found ways to crash a variety of processes.

They’ve published their proof-of-concept here.

Android doesn’t suffer from “HugeDirtyCow”. Red Hat Enterprise Linux is also safe. Many other *nixes do have the bug: “Every kernel version with THP support and the Dirty COW patch should be vulnerable (2.6.38 – 4.14)”, Yaakov and Shapiro wrote.

The kernel got its patch on November 27, before the bug was announced to the public. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/12/04/dirty_cow_sequel_huge_dirty_cow_patched/

UK government bans all Russian anti-virus software from Secret-rated systems

The United Kingdom’s National Cyber Security Centre has effectively banned the use of Russian anti-virus products from government departments and revealed it is trying to “prevent the transfer of UK data to the Russian state” from Kaspersky Labs software.

A guidance note published last Friday and distributed to permanent secretaries of government departments, addressed “The issue of supply chain risk in cloud-based products, including anti-virus (AV) software” and explained “how departments should approach the issue of foreign ownership of AV suppliers.”

The advice is simple:

“… where it is assessed that access to the information by the Russian state would be a risk to national security, a Russia-based AV company should not be chosen. In practical terms, this means that for systems processing information classified SECRET and above, a Russia-based provider should never be used.”

The guidance stated that its decision “will also apply to some Official tier systems as well, for a small number of departments which deal extensively with national security and related matters of foreign policy, international negotiations, defence and other sensitive information.”

The letter added that the National Cyber Security Centre is “in discussions with Kaspersky Lab … about whether we can develop a framework that we and others can independently verify, which would give the Government assurance about the security of their involvement in the wider UK market.”

“In particular we are seeking verifiable measures to prevent the transfer of UK data to the Russian state.”

The guidance continued: “We will be transparent about the outcome of those discussions with Kaspersky Lab and we will adjust our guidance if necessary in the light of any conclusions.”

The guidance quickly caused other problems for Kaspersky’s UK outfit, as British banking giant Barclays wrote to customers to advise it is discontinuing an offer of free Kaspersky software for users of its online banking services.

The letter, shared with The Register by a reader, explains the decision as follows:

The UK Government has been advised by the National Cyber Security Centre to remove any Russian products from all highly sensitive systems classified as secret or above.

We’ve made the precautionary decision to no longer offer Kaspersky software to new users, however there’s nothing to suggest customers need to stop using Kaspersky.

The letter said customers need take no action and should ensure they run AV software.

The Register has sought comment from Kaspersky Labs on the National Cyber Security Centre’s decision and Barclays’ actions and will update this story if we receive any information. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/12/03/uk_government_bans_russian_anti_virus_software/

Apple iOS 11 security ‘downgrade’ decried as ‘horror show’

After rapidly patching a flaw that allowed anyone with access to a High Sierra Mac to obtain administrative control, Apple still has more work to do to make its software secure, namely iOS 11, it was claimed this week.

Oleg Afonin, a security researcher for password-cracking forensic IT biz Elcomsoft, in a blog post on Wednesday called iOS 11 “a horror story” due to changes the fruit-themed firm made to its mobile operating system that stripped away a stack of layered defenses.

What’s left, he argued, is a single point of failure: the iOS device passcode.

With an iOS device and its passcode – a barrier but not a particularly strong one – an attacker can gain access not only to the device, but to a variety of linked cloud services and any other hardware associated with the device owner’s Apple ID.

Before the release of iOS 11, Afonin explained in a phone interview with The Register, there were several layers of protection in iOS.

“I feel they were pretty adequate for what they were,” he said. “It seems like Apple abandoned all the layers except the passcode. Now the entire protection scheme depends on that one thing.”

What changed was the iOS device backup password in iTunes. In iOS 10 and earlier, users could set a unique password to secure an encrypted backup copy of the data on an iPhone. That password travelled with the hardware and if you attempted to connect the iPhone to a different computer in order to make another backup via iTunes, you’d have to supply the same backup password.

In iOS 11, everything changed. As Apple explains in its Knowledge Base, “With iOS 11 or later, you can make a new encrypted backup of your device by resetting the password.”

That’s a security problem because device backups made through iTunes contain far more data than would be available just through an unlocked iPhone. And that data can be had through the sort of forensic tools Elcomsoft and other companies sell.

“Once an intruder gains access to the user’s iPhone and knows (or recovers) the passcode, there is no single extra layer of protection left,” Afonin explains in his post. “Everything (and I mean, everything) is now completely exposed. Local backups, the keychain, iCloud lock, Apple account password, cloud backups and photos, passwords from the iCloud Keychain, call logs, location data, browsing history, browser tabs and even the user’s original Apple ID password are quickly exposed.”
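The change can be modelled as a tiny state machine (a toy abstraction of the behaviour described, not Apple's implementation): under the old rules the iTunes backup password could only be changed by someone who already knew it, while under the new rules knowing the device passcode alone is enough.

```python
class BackupSettings:
    """Toy model of the iTunes backup-password rules described above."""
    def __init__(self, passcode: str, backup_password: str):
        self._passcode = passcode
        self._backup_pw = backup_password

    def change_backup_pw_ios10(self, old_backup_pw: str, new_pw: str) -> bool:
        # Old behaviour: the existing backup password is required.
        if old_backup_pw != self._backup_pw:
            return False
        self._backup_pw = new_pw
        return True

    def reset_backup_pw_ios11(self, passcode: str, new_pw: str) -> bool:
        # New behaviour: the device passcode alone suffices, so anyone who
        # learns the passcode can force a backup they are able to decrypt.
        if passcode != self._passcode:
            return False
        self._backup_pw = new_pw
        return True

device = BackupSettings(passcode="1234", backup_password="long-random-secret")
```

The point of the model is that the strong secret (the backup password) no longer gates anything once the weaker secret (the passcode) is known.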

So the risk goes beyond the compromised phone and any associated Apple devices: Apple’s iCloud Keychain could include, say, Google or Microsoft passwords.


Afonin in his post suggested “Apple gave up” in the wake of complaints from police, the FBI, and users. Asked whether he had any reason to believe the change was made to appease authorities, he said, “I don’t believe this was made for the police. I believe it was just user complaints.”

Nonetheless, the iOS change has significant implications for those who deal with authorities, at border crossings for example.

“If I cross the border, I may be forced to reveal my passcode,” he said, noting that many thousands of electronic device searches happen every year.

With that passcode, authorities could create their own device backup and store it, which would allow them to go back and extract passwords unrelated to the device itself later on. “If that happens they have access to everything, every password I have,” he said.

Afonin said that with iOS 11, Apple’s entire protection scheme has fallen apart. He likened the situation to the 2014 iCloud hack known as Celebgate.

“Those iCloud accounts were protected with just passwords,” said Afonin. “We have a similar situation today. If it’s just one single thing, then it’s not adequate protection.”

To fix the issue, Afonin suggests going back to the way things were. “It was a perfectly balanced system,” he said. “I don’t think anybody complained seriously. The ability to reset an iTunes Backup password is not necessary. If they revert it back to the way it was in iOS 10, that would be perfect.”

Of course, this is just Afonin and Elcomsoft’s opinion. Others in the world of infosec were not convinced by his arguments – for example, Dino Dai Zovi, cofounder of cloud security biz Capsule8, was having none of it.

Apple did not respond to a request for comment. ®

PS: Apple’s iPhone X shares face scans with apps, which has some people worried. Also, if you have installed the password-less root security patch on macOS 10.13.0, and then upgraded to 10.13.1, make sure you reinstall the patch – Apple’s Software Update mechanism should do this automatically – and reboot. The upgrade from .0 to .1 nukes the emergency fix.

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/12/01/apple_ios_11_security_downgrade_decried_as_horror_show/

Guilty: NSA bloke who took home exploits at the heart of Kaspersky antivirus slurp row

An NSA hacker has admitted taking home copies of classified software exploits – understood to be the cyber-weapons slurped from an agency worker’s home Windows PC by Kaspersky Labs’ antivirus.

Nghia Hoang Pho, 67, pleaded guilty in a US district court in Baltimore on Friday to one count of willful retention of national defense information. The Vietnam-born American citizen, who lives in Ellicott City, Maryland, faces roughly six to eight years in the clink, with sentencing set for April next year.

Pho is understood to be the Tailored Access Operations (TAO) programmer whose home computer was running Kaspersky Lab software that was allegedly used, one way or another, by Russian authorities to steal top-secret NSA documents and tools in 2015.

According to Kaspersky, its security package running on the PC detected Pho’s copies of the NSA exploits as new malicious software, and uploaded the powerful spyware to its cloud for further analysis by its researchers. The biz deleted its copy of the archive as soon as it realized what it had discovered, it is claimed. It is further alleged by US government sources that Russian spies were able to get their hands on the top-secret code via the antivirus package, although Kaspersky denies any direct involvement.

Judging from his plea deal with prosecutors, Pho broke federal law when, as a developer on the NSA’s TAO hacking team, he took his work home with him multiple times and, in the process, exposed the classified information. Pho admitted that, over a five-year period starting in 2010, he copied information from NSA machines and took it all home with him.

“Beginning in 2010 and continuing through March 2015, Pho removed and retained U.S. government documents and writings that contained national defense information, including information classified as Top Secret and Sensitive Compartmented Information,” the US Department of Justice said in disclosing Friday’s guilty plea.

“This material was in both hard-copy and digital form, and was retained in Pho’s residence in Maryland.”

No other charges were filed, and there is no mention of any efforts by Pho to sell or pass off any of the data.

Kaspersky Lab has denied any wrongdoing in the matter or illicit ties to Russian intelligence. The security vendor also pointed out Pho’s machine was infected with loads of malware, meaning any miscreant could have stolen Uncle Sam’s cyber-weapons.

Regardless, the Moscow-based biz is fighting a ban on the use of its products on American government networks. Meanwhile, British spies at surveillance nerve center GCHQ today warned Brits to be wary of cloud-based antivirus toolkits. Kaspersky isn’t named specifically, but reading between the lines, Blighty’s snoops are saying: don’t Pho-k it up like the NSA did. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/12/02/nsa_tao_exploit_leak_guilty/

Coinbase ordered to turn over customer records to IRS

A federal district court in California this week blew away one of the major factors in the explosive growth and popularity of Bitcoin and its hundreds of cryptocurrency colleagues: the promise of anonymity.

The court ordered Coinbase, a popular cryptocurrency exchange and wallet service, to turn over three years’ worth of identifying records on more than 14,000 of its customers to the Internal Revenue Service (IRS), concluding a year-long battle over whether the tax agency could pierce the cryptocurrency anonymity veil.

The IRS didn’t get everything it had asked for a year ago, which included detailed information on any US persons who, “at any time during the period January 1 2013, through December 31 2015, conducted transactions in a convertible virtual currency as defined in IRS Notice 2014-21.”

After Coinbase refused to comply, calling the demand “indiscriminate and over broad,” and even some members of Congress complained that the summons could affect as many as 500,000 people, the IRS dramatically scaled back its demand, reducing the scope of the summons to include only users who had made in any one year, “at least the equivalent of $20,000 in any one transaction type (buy, sell, send or receive) …”

That, according to Coinbase, would include 14,355 users.

Additionally, the court didn’t grant all the user data the IRS was seeking. It denied requests for:

… account opening records, copies of passports or driver’s licenses, all wallet addresses, all public keys for all accounts/wallets/vaults, records of Know-Your-Customer diligence, agreements or instructions granting a third-party access, control, or transaction approval authority, and correspondence between Coinbase and the account holder.

Those requests, the court said, were “broader than necessary.” But it did order Coinbase to turn over:

(1) the taxpayer ID number, (2) name, (3) birth date, (4) address, (5) records of account activity including transaction logs or other records identifying the date, amount, and type of transaction (purchase/sale/exchange), the post-transaction balance, and the names of counterparties to the transaction, and (6) all periodic statements of account or invoices (or the equivalent).

That is likely, sooner rather than later, to render moot the promo from cryptocurrency vendor Monero that it offers “secure, private, untraceable currency.”

It is obvious why the IRS is interested – it considers virtual currencies property for federal tax purposes, which means citizens are supposed to pay taxes on any capital gains from cryptocurrency transactions.

And according to the IRS, fewer than 900 people per year reported income on Form 8949, which is used to account for “a property description likely related to bitcoin.”

That is a minuscule fraction – less than two-hundredths of a percent – of the number of people using Coinbase, “the largest exchanger in the US of bitcoin into US dollars,” according to the government – with 4.8 million users and 10.6 million wallets.

Not to mention the enormous spike in the value of Bitcoin. Its US value was $13 at the start of 2013. It was less than $1,000 at the start of the year and after recently topping $11,000, it was trading at $9,380 at midweek. Those kinds of profits would obviously yield some significant tax liabilities.
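The arithmetic that interests the tax agency is straightforward. A sketch using the price points above (illustrative figures only, not tax advice):

```python
def capital_gain(qty: float, buy_price: float, sell_price: float) -> float:
    """Taxable capital gain in dollars: proceeds minus cost basis."""
    return qty * (sell_price - buy_price)

# Ten coins bought at $13 at the start of 2013 and sold at $9,380 at midweek:
gain = capital_gain(10, 13.0, 9380.0)
```

At those prices the gain on even a modest early holding runs well into six figures, which is the sort of unreported income the summons is designed to surface.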

According to the IRS petition, most Coinbase users “have not been or may not be complying with US internal revenue laws requiring the reporting of taxable income from virtual-currency transactions.”

Coinbase has so far not commented on the order. But David Farmer, director of communications, wrote on the company blog after a hearing earlier this month that the IRS demand amounts to “government overreach.”

In the future we hope to work with the IRS to establish a reasonable tax reporting method that makes sense for virtual currency service providers and consumers alike.

And Coin Center’s Peter Van Valkenburgh told The Verge,

Without better rationale for why these specific transactions were suspect, a similarly sweeping request could be made for customer data from any financial institution. It sets a bad precedent for financial privacy.

But according to the court, the IRS “has a legitimate interest in investigating these taxpayers.” Coinbase’s arguments to the contrary, it said, “are not persuasive.”


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/KLlHPRBFd4E/

Ex-cop who ‘kept private copies of data’ fingers Cabinet Office minister in pr0nz at work claims

Cabinet Office Minister Damian Green has been caught up in a fresh row over his Parliamentary computer habits after the BBC reported that he had porn on his parliamentary PC a decade ago.

Neil Lewis, a former Scotland Yard detective specialising in computer investigations, was given a platform by the BBC’s morning TV news programme in the UK to make his allegations against the Conservative MP.

Lewis claimed to have found “thousands” of thumbnails of legal porn in Green’s computer’s browser cache, adding that he was in “no doubt whatsoever” that this was done by Green, according to the BBC – in spite of claiming that “you can’t put fingers on a keyboard”.

Veteran investigative journalist Duncan Campbell, who doubles up as a data forensic expert, told The Register: “The interviewee’s careful caveat about ‘not being able to put fingers on the keyboard’ sounded credible to me. That is consistent with the interviewee being an experienced computer forensic investigator, who has given forensic evidence in court.”

The ex-police officer also appeared to confess that he himself had broken the law by keeping his own personal copies of material he obtained from Green’s laptop, in spite of being ordered to delete the official copies by his managers. According to the Daily Telegraph, he said: “Morally and ethically I didn’t think that was a correct way to continue.”

The Metropolitan Police later said it had opened an investigation into Lewis over the release of confidential information. It is also cooperating with a recently opened Parliamentary investigation into Green’s alleged harassment of a young Conservative activist.

Campbell said: “The manner of his disclosure today raises questions of malice. Why, when the police were investigating Green on the previous occasion, could it be relevant or proper to have examined his lawful porn viewing activities, if indeed that happened? If this investigator thinks that the public interest now requires disclosure as Green is under investigation in Parliament, what he could have said is ‘call the Met and ask for my reports from the time.'”

Green’s fellow minister, David Davis, the Brexit secretary, has threatened to resign if Green is forced out of his ministerial post as a result of the copper’s leaks.

The minister himself denies watching or downloading porn in Parliament. An aide said: “From the outset he has been very clear that he never watched or downloaded pornography on the computers seized from his office.”

Green was arrested and his Parliamentary office was raided in 2008 by police who entered the Palace of Westminster without a warrant after persuading Serjeant-at-Arms Jill Pay to let them in. Policemen insisted at the time they were investigating allegedly criminal leaks from the Home Office to Green.

That criminality, according to BBC reports from the time, included emails from Labour Home Secretary Jacqui Smith’s private secretary confessing that thousands of illegal migrants had been granted licences to operate as security guards; a memo revealing that another migrant had become a cleaner in the House of Commons; and a draft letter to Prime Minister Gordon Brown from Smith, warning that an economic recession could lead to a rise in crime.

Smith publicly denied at the time that she knew police had been instructed to arrest Green and raid his office, responding to accusations from Conservative MPs that as head of the government department affected by the leaks, she must have had a hand in the police raid.

Something’s strange here

If Lewis, the ex-detective, kept personal copies of embarrassing information on a cabinet minister he obtained during work hours, it is to be hoped that his (former) friends at the Met charge him with breaches of the Data Protection Act.

Nonetheless, Green has denied having any porn on his parliamentary machine at all. Lewis’s claims are also subtly different from other police leaks aimed at Green: a month ago Bob Quick, a disgraced former assistant commissioner of the Met, described Green as having “extreme” porn – which is illegal to own. Quick was sacked from the Met for letting press photographers see details of a secret briefing document as he walked into Downing Street, though he was also head of the police inquiry which decided to arrest Green.

Prominent Conservative MP Jacob Rees-Mogg commented that the allegations against Green are being brought by “police officers who besmirched their office in the past and are now shaming themselves with their public comments”.

For his part, Green is under investigation by Parliamentary authorities for allegedly inappropriate behaviour with a young Conservative activist. It appears that this has prompted the ex-coppers to start talking to the BBC about their decade-old investigation – which was dismissed by the Crown Prosecution Service at the time. An AOL email address supposedly in Green’s name was also allegedly found in the Ashley Madison hacked data dump.

Squinting?

Campbell told El Reg, based on his forensic experience: “When people go to porn sites, it’s very common to first look at overviews, or image galleries. By the time you’ve gone through the subsets to find the particular lad or lassie whose flesh you want to see more of, an investigator can virtually read your mind by that point. Do you click on that? If not, you might not have seen it.”

The issue of thumbnails is also strange. One could reasonably assume that a man seeking gratification would view individual (large) images, not just the small thumbnails his police accusers have repeatedly said were on his work computer. A possible explanation in his favour is that 2008 was when extreme porn was about to be criminalised, though Green’s blanket denials mean this is probably not the reason – and the latest police leaks explicitly rule out extreme pornography, despite the disgraced Met assistant commissioner’s claims.

Nonetheless, Green’s denials that there was any porn at all on his work machines ring hollow to anyone with any knowledge of human nature – or, indeed, with knowledge of digital forensic work. Campbell commented: “It’s rare in computer forensics for criminal cases, whatever the alleged crime and if the user is male, not to find traces of porn viewing, especially in late-night sessions. The evidence in my experiences is that that is a norm for most British blokes, even those whose religion might forbid it. If he was working late, it’s to be expected.”

All in all, the Green computer porn affair seems to have more in common with Plebgate than anything else – a scandal in which some police officers lied, some politicians lied, at least one officer was jailed, and a Conservative MP had to pay hefty libel damages. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/12/01/damian_green_pron_row_oddities/

Expert gives Congress solution to vote machine cyber-security fears: Keep a paper backup

Video With too many electronic voting systems buggy, insecure and vulnerable to attacks, US election officials would be well advised to keep paper trails handy.

This is according to Dr Matt Blaze, a University of Pennsylvania computer science professor and top cryptographer, who spoke to Congress this week about cyber-threats facing voting machines and election infrastructure.

Among Blaze’s recommendations is that, rather than rely on purely electronic voting machines to log votes, officials should use optical-scan machines that retain a paper copy of each voter’s ballot, which can be consulted if anyone grows concerned about counting errors or tampering. In other words, because everything has bugs and flaws, truly paperless voting systems should be a no-no.
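The value of the paper trail is that it supports an independent check on the machines. A minimal sketch (a straight recount comparison, not a formal risk-limiting audit):

```python
from collections import Counter

def audit(electronic_tally: dict, paper_ballots: list) -> bool:
    """True when the machine's tally matches a count of the paper record."""
    return Counter(paper_ballots) == Counter(electronic_tally)

# Paper ballots are the ground truth; the electronic tally is what's audited.
paper = ["alice", "bob", "alice", "alice"]
honest_tally = {"alice": 3, "bob": 1}
tampered_tally = {"alice": 1, "bob": 3}   # flipped by a hypothetical attacker
```

A compromised paperless machine leaves nothing to run this comparison against, which is exactly the forensic gap Blaze describes.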

“In many electronic voting systems in use today, a successful attack that exploits a software flaw might leave behind little or no forensic evidence. This can make it effectively impossible to determine the true outcome of an election or even that a compromise has occurred,” Blaze told [PDF] the House Committee on Oversight and Government Reform.

“Unfortunately, these risks are not merely hypothetical or speculative. Many of the software and hardware technologies that support US elections today have been shown to suffer from serious and easily exploitable security vulnerabilities that could be used by an adversary to alter vote tallies or cast doubt on the integrity of election results.”


The recommendation was one of several Blaze made to Congress to address what he says is a problem compounded both by the increasing sophistication of cyber-attacks and by the inherent complexity of managing voting systems across many jurisdictions and large geographic areas, as is the case with US elections.

Blaze also believes regular audits need to be performed on election systems, including after every election. Those audits would help spot potential software failures in voting machines, as well as possible attacks on voting machines and networks.

Finally, Blaze said, the training and resources afforded to both local and state voting officials need to improve – in particular, training on how to spot and avoid sophisticated cyber-attacks that would seek to sway an election, either by manipulating the vote tally itself or with more subtle tactics.

“Electronic voting machines and vote tallies are not the only potential targets for such attacks. Of particular concern are the back end systems that manage voter registration, ballot definition, and other election management tasks,” Blaze told Congress.

“Compromising any of these systems (which are often connected, directly or indirectly, to the Internet and therefore potentially remotely accessible) can be sufficient to disrupt an election while the polls are open or cast doubt on the legitimacy of the reported result.”

He also appealed on Twitter to fellow computer security experts to help shore up tabulation system defenses, cautioning them, though, to understand the tricky rules and red tape involved in the administration of American elections.

Or as one election clerk summarized: please help, but please don’t assume officials are morons…

You can catch the committee hearing in the video below, and read written statements from panel chairman Will Hurd (R-TX) here; Homeland Security official Chris Krebs, here; Secretary of State of Louisiana Tom Schedler, here; Virginia Department of Elections Commissioner Edgardo Cortés, here; and Brookings Institution national security law expert Susan Hennessey, here. ®

Youtube Video

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/12/01/us_voting_machine_security_hearing/

Microsoft defends Windows 10 against ASLR criticism

Is it a bug or a feature? It’s one of the oldest debates in software.

Earlier this month the OS world was treated to the latest instalment, this time focusing on the way Microsoft implemented a low-level security protection called Address Space Layout Randomization (ASLR) in Windows 8 and 10.

On one side of the argument is Will Dormann, an engineer with Carnegie Mellon University’s CERT Coordination Center (CERT/CC), the body tasked by the US Department of Homeland Security with handing out important security advice.

His opening salvo was a tweet on 16 November in which he described the way Windows implements ASLR as “essentially making it worthless.”

Ouch.

In case anyone was in doubt, this was followed by an official vulnerability alert describing the claimed failings in detail. The summary being:

Windows 8 and later fail to properly randomize every application if system-wide mandatory ASLR is enabled via EMET [Enhanced Mitigation Experience Toolkit] or Windows Defender Exploit Guard [WDEG].

Stung, within days Microsoft put out a rebuttal stating that “ASLR is working as intended.”

That’s a significant difference of opinion, so who is right?

Let’s skip to the paradox of a punchline: they both might be, albeit within different frames of reference.

The theory behind ASLR (also used in different forms by Linux, Android, iOS and macOS) is to randomise the memory locations at which executable programs and DLLs are loaded, in order to deter memory attacks such as buffer overflows.

The gist is that attackers can’t assume they know the memory location of a targeted process because Windows could have put it anywhere.

Except, according to Dormann, it doesn’t work properly:

Both EMET and Windows Defender Exploit Guard enable system-wide ASLR without also enabling system-wide bottom-up ASLR … The result of this is that such programs will be relocated, but to the same address every time across reboots and even across different systems.
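The practical consequence of Dormann's observation can be simulated (a toy model of address assignment, not Windows internals): if mandatory ASLR without bottom-up randomization rebases a non-opted-in EXE to the same address every time, an attacker who learns the address once can reuse it across reboots and even across machines, whereas per-boot randomization gives a fresh base each time.

```python
import hashlib

FIXED_REBASE_ADDR = 0x10000  # stand-in for a deterministic rebase target

def base_without_bottom_up(boot_id: int) -> int:
    # Relocated, but to the same address every time, across reboots.
    return FIXED_REBASE_ADDR

def base_with_bottom_up(boot_id: int) -> int:
    # Model per-boot entropy by hashing the boot id into a 48-bit,
    # page-aligned base address.
    digest = hashlib.sha256(f"boot-{boot_id}".encode()).digest()
    return int.from_bytes(digest[:6], "little") & ~0xFFF

# Simulate five reboots under each policy.
fixed_bases = {base_without_bottom_up(b) for b in range(5)}
random_bases = {base_with_bottom_up(b) for b in range(5)}
```

Under the first policy the set of observed bases collapses to a single address, which is precisely what makes a hardcoded exploit address viable.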

Microsoft asserts that this is by design and is intended to allow older software not compiled to support ASLR to remain compatible:

ASLR is working as intended and the configuration issue described by CERT/CC only affects applications where the EXE does not already opt-in to ASLR.

That “opt-in” is the /DYNAMICBASE flag which software can use to indicate to Windows that it’s compatible with ASLR (and the operating system can infer that if the flag is missing the software may not work correctly under ASLR).

Windows can treat applications that don’t “opt-in” in a number of different ways. It can leave them to determine their own memory location, move them to a different but non-random location (the behaviour observed by Dormann) or move them to a random location using a setting called mandatory ASLR and bottom-up randomization.
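A toy model of those three policies (the base addresses are illustrative constants, not real Windows behaviour) shows why forced relocation without bottom-up randomisation produces exactly what Dormann observed: the image moves, but to the same place on every boot.

```python
import random

PREFERRED_BASE = 0x400000  # where the EXE asks to be loaded (illustrative)
RELOCATED_BASE = 0x10000   # illustrative fixed fallback, not a real Windows constant

def place_image(policy: str, rng: random.Random) -> int:
    """Where a non-/DYNAMICBASE EXE lands under each system-wide policy."""
    if policy == "default":
        return PREFERRED_BASE                     # left at its preferred base
    if policy == "mandatory":
        return RELOCATED_BASE                     # relocated, but deterministically
    if policy == "mandatory+bottom-up":
        return RELOCATED_BASE + rng.randrange(256) * 0x10000  # relocated and randomised
    raise ValueError(f"unknown policy: {policy}")

# Simulate several "reboots" (a fresh RNG each time) under each policy.
for policy in ("default", "mandatory", "mandatory+bottom-up"):
    bases = {place_image(policy, random.Random(boot)) for boot in range(20)}
    print(f"{policy}: {len(bases)} distinct base(s) across 20 boots")
```

Only the third policy yields more than one distinct address across boots; the middle one relocates the image yet gives attackers a target just as predictable as no ASLR at all.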

The CERT advisory also notes a problem in the way Windows Defender Exploit Guard implements mandatory ASLR and bottom-up randomization, a point Microsoft concedes:

CERT/CC did identify an issue with the configuration interface of Windows Defender Exploit Guard (WDEG) that currently prevents system-wide enablement of bottom-up randomization. The WDEG team is actively investigating this and will address the issue accordingly.

On the Windows 10 Fall Creators update, the issue can be mitigated manually by setting a registry value.

Neutrals might at this point be wondering what all the fuss is about: ASLR works most of the time as advertised, and the few occasions when it doesn’t won’t apply to many users.

If you like, Microsoft thought it was pragmatically ensuring compatibility (a feature) which Dormann interprets as an area of potential weakness (the bug).

It’s not the first time Dormann has taken a pop at Windows’ security: a year ago, his beef was Microsoft’s plans to drop EMET, now replaced in Windows 10 by WDEG.

Or perhaps the real issue is what users are supposed to make of a back-and-forth now so technically specialised that even some experts can’t keep up with its finer points.

OS security has been getting more complex with every passing year. It shouldn’t surprise us that the same is happening to arguments about whether these new layers inside Windows and its rivals are up to the job.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/foJYF3Rdfjg/

RFID repeater used to steal Mercedes with keys locked inside a house

Do you own a Mercedes or other fancy car that starts with a keyless fob – and which you’d rather not see thieves drive off in?

Do you own a refrigerator?

If you answered “yes” to both those questions, congratulations! You might not have to stand outside in your slippers, sobbing over a sadly empty parking spot! “Might” because, well, researchers aren’t entirely sure how much metal shielding you need to create a Faraday cage to block key fobs’ “unlock me/start me up!” radio signals.

Why does this matter? Because West Midlands Police on Sunday posted a surveillance video showing thieves mysteriously opening and getting into a Mercedes in less than 86 seconds, without a key.

Actually, it’s not all that mysterious. The video depicts a so-called relay attack. It’s well-known. We’ve seen plenty of them over recent years in this, the age of the keyless fob and the relay boxes and signal boosters that steal their signals.

The most recent case is this one in the West Midlands, UK. In the CCTV footage above, two men pull up outside the victim’s house. They’re both carrying relay boxes. West Midlands Police note that the devices are capable of receiving signals through walls, doors and windows, but not metal.

One of the men stands near the victim’s property, waving the device until he gets a signal from a key fob inside the house or garage. The other thief stands near the car with his relay box, which receives the signal from the relay box near the property. The car sniffs the unlock-me signal that’s close by, and it obligingly unlocks the door.

Police think this is the first time such a theft has been captured on CCTV in the West Midlands.

The whole thing took about a minute. Police say that they haven’t yet recovered the Mercedes, which was stolen overnight on 24 September in the Elmdon area of Solihull.

A relay box works by extending the signal coming from the car keys inside the house and tricking the car’s system into believing that it’s the actual key. That’s why the West Midlands car, and plenty of other stolen cars, unlock their doors without any warning alarm.
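The trick is easy to see in a toy challenge-response model (a simplified Python sketch; real keyless-entry protocols differ, but the principle holds): the relay boxes never break any cryptography, they just carry valid radio traffic further than the designers assumed it could travel.

```python
import hashlib
import hmac
import os

SHARED_SECRET = b"fob-and-car-secret"  # provisioned in both car and fob (illustrative)

class Car:
    def challenge(self) -> bytes:
        self._nonce = os.urandom(16)  # fresh random challenge per unlock attempt
        return self._nonce

    def unlocks_for(self, response: bytes) -> bool:
        expected = hmac.new(SHARED_SECRET, self._nonce, hashlib.sha256).digest()
        return hmac.compare_digest(expected, response)

class KeyFob:
    def respond(self, nonce: bytes) -> bytes:
        return hmac.new(SHARED_SECRET, nonce, hashlib.sha256).digest()

def relay_attack(car: Car, fob: KeyFob) -> bool:
    """The relay boxes just forward frames; they never learn SHARED_SECRET."""
    challenge = car.challenge()        # box by the car picks up the challenge...
    response = fob.respond(challenge)  # ...box by the house delivers it to the fob
    return car.unlocks_for(response)   # the valid response is relayed back to the car

print("car unlocked:", relay_attack(Car(), KeyFob()))  # prints: car unlocked: True
```

The cryptography checks out perfectly from the car’s point of view, which is why no alarm sounds. Distance bounding, where the car measures the round-trip time of the exchange, is the usual proposed countermeasure, because relaying necessarily adds latency.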

Here’s an example of it happening in Germany:

Here’s 2016 CCTV footage from Houston:

And here’s a video from the National Insurance Crime Bureau (NICB) featuring newscasters talking about relay attacks in California:

…and featuring NICB researchers who bought a relay attack unit to see how easy it is to steal a car with one.

TL;DR: It’s very easy.

As the NICB notes, it used to be the case that relay attacks would only unlock cars. But now you can not only get in; you can start that pretty little ride and take it for a spin.

The NICB tested a device on 35 vehicles (cars, minivans, SUVs and a pickup truck) over a two-week period last year. The relay attack unit – you can buy these things online – opened 19 of the 35 vehicles tested. It started 18 of those 19. With two-thirds of those vehicles, NICB researchers could not only start the cars and drive them away; they could also turn them off and restart them, as long as they had the device inside.

The attack devices vary in signal range and price, with powerful units fetching hundreds of dollars. But why bother? As far back as 2012, any idiot with a $30 hacking kit could bypass on-board diagnostics (OBD) security. The kits came replete with reprogramming modules and blank keys and enabled thieves to steal high-end cars such as BMWs in a matter of seconds or minutes.

In addition, the Berlin-based automobile club ADAC in March 2016 released a study in which it reported that thieves could use a $225 signal booster – in the same ballpark as a relay box – to fool cars into thinking their owners are nearby, allowing them to easily unlock the cars and start them up: a silent theft that doesn’t leave a scratch.

According to Mark Silvester of West Midlands Police, car owners should use a Thatcham-approved steering lock to physically immobilize the steering wheel. In the US, we typically call these Clubs, though that’s actually a brand name for a steering wheel lock.

And while you’re at it, you might as well try to remember to store your keys in the refrigerator, or the microwave, or whatever other Faraday cage you’ve got kicking around. It would be nice to find out if such cages are strong enough to keep the thieves from driving off with your wheels: if somebody gets your car even with your keys tucked in beside the ice cream, let us know!


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/qmiQyJpe3II/