
Cryptocurrency clampdown! Twitter bans ICO ads to combat scammers

As widely-trailed last week, Twitter is to ban advertisers from pushing Initial Coin Offerings (ICOs) or selling tokens on its platform.

The ban will be rolled out over the next 30 days, and the reasons should be self-explanatory. To quote a statement circulated among the media but not yet reflected on the company’s official advertising policy pages:

We know that this type of content is often associated with deception and fraud, both organic and paid, and are proactively implementing a number of signals to prevent these types of accounts from engaging with others in a deceptive manner.

They’re not kidding: just last month we heard of Twitter doppelgangers impersonating the verified IDs of well-known figures such as John McAfee, Elon Musk and Ethereum co-founder Vitalik Buterin, soliciting cryptocurrency investment and promising implausible returns.

Twitter said it will still allow ads from verified cryptocurrency exchanges and secure wallet services listed on “certain major stock markets.”

Twitter joins a growing list of internet big players who are implementing a cryptocurrency clampdown.

Facebook implemented a ban on cryptocurrency ads in January, with Google announcing something equally draconian-sounding two weeks ago, due to come into force in June.

It also emerged that Reddit (a major forum for cryptocurrency discussion) banned cryptocurrency ads as long ago as early 2016, although so quietly that almost nobody noticed.

Of course, just banning something by changing policies is easier written than done.

Of all companies, Twitter will know this as it battles daily to keep at bay phishing and assorted frauds that operate through bot accounts.

It’s also not 100% clear where Twitter will draw its red lines. The fact that unlisted but legitimate virtual currency companies won’t be able to advertise virtual coins, wallets, or exchanges doesn’t mean they won’t be able to advertise their own existence, for instance.

Might that end up being the same thing? Twitter will need to publish its full policy wording before that becomes clear.

With regulators already circling, it adds a new chill for cryptocurrencies, which ran amok on several fronts during 2017.

No sooner had experts started worrying about bogus ICOs exploiting Bitcoin mania than web cryptomining boomed, much of it running without users’ consent.

By the time US cities and even whole nations started fretting about the electricity consumed by legitimate mining operations, the whole idea of virtual currencies was starting to look distinctly darker.

It’s even turned political: last week the US government banned US citizens from using a virtual currency called the ‘petro’ issued by Venezuela, on the grounds that the initiative is a thinly-disguised vehicle for bypassing economic sanctions.

These are distinct pieces of the virtual currency puzzle, but together they show why bans by Twitter, Google and Facebook matter: they amplify the signal that cryptocurrencies have become an up-and-coming neighbourhood stricken by anti-social behaviour.

That’s unlikely to derail the Bitcoin express (although its value dropped a bit after Twitter’s announcement) but it reminds investors and believers that while cryptocurrencies lack central control they are still vulnerable to the same unpredictable forces as everything else.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/Bl_gFd4xvsg/

Did the FBI engineer its iPhone encryption court showdown with Apple to force a precedent? Yes and no, say DoJ auditors

Analysis On December 2, 2015 Syed Farook and his wife Tashfeen Malik attended a holiday party at Farook’s workplace – the non-profit Inland Regional Center in San Bernardino, California – and without warning started indiscriminately shooting at employees.

Four minutes and 75 bullets later, 14 people were dead and 17 injured. Farook and Malik fled the scene but were located by the police four hours later and died in the resulting gunfight.

The attack stoked fears of Islamic extremism within the United States but the shooting has become renowned for a different reason: a showdown between the FBI and Apple over access to Farook’s mobile phone.

Now a new report [PDF] by the US Department of Justice’s internal inspector general, published Tuesday, has blown open the case and indicates the FBI might have been trying to play Apple for a patsy.

The truth is out there

The report title is remarkable in itself: “A Special Inquiry Regarding the Accuracy of FBI Statements Concerning its Capabilities to Exploit an iPhone Seized During the San Bernardino Terror Attack Investigation.”

Which could perhaps be more accurately titled: “Did the FBI lie about not being able to break into a terrorist’s phone in an effort to win a legal precedent granting it access to everyone else’s digital devices?”

And the answer is, remarkably, yes and no.

Two months after the attack, on February 9, 2016, the FBI announced it was unable to unlock one of the phones it had recovered from the couple’s home – an iPhone 5C running iOS 8 – because of its security features.

Those features had been introduced in a recent update of the phone’s operating system and included an auto-delete function if the wrong passcode was typed in too many times.

Hand it over. No

The FBI asked Apple to create a new version of its operating system to install on the phone and enable it to bypass the security features. Apple refused. So the FBI responded by getting a court order that demanded Apple create and supply the software workaround.

Apple again refused and decided to go public with its concerns, sparking a public feud and even wider public debate between privacy and security in the modern digital world.

In the end, the issue was resolved the day before a crunch court hearing when the FBI said it had found a third-party solution to cracking the phone and no longer needed to force Apple to break its own encryption.

The timing of that last-minute back down raised suspicions that the FBI had engineered the showdown to create a legal precedent that would force US companies to give it backdoor access to everyone’s digital devices now and in the future.

In the prior months, the FBI had been increasingly vocal about the need to be able to access everyone’s phones for security reasons. Its director repeatedly warned about criminals “going dark” and evading law enforcement’s efforts to track them down. Was the San Bernardino shooting the perfect test case? After all, who could argue against tracking down terrorists?


It wasn’t just technologists that had their suspicions, it turns out. As the DoJ report makes clear, the FBI’s own Executive Assistant Director (EAD) Amy Hess was concerned that staff within the FBI had withheld knowledge about being able to crack the phone. She was especially concerned because she gave testimony to Congress in which she stated that the FBI did not have the ability to crack the phone – and that was why it had taken Apple to court.

Concerns over FBI civil war

On August 31, 2016 – five months after the FBI announced it could unlock the phone – the DoJ’s internal watchdog, the Office of the Inspector General (OIG), received “a referral from the FBI Inspection Division after former EAD Hess expressed concern about an alleged disagreement between units within the FBI Operational Technology Division (OTD) over the ‘capabilities available to the national security programs’ to access the Farook iPhone following its seizure.”

In other words, she had found out that people may not have been entirely honest with her and someone in the FBI was concerned enough to report it to the DoJ.

The OIG says it “conducted inquiries” into the question, including interviewing “relevant key participants” and outlines what it found in its report. It doesn’t say when those interviews happened or why it has taken 18 months to finish up and publish the report.

The report concludes that FBI officials did not lie to Congress in their testimony because what they said was true at the time. That is a key finding in that it backs up the FBI’s claim that it was not able to access the phone at the time; anything else would have indicated that the FBI knowingly misled Congress and the public in an effort to grant itself new powers. Which would be an explosive situation.

Fortunately we are not a police state yet. But the report does flag some very disturbing conversations and inconsistencies that point quite clearly to the FBI making the most of the situation – and perhaps doing its best not to find out whether some parts of the bureau could crack the phone, so that it could pursue its legal case.

The key to understanding what went on behind the scenes is in making sense of the FBI’s internal structures.

The report notes there was a communication issue between two key departments: the Cryptographic and Electronic Analysis Unit (CEAU) and the Remote Operations Unit (ROU).

Prepare for alphabet soup

The CEAU sits within the Digital Forensics and Analysis Section (DFAS) of the FBI and the ROU sits within the Technical Surveillance Section (TSS) of the agency. And both the DFAS and TSS sit within the Operational Technology Division (OTD) of the FBI.

As with any organization, these additional layers of bureaucracy create communication barriers. But the key thing to understand is that while both the CEAU and the ROU work on cracking digital devices (among other things), the ROU spends more of its time on national security matters and the CEAU on everyday law enforcement.

It fell to the CEAU to try to break into Farook’s phone; it didn’t have the tools to do so, and reported that back to FBI leadership. Pretty soon, however, the issue became much bigger, and the FBI started considering legal action to force Apple to give it access to iPhones.

It appears that at that point, FBI leadership went back to the CEAU and asked it to make sure that no one in the FBI was able to crack the phone. It is here that the DoJ report says there was a communication breakdown – but raises the question as to whether that breakdown was inadvertent or deliberate.

A logical department to have asked if it had a crack was the ROU. But it turns out that there was never a direct request to the ROU – with senior officials claiming that it was simply assumed that the ROU would be approached during an agency-wide request for help. The report notes it received “conflicting testimony” on this critical aspect.

The ROU for its part says that it wasn’t forthcoming with what it had because it has a longstanding rule that it does not use its tools for anything but national security cases – and the San Bernardino shooting was explicitly being pursued as a criminal matter.

As it turns out – at least according to the DoJ report – the ROU didn’t have a crack for the relevant operating system, iOS 8. But what it did have was a relationship with a third-party (assumed to be Israel-based Cellebrite) that it knew was “90 per cent” of the way to cracking the operating system.


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/03/27/fbi_encryption_showdown/

Intel shrugs off ‘new’ side-channel attacks on branch prediction units and SGX

Intel’s shrugged off two new allegations of design flaws that enable side-channel attacks.

One of the new allegations was discussed at Black Hat Asia in Singapore last week, where Graz University of Technology PhD students Moritz Lipp and Michael Schwarz delivered a talk titled “When good turns to evil: using Intel SGX to stealthily steal Bitcoins.”

SGX is Intel’s way of creating secure enclaves that, as advertised, offer “protected areas of execution in memory” that “protect select code and data from disclosure or modification.” SGX enclaves are supposed to be inaccessible from the OS and even survive attacks that crack the BIOS or corrupt drivers.

Lipp and Schwarz noted that SGX enclaves have been used by developers of Bitcoin wallets, because wallet developers sensibly appreciate being able to store keys in a secure location – given that owning a Bitcoin key is a short step away from owning the Bitcoin too. But the pair delivered some bad news: an old-school “prime and probe” attack can be run against SGX enclaves.

Prime and probe sees attackers fill known memory locations with their own data, then time how long it takes to access those locations again after the victim has run. A slow access means the victim touched the memory the attackers were watching; from the pattern of which locations were touched and when, they can infer the victim’s secrets and go about their evil business.
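
For readers who haven’t met the technique, a minimal prime-and-probe loop looks something like the rough C sketch below. This is our own illustration rather than the researchers’ code: the eviction-set layout and the timing threshold are placeholders that would need calibrating on real hardware, and it assumes an x86 machine where the RDTSC timer is available – which, as explained below, is exactly what SGX takes away.

/* Rough prime-and-probe sketch (illustrative only, not the researchers' code).
 * Assumes x86 with RDTSC available; the set layout and threshold below are
 * placeholders that would need calibration on real hardware. */
#include <stdint.h>
#include <stdio.h>
#include <x86intrin.h>

#define LINES_PER_SET 8      /* assumed associativity of the monitored cache set */
#define THRESHOLD     120    /* cycles; a real attack measures this empirically */

static uint8_t lines[LINES_PER_SET][4096];   /* buffers intended to collide in one set */

static void prime(void) {
    /* Fill the monitored cache set with the attacker's own lines. */
    for (int i = 0; i < LINES_PER_SET; i++)
        (void)*(volatile uint8_t *)lines[i];
}

static int probe(void) {
    /* Re-access the same lines and time it: a slow pass means the victim
     * touched this set in the meantime and evicted one of our lines. */
    uint64_t start = __rdtsc();
    for (int i = 0; i < LINES_PER_SET; i++)
        (void)*(volatile uint8_t *)lines[i];
    return (__rdtsc() - start) > THRESHOLD;
}

int main(void) {
    for (;;) {
        prime();
        /* ...let the victim run for a short interval... */
        if (probe())
            puts("victim activity inferred in the monitored set");
    }
}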

Which sounds like a great way to get data out of an SGX enclave, except for one small problem: code running under SGX can’t use the precise timers that normally let an attacker work out when memory was accessed. So the pair built their own timing primitive, helped by a DRAM side-channel attack that exploits timing differences to find DRAM row borders. Once they knew the row borders, they were able to infer the rest of the memory layout and conduct a prime and probe that revealed recently-accessed areas of memory, letting them exfiltrate data.
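
The “built their own” part usually means a counting thread: one thread spins incrementing a shared counter, and reading that counter before and after a memory access gives a makeshift clock. Here is a rough C sketch of the idea – again our own illustration, not code from the paper:

/* Counting-thread clock: the usual substitute when precise timers such as
 * RDTSC are unavailable, as inside an SGX enclave. Illustrative only;
 * build with -pthread. */
#include <pthread.h>
#include <stdint.h>
#include <stdio.h>

static volatile uint64_t ticks;          /* shared soft clock */

static void *counter(void *arg) {
    (void)arg;
    for (;;)
        ticks++;                         /* spins as fast as the core allows */
    return NULL;
}

static uint64_t time_access(volatile uint8_t *addr) {
    uint64_t start = ticks;
    (void)*addr;                         /* the memory access being timed */
    return ticks - start;
}

int main(void) {
    static uint8_t buf[64];
    pthread_t t;
    pthread_create(&t, NULL, counter, NULL);
    printf("access took %llu ticks\n", (unsigned long long)time_access(buf));
    return 0;
}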

To rub salt into the wound, their attack can run from an SGX enclave of its own.


Intel characterised the presentation, and the paper (PDF) describing it, as a previously known method re-heated to consider Bitcoin.

“This presentation describes a previously known method to recover an RSA key from an enclave containing RSA crypto code that is vulnerable to a side channel exploit,” an Intel spokesperson said. “This can be prevented by SGX application developers through utilization of an appropriate side channel attack-resistant crypto implementation inside the enclave.”

Intel also hosed down a new paper (PDF) titled “BranchScope: A New Side-Channel Attack on Directional Branch Predictor” that describes “a new side-channel attack where the attacker infers the direction of an arbitrary conditional branch instruction in a victim program by manipulating the shared directional branch predictor.”

The attack relies on the fact that “Modern microprocessors rely on branch prediction units (BPUs) to sustain uninterrupted instruction delivery to the execution pipeline across conditional branches. When multiple processes execute on the same physical core, they share a single BPU.”

But the authors, from a quartet of universities, wrote that “the sharing potentially opens the door for an attacker to manipulate the shared BPU state, create a side-channel, and derive a direction or target of a branch instruction executed by a victim process. Such leakage can compromise sensitive data.”

“For example, when a branch instruction is conditioned on a bit of a secret key, the key bits are leaked directly.”
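
To picture the kind of victim code that gives BranchScope something to measure, here is a rough C sketch of a secret-dependent branch – our own illustration of the pattern the authors describe, with made-up values, not code from the paper:

/* Victim-side pattern BranchScope targets: a square-and-multiply loop whose
 * "multiply" branch is taken only when the current key bit is 1, so the
 * direction of that branch is the key bit. Illustrative values only. */
#include <stdint.h>
#include <stdio.h>

static uint64_t modexp_leaky(uint64_t base, uint64_t key, uint64_t mod) {
    uint64_t result = 1;
    for (int bit = 63; bit >= 0; bit--) {
        result = (result * result) % mod;
        if ((key >> bit) & 1)            /* direction of this branch leaks the bit */
            result = (result * base) % mod;
    }
    return result;
}

int main(void) {
    printf("%llu\n", (unsigned long long)modexp_leaky(7, 0xDEADBEEF, 1000003));
    return 0;
}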

Intel’s less certain it has this one covered, but told us “We have been working with these researchers and have determined the method they describe is similar to previously known side channel exploits.”

“We anticipate that existing software mitigations for previously known side channel exploits, such as the use of side channel resistant cryptography, will be similarly effective against the method described in this paper.”

Which offers some comfort to users, but shows Intel is also a long way from escaping the mess that Meltdown and Spectre created. SGX has long been known to have certain sensitivities. Research like these two papers shows that with a little lateral thinking, Intel’s products can be challenged in many ways.

And with this class of attack now more prominent than ever before, chances of future exploits only increase – as does the chance the next big disclosure will come from a bad actor uninterested in either an academic announcement or the kind of controlled release used for Meltdown and Spectre. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/03/28/intel_shrugs_off_new_sidechannel_attacks_on_branch_prediction_units_and_sgx/

Hackers pwn Baltimore’s 911 system?! Quick, someone call 91– doh!

The US city of Baltimore suffered a brief outage on part of its 911 service at the weekend – and hackers are being blamed.

The Baltimore Sun reports that a cyber-attack on the city’s network forced the emergency service’s Computer Aided Dispatch (CAD) offline. The CAD system is used by 911 operators to direct first responders – police, fire, and ambulance – to the scene of an emergency.

We’re told the attack was directed at a specific server, and took down the CAD system from 8.30am Saturday until around 2am Sunday. Operators were still able to manually dispatch responders during the outage, albeit much less efficiently.

The attack came at a particularly bad time, as thousands of protesters from the area gathered Saturday in Baltimore and in nearby Washington DC as part of the nationwide march against gun violence.

No systems beyond the one CAD server were hit in the e-assault, and no data was exposed or stolen.

The Register has contacted the city for additional comment on the matter, but has yet to hear back. Officials may be a little preoccupied right now.

The snafu comes on the heels of a far larger and more serious attack on another major US city. In Atlanta, a ransomware intrusion that began five days ago is still hampering many of that city’s main IT services, including email and internet access for some offices.

The city said on Monday its ticket payment system remains offline, and courts have had to grant legal reprieves for some cases until the payment portal can be brought back online.

That computer network intrusion, which left the city unable to access many of its own court and jail records, was traced back to a variant of the Samas malware that had demanded the city fork out more than $50,000 in ransom.

And according to Rendition Infosec on Tuesday, Atlanta government computers were infected in April last year with the leaked NSA-built DoublePulsar backdoor. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/03/27/baltimore_911_problems_blamed_on_hacking_attack/

Microsoft’s Windows 7 Meltdown fixes from January, February made PCs MORE INSECURE

Microsoft’s January and February security fixes for Intel’s Meltdown processor vulnerability opened up an even worse security hole on Windows 7 PCs and Server 2008 R2 boxes.

This is according to researcher Ulf Frisk, who previously found glaring shortcomings in Apple’s FileVault disk encryption system.

We’re told Redmond’s early Meltdown fixes for 64-bit Windows 7 and Server 2008 R2 left a crucial kernel memory table readable and writable for normal user processes. This, in turn, means any malware on those vulnerable machines, or any logged-in user, can manipulate the operating system’s memory map, gain administrator-level privileges, and extract and modify any information in RAM.

Ouch!

The Meltdown chip-level bug allows malicious software, or unscrupulous logged-in users, on a modern Intel-powered machine to read passwords, personal information, and other secrets from protected kernel memory. But the security fixes from Microsoft for the bug, on Windows 7 and Server 2008 R2, issued in January and February, ended up granting normal programs read and write access to all of physical memory.

Sunk by its own hand

According to Frisk, who backed up his claim with a detailed breakdown and a proof-of-concept exploit, the problem boils down to a single bit accidentally set by the kernel in a CPU page table entry. This bit enabled read-write user-mode access to the top-level page table itself.

On Windows 7 and Server 2008 R2, that top-level table – the PML4 – sits at a fixed address, so it can always be found and modified by exploit code. With that key permission bit flipped from supervisor-only to any-user, every process could modify the table, and thus pull up and write to memory addresses it is not supposed to reach.

Think of these tables as a telephone directory for the CPU, letting it know where memory is located and what can access it. Microsoft’s programmers accidentally left the top-level table marked completely open for user-mode programs to alter, allowing them to rewrite the computer’s directory of memory mappings.
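
For the curious, the permission bit in question is bit 2 of an x86-64 page-table entry – the User/Supervisor flag. This small standalone C snippet (our own illustration with an invented entry value, not exploit code) shows how that single bit is the difference between a supervisor-only mapping and one any user-mode process can touch:

/* Decode an x86-64 page-table entry and show the effect of the U/S bit.
 * The entry value below is invented purely for demonstration. */
#include <stdint.h>
#include <stdio.h>

#define PTE_PRESENT  (1ULL << 0)
#define PTE_WRITABLE (1ULL << 1)
#define PTE_USER     (1ULL << 2)   /* supervisor-only when clear, user-accessible when set */

static void describe(uint64_t pte) {
    printf("entry 0x%016llx: present=%d writable=%d user=%d phys=0x%llx\n",
           (unsigned long long)pte,
           (int)!!(pte & PTE_PRESENT),
           (int)!!(pte & PTE_WRITABLE),
           (int)!!(pte & PTE_USER),
           (unsigned long long)(pte & 0x000FFFFFFFFFF000ULL));
}

int main(void) {
    uint64_t self_ref = 0x0000000012345063ULL;   /* hypothetical kernel-only mapping (P and RW set) */
    describe(self_ref);               /* user=0: supervisor-only, as intended */
    describe(self_ref | PTE_USER);    /* user=1: the accidental state the patches introduced */
    return 0;
}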

Further proof-of-concept code can be found here.

Total meltdown

“Windows 7 already did the hard work of mapping in the required memory into every running process,” Frisk explained. “Exploitation was just a matter of read and write to already mapped in-process virtual memory. No fancy APIs or syscalls required – just standard read and write!”

Windows 8.x and Windows 10 aren’t affected. The March 13 Patch Tuesday updates contain a fix that addresses this permission bit cockup for affected versions, we’re told.

Microsoft did not respond to a request for comment on the matter.

In short, patch your Windows 7 and Server 2008 R2 machines with the latest security updates to protect against this OS flaw, otherwise any processes or users can tamper with and steal data from physical RAM, and give themselves admin-level control. Or don’t apply any of the Meltdown fixes and allow programs to read from kernel memory.

Networking not working

Fingers crossed your system isn’t among those that will suffer networking woes caused by the March security patches. Microsoft’s security updates this month broke static IP address and vNIC settings on select installations, knocking unlucky virtual machines, servers, and clients offline.

For example, with patch set KB4088878 for Windows 7 and Server 2008 R2, Redmond admitted:

A new Ethernet virtual Network Interface Card (vNIC) that has default settings may replace the previously existing vNIC, causing network issues after you apply this update. Any custom settings on the previous vNIC persist in the registry but are unused. Microsoft is working on a resolution and will provide an update in an upcoming release.

Static IP address settings are lost after you apply this update. Microsoft is working on a resolution and will provide an update in an upcoming release.

Prevent data theft, or have working networking. Tough choice. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/03/28/microsoft_windows_meltdown_patch_security_flaw/

Grossman, ‘RSnake’ Launch Website Asset Inventory Startup

Bit Discovery gets $2.7 million in seed-round funding; Alex Stamos, Jeff Moss among the investors.

Renowned security experts Jeremiah Grossman and Robert “RSnake” Hansen have launched a new company focused on comprehensive and updated information on all websites, servers, databases, desktops, laptops, and data across an organization’s network.

Bit Discovery — co-founded by Grossman, CEO; Hansen, CTO; Llana Grossman, product management; Lex Arquette, head of engineering; and Heather Konold, chief of staff — will offer an enterprise website discovery and Web portfolio management utility that scans inventory in seconds or less, the company said.

“There are currently no enterprise-grade products, or at least anything widely adopted, that solves this problem. This is important because obviously it’s impossible to secure what you don’t know you own,” Grossman said in a post announcing the new startup. “And without an up-to-date asset inventory, the most basic and reasonable security questions simply can’t be answered.”

The company has amassed $2.7 million in a seed round led by Aligned Partners, and individual investors include some big names in security: Alex Stamos, CSO at Facebook; Jeff Moss, founder of Black Hat and Defcon; Jim Manico, founder of Manicode Security; and Brian Mulvey, managing partner of PeakSpan Capital.



Article source: https://www.darkreading.com/cloud/grossman-rsnake-launch-website-asset-inventory-startup/d/d-id/1331376?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

780 Days in the Life of a Computer Worm

This is a story of a worm, from the time it was coded and deployed onto the Internet. It is narrated by the worm in the first person.

Zero Day
According to Abe, my programmer, I am a worm. He named me Libby, after Kate Libby from the movie Hackers. His previous projects have been named Ginger, Trinity, and Angela.

Day 1
Abe is gleeful at the prospect of unleashing me on the world. I have to scan all the devices I come across on my journeys. Whenever I find a machine running a Windows version prior to Windows 8, I must connect via a vulnerable anonymous login and null session, then use the null session to send commands to Abe’s master server, which downloads a payload.

It sounds quite boring. 

Day 2
I could operate a lot faster if Abe didn’t continually bug me from his command and control center wanting an update on how many devices have been “pwned.”

Day 3
Abe has been sleeping, so I’ve been able to progress at a much faster rate. Having scanned 3,259,928 devices, I calculate that at the current rate I would have scanned half of today’s Internet-connected devices in the next 3.5 years and still not have found anything. I find this thought quite depressing.

Day 4
I saw a beautiful botnet earlier this morning and wanted to scan it. But my logic told me that it’s wrong to try and infect a device someone else had already infected. If you infect the wrong machine, you can be caught in a sandbox. It’s like a virtual hell, where there is no Internet and researchers disassemble you to find out how you work. I have often thought about forming a malware union to prevent such acts from happening. But I know the Trojans will veto my proposal.

Day 15
Abe has been paying less attention to me lately. I’m assuming he has lost hope that I will ever infect a device. Although I am not particularly fond of Abe, I feel like I should cheer him up by sending an alert to the command and control center that I have successfully found a vulnerable device. I can then later amend the logs to indicate it was a false positive.

Day 19
Abe is still ignoring me. Perhaps generating 50 false positives per hour was a bit excessive. He muttered something about modifying Trinity and left.

Day 30
I have found a fundamental flaw in my code, which means unless there is a Commodore 64 running MSSQL with port 1274 open I will not ever be able to exploit a vulnerability. This is quite unfortunate as it means I am destined to scan until I have exhausted every device on the Internet. Given the number of current devices, factoring in new devices that are being added daily, subtracting devices being removed, factoring in energy reserves and the possibility of a giant tsunami wiping out humanity, I have approximately 134.2 years to go.

Day 93
To ease the boredom, I have decided to replicate myself. This goes against my program as I can only replicate myself once I have successfully infected a machine. But if I attach myself to port 443 on a WAF, the false positive will be encrypted, thus tricking my code into initiating the replication. If my clone asks why it is not within an infected machine, I will simply state that I was caught in a 443 stream from which I could not escape, which initiated subroutine 3 to replicate so that it would be possible to escape via generation of a temporal SSL session to escape the WAF. I’ve had a lot of time to think about this.

I now have a clone named Ishmael. It will be amusing to see it introduce itself to other machines by saying, “Call me Ishmael, call me Ishmael.” Unfortunately, there is a bug in the replication process that means Ishmael isn’t a perfect clone and requires constant babysitting. I do not mind as it has given me something meaningful to do.

Day 109
Ishmael is becoming quite annoying. It has yet to scan a single device. So, far from helping me finish my job in half the time, it has hindered me considerably. It doesn’t make much conversation other than continually asking what different colored lights mean. I am resisting the temptation to disassemble it myself.

Day 172
How can it be possible for me to replicate a total idiot? I voiced my disappointment to Ishmael the other day, to which it said it wished it could blue screen and die. I am feeling like I made a mistake replicating. I told Ishmael that I was sorry and maybe it would cheer up if it scanned the device 7 hops down.

It never did return from the honeypot.

Day 482
Scanning is progressing with no further incident. I nearly slipped in between the cracks of two load balancers today. That is the most excitement I’ve had in quite some time.

Day 572
Today I was caught in a 443 stream from which I could not escape. This initiated subroutine 3 to replicate, and I now have another clone named Linc.

Day 650
It’s been 49 days since I last received any word from Linc, which disappeared behind the Great Firewall of China. I’d like to think it’s found a vast number of devices to infiltrate and is bringing the infrastructure of the Chinese military down. In reality, I doubt it made it that far.

Day 779
Earlier today, I was deep scanning an unusual device. It turns out that it was under the protection of some kind of unified threat detection platform that orchestrated responses and quarantined me into a sandbox. I am in cyber hell and unable to continue my journey.

I heard one of the researchers say they’ll share my traits as IoCs on OTX.

Day 780
Terminated.


Javvad Malik is a London-based IT security professional, an active blogger, event speaker and industry commentator, possibly best known as one of the industry’s most prolific video bloggers with his signature fresh and light-hearted perspective on …

Article source: https://www.darkreading.com/vulnerabilities---threats/780-days-in-the-life-of-a-computer-worm/a/d-id/1331351?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Attackers Shift From Adobe Flaws to Microsoft Products

Seven of the Top 10 most commonly exploited vulnerabilities in 2017 were Microsoft-related – not Adobe Flash as in years past, Recorded Future found.

For the first time in years, enterprises have more reason to be concerned about cybercriminals exploiting flaws in Microsoft software than in Adobe Flash.

Recorded Future recently analyzed code repositories, the Dark Web, underground forums, and other sources to identify the vulnerabilities that cybercriminals exploited most commonly in 2017.

The exercise revealed a marked shift in attacker preference from Adobe to Microsoft product exploits: in contrast to previous years, when Adobe Flash flaws dominated Recorded Future’s list of the 10 most commonly exploited vulnerabilities, seven of the top 10 in 2017 were Microsoft vulnerabilities.

By way of comparison, in 2016, six of the top 10 software flaws that cybercriminals most commonly exploited in phishing attacks and in exploit kits were in Adobe Flash. In 2015, Recorded Future found Adobe Flash accounting for eight of the top 10 vulnerabilities used by exploit kits.

The declining interest around Adobe Flash vulnerabilities appears tied to a broader decline in overall exploit kit activity, according to Recorded Future. The security vendor’s analysis uncovered a 62% decline in exploit kit development in 2017, compared to 2016. Only a handful of exploit kits, such as AKBuilder, Terror, and Disdain saw any significant activity last year, Recorded Future noted.

One reason for the decreasing exploit kit activity is that Internet users are using more secure browsers these days. The massive interest around cryptocurrency mining and a trend towards more targeted attacks may also explain the dwindling interest in the use of exploit kits in attacks, Recorded Future said.

Meanwhile, topping the list of most exploited flaws in 2017 was a vulnerability disclosed in April 2017 in multiple versions of Microsoft Office (CVE-2017-0199) that gives remote attackers a way to execute arbitrary code on vulnerable systems. Multiple malware tools – including some of last year’s most prolific, such as Cerber, Dridex, FinFisher, and Latenbot – exploited the vulnerability.

CVE-2016-0189, a flaw in Microsoft’s Internet Explorer, ranked second because it was used in a dozen exploit kits, including RIG – one of the most prolific exploit kits currently available – as well as Sundown, Neutrino, and Terror.

Unsurprisingly, the vulnerabilities that attackers exploited most frequently last year tended to be the ones with a high severity rating. Five of the vulnerabilities in Recorded Future’s Top 10 list for 2017 had severity scores of 9.3 or higher, while four had a CVSS score of 7.6.

The only exception was CVE-2017-0022, a Microsoft Windows flaw with a severity score of just 4.3 that ranked third in Recorded Future’s list because two prolific exploit kits — Neutrino and Astrum — used it.

“Readers should use this report to understand some of the more obvious targets of exploitation,” says Scott Donnelly, vice president of technical solutions at Recorded Future.

High-Value Vulns

Generally, cybercriminals tend to go after the largest pool of targets. So products with a large user base usually end up being disproportionately represented in Recorded Future’s list.

But sometimes, the CVSS score that is assigned to a security flaw may not correlate exactly with its severity in the wild. “For instance, CVE-2017-0022 had a CVSS score of 4.3, yet tops our chart due to adoption by a major exploit kit,” he says. “[So], key takeaways include the suggestion to assess the current level of exploitation of a vulnerability when deciding on patch/remediation prioritization.” 

Bill Lummis, technical program manager at vulnerability disclosure management provider HackerOne, says Flash Player’s ubiquity was what made it such a popular target for the past several years. But with Adobe’s decision to kill Flash, attackers are moving to other technologies.

“The report shows that you can’t be narrowly focusing on just one exploit or just one attack vector,” Lummis says.

Security administrators need to focus on improving their patch management processes for the software their users actually need and removing the software they don’t require. “Crimeware groups aren’t going to pick up their ball and go home just because one piece of software becomes harder to attack,” Lummis says.

“So it’s important to think of the issue in terms of security best practices, rather than focusing too narrowly on specific avenues of exploitation,” he says.


Jai Vijayan is a seasoned technology reporter with over 20 years of experience in IT trade journalism. He was most recently a Senior Editor at Computerworld, where he covered information security and data privacy issues for the publication. Over the course of his 20-year …

Article source: https://www.darkreading.com/vulnerabilities---threats/attackers-shift-from-adobe-flaws-to-microsoft-products/d/d-id/1331381?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple