STE WILLIAMS

Crypto-busters reverse nearly 320 MEELLION hashed passwords

The anonymous CynoSure Prime “cracktivists” who two years ago reversed the hashes of 11 million leaked Ashley Madison passwords have done it again, this time untangling a stunning 320 million hashes dumped by Australian researcher Troy Hunt.

CynoSure Prime’s previous work pales in comparison with what the group describes in last week’s post.

Hunt, of HaveIBeenPwned fame, released the passwords in the hope that people who persist in re-using passwords could be persuaded otherwise, by letting websites look up and reject common passwords. The challenge was accepted by the group of researchers who go by CynoSure Prime, along with German IT security PhD student @m33x and infoseccer Royce Williams (@tychotithonus).

The password databases Hunt mined for his release were sourced from various leaks, so it’s not surprising that many hashing algorithms (15 in all) appeared in it, but most of the hashes used SHA-1. That algorithm was handed its death notice some time ago, and its continued use became untenable in February this year when boffins demonstrated a practical SHA-1 collision.

The other problem is SHA-1’s weakness. Hashing is used to protect passwords because it is supposed to be irreversible: p455w0rd gets hashed to b4341ce88a4943631b9573d9e0e5b28991de945d, the hash gets stored in the database, and it should be impossible to recover the password from the hash.
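That one-way property is easy to demonstrate with Python’s standard library (a quick illustration; the printed digest is simply whatever SHA-1 outputs for that string):

```python
import hashlib

# Hashing is deliberately one-way: computing the digest is cheap,
# but recovering the input from the digest is supposed to be infeasible.
password = "p455w0rd"
digest = hashlib.sha1(password.encode("utf-8")).hexdigest()

print(digest)  # a 40-character hex SHA-1 digest

# The function is deterministic: the same input always produces the
# same digest, which is what makes leaked hash databases attackable.
assert digest == hashlib.sha1(b"p455w0rd").hexdigest()
```

Because SHA-1 is fast and, in many of these leaks, unsalted, guessing candidate passwords and comparing digests is practical at enormous scale.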

The 15 different hashes in use were discovered using the MDXfind tool.

Along the way, the post examines Hunt’s methodology and notes that some of the hashes contain more than just passwords (for example, there are email:password combinations and other varieties of personally identifiable information, which CynoSure Prime says Hunt didn’t intend to release).

Hunt told The Register the CynoSure Prime people did some “pretty neat” work, and that they’ve been cooperative.

He agreed that the data leaks involved carried “a bit of junk” because the original owners made mistakes in parsing, and as a result the leaked user lists include names where only passwords are expected.

While some of this landed in his release, Hunt said, those data sets are in “two files that anyone could download with a few minutes’ searching”. He’s working with the CynoSure Prime data to purge it from the hashed lists hosted at HaveIBeenPwned.

When it comes to reversing the hashes, the post illustrates just how good the available tools have become: running MDXfind and Hashcat on a quad-core Intel Core i7-6700K system, with four GeForce GTX 1080 GPUs and 64GB of memory, the researchers “recover all but 116 of the SHA-1 hashes”.
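At its core, what tools like Hashcat do is a dictionary attack: hash each candidate password and look the digest up in the leaked set. A toy sketch of the idea (the “leaked” hashes and wordlist below are invented for illustration; real crackers run this on GPUs with rules, masks and billions of guesses per second):

```python
import hashlib

def crack_sha1(leaked_hashes, wordlist):
    """Naive dictionary attack: hash every guess and check for a match."""
    leaked = set(leaked_hashes)
    found = {}
    for guess in wordlist:
        digest = hashlib.sha1(guess.encode("utf-8")).hexdigest()
        if digest in leaked:
            found[digest] = guess
    return found

# Invented example: the "leaked" hashes are SHA-1 digests of two
# passwords that happen to be in our wordlist.
leaked = [hashlib.sha1(b"hunter2").hexdigest(),
          hashlib.sha1(b"letmein").hexdigest()]
wordlist = ["letmein", "p455w0rd", "hunter2"]

print(crack_sha1(leaked, wordlist))  # recovers "letmein" and "hunter2"
```

Unsalted, fast hashes like SHA-1 make this cheap, which is exactly why recovering hundreds of millions of them on a single well-equipped workstation is feasible.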

With the passwords reversed, here’s the distribution of character sets found by CynoSure Prime

Most of the passwords in the HaveIBeenPwned release are between seven and 10 characters long. ®

Sponsored:
The Joy and Pain of Buying IT – Have Your Say

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/09/04/cryptobusters_reverse_nearly_320_meellion_hashed_passwords/

Asterisk RTP bug worse than first thought: think intercepted streams

One of the Asterisk bugs published last week is worse than first thought: Enable Security warns it exposes the popular IP telephony system to stream injection and interception without an attacker holding a man-in-the-middle position.

A reader (@kapejod, who collaborated with @sandrogauci on the work) alerted The Register to this advisory, published last Friday.

In it, Enable Security explains that the bug it has dubbed “RTPbleed” (the “RTP” stands for Real-time Transport Protocol) first emerged in September 2011, was patched the same month, but was then reintroduced in 2013. As this page states, it doesn’t only affect Asterisk, because the bug is in RTP proxy code.

The problem occurs when comms systems like IP telephony have to get past network address translation (NAT) firewalls. The traffic has to find its way from the firewall’s public IP address to the internal address of the device or server, and to do that, RTP learns the IP and port addresses to associate with a call.

The problem is, the process doesn’t use any kind of authentication.

For Asterisk, the bug is triggered when the system is configured with nat=yes and strictrtp=yes – and because NAT is pretty much ubiquitous, those are default settings.

What’s special about this bug is that the attacker doesn’t need to be between the two ends of the conversation: a system with a vulnerable RTP implementation can be persuaded to reflect media streams towards the attacker.

“To exploit this issue, an attacker needs to send RTP packets to the Asterisk server on one of the ports allocated to receive RTP. When the target is vulnerable, the RTP proxy responds back to the attacker with RTP packets relayed from the other party. The payload of the RTP packets can then be decoded into audio.”
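The unauthenticated “latching” behaviour at the heart of the bug can be sketched in a few lines; this is illustrative pseudocode, not Asterisk’s actual source:

```python
# Illustrative sketch of the unauthenticated "latching" step behind
# RTPbleed -- not Asterisk's real implementation.

class NaiveRtpProxy:
    def __init__(self):
        self.peer = None  # learned (ip, port) of the far end

    def on_packet(self, source):
        if self.peer is None:
            # Latch onto whoever sends the first packet: no authentication.
            self.peer = source
        # All relayed media now flows to the latched address.
        return self.peer

proxy = NaiveRtpProxy()
# The attacker's spray arrives before the legitimate caller's media:
print(proxy.on_packet(("203.0.113.9", 4000)))   # attacker latches first
print(proxy.on_packet(("198.51.100.7", 5004)))  # still the attacker's address
```

Because the proxy believes whichever source speaks first, an attacker who sprays packets at the allocated RTP ports can win the race and have the other party’s media relayed to them.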

It’s a pretty knotty problem: admins can turn off the nat=yes flag, but only if they’re not using NAT; they can authenticate and encrypt media streams with Secure Real Time Protocol (SRTP), but only if both ends support it.

The Asterisk patch “limits the window of vulnerability to the first few milliseconds”, which is good news, given that both of the mitigations above can be troublesome for sysadmins to deploy.

There are still issues with the patch:

Note that as for the time of writing, the official Asterisk fix is vulnerable to a race condition. An attacker may continuously spray an Asterisk server with RTP packets. This allows the attacker to send RTP within those first few packets and still exploit this vulnerability.

The official Asterisk fix also does not properly validate very short RTCP packets (e.g. 4 octets, see rtcpnatscan to reproduce the problem) resulting in an out of bounds read disabling SSRC matching. This makes Asterisk vulnerable to RTCP hijacking of ongoing calls. An attacker can extract RTCP sender reports containing the SSRCs of both RTP endpoints.

@kapejod links to his own contribution to fixing the issue. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/09/03/asterisk_rtp_bug_allows_intercepted_calls/

Alert: AT&T customers with Arris modems at risk of remote hacking, claim infosec bods

Infosec consulting firm Nomotion has reported vulnerabilities in Arris broadband modems which it says are trivial to exploit and could affect nearly 140,000 devices.

The report claims the modems carry hard-coded credentials, which is serious since a firmware update turned on SSH by default. That would let a remote attacker access the modem’s cshell service and take a leisurely walk through most of the devices’ controls and levers.

“The username for this access is remotessh and the password is 5SaP9I26”, Nomotion states.

The shell’s capabilities include “viewing/changing the WiFi SSID/password, modifying the network setup, re-flashing the firmware from a file served by any tftp server on the Internet” – and there’s also access to a kernel module “whose sole purpose seems to be to inject advertisements into the user’s unencrypted web traffic.”

That last isn’t in use in the modem, Nomotion’s Joseph Hutchins writes – but the code is present and vulnerable.

The modems in question are the Arris NVG589 and NVG599, which Nomotion notes are provided as standard customer premises equipment for AT&T U-verse customers.

The bugs could have been added by AT&T, the report says, since while “examining the firmware, it seems apparent that AT&T engineers have the authority and ability to add and customize code running on these devices, which they then provide to the consumer (as they should).”

The cshell runs as root, which means any other exploitable flaw becomes trivial to abuse. For example, Hutchins provides a demonstration of command injection using its ping functionality.
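The underlying pattern is classic command injection: user input spliced into a shell command lets a semicolon smuggle in a second command. A generic illustration (not the modem’s actual code) of the vulnerable pattern and the usual fix:

```python
def ping_unsafe(host):
    # VULNERABLE pattern: interpolating input into a shell command means
    # "8.8.8.8; cat /etc/passwd" injects a second command.
    return "ping -c 1 " + host  # the command string a shell would run

def ping_safe(host):
    # Safer pattern: pass arguments as a list (e.g. to subprocess.run
    # with shell=False), so the host is a single argument, not shell text.
    return ["ping", "-c", "1", host]

evil = "8.8.8.8; cat /etc/passwd"
print(ping_unsafe(evil))  # the injected command rides along
print(ping_safe(evil))    # treated as one (invalid) hostname argument
```

When the process doing this runs as root, as the cshell does, the injected command runs with full control of the device.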

Hutchins says he found several other vulnerabilities in the modems as well.

Arris told Kaspersky’s ThreatPost it’s now analysing the report and will act to protect users if necessary. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/09/01/att_customers_with_arris_modems_at_risk_claim_infosec_bods/

Online file conversion services – why trust them?

Let’s imagine that you just received an attachment on your phone, such as an image, a document or a spreadsheet.

Imagine you need to edit it, resize it, convert it to a new format, or something similar, but you don’t have a suitable app on your phone, and you don’t have your laptop with you.

It’s a file you don’t intend to make public – perhaps it’s a picture of your children, or a copy of your latest tax return, or your sales targets for next quarter – but one you nevertheless need to work on urgently (it happens!)…

…what now?

We’re assuming that you wouldn’t go into an internet cafe, or find an internet kiosk, and upload the file onto one of their computers to work on it.

We don’t think you’d ask the stranger sitting next to you on the train if you could borrow their laptop for a bit. (By the way, if you were that stranger, we’d advise you not to lend out your laptop – make a polite excuse, but be wary of geeks bearing GIFs.)

It’s all about trust.

But how many of us use online services – publicly accessible online servers – to do just that sort of thing?

By that, we mean websites that offer online document conversion, image editing, video transcoding, animated GIF creation, barcode generation, and so forth.

Simply put – cloud conversion.

Cloud as kiosk

The cynic’s definition of using the cloud is “doing your work on someone else’s computer”, with all the risks that brings.

Indeed, if you use a cloud service that involves uploading your own private data to manipulate it remotely, you are very much “doing your work on someone else’s computer”.

If that someone else is unscrupulous, they might deliberately keep a copy of your personal files after you’ve finished with them.

If they’re incompetent, they might accidentally let crooks get hold of your personal files while you’re working on them.

In other words, just like it was above, it’s all about trust.

And that trust has to be earned, not assumed, as our chums at ZDNet reminded us yesterday when they wrote about a file conversion server in France that had allegedly been hackable for more than a year due to the ImageTragick vulnerability.

ImageTragick was a security hole in a popular open source image conversion utility called ImageMagick, a toolkit used on many websites to handle the low-level file manipulation needed to convert, resize and tweak images. The bug allowed a crook to upload booby-trapped fake images that would trick the ImageMagick software into running system commands chosen by the attacker, leading to what’s known as a remote code execution (RCE) bug. A patch for the bug, known as CVE-2016-3714, was published in May 2016.
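Besides patching, one widely recommended ImageTragick mitigation was to check that an uploaded file really begins with the magic bytes of its claimed image format before handing it to ImageMagick, because the exploit relied on mislabelled files. A minimal sketch of that check (signatures taken from the published format specifications):

```python
# Check a file's leading "magic bytes" before handing it to an image
# processor -- one of the mitigations recommended for ImageTragick.
MAGIC = {
    b"\x89PNG\r\n\x1a\n": "png",
    b"\xff\xd8\xff": "jpeg",
    b"GIF87a": "gif",
    b"GIF89a": "gif",
}

def sniff_image_type(data):
    """Return the image type if the header matches a known signature."""
    for signature, kind in MAGIC.items():
        if data.startswith(signature):
            return kind
    return None

print(sniff_image_type(b"\x89PNG\r\n\x1a\n" + b"\x00" * 16))  # png
print(sniff_image_type(b"push graphic-context ..."))          # None
```

A booby-trapped MVG script renamed to .png fails this check, while a genuine image passes, so the web app can reject suspect uploads before the conversion library ever sees them.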

According to ZDNet, the French servers in the story hosted close to 50 different online conversion services, with names such as rtftopdf, svgtopng and pdftotext.

Apparently, the ImageTragick hole had already been used to open up remote access to unknown attackers, implying that any file you uploaded to the service, or downloaded from it, could have been intercepted, inspected, modified or copied by unseen assailants.

And file conversion sites are entirely about uploading and downloading your own files, which is presumably why the attackers were interested in a remote access backdoor in the first place.

What to do?

If you’re going to entrust your personal data to a cloud-based service – all the way from creating a login profile to uploading one of your own files – then don’t do it unless you have a good reason to trust that service.

This is just the same sort of “personal due diligence” you need to go through when selecting an app to which you’ll entrust your data.

Indeed, just avoiding online document converters in favour of offline, downloadable ones can end in tears too, as we wrote about last year when a free tool called EasyDoc Converter turned out to be a vehicle for infecting Mac users with a remote access Trojan, or RAT.

Here are three steps you can take when choosing a service, an app, or a combination of the two (many services come with a dedicated app, especially on mobile devices, so you don’t need to use your browser):

  • Avoid apps or online services with a poor or non-existent reputation. Don’t trust a cloud service or an app about which no one yet seems to know anything.
  • Don’t rely on reviews that come with the app or service. Even in curated marketplaces like Google Play, there’s little to stop the creator of an app or online service from publishing their own glowing reviews, or paying someone else to do so. Seek an opinion from someone in real life whom you already know and trust.
  • Don’t use search engines as an indicator of quality. In the ZDNet case, several of the allegedly vulnerable sites in the story appeared in the first page of Google results for terms such as “pdf convert” and “image convert”.

In short…

…if in doubt, don’t give it out!


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/918KUZ5_kE4/

News in brief: Call to link encryption to ID; Facebook maps everyone; Mirai ‘blackmailer’ extradited

Your daily round-up of some of the other stories in the news

Call to withhold encryption unless you verify your ID

A lawyer has suggested that access to encryption technologies on social media should be denied to those who don’t “verify” their identities.

Max Hill QC, who is leading a review of the UK’s terrorism laws, told the London Evening Standard that “A discussion I have had with some of the tech companies is whether it is possible to withhold encryption pending positive identification of the internet user.” He added that he didn’t think this would “involve wholesale infringement on free speech use of the internet”.

Hill’s views seem to be building on a declaration by UK home secretary Amber Rudd that “real people don’t want unbreakable encryption“.

Naked Security’s Paul Ducklin has discussed the technical feasibility of intercepting encryption, and concluded then that Rudd “has as much chance of getting US firms to buy that idea as successfully hosting a mad-hatter’s tea party with a chocolate teapot”.

However, the idea of tying verified identities to encryption is a new development. We’ll be returning to this story in more detail next week – but in the meantime, what do you think?

Facebook knows where you live

Facebook knows where you live – and it knows where every other human on the planet lives, too, to within 15ft.

Janna Lewis, who manages innovation partnerships for Facebook, told the Space Technology and Investment Forum in San Francisco this week that the social media giant has created a data map of all the humans on the planet by combining census information with satellite data, reported CNBC on Friday.

The aim, said Lewis, is to help Facebook understand how it can deliver internet connectivity to everyone on Earth. “Our data showed the best way to connect cities is an internet in the sky,” she said, adding: “We’re trying to connect people from the stratosphere and from space, using high-altitude drone aircraft and satellites, to supplement earth-based networks.”

Alleged Mirai blackmailer extradited from Germany

A British man accused of being behind a cyberattack on two of the UK’s biggest banks has been extradited from Germany to face charges.

Daniel Kaye, 29, of Egham, Surrey, is facing nine charges under the Computer Misuse Act, two charges of blackmail and one of possession of criminal property. He’s accused of using the Mirai botnet to launch DDoS attacks on Lloyds, Halifax and Bank of Scotland over two days in January this year.

He’s alleged to have asked Lloyds for a ransom of £75,000-worth of Bitcoin, which was not paid. Kaye is also charged with endangering human welfare with an alleged attack against Liberia’s biggest ISP, Lonestar MTN.

The UK’s National Crime Agency said: “The investigation leading to these charges was complex and crossed borders. Our cybercrime officers have analysed reams of data on the way. Cybercrime is not victimless and we are determined to bring suspects before the courts.”

Catch up with all of today’s stories on Naked Security


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/pPmNA6jnpb0/

US cops can’t keep license plate data scans secret without reason

Police departments cannot categorically deny access to data collected through automated license plate readers, California’s Supreme Court said on Thursday – a ruling that may help privacy advocates monitor government data practices.

The ACLU Foundation of Southern California and the Electronic Frontier Foundation sought to obtain some of this data in 2012 from the Los Angeles Police Department and Sheriff’s Department, but the agencies refused, on the basis that investigatory data is exempt from disclosure laws.

So the following year, the two advocacy groups sued, hoping to understand more about how this data hoard is handled.

Automated license plate readers, or ALPRs, are high-speed cameras mounted on light poles and police cars that capture license plate images of every passing vehicle.

The LAPD, according to court documents, collects data from 1.2 million vehicles per week and retains that data for five years. The LASD captures data from 1.7 to 1.8 million vehicles per week, which it retains for two years.

Authorities use this data to investigate crimes, though most of the license plates captured are associated with drivers not implicated in any wrongdoing. Regardless, license plate images can reveal where drivers go, which may point to the people they associate with and the kinds of activities they engage in. And if combined with other data sets, like mobile phone records, an even more complete surveillance record may be available.

The ACLU contends [PDF] that indiscriminate license plate data harvesting presents a risk to civil liberties and privacy. It argues that constant monitoring has the potential to chill rights of free speech and association and that databases of license plate numbers invite institutional abuse, not to mention security risks.

EFF senior staff attorney Jennifer Lynch said the ruling demonstrates that the court recognizes the privacy implications of license plate data.

At the same time, making license plate data available to researchers seeking to understand the privacy implications is itself a privacy risk. The court recognized this conundrum in its ruling.

“Although we acknowledge that revealing raw ALPR data would be helpful in determining the extent to which ALPR technology threatens privacy, the act of revealing the data would itself jeopardize the privacy of everyone associated with a scanned plate,” the ruling says.

Accordingly, the California Supreme Court does not call for the release of this data; rather, it sends the plaintiffs’ record request back to the trial court, which will decide what data can be made public and whether some of it will need to be redacted or anonymized to protect driver privacy. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/09/01/police_cant_keep_license_plate_data_scans_secret/

Instagram warns users of API bug on heels of nude Bieber photos leak

Instagram’s API sprung a leak, with attackers snatching email addresses and phone numbers of “high-profile” users.

We don’t know who those high-profile users are, but we do know, as Variety reports, that somebody posted nude photos of Justin Bieber on to Selena Gomez’s Instagram account on Monday.

Bare Bieber photos weren’t up for long: within minutes, the account was offline, photos from (former couple) Gomez and Bieber’s 2015 vacation in Bora Bora were deleted, the account was re-secured, and back up it went.

Was it due to hackers exploiting what Instagram said was “a bug” in its API? Or just a coincidence? We can’t say. The two could be completely unrelated: after all, many, if not all, of the celebrity nude photo grabs of Celebgate versions 1, 2 and 3 were enabled by attackers phishing login credentials for iCloud and Google email accounts.

Or then again, it could be that the Instagram attacker did in fact exploit the flaw in the social media app’s API to peek at users’ profile information. As The Register notes, the API lets developers see profile information. That’s why Instagram and Facebook both changed their terms of service in March: to turn off the data spigot for developers who were mining the platforms for surveillance purposes.

At any rate, Instagram wasn’t forthcoming with details. But here’s what it did say in a statement sent to the New York Daily News:

We recently discovered that one or more individuals obtained unlawful access to a number of high-profile Instagram users’ contact information – specifically email address and phone number – by exploiting a bug in an Instagram API.

A source told the Daily News that one person found the API bug and used it to steal information.

Instagram says that it’s warned all of its verified users about the hack. It declined to say how many accounts were affected.

Censored versions of Bieber’s photos initially appeared in the Daily News, but the Full Monty versions later made their way online, Variety reports. This isn’t the first time his nude photos have been stolen. When it happened in 2015, also while he was on vacation in Bora Bora, he told Access Hollywood that it was a violation:

My first thing was like…how can they do this? Like, I feel super violated.

I’m not a Belieber, but I do beliebe he’s right: it is a violation when thieves get their grubby mitts on our intimate photos. Here are some ways to keep it from happening:

  • Don’t click on links in email and thus get your login credentials phished away. If you really think your ISP, for example, might be trying to contact you, rather than clicking on the email link, get in touch by typing in the URL for its website and contacting the company via a phone number or email you find there.
  • Use strong passwords.
  • Lock down privacy settings on social media (here’s how to do it on Facebook, for example).
  • Don’t add people on social media you haven’t met in real life, and don’t share photos with people you don’t know and trust. For that matter, be careful of those who you consider your “friends”. This isn’t the first time that Instagram content has been grabbed: one example of creeps posing as friends can be found on the creepshot sharing site Anon-IB, where users have posted images they say they took from Instagram feeds of “a friend”.
  • Use multifactor authentication (MFA) whenever possible. MFA means you need a one-time login code, as well as your username and password, every time you log in. That’s one more thing attackers need to figure out every time they try to phish you.
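For the curious, the six-digit codes most MFA apps generate follow RFC 6238 (TOTP): an HMAC-SHA1 over the current 30-second time slot, keyed with a shared secret and truncated to six digits. The whole scheme can be reproduced with the Python standard library (the secret below is the RFC’s own test value, not anything you should use in production):

```python
import hashlib
import hmac
import struct
import time

def totp(secret, now=None, timestep=30, digits=6):
    """RFC 6238 time-based one-time password (HMAC-SHA1 variant)."""
    counter = int((time.time() if now is None else now) // timestep)
    msg = struct.pack(">Q", counter)           # 8-byte big-endian counter
    mac = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                    # RFC 4226 dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238's published test secret and timestamp, so the result is known:
print(totp(b"12345678901234567890", now=59))  # -> 287082
```

Because the code changes every 30 seconds, a phished password alone isn’t enough; the attacker also needs a fresh code each time they try to log in.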


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/AWczZzcze3I/

Twitter struggles to deal with the sock-puppet and bot armies

Twitter botnets used for political propaganda might have hit on an ingenious new way to cause mischief – bombard accounts they dislike with fake followers and retweets in an attempt to get them suspended by the site’s anti-abuse systems.

Normally, such botnets – made up of thousands of automated “sock puppet” accounts controlled from a single point – are used to spam fake news stories or bombard target Twitter accounts with large numbers of hostile tweets.

In recent weeks, however, journalists and non-profit organisations have been affected by a new twist on an old tactic which one of those affected, cybersecurity writer Brian Krebs, describes as a “tweet and follower storm”.

The trigger provoking botnet attention in this case has been writing about Russian and US politics, as news site ProPublica discovered when it gave coverage to an analysis by Digital Forensic Research Lab (DFRLab) on alleged attempts by Russian propaganda to stir up political tensions in the US.

The response of pro-Russian bots was to retweet a Twitter condemnation of the story up to 23,000 times, ostensibly an attempt to blot out its post with the Twitter equivalent of white noise.

At the same time, DFRLab staff reported receiving intimidating tweets which, again, were amplified hugely by botnets, including on August 28 the bogus claim that one of its staff, Ben Nimmo, had died.

A journalist who covered this story, Joseph Cox of The Daily Beast, reported this week that it was retweeted 1,300 times by bots while he attracted 300 new, mostly Russian-language followers within a short period of time.

Then Cox’s account was suspended by Twitter with the following message:

Caution: This account is temporarily restricted. You’re seeing this warning because there has been some unusual activity from this account.

Presumably, Twitter had detected the suspicious retweets but incorrectly associated his account with them.

Two days later, journalist Brian Krebs avoided the same fate after he commented on the bot phenomenon and was overnight rewarded with 12,000 new followers and as many retweets. Commenting on the reasons behind Cox’s suspension, he said this:

Let that sink in for a moment: A huge collection of botted accounts — the vast majority of which should be easily detectable as such — may be able to abuse Twitter’s anti-abuse tools to temporarily shutter the accounts of real people suspected of being bots!

Twitter reinstated Cox’s account after a few hours, but one conclusion is, whether by design or accident, bots have hit on a new way to annoy Twitter users they take against.

According to Krebs, the 12,000 bot accounts unfollowed him but remain active on the service despite their suspicious behaviour.

On one level, this is not surprising – bots (in other words, automated accounts) are allowed under Twitter’s terms and conditions and have numerous legitimate uses. What isn’t allowed are fake accounts, which Twitter has been battling for years.

When fake accounts are corralled into botnets, trouble follows, with some of the biggest networks reaching hundreds of thousands of accounts. Some even have names: the 90,000-strong “Siren” botnet, for example, was used to lure people to porn websites.

The larger question is why, after years of claimed improvements in its security protocols, Twitter still seems unable to spot accounts that look dubious and which breach its terms and conditions.

All the big social media platforms have a problem with fake accounts used for nefarious purposes but only on Twitter do malicious bots seem able to pull off what amounts to a denial-of-service attack on individual users.

Is there a defence? After being targeted, DFRLab could find only one that was capable of deterring the bot horde – copy @Twittersupport and @Twitter on any complaint.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/fIFw1kDjOiY/

Ex-cop who won’t decrypt hard drives still in jail indefinitely

The former cop suspected of possessing child abuse imagery who’s rotting in jail because he can’t, or won’t, decrypt two hard drives?

He’s still rotting in jail. It’s going on two years now. His lawyers are still claiming that holding him breaches his Fifth Amendment right to not incriminate himself.

And the Department of Justice (DOJ) is still saying that he’ll continue to rot in jail indefinitely until he decrypts the drives, which are encrypted with Apple’s FileVault software.

We now know his name: the 17-year veteran and former sergeant of the Philadelphia Police Department is Francis Rawls. During an investigation into child abuse images in 2015, police seized the two hard drives from Rawls’ home. Rawls claims to have forgotten the passwords, but the government still isn’t buying it.

On Monday, Rawls’ lawyers filed a motion to vacate the contempt order (PDF) that’s keeping him behind bars. His legal team, once again arguing for Rawls’ right not to incriminate himself, requested that he be released on bail pending his final appeal to the US Supreme Court.

As it is, his lawyers said, Rawls has already been imprisoned for more than 18 months, which is the statutory maximum under 28 USC § 1826(a) for failure to comply with an order to testify or provide other information in federal judicial proceedings – in other words, failure to comply with a search warrant.

The suspect isn’t going anywhere, the government snapped back on Wednesday. In its response to the motion to vacate (PDF), prosecutors said that from the start, the government has deliberately chosen not to call Rawls as a witness so as to avoid the self-incrimination scenario.

That’s why the government hasn’t compelled Rawls to produce a password, it said in Wednesday’s response. Rather, Rawls was simply asked to perform a physical act to unlock the devices, without having to hand over his actual password. Rawls had, in fact, entered three incorrect passwords during a previous forensic examination.

Two appeals courts have decided that he could unlock the hard drives if he wanted to, however. Rawls has appealed his detention once in federal court and once in the 3rd US Circuit Court of Appeals. Both courts rejected his appeals.

At any rate, Rawls isn’t being held on that statute that has an 18-month maximum, the government said. Rather, he’s being held under the All Writs Act: an archaic statute that’s been around since 1789 and which allows courts to issue writs (orders) “necessary or appropriate in aid of their respective jurisdictions and agreeable to the usages and principles of law”.

Archaic the act may be, but the government has dusted it off in compelled decryption cases before this one, including those involving the iPhones of the San Bernardino terrorist and a Brooklyn drug dealer.

It’s not about the Fifth Amendment, the government said in Wednesday’s reply. It’s all about complying with that search warrant.

There’s no limit to how long somebody can be held when they’re in contempt of court, the government said, pointing to a similar case in which someone was held nearly seven years for contempt. The appeals court in that earlier case found that there is…

…no temporal limitation on the amount of time that a contemnor can be confined for civil contempt when it is undisputed that the contemnor has the ability to comply with the underlying order.

Rawls isn’t a witness, so he can’t self-incriminate by testifying against himself. Rather, he’s just somebody being asked to produce something to confirm what the government already knows. Namely, the foregone conclusion is that child abuse images are on those drives, with hash values of known child abuse images being the blinking neon sign pointing to their presence.

While Rawls’ act of production of the decrypted computer would have testimonial elements, he was not called as a witness. He was ordered to perform a physical act. Before he sat down at the keyboard, he was not placed under oath.

Yes, Rawls has been in jail for nearly two years, the government notes. But given that the search warrant showed probable cause for crimes that carry a mandatory total of 20 years’ jail time, two years isn’t much.

The government concluded by suggesting that the court check in with Rawls to see if he just might be ready to decrypt the hard drives. Do it periodically, it said, in case he changes his mind one of these days.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/7NFX4kXfknM/

Open source or proprietary: how should we secure voting systems?

The stakes are always high when it comes to software security, which is why the ongoing debate over open-source vs. proprietary tends to be passionate.

But the stakes rise to a new level when it comes to the security (and integrity) of a nation’s voting systems. Which makes a recent, relatively civil, squabble over the topic – 15 months out from the next national US election – both passionate and significant.

There isn’t much debate that something needs to be done to make voting systems – more than 8,000 jurisdictions in the 50 states – more secure.

While the US intelligence community concluded that Russian hackers were “probably unsuccessful” in tampering with votes in last year’s presidential election, that doesn’t mean they didn’t try, or that their chances of future success are low.

Richard Clarke, White House senior cybersecurity policy adviser for Presidents Bill Clinton and George W Bush, wrote before last year’s election that “the ways to hack the election are straightforward and are only slight variants of computer system attacks that we see every day in the private sector and on government networks in the US and elsewhere around the world”.

And to the argument that a jumble of thousands of different systems would make it difficult, he noted that it wouldn’t require a widespread attack. “In America’s often close elections, a little manipulation could go a long way,” he wrote.

Bloomberg reported two months ago that federal investigators found “incursions into voter databases and software systems” in 39 states – more than twice the number previously reported. The news agency said a classified National Security Agency (NSA) document reported “potentially deep vulnerabilities in the US’s patchwork of voting technologies …” and cited former FBI director James Comey warning that the Russians are “coming after America. They will be back.”

So, what to do? That’s where the argument begins. According to former CIA director R James Woolsey and Brian J Fox, original author of the GNU Bash shell and longtime free software advocate, the “obvious solution” is to run US voting systems with open-source software.

In an op-ed in the New York Times, the two noted that the National Association of Voting Officials, a California nonprofit, is leading the campaign to “begin to use software based on open-source systems that can guard our votes against manipulation”.

They cited the standard arguments in its favor:

Despite its name, open-source software is less vulnerable to hacking than the secret, black box systems like those being used in polling places now. That’s because anyone can see how open-source systems operate. Bugs can be spotted and remedied, deterring those who would attempt attacks. This makes them much more secure than closed-source models like Microsoft’s, which only Microsoft employees can get into to fix.

But that prompted a rejoinder on the Lawfare blog from Matt Bishop of the University of California, Davis, with contributions from seven other experts at institutions ranging from MIT to the Center for Democracy and Technology, reminding us all of that uncomfortable reality that so far there is no such thing as bulletproof security, no matter what software is being used. As Bishop put it:

Making source code available to everyone for inspection makes it available to the attackers for inspection. And the attackers are often highly motivated to find vulnerabilities. Complicating this is the relative ease of identifying one vulnerability and the difficulty of finding them all. Attackers need to find just a single flaw in order to exploit a system.

Even perfect software doesn’t guarantee perfect security. “Consider a system that uses a difficult-to-guess password, but that password can be found on a website. No amount of scrutiny of the system will reveal this flaw,” Bishop wrote.

The group doesn’t object in principle to open source. “We believe there are excellent reasons to move to open-source voting systems,” Bishop wrote, including:

  • Allowing vendor claims to be verified.
  • Such software, running on commercial, off-the-shelf hardware, “could be far cheaper to acquire and maintain than proprietary voting systems”.
  • Promoting a “competitive market for technical support for local election officials”.
  • Making it easier to “audit against the paper trail more efficiently than commercial systems permit”.

“But adopting open-source systems would not by itself provide any assurance that computers used in voting are doing what they are supposed to do,” Bishop wrote.

Clarke provided a short list for what he called “minimal election security standards”:

  • Don’t connect any vote-recording machine to any network — including LANs and VPNs.
  • Create a paper copy of each vote recorded, and keep them secured for at least a year.
  • Conduct a verification audit within 90 days on a statistically significant level.
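The third item – auditing against the paper trail – boils down to drawing a random sample of ballots and checking each machine-recorded vote against its paper copy. A toy sketch in Python (the function and its parameters are illustrative; real risk-limiting audits derive the sample size from the reported margin and a chosen risk limit rather than taking it as a given):

```python
import random


def audit_sample(machine_records, paper_records, sample_size, seed=None):
    """Randomly sample ballot positions and compare the machine record
    to the corresponding paper record; return the mismatched indices."""
    rng = random.Random(seed)  # fixed seed makes the audit reproducible
    indices = rng.sample(range(len(machine_records)), sample_size)
    return [i for i in indices if machine_records[i] != paper_records[i]]
```

Any non-empty result would trigger a wider recount; an empty result at a statistically significant sample size gives quantifiable confidence that the electronic tally matches the paper trail.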

It is probably also useful to keep in mind that voting systems are designed, run, secured and overseen by humans. Which creates its own challenges that can confound the best technology.

One of them is the clueless worker. The late Kevin McAleavey, cofounder and chief architect of the KNOS Project and a malware analyst, said last fall that most of the recent breaches of campaigns, voter roll lists and other confidential information were “done with malware planted by an unsuspecting, authorized user of the systems who got phished and clicked on the bait”.

And then there is the challenge of those who are not clueless, but malicious. As one of the world’s most lethal dictators, Joseph Stalin, put it: “I consider it completely unimportant who in the party will vote, or how; but what is extraordinarily important is this – who will count the votes, and how.”


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/d-6J03GtEYc/