
Western Australia’s Web votes have security worries, say ‘white hat’ mathematicians

The Western Australian government is pushing back against concerns about the security of its implementation of the iVote electoral system.

iVote is an electronic system already used in another Australian State, New South Wales, primarily as an accessibility tool because it lets the vision-impaired and others with disabilities vote without assistance.

Perhaps in response to last year’s Census debacle, Western Australia has decided to put in place denial-of-service (DoS) protection, and that’s attracted the attention of a group of veteran electronic vote-watchers.

Writing at the University of Melbourne’s Pursuit publication, the group notes that the DoS proxy is not in Australia: it’s provided by Imperva’s Incapsula DoS protection service.

That raises several issues, the academics (Dr Chris Culnane and Dr Vanessa Teague of the University of Melbourne, Dr Yuval Yarom and Mark Eldridge of the University of Adelaide, and Dr Aleksander Essex of Western University in Canada) note.

First, the TLS certificate iVote uses to secure its communications is signed not by the WA government but by Incapsula; second, and as a consequence, Incapsula is decrypting votes on their way from a voter to the State’s Electoral Commission.

While it would be fatal to Incapsula’s business if it weren’t trustworthy, the academics are worried about votes existing in decrypted form anywhere other than the Electoral Commission: a suborned employee, an intruder wandering around Incapsula’s systems without authorisation, or US government agencies all stand as “possible eavesdroppers”.

The Western Australian Electoral Commission has issued a “calm down”, telling The West Australian that votes have two layers of encryption: one applied when the vote is cast, and a second for transit (the TLS session that uses the Incapsula certificate).

That’s true, white-hat mathematician Dr Vanessa Teague told The Register, adding that the JavaScript-based in-browser encryption of votes looks “pretty good” to the group.
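
To illustrate what “two layers” means in general terms, here is a minimal sketch in Python (using the third-party cryptography package) of a ballot being encrypted client-side to an electoral commission’s public key before it ever enters the TLS tunnel. This is a hypothetical illustration of the principle only, not iVote’s actual cryptographic scheme; the names, key sizes and ballot format are invented.

```python
# Hypothetical sketch of "two layers of encryption": an inner layer applied to
# the ballot itself, plus the outer TLS session handled separately in transit.
# Not the iVote implementation; names and key sizes are invented for the example.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# The Electoral Commission would publish a public key; one is generated here
# purely so the sketch is self-contained and runnable.
commission_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
commission_public = commission_key.public_key()

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

ballot = b"1: Candidate A, 2: Candidate B"

# Inner layer: the browser encrypts the ballot to the Commission's key before
# sending it. A TLS-terminating proxy in the middle sees only this ciphertext.
ciphertext = commission_public.encrypt(ballot, oaep)

# Only the Commission, which holds the private key, can recover the ballot
# once the outer TLS layer has been stripped off.
assert commission_key.decrypt(ciphertext, oaep) == ballot
```

The point of the inner layer is that a TLS-terminating proxy would see only ciphertext rather than the ballot itself.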

However, problems remain, and for these, a little explanation is required.

First, iVote has processes designed to separate the voter’s identity from the vote they cast. It does so by using different servers for voter registration and vote-casting.

To register, a voter provides their name and a proof of identity, such as a Medicare number or passport number. From those details, the system generates a pseudonymous user ID and a login PIN.

To guarantee voter anonymity, the server processing votes only knows user IDs and PINs: it knows a registered voter is logging in, but not a voter’s identity.
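
As a rough illustration of that split (a hypothetical sketch, not the actual iVote registration code; the field formats, key handling and PIN length here are invented), a registration server can derive a pseudonymous user ID and PIN from the identity documents and pass only those credentials to the vote server. Since both flows pass through the same proxy, that separation can be weaker in practice, as the researchers note below.

```python
# Hypothetical sketch of keeping identity and votes on separate servers.
# Not the actual iVote code; formats, key handling and PIN length are invented.
import hashlib
import hmac
import secrets

REGISTRATION_SECRET = secrets.token_bytes(32)   # held only by the registration server

def register(name: str, id_document: str) -> tuple[str, str]:
    """Registration server: sees the real identity, hands back (user_id, PIN)."""
    identity = f"{name}|{id_document}".encode()
    # Keyed hash, so the pseudonym cannot be recomputed without the server secret.
    user_id = hmac.new(REGISTRATION_SECRET, identity, hashlib.sha256).hexdigest()[:12]
    pin = f"{secrets.randbelow(10**6):06d}"      # random six-digit login PIN
    return user_id, pin

# The vote server stores only pseudonym -> hashed PIN; it never sees a name.
vote_server_credentials: dict[str, str] = {}

def enrol_on_vote_server(user_id: str, pin: str) -> None:
    vote_server_credentials[user_id] = hashlib.sha256(pin.encode()).hexdigest()

def accept_vote(user_id: str, pin: str, ballot: str) -> bool:
    """Vote server: authenticates a registered pseudonym, knowing nothing else."""
    stored = vote_server_credentials.get(user_id)
    return stored is not None and stored == hashlib.sha256(pin.encode()).hexdigest()

uid, pin = register("Jane Citizen", "Medicare 1234 56789 0")
enrol_on_vote_server(uid, pin)
assert accept_vote(uid, pin, "Candidate A")
```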

Dr Teague pointed out to The Register that since both registrations and votes pass through the Incapsula proxy, it introduces a location from which an attacker could de-anonymise a voter (for legal reasons, this would be untestable against a live system).

As noted in the Pursuit article: “If you register and vote from the same web browser, a ‘cookie’ stored on your system by Incapsula allows it to link both interactions.”

Second, although the JavaScript encryption of the vote is well-designed, the code itself is delivered through the Incapsula proxy and so is potentially visible to third parties along the way. That raises the possibility of a man-in-the-middle attack to reveal votes.

The Register has asked the Western Australian Electoral Commission to comment. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/03/06/western_australian_online_votes_security_concerns/

Don’t worry, slowpoke Microsoft, we patched Windows bug for you, brags security biz

A computer security outfit claims to have plugged an information leak in Windows that was publicly revealed by Google before Microsoft had a patch ready. Could this third-party patching become a trend?

Last month, Google’s Project Zero team disclosed details of a trivial vulnerability in the Windows user-mode GDI library: the programming blunder can be exploited by dodgy enhanced metafiles (EMFs) to siphon sensitive stuff from memory. Hackers could potentially abuse the flaw to extract data from an application’s memory, or to defeat ASLR and pave the way for reliable remote-code execution.

Google said it had given Microsoft 90 days to fix the issue and, as it hadn’t, the Chocolate Factory went public with both the flaw and a proof-of-concept exploit. Now Slovenia-based ACROS Security says it’s managed to produce a patch and has released it, via its 0patch tool, for those who want to give it a try.

“I have to kindly thank Mateusz Jurczyk of Google Project Zero for a terse and accurate report that allowed me to quickly grasp what the bug was about and jump onto patching it,” said Luka Treiber from ACROS.

He explained that the flaw lies within the GDI library’s EMF image-format parsing logic: it doesn’t check the dimensions specified in an incoming image file against the actual amount of pixel data supplied, thus allowing a crafted file to trick the code into reading more memory than it should. To fix this, he added a checking function to the code, and he says the patch will work on 64-bit Windows 10, Windows 8.1 and Windows 7, and on 32-bit Windows 7.
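
To make the class of bug concrete, the snippet below is a hypothetical Python illustration of the same pattern: an image record whose declared dimensions are trusted without being compared against the pixel data actually supplied. It is not the GDI parsing code or the 0patch fix, just a sketch of the kind of check that was missing; the record layout and pixel format are invented.

```python
# Hypothetical illustration of the EMF-style bug class: declared image
# dimensions are trusted, so a malicious file can make the parser read
# past the pixel data it actually supplied. Not the real GDI/0patch code.
import struct

BYTES_PER_PIXEL = 4  # assume 32-bit RGBA pixels for this sketch

def parse_image(record: bytes) -> bytes:
    # Header: two little-endian 32-bit ints (width, height), then pixel data.
    width, height = struct.unpack_from("<II", record, 0)
    declared = width * height * BYTES_PER_PIXEL
    pixels = record[8:]

    # The check whose absence enables an out-of-bounds read in native code:
    if declared > len(pixels):
        raise ValueError("declared dimensions exceed supplied pixel data")

    return pixels[:declared]

# Honest file: 2x2 image with exactly 16 bytes of pixel data.
good = struct.pack("<II", 2, 2) + b"\x00" * 16
assert parse_image(good) == b"\x00" * 16

# Malicious file: claims 100x100 pixels but ships only 16 bytes of data.
bad = struct.pack("<II", 100, 100) + b"\x00" * 16
try:
    parse_image(bad)
except ValueError as exc:
    print("rejected:", exc)
```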

A video of the patch catching an attempt to exploit the GDI bug has been published on YouTube.


“While not the most severe issue, I get shivers thinking that … a malicious page could steal credentials to my online banking account or grab a photo of me after last night’s party from my browser’s memory,” Treiber said.

Redmond skipped its February Patch Tuesday update after hitting problems with its software build and distribution systems. This GDI bug is expected to be addressed in the next monthly patch dump, due on March 15, but a fix isn’t guaranteed.

“We’re unable to endorse unverified third party security updates,” a spokesperson for Microsoft said. “Our security updates are tested extensively prior to release, and we recommend customers enable automatic updates to receive the latest protections when available.” ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/03/07/microsoft_security_patch/

Wow, did you see what happened to Veracode? Oh no, no, it’s not dead. Worse – bought by CA

Investors in the cloudy app security biz Veracode are going to be celebrating after CA Technologies agreed to buy it up for $614m in cash.

CA announced the buy on Monday and said that it wanted to add Veracode’s application security testing to its security lineup and devops business, as well as to keep its cloud apps more secure. CA thinks Veracode, headquartered in Burlington, Massachusetts, will help it with larger enterprise customers while also snaffling the security firm’s larger punters.

“We provide over 1,400 small and large enterprise customers the security they need to confidently innovate with the web and mobile applications they build, buy and assemble, as well as the components they integrate into their environments,” said Bob Brennan, CEO of Veracode.

“By joining forces with CA Technologies, we will continue to better address growing security concerns, and enable them to accelerate delivery of secure software applications that can create new business value.”

The Veracode acquisition is the latest in a long line of purchases by New York City-based CA. The giant snapped up Israeli testing outfit BlazeMeter last year, and in 2015 snaffled identity management outfit IdMlogic, cloudy devops supplier Rally Software, and automated testers Grid-Tools. It is a corporate sponge, in other words. A bottomless devourer of technology. A blackhole of software.

“Software is at the heart of every company’s digital transformation. Therefore, it’s increasingly important for them to integrate security at the start of their development processes, so they can respond to market opportunities in a secure manner,” said Ayman Sayed, president and chief product officer, CA Technologies.

“Looking holistically at our portfolio, now with Veracode and Automic, we have accelerated the growth profile of our broad set of solutions. We now expect that the size of our growing solutions within our Enterprise Solutions portfolio will eclipse the more mature part of the Enterprise Solutions portfolio in FY19.”

The deal is expected to be concluded by April. Veracode has received roughly $114m in funding since its founding in 2006. CA predicted the biz gobble would add a couple of percentage points to its global revenues and have a “modestly adverse impact” on earnings per share and cash flow from operations over the next two fiscal years. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/03/07/ca_technologies_slurps_up_veracode/

Put down the coffee, stop slacking your app chaps or whatever – and patch WordPress

Internet scribblers who use WordPress must update their installation of the publishing tool following the disclosure and patching of six security holes.

Version 4.7.3 of the content management system includes fixes for the half dozen flaws that could allow for, among other things, cross-site scripting and request forgery attacks.

“This is a security release for all previous versions and we strongly encourage you to update your sites immediately,” WordPress says of the patch.

The three cross-site scripting errors were found in the handling of file metadata, YouTube video URLs, and taxonomy term names. The discovery was credited to researchers Chris Dale, Yorick Koster, Simon Briggs, Marc Montpas and Delta.

The cross-site-request forgery flaw was spotted in the Press This page sharing tool, and discovery was credited to researcher Sipke Mellema. Meanwhile, Cambridge University computer science student Daniel Chatfield took credit for reporting a flaw that could be used to circumvent URL validation checks, and Xuliang was credited for reporting a flaw that causes unintended files to be deleted when a WordPress plugin is removed.

WordPress said that in addition to patching the six security flaws now publicly disclosed, version 4.7.3 also addresses 40 maintenance issues in various WordPress components.

The 4.7.3 update comes just days after WordPress admins were alerted to a separate security crisis in NextGEN Gallery, a WordPress plugin vulnerable to SQL injection attacks. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/03/07/time_to_update_wordpress/

Shamoon malware spawns even nastier ‘StoneDrill’

Researchers following up on last November’s re-emergent Shamoon malware attacks have found something even nastier.

A quartet of Kaspersky researchers say the “StoneDrill” malware injects itself into the memory of the victim’s browser, and wipes any physical or logical path accessible with the target user’s privileges.

Although StoneDrill mostly seeks Saudi Arabian targets (and has Persian language resources in the code), Kaspersky’s authors Costin Raiu, Mohamad Amin Hasbini, Sergey Belov, and Sergey Mineev discovered it in Europe, and take this as a hint that the attackers might be widening their campaign.

There’s also a backdoor module with a choice of four command-and-control servers. The commands the researchers found in the malware suggest an espionage operation, with screenshot and upload capabilities. To help evade detection, it operates at the file level and doesn’t need to use disk drivers during installation.

StoneDrill also has better anti-emulation techniques than Shamoon 2.0, they write.

Like Shamoon 2.0, StoneDrill was apparently compiled in October and November 2016 (going by timestamps the authors left in the debug directory).

The full report, here, identifies what Kaspersky looks for in Shamoon 2.0 and StoneDrill: Trojan.Win32.EraseMBR.a, Trojan.Win32.Shamoon.a, Trojan.Win64.Shamoon.a, Trojan.Win64.Shamoon.b, Backdoor.Win32.RemoteConnection.d, Trojan.Win32.Inject.wmyv, Trojan.Win32.Inject.wmyt and HEUR:Trojan.Win32.Generic. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/03/07/stonedrill_malware_goes_on_fresh_datadestruction_frenzy/

That big scary 1.4bn leak was basically nothing but email addresses

The “1.4 billion identity leak” that was hyped up before the weekend involved, no, not a database ransacking at Facebook, YouTube, or anything that important.

No, instead, a US-based spam-slinging operation accidentally spilled its treasure chest of email addresses used to deluge netizens with special offers, marketing crap and the like.

On Friday, Twitter user Chris Vickery teased world plus dog that he was going public on Monday with news of a massive data breach of 1.37 billion records. And that turned out to be 1.37 billion email addresses amassed by River City Media (RCM) – an internet marketing biz apparently based in Jackson, Wyoming, that claims to emit up to a billion emails a day.

Some of the records include real names, IP addresses, and physical addresses, it is claimed. Vickery said he “stumbled upon a suspicious, yet publicly exposed, collection of files,” and discovered they related to RCM. Among the millions and millions of contact details were chat logs and internal documents exposing the sprawling RCM empire. It turns out the spamming, er, marketing biz has many tentacles and affiliates, mostly dressed up as web service providers and advertising operations.

“Someone had forgotten to put a password on this repository,” Vickery said. The data was, basically, a backup held in a poorly secured rsync-accessible system. It is alleged that chat logs and internal files in the repository show RCM staff discussing Slowloris-like techniques to overload mail servers and persuade them to accept hundreds of millions of messages.
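
For admins wondering what “a password on this repository” would look like, the snippet below is a minimal example of locking down an rsync daemon module with authentication and a host allow-list. The module name, paths and addresses are placeholders, and this is generic rsyncd.conf configuration rather than anything specific to RCM’s setup.

```
# /etc/rsyncd.conf -- example module that requires credentials and
# restricts which hosts may connect (placeholder values throughout)
[backups]
    path = /srv/backups
    read only = true
    list = false
    auth users = backupuser
    secrets file = /etc/rsyncd.secrets   # "backupuser:strong-password", mode 600
    hosts allow = 192.0.2.0/24
    hosts deny = *
```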

It is understood RCM gathers its information from people applying for free gifts and online accounts, requesting credit checks, entering prize giveaways, and such things on the internet, or the information is bought from similar info-slurping outfits. Vickery said he managed to confirm that some of the data was real, although the addresses tended to be out of date.

RCM did not respond to a request for comment on Vickery’s findings. Meanwhile, anti-spam clearing house Spamhaus has blacklisted the organization’s entire infrastructure. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/03/07/rcm_email_megaleak/

Shamoon Data-Wiping Malware Now Comes with Ransomware Option

And: another data-destruction variant discovered, with similarities to Shamoon.

More signs that destructive data-annihilation attacks are on the rise: researchers at Kaspersky Lab today detailed a new family of data-wiping malware that uses more advanced methods of hiding out and evading detection.

The malware, dubbed StoneDrill by the researchers, has possible ties to the attack group behind the infamous and recently resurrected Shamoon data-wiping malware. In addition, the researchers discovered that Shamoon 2.0 also has a new feature in its arsenal: a ransomware component.

After a nearly five-year hiatus, Shamoon reappeared late last year and again early this year in three waves of attacks targeting governments and civil organizations in Saudi Arabia and the Gulf States. The ransomware module for Shamoon 2.0 has not yet been seen deployed in the wild, according to Kaspersky’s team, but it could provide a layer of deniability for the attack group behind it by making an attack appear to be the work of a typical cybercrime gang out to make a quick bitcoin.

“It seems to suggest that ransomware sabotage and wiping go hand-in-hand in some ways,” Juan Andrés Guerrero-Saade, senior security researcher at Kaspersky Lab, said in an online press briefing today. “The notion is that either way, you’re holding the value of an enterprise hostage. It’s only a matter of a keystroke whether it [the data] will go away or not.”

One researcher not affiliated with Kaspersky Lab confirmed that some nation-state groups already have employed ransomware against their targets – mainly to appear as a cybercriminal group and not to tip their hands as an APT. The ransomware attack payment features are typically pilfered from a real cybercrime gang’s attack repertoire and victims don’t get their data back even if they do pay ransom. The APT group already will have wiped it and disabled the infected machines in those cases, the researcher says.

IBM X-Force researchers also have seen a ransomware feature with Shamoon 2, notes Mike Oppenheim, global lead in research for IBM X-Force IRIS. “It makes sense that Shamoon, a destructive malware, would have ransomware with it,” Oppenheim says. Like Kaspersky, IBM X-Force has not yet seen that feature in action in the wild.

StoneDrill, meanwhile, takes the data-wiping malware model to the next level by injecting itself into the memory process of the user’s browser of choice once it is installed on the victim’s machine. Andrés Guerrero-Saade says it’s unclear as yet how the attackers initially infect the victim, but StoneDrill remains under the radar and away from the prying eyes of sandbox technology. It’s similar in style to Shamoon, but with a different codebase, according to the findings.

Kaspersky Lab also found StoneDrill had infected not only Middle East targets but also a European one, though the company declined to provide any details of the firm or the industry targeted. The researchers say it employs some of the same code previously used by the so-called NewsBeef or Charming Kitten APT. Kaspersky Lab makes it a policy not to identify attackers by their nation or other affiliation, but other research teams say NewsBeef/Charming Kitten is an Iranian APT.

Iran’s cyber espionage machine has revved up over the past few months, starting with Shamoon 2.0’s comeback, something Adam Meyers, vice president of intelligence at CrowdStrike, says is no coincidence. CrowdStrike expects more such attacks as the geopolitical climate continues to intensify.

“We are seeing that the Iranian tradecraft in offensive cyberattacks is maturing,” he says. “Some of that is being represented in this reporting,” he says of the new Kaspersky Lab findings.

The more advanced features, and possible evasion steps such as the language settings the StoneDrill attackers used to throw off investigators and threat hunters, are par for the course. “There is an evolution of any sort of tool that a threat actor is going to do. They have to stay ahead of the companies trying to find them. We always see this as a cat and mouse game,” says IBM X-Force’s Oppenheim.

CrowdStrike’s Meyers says part of the equation is studying who’s being attacked. “The more interesting part is not the ‘what’ but the ‘why.’ What are they targeting and why,” Meyers says, pointing out that most attacks have geopolitical ties to current events. He expects the US to be one of the next big targets given the increasingly tense political climate between the US and Iran.

Kaspersky Lab’s Andrés Guerrero-Saade says his team hasn’t yet seen Shamoon 2.0 or StoneDrill attacks against US organizations, however. “While there is no direct indication that the attackers are currently targeting US institutions, the severity of wiper operations, their ability to cripple organizations, and capacity to cause great financial and reputational damage should place them near the top of concerns for all organizations,” he says. He recommends beefing up attack defenses for these types of threats.

Despite the recent uptick in destructive malware attacks from Shamoon 2.0 and now StoneDrill, these are still nowhere near as widespread as other targeted malware campaigns. These types of attacks have been relatively rare over the past decade.

“There have been fewer than ten in the past decade, which suggests how careful and unusual they are even for well-established APT actors,” Andrés Guerrero-Saade says.



Article source: http://www.darkreading.com/attacks-breaches/shamoon-data-wiping-malware-now-comes-with-ransomware-option/d/d-id/1328327?_mc=RSS_DR_EDT

Uber under fire for ‘Greyball’ program used to dodge enforcement officials

What do you do if you’re violating local government regulations and you know the local authorities are looking for you? Maybe you lay low. But if you’re Uber, you supercharge everyday “hiding” with an integrated assemblage of industrial-strength code, data analytics and whatever creative low-tech methods you can conjure up. So the New York Times reports, and Uber admits.

According to the Times, Uber unleashed its Greyball program “to identify and circumvent officials who were trying to clamp down on the ride-hailing service. Uber used these methods to evade the authorities in cities like Boston, Paris and Las Vegas, and in countries like Australia, China and South Korea.” In some locations, says the NYT, Uber’s services were being “resisted by law enforcement”.

In other locations, such as Portland, Oregon, local government was taking the position that the low-cost UberX service is illegal – a claim that Uber vigorously disagreed with and chose to disregard.

Greyball used roughly a dozen markers to tag city inspectors. Some potential giveaways: calls made near city government offices; quick and repeated opening and closing of Uber’s app; and credit cards linked to police credit unions. Knowing that law enforcement often ran its stings from dirt-cheap feature phones, Uber also sent employees to local retailers to “look up device numbers of the cheapest mobile phones for sale,” and then flag calls based on this information. When it still wasn’t sure, “Uber employees would search social media profiles and other information available online.”

If you were tagged, the Times reports:

Uber could scramble a set of ghost cars in a fake version of the app… or show that no cars were available. Occasionally, if a driver accidentally picked up someone tagged as an officer, Uber called the driver with instructions to end the ride.

All this was evidently pretty systematic, the Times says. Once Uber knew Greyball worked to deter law enforcement, its engineers “created a playbook with a list of tactics and distributed it to general managers in more than a dozen countries on five continents”. Looks like it worked: here’s a 2014 clip of Portland code enforcers trying and failing to catch Uber violating the city code.

Uber points out that Greyball has multiple uses in deterring violations of its terms of service, not all equally controversial. For example, it has used Greyball to protect drivers against physical attack – which has clearly occurred in some locations where local transportation providers have been threatened by its new service. So, too, Greyball attempts to halt “competitors looking to disrupt our operations”. From Uber’s standpoint, using Greyball to deter local code enforcement is a way to protect drivers from having their cars impounded for illegal commercial transport of passengers (oh, and also save Uber the costs of reimbursing them).

Legal observers in the US couldn’t say for sure if Uber’s actions were illegal (its own internal lawyers signed off, though some of the Times’s sources evidently had qualms). And Uber says that once city officials surrender and legalize the service, it ceases using Greyball to evade code enforcement.

Of course, all this once again raises the question: how might Uber wield the increasingly rich data patterns it can generate about your life and behavior? Might it ever geofence certain neighborhoods out of bounds, as traditional cab companies have been known to do informally? What are the implications of Uber’s massive data hoard even on those rare occasions when it’s trying to play nice?


 

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/Bot_oTomyLM/

News in brief: Facebook tags ‘disputed’ news; products to be judged on security; smart meters snafu

Your daily round-up of some of the other stories in the news

Facebook rolls out ‘disputed’ stories tag

Stung by accusations that it hasn’t done enough to staunch the torrent of “fake news” on its platform, Facebook has started rolling out its system to alert users to disputed stories.

For now it’s only going to be seen by US users, although many might argue that the horse has already bolted there. There is a rising hum of concern about how fake news might influence voters in Germany, France and the Netherlands as elections loom in those countries – so much so that Germany is considering fast-tracking a law to force Facebook to respond more quickly to dodgy stories or face a big fine.

Once a story is flagged by users, or once Facebook’s own algorithm picks up irregularities, it will be forwarded to third parties including Snopes, ABC News, Politifact, FactCheck and the Associated Press for checking by humans. If they decide a story is dodgy, then it will be flagged as “disputed”.

Some argue that saying something is merely “disputed” doesn’t go far enough, but it is at least a start.

Consumer Reports to judge products on cybersecurity

We’ve covered many, many examples of shoddy security on consumer goods, from baby monitors to Barbie dolls and most recently connected teddy bears, as well as the fallout from the attack on Dyn resulting from the Mirai botnet that turned IoT devices into attack zombies.

So the news that Consumer Reports, the US consumer group, is to start including an evaluation of a product’s cybersecurity is welcome. The group said that its move was prompted by some 65% of people saying in its Consumer Voices survey that they were either slightly or not at all confident that manufacturers were looking after their personal data properly.

CR says it has worked with a number of partners to develop a privacy standard by which it will judge items – and we’re particularly glad to see that one requirement of CR’s standard is that devices that connect to the internet should require users to pick unique usernames and passwords.

Smart meters report huge daily costs

Have you got a smart meter in your house keeping tabs on how much you spend on keeping the place warm and letting you manage it via your smartphone? If so, spare a thought for the customers of a UK energy company whose meters on Friday suddenly told them that they had run up bills of up to £33,000 for just one day’s use of gas and electricity.

Users took to Twitter to share pictures of the SSE app alerting them to huge bills, and meanwhile, Science Bulletin reported the results of research from the Dutch University of Twente, which found that some smart meters could give readings that are up to 582% higher than the energy consumption they’re supposed to be measuring.

SSE responded with a terse statement acknowledging the problem and blaming it on “a routine software upgrade”.



Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/PYoC2alqjfQ/

‘Dozens’ of police departments maintain private DNA databases

Last year, San Diego police stopped, searched, handcuffed and detained one of a group of kids walking through a park. They found an unloaded handgun in his duffel bag.

A court later threw out evidence related to the gun, saying the search had violated the teenager’s Fourth Amendment rights. However, the court failed to order the destruction of a DNA sample the police had collected by swabbing the 16-year-old’s mouth without parental consent.

The American Civil Liberties Union (ACLU) announced last month that it’s suing the city over what it says is the police department’s unlawful policy allowing them to collect DNA samples from minors without first getting parental consent.

But the case has brought up issues that go well beyond San Diego. In fact, as the Associated Press has reported, dozens of police departments around the country are amassing their own, private DNA databases to track not only criminals, but people who haven’t been charged with crimes.

They say doing so helps them avoid backlogs that can cause 18- to 24-month delays in processing DNA samples. But legal experts say that keeping their own, private genetic databases also helps police departments to evade state and federal regulations about who they can get genetic samples from and how long the samples can be retained.

The AP quoted Jason Kreag, a law professor at the University of Arizona:

The local databases have very, very little regulations and very few limits, and the law just hasn’t caught up to them. Everything with the local DNA databases is skirting the spirit of the regulations.

San Diego is one example of the laxer rules found at the local level: according to the city’s police department procedures for DNA collection, a supervisor or a field lieutenant has to sign off on a mouth swab taken from a minor.

Plus, the officer is required to fill out a “Consent to Collect Saliva” form and to then get the minor to sign it – the same procedure used to get genetic samples from an adult. Parental notification is only required after the fact.

That differs from California law, which restricts compulsory collection of DNA from juveniles for inclusion in the state database, unless a juvenile’s been judged guilty of a felony. The ACLU’s lawsuit (PDF) says San Diego has “sidestepped” stricter state law by maintaining its local database separate from the state data bank.

It’s like treating kids as “miniature adults”, the lawsuit says. That goes against general legal practice in the US, in which children’s consent, given without a parent’s knowledge and counsel, is generally considered as not equivalent to that of an adult.

Jonathan Markovitz, one of the attorneys representing the 16-year-old and his family, told the San Diego Union-Tribune that “any consent the teen had given for the taking of his DNA sample was essentially coerced, given that the officers let his friends go after they each signed a form agreeing to let the officers swab the inside of their mouths to collect DNA”.

The AP quoted Bardis Vakili, an ACLU attorney who’s spearheading the lawsuit:

It’s hard to imagine it’s anything other than coerced or involuntary.

I think they are trying to avoid transparency and engage in forms of surveillance. We don’t know what’s done other than it goes into their lab and is kept in a database.

Besides the issue of compulsory DNA collection from minors without parental consent, the AP reports that some police departments are using their local DNA databases to store samples taken from people who’ve never even been arrested for a crime.

Better to have your own database with your own rules than to let a burglar keep burgling while you wait for the state to get a sample analyzed, said one early adopter of the practice of keeping a local database.

Frederick Harran, the public safety director in Bensalem Township, Pennsylvania, told the AP that since the town’s database was created in 2010, arrests have gone up because of DNA collection, and robberies and burglaries have gone down.

The Pennsylvania state lab takes up to 18 months to process DNA taken from a burglary scene, he said. By going through a private lab – paid for with money from assets seized from criminals – the department gets results back within a month.

The AP quoted Harran:

If they are burglarizing and we don’t get them identified in 18 to 24 months, they have two years to keep committing crimes.

It’s hard to know how many police departments are keeping their own DNA databases, since they’re not subject to state or federal oversight. Harran says he knows of about 60. Besides California, the AP reports that police have publicly mentioned their local databases in Florida, Connecticut and Pennsylvania.


 

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/LxlZsD_xyzw/