STE WILLIAMS

Ex-NSA guru builds $4m encrypted email biz – but its nemesis right now is control-C, control-V

Analysis A security startup founded by a former NSA bod has launched an encrypted email and privacy service, aimed initially at ordinary folks.

The ongoing revelations of PRISM and other US-led internet dragnets, fueled by leaks from whistleblower Edward Snowden, may render the premise of upstart Virtru laughable. However, that would be unfair to Virtru, which is trying to make encryption and decryption of email, plus the revocation of messages and other privacy controls, easy to use.


Its execs told El Reg that Virtru aims to do for secure email what Dropbox has done for sync-and-share. Crypto-protected email will be offered for free, and more advanced features, such as finding out where sent emails are forwarded, will carry a price tag. There are also plans in the works to license Virtru’s encryption technology to businesses.

The startup has developed plugins that, in theory, let users control how emails and attachments sent to others are shared and viewed. The technology – today in beta – is compatible with the Gmail, Yahoo! and Outlook mail services, and with Chrome, Firefox and iOS 7 Mail software (wider platform support is in the works).

Just tell us which cipher they’re using

Virtru uses the tough AES-256 algorithm, with perfect forward secrecy, to encrypt every message before it leaves a computer or device, which is a good start. It wraps each missive in a container that requires permission from Virtru’s servers to unlock. This way, the startup can claim it never holds the actual data sent – just the encryption keys needed to decrypt a message. If you don’t like the idea of Virtru’s cloud holding your keys, you can host your own key store if you ask nicely.

So, having received an encrypted mail via your email provider, your mail client needs to contact the Virtru key store to get the unlock key and decrypt the message on your device or computer. Each message has its own unique key.

Thus, in theory, this mechanism can be used to revoke emails at any time, by refusing to hand over the decryption key and rendering the message and any attachments unreadable. The sender can also, again in theory, restrict the forwarding of a message because whoever ends up with the email may not have permission to download the unlock key from the key store. Similarly, you can finely control who exactly can open a Virtru-encrypted memo by restricting access to the decryption key. There’s also the ability to give emails an expiration date.
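The key-escrow flow described above can be sketched in a few lines of Python. This is a toy model, not Virtru's code: a SHA-256 counter-mode keystream stands in for AES-256, and the key store is an in-process dict rather than Virtru's cloud service. The point is the architecture: ciphertext travels via the mail provider, keys live elsewhere, and revocation is just key deletion.

```python
import os
import hashlib

# Toy model of the described flow: every message gets its own random
# 256-bit key, stored separately from the ciphertext. Revocation is
# simply deleting the key. A SHA-256 counter-mode keystream stands in
# for AES-256 here; a real system would use an AEAD such as AES-GCM.

KEY_STORE = {}  # message_id -> key (held by Virtru's servers in reality)

def _keystream(key: bytes, length: int) -> bytes:
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def send(plaintext: bytes):
    key = os.urandom(32)                  # unique key per message
    message_id = os.urandom(8).hex()
    KEY_STORE[message_id] = key           # only the key is escrowed, not the data
    ciphertext = bytes(a ^ b for a, b in zip(plaintext, _keystream(key, len(plaintext))))
    return message_id, ciphertext         # ciphertext travels via the mail provider

def read(message_id: str, ciphertext: bytes) -> bytes:
    key = KEY_STORE[message_id]           # raises KeyError once revoked
    return bytes(a ^ b for a, b in zip(ciphertext, _keystream(key, len(ciphertext))))

def revoke(message_id: str) -> None:
    KEY_STORE.pop(message_id, None)       # message becomes unreadable everywhere
```

Because reading always requires fetching the key, refusing to serve it is what enforces expiry and revocation.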

The technology uses the Trusted Data Format (TDF), an open-source security wrapper created by Virtru co-founder Will Ackerly. It’s used by the intelligence community to secure sensitive data, we’re told. Ackerly served for eight years at the NSA as a cloud security architect prior to founding Virtru in 2012.

Virtru complements the TDF technology with patented encryption-key management that, the company claims, makes it possible to control the fate of an email and its attachments even after it has left the sender’s outbox. You can use OpenID and OAuth protocols to verify your identity to the key store via your email provider – whether that’s Gmail, Yahoo! or Microsoft.

A bare-bones explanation of how Virtru’s technology works can be found on its website, here. There’s more info about the company’s backend systems here, and even source code, here.

Recipients view encrypted emails in Virtru’s Secure Reader web plugin, which handles the cryptography and access controls, as demonstrated in this brief video:

YouTube video

The in-browser reader plugin is written in JavaScript, with a mix of Component, jQuery, SJCL, Caja and other libraries. A spokeswoman told us: “In addition, we have cryptographic components written in C and compiled to NaCl [Google Native Client] for accelerating encryption of attachments, but we have not yet released that version.”

Breaking Virtru

So, say Alice uses Virtru to send an encrypted message with attachments to Bob, with settings in place to prevent Bob from forwarding the missive and to revoke access to it after 24 hours. What is stopping Bob from cut’n’pasting the contents of the email before the expiry deadline, or saving the attached decrypted files to disk, and then giving the supposedly protected data to his friend, Eve?

We put this to Virtru, which told us in a statement that, at the moment, there’s nothing stopping Bob from dumping the plain text out of the Virtru system:

In short, today we are not guaranteeing that a user won’t have persistent access to plain text once they authenticate and are granted access to a TDF key. However, we will be rolling out persistent protection options in the coming weeks. As such, we explicitly allow copy and paste and unwrapping of all attachments, until we release these additional features.

We are in the midst of testing ‘persistent protections’ across our platforms. Given the importance of assuring that these features cannot be subverted, we are waiting to release them until we have such assurances across the spectrum of platforms.

Even if Virtru is able to disable control-C, control-V in the browser, the decrypted plain text will be in memory on the device, and a user will be able to extract that – there are many programming tools that can freeze the browser application and root out the unencrypted goods.

Virtru told us it will consider using anti-piracy tech in modern browsers to keep TDF data away from prying debuggers, or simply watermark the message so that any leaks can be traced:

For the browser we are pursuing the use of emerging technologies such as Encrypted Media Extensions (EME) to help ensure such protections even in an open source client, and in the mean time leveraging HTML5 features like Canvas to flatten and watermark content before it is injected into the webpage.

Our next thought was: bypass the browser plugin, and just download the keys from Virtru to decrypt the TDF package using your own software and save the plain text to disk. Or hijack the key-fetching code in the plugin using debugging tools.

Virtru reckons it can thwart that by only handing decryption keys to trusted applications – which presumably are programs that can cryptographically prove their authenticity to the server. The reader may have to be digitally signed to prove it hasn’t been compromised to leak plain texts and keys. A spokeswoman for the startup told us:

For TDF, any file or message without persistent protection obligations may be opened by any app that supports TDF, even one you write on your own. Where there are obligations such as copy/print and unwrap protection, we must ensure that we are delivering keys to an application we can trust, and there are some techniques that may be leveraged that have varying levels of assurance.

Gaining this trust will vary per application and per platform. Some of the strongest mechanisms are available in modern mobile devices, but are weaker on older desktop environments. In many cases it will require signed code, and we may be able to rely on delivering EME extensions when the technology matures.
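One way to model the "trusted application" gatekeeping Virtru describes is a shared-secret challenge: the key server only releases a decryption key to a client that can produce a valid authentication tag. This is a hypothetical sketch, not Virtru's protocol – the client IDs and message IDs are invented, and real attestation (code signing, platform attestation) is considerably more involved than a shared secret:

```python
import hmac
import hashlib
import os

# Hypothetical sketch of "only hand keys to trusted applications": the
# server shares a secret with each provisioned client build and demands
# an HMAC tag over the request before releasing a key. Client IDs and
# message IDs here are invented for illustration.

TRUSTED_CLIENTS = {"secure-reader-1.0": os.urandom(32)}  # client_id -> secret
KEYS = {"msg-42": os.urandom(32)}                        # message_id -> key

def fetch_key(client_id: str, message_id: str, tag: bytes):
    secret = TRUSTED_CLIENTS.get(client_id)
    if secret is None:
        return None                                      # unknown client
    expected = hmac.new(secret, message_id.encode(), hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):           # constant-time compare
        return None                                      # couldn't prove trust
    return KEYS.get(message_id)
```

The weakness, of course, is that the secret has to live inside the client, which is exactly the whack-a-mole problem described below.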

Essentially, Virtru will have to play a game of whack-a-mole with anyone attempting to break its system. There are many avenues of attack, each of which the startup will have to secure, if that’s even possible given the available software interfaces: a single slip-up will blow the thing out of the water.

Its developers will have to trust so many layers of code, from the browser down to the operating system, to enforce its touted message access system. At least revoking a message sent in error will work as expected, provided it’s revoked before the decryption key is fetched.

Ultimately, Bob could use a camera, or a screenshot tool, to leak the information to Eve, or simply share his Gmail password. Virtru, or its users, could use watermarking, or the old-fashioned technique of slightly altering each document, to trace and identify leakers. Simple steps that render the aforementioned elaborate defences redundant.
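As a flavour of that low-tech tracing, here is a sketch of per-recipient watermarking: hide a recipient ID in each copy as zero-width characters, so two visually identical leaked copies can still be told apart. This illustrates the general technique only – it is not something Virtru has said it does:

```python
# Illustration of per-recipient watermarking: embed a recipient ID in
# the text as zero-width characters. Two copies look identical when
# rendered, but a leaked copy can be matched back to its recipient.
# A generic technique, not Virtru's actual watermarking method.

ZW0, ZW1 = "\u200b", "\u200c"  # zero-width space / non-joiner encode bits 0 and 1

def watermark(text: str, recipient_id: int) -> str:
    bits = format(recipient_id, "016b")
    return text + "".join(ZW1 if b == "1" else ZW0 for b in bits)

def identify(text: str) -> int:
    bits = "".join("1" if c == ZW1 else "0" for c in text if c in (ZW0, ZW1))
    return int(bits, 2)
```

A trivially strippable scheme, naturally, but it only has to survive a leaker who doesn't know it's there.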

But with millions of dollars in funding, Virtru is serious about secure email – even vowing to fight government demands for folks’ decryption keys.

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2014/01/24/ex_nsa_cloud_guru_email_privacy_startup/

Spam drops as legit biz dumps mass email ads: Only the dodgy remain

Spam email was down in volume last year, but junk mail messages still comprise two in three items of electronic communication sent over the interwebs.

Kaspersky Lab reports the portion of spam in email flows was as high as 69.6 per cent in 2013 – which is 2.5 percentage points lower than 2012. The biggest sources of spam were China (23 per cent) and the US (18 per cent), according to the Russian security firm.
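A quick sanity check of those figures: a 69.6 per cent share in 2013, described as 2.5 percentage points down on 2012, implies a 2012 share of 72.1 per cent – and 69.6 per cent is indeed roughly the "two in three" emails mentioned above:

```python
# Sanity-checking the quoted figures: 69.6 per cent spam share in 2013,
# said to be 2.5 percentage points below 2012, implies 72.1 per cent in
# 2012 - and 69.6 per cent is roughly "two in three" emails.

share_2013 = 69.6        # per cent of all email that was spam in 2013
drop_in_points = 2.5     # percentage points down on 2012
share_2012 = round(share_2013 + drop_in_points, 1)
roughly_two_thirds = abs(share_2013 / 100 - 2 / 3) < 0.04
```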


The drop in spam mail last year follows a steady decrease since 2010. Kaspersky experts reckon that unscrupulous marketeers are turning away from email because it’s becoming a less and less effective medium to promote their dubious wares.

In the last three years, the share of unsolicited messages has fallen by 10.7 per cent. It appears that advertisers increasingly prefer the various types of legitimate online advertising that are now available, which generate higher response rates at lower costs than unsolicited email can offer.

In some spam categories [such as travel and tourism], commercial advertising is being gradually displaced by criminal mailings, such as spam messages advertising illegal goods or pornography.

The percentage of emails with malicious attachments was 3.2 per cent – 0.2 percentage points lower than in 2012. Kaspersky adds that almost a third – 32.1 per cent – of phishing attacks were targeted at social networks. More than occasionally, these malicious missives were disguised as messages from genuine antivirus vendors, featuring attachments contaminated with banking Trojans such as ZeuS.

Darya Gudkova, head of content analysis at Kaspersky Lab, commented: “For the third year in a row the most prevalent malware spread by email were programs that attempted to steal confidential data, usually logins and passwords for internet banking systems. At the same time, however, phishing attacks are shifting from bank accounts to social networking and email.”

She added: “This can be partly explained by the fact that today’s email accounts often give access to a lot of content, including email, social networking, instant messaging, cloud storages and sometimes even a credit card.”

More details on spam and malicious email trends in 2013 can be found in a blog post on Kaspersky Lab’s official Securelist blog here. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2014/01/24/spam_dropping_huzzah/

Facebook coughs up $33.5k… its BIGGEST bug bounty EVER

Facebook has awarded its highest bug bounty to date after the discovery of a vuln which could have been used to spray Facebookers with drive-by download-style malware exploits.

Brazilian web security researcher Reginaldo Silva earned $33,500 for giving the social network a heads-up about an XML external entity vulnerability within a PHP page hosted on its servers that handled OpenID authentication. The flaw disclosed the contents of Facebook’s /etc/passwd file.


If the flaw were to be left unresolved, malicious crackers who came across the vulnerability could have abused it to change Facebook’s use of Gmail as an OpenID provider to a hacker-controlled URL, before servicing requests with malicious XML code. That by itself is bad enough, but Silva might have been onto something even worse.
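For readers unfamiliar with the bug class, the sketch below shows the generic shape of an XXE payload – not Silva's actual exploit. The DOCTYPE declares an external entity pointing at a local file, and a parser that resolves external entities substitutes the file's contents wherever the entity is referenced. Python's xml.etree is used here as an example of a parser with the safe default of refusing to expand external entities:

```python
import xml.etree.ElementTree as ET

# Generic shape of an XXE payload (illustrative, not Silva's exploit):
# the DOCTYPE declares an external entity that points at a local file;
# a vulnerable parser substitutes the file's contents for &xxe;.
XXE_PAYLOAD = """<?xml version="1.0"?>
<!DOCTYPE openid [
  <!ENTITY xxe SYSTEM "file:///etc/passwd">
]>
<openid>&xxe;</openid>"""

def parse_safely(xml_text: str) -> str:
    # xml.etree does not expand external entities - the safe default any
    # parser handling untrusted input (like an OpenID endpoint) needs.
    return ET.fromstring(xml_text).text or ""
```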

Silva discovered the vulnerability back in November before disclosing it to Facebook, whose engineers immediately saw the significance of the flaw and fixed it hours later. This thwarted Silva’s strategy of seeing whether the bug could have been developed into a remote code execution vulnerability.

“I was very impressed and disappointed at the same time,” Silva wrote in a blog post about his find. “But since I knew just how I would escalate that attack to a Remote Code Execution bug, I decided to tell the security team what I’d do to escalate my access and trust them to be honest when they tested to see if the attack I had in my mind worked or not.”

Remote code execution vulnerabilities would lend themselves to types of attack that throw malware at surfers visiting a vulnerable website – the most serious category of risk – and therefore earn a bigger payout under Facebook’s bug bounty programme.

Facebook initially wanted to pay out only for the credential disclosure aspect of the flaw. But it relented after a few back-and-forth emails with Silva, deciding to classify the vulnerability as an even higher risk flaw.

“We discussed the matter further,” Facebook explained in a statement about the payout on its site. “Due to a valid scenario he theorised involving an administrative feature we are scheduled to deprecate soon, we decided to re-classify the issue as a potential RCE [Remote Code Execution] bug.”

Amichai Shulman, CTO at data security firm Imperva, said the speed at which Facebook was able to fix the flaw was exceptionally fast and a possible sign that the bug was outside the “critical application path so the risk of breaking [something] was low”.

“Facebook is one of the companies that probably have invested the most in their application security over the past years,” Shulman said. “The fact that critical vulnerabilities still pop up in their application should serve as a warning sign to anyone who believes that writing vulnerability-free applications is possible.”

“Remote execution flaws are a tidal phenomenon,” added Shulman. “Usually people find a way to abuse a specific infrastructure (in this case OpenID) and then suddenly we see many flaws being reported in different places that use this infrastructure. Are critical flaws hard to find? Sadly, the answer is no.”

Additional security commentary on the handling of the bug bounty negotiations in this particular case can be found in a blog post by Joshua Cannell, a malware intelligence analyst at Malwarebytes, here. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2014/01/24/facebook_bug_bounty_payout/

How To Get The Most Out Of Risk Management Spend

Even with most security budgets these days growing or at least staying flat for 2014, no organization ever has unlimited funds for protecting the business. That’s where a solid risk management plan can be a life-saver.

Dark Reading recently spoke with a number of security and risk management experts to offer up practical tips for getting the most out of risk management. They say that smart risk management strategies can make it easier to direct security funds to protect what matters most to the business. Organizations that use them typically can base their spending decisions on actual risk factors for their business, rather than employing a shotgun strategy that chases after every threat under the sun. Here are a couple of ways to start making that happen.

Establish A Risk And Security Oversight Board
If an organization is going to get more for its IT risk management buck, the first thing it has to remember is that security risk is only one facet of business risk. That is why it is important to engage with cross-functional teams, says Dwayne Melancon, chief technology officer for Tripwire, who explains that doing so makes it easier to look at risk holistically.

Melancon says that he’s seen many customers establish “Risk and Security Oversight Boards” that are made up of leaders such as the CFO, chief legal counsel, and other stakeholders from across the business.

“This board discusses, prioritizes, and champions actions and investments based on a risk registry developed through cross-functional debate and agreement,” he says. “This approach ensures that the business ‘puts their money where their mouth is’ and helps align different parts of the business around the short list of risks that have the potential to cause most harm to the business.”

Get A Second Opinion
Even if an oversight board is not practical, getting a second opinion from the business as to where IT risk management should focus is a crucial way to set priorities.

“One way we’ve seen success with this is to engage with legal, finance, and PR instead of the IT executives,” says J.J. Thompson, CEO and managing director for Rook Security. “They identify the real issues with simplicity and have not been brainwashed by the IT industry, who still struggles to realize what really matters to business.”

For example, in one of Thompson’s consulting engagements, his CIO contact was focused on the standard ISO 27000x practices around the SOC services Rook would offer his firm. But when Thompson’s consultants talked to the firm’s legal department, its biggest concern was how that SOC outsourcing would affect the firm’s largest defense contractor client. That was the number one risk priority.

“The business was simply concerned about the highest area of risk: that which directly pertained to their largest client,” Thompson says. “We shifted focus to the controls that directly reduced the risk of such a compromise occurring and tailored custom control monitoring that focused on creating a sensitive data map, and setting custom anomaly detection triggers when the sensitive data is accessed. ”

[Are you getting the most out of your security data? See 8 Effective Data Visualization Methods For Security Teams.]

Map Risk To A Business Bloodline
What’s the business bloodline for your company? In other words, what are the areas of the business where security threats could truly disrupt the way the organization operates? This is exceedingly important to determine, and something that second opinion should help deliver. Once you figure that out, start mapping technical elements to it in order to understand what kind of events could do the organization the most harm, says Amichai Shulman, chief technology officer for Imperva.

“For some companies, a POS system or its database full of credit cards may be its most valuable assets, for some it may be social security numbers and the personal information attached,” he says. “For a company that bases its livelihood on transactions and uptime, the loss of revenue or customer loyalty caused by a DDoS could be devastating.”

Communicate Risk Visually
A big part of risk management is communicating identified risks both up to senior management and down to the security managers that will put practices in place to mitigate them. One of the most effective ways to do that is to make those results visual.

“Pursuing risk management purely within security can help you make better decisions, but it can’t help you get the right level of funding unless you can show people outside what you’re doing,” says Mike Lloyd, chief technology officer for RedSeal Networks. “Helping executives outside to understand is hard. Doing this with formulae won’t work – you will need pictures.”

For example, Rick Howard, chief security officer for Palo Alto Networks, says that any time he makes a proposal to the executive suite, he starts by showing them a business heat map that plots the top 10 to 15 business risks to the company on a grid. Typically cyber risk is in that top 15, which makes it easier to get executives to address those risks more fully.

“Once that is done, I like to build a risk heat map just for cyber,” he says. “I take the one bullet on the business heat map and blow it up to show all of the cyber risks that we track. Again, this is not technical, it is an overview. We are not trying to show the 1,000 potential ways that an adversary can get into the network. We want to show the C-suite who the adversary is.”
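Howard's heat-map exercise boils down to scoring and ranking. As a minimal sketch of the idea, score each risk by likelihood and impact and surface only the top few for the executive view. The risks and scores below are invented examples, not anyone's real risk register:

```python
# Minimal version of the heat-map exercise: score each risk by
# likelihood and impact (1-5) and rank by the product. The risks and
# scores below are invented examples for illustration only.

RISKS = {
    "ransomware outage":      (4, 5),  # (likelihood, impact)
    "cloud misconfiguration": (3, 4),
    "insider data theft":     (2, 5),
    "lost laptop":            (4, 2),
}

def top_risks(risks: dict, n: int) -> list:
    ranked = sorted(risks.items(), key=lambda kv: kv[1][0] * kv[1][1], reverse=True)
    return [name for name, _ in ranked[:n]]
```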


Article source: http://www.darkreading.com/risk/how-to-get-the-most-out-of-risk-manageme/240165618

US govt watchdog slams NSA snooping as illegal, useless against terrorism

The Privacy and Civil Liberties Oversight Board, a federal panel set up to advise the US government on policy, has published a report concluding that the bulk collection of data on US citizens by the NSA is illegal and ineffective at stopping terrorism.

“Based on the information provided to the board, including classified briefings and documentation, we have not identified a single instance involving a threat to the United States in which the program made a concrete difference in the outcome of a counterterrorism investigation,” the report [PDF] stated.


The 238-page study found that the NSA’s policy of collecting vast amounts of metadata under Section 215 of the Patriot Act was illegal, and raised serious concerns regarding the First and Fourth Amendments – which cover freedom of speech and protection against unreasonable search and seizure.

The dossier states that this particular bulk data collection program started in 2001 shortly after the September 11 attacks, and was cleared by the Bush administration in 2006 under the terms of the Patriot Act. The authors conclude Section 215 data collection was a “subversion” of the law and was used to “shoehorn a pre-existing surveillance program into the text of a statute,” noting that it also violated the Electronic Communications Privacy Act.

As a result, the board recommends the practice be stopped immediately – a view not shared by President Obama, judging from his speech on the matter last Friday – and that any data stored by the NSA should be deleted after three years, not five.

Other recommendations in the board’s report do dovetail very nicely with Obama’s own plans, such as having public representation in the Foreign Intelligence Surveillance Court – the NSA’s secret oversight court. The panel also backed the President’s promise to reduce the scope of surveillance dragnets on targets: rather than snoop on up to three “hops” of the target’s contacts, as is the case today, two hops should be monitored instead. As an example, if you’re on the NSA watch list and you email your brother, who calls his girlfriend, who texts her mother – the mother is the third hop.
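The hop rule is easy to make concrete as a breadth-first search over a contact graph, using the article's own example chain sketched in Python. With the mother three hops from the watched target, a two-hop limit excludes her while a three-hop limit sweeps her in:

```python
from collections import deque

# The "hops" rule modeled as breadth-first search over a contact graph,
# using the article's example chain: target -> brother -> girlfriend
# -> mother. The mother sits at hop three.

CONTACTS = {
    "target":     ["brother"],
    "brother":    ["girlfriend"],
    "girlfriend": ["mother"],
    "mother":     [],
}

def within_hops(graph: dict, start: str, max_hops: int) -> set:
    seen, queue = {start}, deque([(start, 0)])
    while queue:
        person, depth = queue.popleft()
        if depth == max_hops:
            continue                      # don't expand past the hop limit
        for contact in graph.get(person, []):
            if contact not in seen:
                seen.add(contact)
                queue.append((contact, depth + 1))
    return seen - {start}
```

The practical stakes are clear from how quickly the set grows: each extra hop multiplies the number of people swept into the dragnet.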

The report’s authors were split on their views, however. Rachel Brand, who served as assistant attorney general for legal policy at the US Department of Justice (DoJ) between 2005 and 2007, wrote in a dissenting opinion that the bulk data collections were legal and warned that if there was another major terrorist attack “the public will engage in recriminations against the intelligence community for failure to prevent it.”

Co-author Elisebeth Collins Cook also disagreed with some of the report’s conclusions on the legality of Section 215 data sweeps, saying that the terms of use should be modified, but that collection should continue. Collins Cook, who served as assistant attorney general for legal policy at the DoJ from 2008 to 2012, also questioned [PDF] the conclusions on the data slurping’s effectiveness against terrorism.

The five-person Privacy and Civil Liberties Oversight Board was set up in 2006 on the recommendation of the 9/11 Commission Report to provide advice on the use of surveillance in anti-terrorism investigations. It has limited legal weight, but the conclusions have already drawn political comment.

Representative Mike Rogers (R-MI), who as chairman of the House Intelligence Committee is supposed to oversee the activities of the NSA, was sharply critical of the report’s findings. In a statement to Fox News Rogers said the legality of the Section 215 interpretation had been confirmed multiple times in federal courts.

“I am disappointed that three members of the board decided to step well beyond their policy and oversight role and conducted a legal review of a program that has been thoroughly reviewed,” he said.

Ex-NSA techie turned whistleblower Edward Snowden may have some views of his own to express when he holds a Q&A at 1200 PT (2000 UTC, 1500 EST) today. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2014/01/23/us_government_watchdog_report_finds_nsa_spying_is_illegal_and_useless/

Snowden speaks: NSA spies create ‘databases of ruin’ on innocent folks

Ex-NSA contractor turned whistleblower Edward Snowden used his first public Q&A to call for the US to lead a global initiative to ban mass surveillance of populations. He also wants governments to ensure that intelligence agencies can protect national security while not invading everyday privacy.

“Not all spying is bad. The biggest problem we face right now is the new technique of indiscriminate mass surveillance, where governments are seizing billions and billions and billions of innocents’ communication every single day,” he said.


“This is done not because it’s necessary – after all, these programs are unprecedented in US history, and were begun in response to a threat that kills fewer Americans every year than bathtub falls and police officers – but because new technologies make it easy and cheap.”

Snowden said the vast amounts of data being stored about everyone are harmful in two key ways. Firstly, the fear that everything is being recorded will change our personal behavior for the worse; secondly, the data amounts to “databases of ruin”, storing embarrassing or harmful details that can be plucked out in retroactive investigations.

As for the decision to go public, Snowden said he had no choice: contractors are not covered under existing whistleblower statutes, and although some NSA analysts were very concerned about the situation, no one was prepared to put their careers on the line.

He cited the experience of Thomas Drake as an example of what the agency does to those that complain. Drake went public with the NSA’s decision to spend billions on a bulk data collection system called Trailblazer rather than use a more targeted and cheaply built internally developed scanning tool called ThinThread.

Drake was arrested and charged with breaking the Espionage Act (similar to the charges Snowden himself faces for leaking thousands of top-secret documents) and was offered multiple plea offers involving prison time. Just before the trial the government dropped most charges, and Drake agreed to cop to one misdemeanor count for exceeding authorized use of a computer.

As for press reports in which serving intelligence agents made threats against his life, Snowden said he found them discouraging rather than frightening.

“That current, serving officials of our government are so comfortable in their authorities that they’re willing to tell reporters on the record that they think the due process protections of the 5th Amendment of our Constitution are outdated concepts. These are the same officials telling us to trust that they’ll honor the 4th and 1st Amendments. This should bother all of us,” he said.

Snowden denied stealing coworkers’ passwords to get the NSA files, and said that press reports that he had hacked into his colleagues’ systems in order to obtain the files were wrong. Great care had been taken to make sure staff were not compromised by the information that has been released, he said.

“Returning to the US, I think, is the best resolution for the government, the public, and myself, but it’s unfortunately not possible in the face of current whistleblower protection laws, which through a failure in law did not cover national security contractors like myself,” he said.

“The hundred-year old law under which I’ve been charged, which was never intended to be used against people working in the public interest, and forbids a public interest defense. This is especially frustrating, because it means there’s no chance to have a fair trial, and no way I can come home and make my case to a jury.”

In an interview with MSNBC on Thursday, the US attorney general Eric Holder said offering Snowden amnesty was “going too far.” He also declined to refer to Snowden as a whistleblower, saying he preferred the term “defendant.”

“We’ve always indicated that the notion of clemency isn’t something that we were willing to consider. Instead, were he coming back to the US to enter a plea, we would engage with his lawyers,” Holder said. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2014/01/24/snowden_speaks_nsa_whistleblower_calls_for_global_privacy_standards/

1.1 Million Payment Cards Exposed In Neiman Marcus Data Breach

Neiman Marcus today disclosed details of a data breach it suffered over a three-month period last year that resulted in the theft of 1.1 million customers’ debit and credit cards. The attackers hacked into the high-end retailer’s computer systems and planted malware that siphoned customer card information during transactions.

There is no indication thus far that customers who shopped online with Neiman Marcus were exposed in the hack, nor were customers’ social security numbers and birth dates, Neiman Marcus Group president and CEO Karen Katz said in a letter on the retailer’s website. Neiman Marcus and Bergdorf Goodman payment card accounts have not been seen being used fraudulently, she said.

“We deeply regret and are very sorry that some of our customers’ payment cards were used fraudulently after making purchases at our stores. We have taken steps to notify those affected customers for whom we have contact information. We aim to protect your personal and financial information,” Katz said.

PINs were not exposed because the retailer doesn’t use PIN pads in its stores, according to the retailer. Visa, MasterCard, and Discover have notified Neiman Marcus that some 2,400 customer payment cards used for purchases in its Neiman Marcus and Last Call stores were used fraudulently.

“While the forensic and criminal investigations are ongoing, we know that malicious software (malware) was clandestinely installed on our system. It appears that the malware actively attempted to collect or ‘scrape’ payment card data from July 16, 2013 to October 30, 2013. During those months, approximately 1,100,000 customer payment cards could have been potentially visible to the malware,” Katz said.

Neiman Marcus confirmed earlier this month that it had suffered a breach of customer payment cards, after Target announced it had been hit, but had not revealed further details on the extent of the breach until now. Target announced in late December that it had suffered a breach that affected some 40 million credit and debit cards in its stores between Nov. 27 and Dec. 15, and this month revealed that names, mailing addresses, phone numbers, or email addresses for up to 70 million people also were stolen in the attack — a number that may have some overlap with the payment card victims.

The FBI, meanwhile, has reportedly issued a warning to retailers to be ready for more attacks, after investigating some 20 breach cases in the past year that used the same type of malware used in the Target attack. This so-called “memory-parsing” or RAM-scraping malware infects POS systems, such as cash registers and credit-card swiping equipment in stores.

The malware scrapes payment card information from the computer's memory, where it sits unencrypted.
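The scraping step can be illustrated in a few lines: scan a buffer for card-number-length digit runs, then keep only those passing the Luhn checksum, which is how both scrapers and detection tools cut false positives. This is a generic sketch, and the memory "dump" in the example is made up around a well-known test card number:

```python
import re

# How a RAM scraper (or a detector hunting the same data) narrows a
# memory buffer down to card numbers: find runs of 13-16 digits, then
# keep only those that pass the Luhn checksum to cut false positives.

def luhn_ok(number: str) -> bool:
    total, parity = 0, len(number) % 2
    for i, ch in enumerate(number):
        digit = int(ch)
        if i % 2 == parity:       # double every second digit from the right
            digit *= 2
            if digit > 9:
                digit -= 9
        total += digit
    return total % 10 == 0

def find_pans(buffer: str) -> list:
    return [run for run in re.findall(r"\b\d{13,16}\b", buffer) if luhn_ok(run)]
```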

“We believe POS malware crime will continue to grow over the near term, despite law enforcement and security firms’ actions to mitigate it,” the FBI said in its report obtained by Reuters.

“The accessibility of the malware on underground forums, the affordability of the software and the huge potential profits to be made from retail POS systems in the United States make this type of financially motivated cyber crime attractive to a wide range of actors,” the FBI said.

Meanwhile, Neiman Marcus says it has no “knowledge of any connection” to Target’s data breach. The retailer said “a leading forensics firm” first found signs that Neiman Marcus had been breached, and an investigation is still in progress. The malware that was found has been “disabled,” the company says.

Michael Sutton, vice president of security research at Zscaler, says it remains to be seen if the Neiman Marcus breach is related to Target’s. “While the method of infection appears similar, the timeframes do not overlap and the stolen data was not sent to the same location,” he says.

“[I am] glad to see the disclosure by Neiman Marcus’s Chief Executive. We have known for some time that several retailers have been breached by organized crime gangs using sophisticated malware specifically designed to run on Point of Sale machines to capture credit cards from retail in-store transactions,” says Anup Ghosh, founder and CEO of Invincea. “While traditionally consumers and retailers have felt safer with ‘card present’ transactions, these breaches from 2013 now lay bare the false sense of security.”

Rob Sadowski, director of technology solutions for RSA, says retailers will continue to get hit by sophisticated cybercriminals seeking payment card information. “This latest breach disclosure reinforces that merchants will continue to face attacks from sophisticated, determined cybercriminals seeking to compromise their customers’ payment card data. They are going after the biggest and highest profile targets because they know they can succeed,” Sadowski says.

Most retailers don’t have the ability to detect the attackers before they siphon the customer data, he says. “The length of time the attackers remained on the network without detection is evidence,” he says.


Article source: http://www.darkreading.com/attacks-breaches/11-million-payment-cards-exposed-in-neim/240165643

Chrome lets web pages secretly record you?! Google says no, but…

A design flaw in the Chrome browser allows malicious websites to use your computer’s microphone to eavesdrop on you, one developer has claimed, although Google denies this is the case.

“Even while not using your computer – conversations, meetings and phone calls next to your computer may be recorded and compromised,” Israeli developer Tal Ater wrote in a blog post on Wednesday.


According to Ater, the vulnerability arises when sites aren’t completely forthright about when they are using the microphone.

Ordinarily, users must explicitly give permission to each site that requests to use the mic, and Chrome displays a blinking red dot in the page’s tab as long as the site is recording. But Ater says that’s not enough to prevent malicious sites from hiding what they’re doing.

“When you click the button to start or stop the speech recognition on the site, what you won’t notice is that the site may have also opened another hidden pop-under window,” Ater wrote. “This window can wait until the main site is closed, and then start listening in without asking for permission. This can be done in a window that you never saw, never interacted with, and probably didn’t even know was there.”

For secure HTTPS sites, Chrome will even remember that you gave a site permission to use the microphone and will maintain that permission between browser sessions without asking you again.

Ater says he alerted Google to the dangers of this behavior last September. And although the web kingpin’s engineers acted immediately, creating a patch to address Ater’s concerns and even nominating his disclosure for a bug bounty, that patch has yet to be merged into the mainstream Chrome code base.

According to Ater, the Chocolate Factory’s engineers are still in discussions with its internal web standards group to determine the best course of action – which is why he ultimately chose to publish exploit code on Github.

No bug here, says Google

But when El Reg asked Google to comment on Ater’s claims, we heard a different side of the story. “The security of our users is a top priority, and this feature was designed with security and privacy in mind,” a spokesperson told us.

For one thing, per Google’s documentation, the blinking red light in the browser tab isn’t the only way Chrome lets you know when it’s using cameras or microphones. You can also check which browser window or tab is recording by clicking a persistent icon in the Windows system tray or the OS X status menu – an icon you can actually see in action in this video demo of Ater’s exploit (look for the camera icon in the upper left):

Chrome Bug Lets Sites Listen to Your Conversations

For another, Google argues that the recording feature works how it was meant to work. Chrome first gained voice input support with the release of Chrome 25 last February. But what made it possible is the Web Speech API, a recent spec from the W3C, the web’s primary standards body.

“The feature is in compliance with the current W3C specification, and we continue to work on improvements,” a Google spokesperson told The Reg.

Ater, on the other hand, maintains that the Web Speech API requires browsers to abort speech input sessions whenever the user changes windows or tabs, to prevent the kind of abuse he describes. But the language that mandated that behavior was removed from the spec in a later erratum, so that no longer appears to be the case.

And yet something seems fishy, because when we tried out some Web Speech API demos here at Vulture Annex in San Francisco – including Ater’s exploit code and even Google’s own demo – no persistent icon appeared in the system trays of our Windows machines or the status menu of our OS X computers while Chrome was listening, contrary to Google’s online documentation.

It’s possible that this feature was removed from recent builds of Chrome in the four months since Ater first demonstrated his exploit. If so, that would seem to make Ater’s claims all the more valid, since it makes it even harder to spot when the microphone is active. Google so far has only offered a canned statement, and has yet to respond to our request for clarification on this apparent change.

Still, while it’s debatable whether Chrome does enough to alert users when it’s accessing their cameras or microphones, El Reg knows of at least one surefire way for Chrome users to ensure they’re not being listened in on. From the main menu, choose Settings, click “Show advanced settings…”, click Content Settings, then scroll down and select “Do not allow sites to access my camera and microphone.” Problem solved. ®
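For managed deployments, admins needn't rely on each user finding that checkbox: Chrome honours the `AudioCaptureAllowed` (and companion `VideoCaptureAllowed`) enterprise policies, which on Linux can be dropped into a managed-policies file. The file path and name below are illustrative; consult Chrome's policy documentation for the equivalent Windows registry keys and OS X plists.

```json
{
  "AudioCaptureAllowed": false,
  "VideoCaptureAllowed": false
}
```

Saved as, for example, `/etc/opt/chrome/policies/managed/no-capture.json`, this blocks sites from requesting the microphone or camera at all, regardless of any permission a user previously granted.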

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2014/01/23/chrome_speech_spying_vulnerability/

US govt watchdog slams NSA spying as illegal, useless against terrorism

The Privacy and Civil Liberties Oversight Board, a federal panel set up to advise the US government on policy, has published a report concluding that the bulk collection of data on US citizens by the NSA is illegal and ineffective at stopping terrorism.

“Based on the information provided to the board, including classified briefings and documentation, we have not identified a single instance involving a threat to the United States in which the program made a concrete difference in the outcome of a counterterrorism investigation,” the report said, AFP reports.


The 238-page report found that the NSA’s policy of collecting vast amounts of metadata using Section 215 of the Patriot Act was illegal, and raised serious concerns with regard to the First and Fourth Amendments – covering freedom of speech and protection against unreasonable search and seizure.

The report states that the bulk data collection program started in 2001 shortly after the September 11 attacks, and was cleared by the Bush administration in 2006 under the terms of the Patriot Act. The authors conclude Section 215 data collection was a “subversion” of the law and was used to “shoehorn a pre-existing surveillance program into the text of a statute,” noting that it also violated the Electronic Communications Privacy Act.

As a result, the board recommends that the practice be stopped immediately (a view not shared by President Obama, judging from his speech on the matter last Friday) and that any data stored by the NSA be deleted after three years, not five.

Other recommendations in the board’s report dovetail nicely with Obama’s own plans, such as having public representation in the Foreign Intelligence Surveillance Court – the NSA’s secret oversight court. The panel also backed the President’s promise to reduce the scope of surveillance dragnets on targets: rather than snoop on up to three “hops” of the target’s contacts, as is the case today, two hops should be monitored instead. As an example, if you’re on the NSA watch list and you email your brother, who calls his girlfriend, who texts her mother – the mother is the third hop.
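The difference between two and three hops is easy to understate; each hop multiplies the pool of people swept in. The article's own example can be modelled as a breadth-first walk over a contact graph (the `contacts_within` function and the toy graph are illustrative, not anything the NSA has published):

```python
from collections import deque

def contacts_within(graph, target, max_hops):
    """Breadth-first walk of a contact graph, collecting everyone
    reachable from the target in at most max_hops communication links."""
    seen, queue = {target}, deque([(target, 0)])
    while queue:
        person, dist = queue.popleft()
        if dist == max_hops:
            continue  # don't expand beyond the hop limit
        for contact in graph.get(person, []):
            if contact not in seen:
                seen.add(contact)
                queue.append((contact, dist + 1))
    return seen - {target}

# The example from the article: target -> brother -> girlfriend -> mother
graph = {
    "target": ["brother"],
    "brother": ["girlfriend"],
    "girlfriend": ["mother"],
}
print(sorted(contacts_within(graph, "target", 2)))  # ['brother', 'girlfriend']
print(sorted(contacts_within(graph, "target", 3)))  # ['brother', 'girlfriend', 'mother']
```

In a realistic graph where each person has hundreds of contacts, dropping from three hops to two cuts the surveilled population by orders of magnitude, which is why the change is more than cosmetic.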

The report’s authors were split on their views, however. Rachel Brand, who served as assistant attorney general for legal policy at the US Department of Justice (DoJ) between 2005 and 2007, wrote in a dissenting opinion that the bulk data collections were legal and warned that if there was another major terrorist attack “the public will engage in recriminations against the intelligence community for failure to prevent it.”

Co-author Elisebeth Collins Cook also disagreed with some of the report’s conclusions on the legality of Section 215 data sweeps, saying that the terms of use should be modified, but that collection should continue. Collins Cook, who served as assistant attorney general for legal policy at the DoJ from 2008 to 2012, also questioned the conclusions on the data slurping’s effectiveness against terrorism.

The five-person Privacy and Civil Liberties Oversight Board was set up in 2006 on the recommendation of the 9/11 Commission Report to provide advice on the use of surveillance in anti-terrorism investigations. It has limited legal weight, but the conclusions have already drawn political comment.

Representative Mike Rogers (R-MI), who as chairman of the House Intelligence Committee is supposed to oversee the activities of the NSA, was sharply critical of the report’s findings. In a statement to Fox News, Rogers said the legality of the Section 215 interpretation had been confirmed multiple times in federal courts.

“I am disappointed that three members of the board decided to step well beyond their policy and oversight role and conducted a legal review of a program that has been thoroughly reviewed,” he said.

Ex-NSA techie turned whistleblower Edward Snowden may have some views of his own to express when he holds a Q&A at 1200 PT, 2000 UTC, 1500 EST today. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2014/01/23/us_government_watchdog_report_finds_nsa_spying_is_illegal_and_useless/

Startup Tackles Security Through Microsoft Active Directory

A startup company has developed a new technology that enables enterprises to identify potential threats by monitoring the traffic between Microsoft’s widely used Active Directory (AD) and the network devices it manages.

Aorato, based in Israel, Tuesday launched its Directory Services Application Firewall (DAF), a new technology that leverages AD, which is used in all Windows environments to store information about users and their access privileges. The new technology will enable enterprises to monitor the behavior of end users and their devices, revealing anomalies that might indicate security issues.

“Active Directory is the Achilles heel of most enterprises,” says Idan Plotnik, founder and CEO of Aorato. “Virtually all traffic goes through it, and it provides the main components for authorization and authentication, yet most enterprises don’t take full advantage of it from a security perspective.”

Aorato monitors the network traffic between AD servers and the various network entities it controls, including end users and the devices they use. The technology then uses this traffic to build a model of the observed relationships — the Organizational Security Graph (OSG) — using the AD traffic and other visible information.

Once Aorato has established a baseline of behavior through the OSG, it uses the data to seek out anomalies that could represent attack behavior, as well as evidence of security policy violations, such as clear-text passwords or users who have been deleted or disabled. The DAF raises alerts on suspicious activities and uses them to build an Attack Timeline, helping security professionals to connect the dots between seemingly harmless activities that, together, might indicate an attack or data leak.
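The baseline-then-flag-anomalies approach can be reduced to a simple idea: learn which user-to-device relationships are normal, then treat edges that fall outside that graph as suspect. The toy model below illustrates the principle only; the class name and API are invented for this sketch and bear no relation to Aorato's actual implementation.

```python
from collections import defaultdict

class AccessBaseline:
    """Toy model of baselining directory traffic: learn which
    (user, device) pairs are normal, then flag logins outside that graph."""

    def __init__(self):
        self.graph = defaultdict(set)  # user -> set of devices seen

    def learn(self, user, device):
        """Record an observed authentication during the baseline period."""
        self.graph[user].add(device)

    def is_anomalous(self, user, device):
        """A pair never seen during baselining is flagged for review."""
        return device not in self.graph[user]

baseline = AccessBaseline()
for user, device in [("alice", "laptop-1"), ("alice", "phone-1"),
                     ("bob", "laptop-2")]:
    baseline.learn(user, device)

print(baseline.is_anomalous("alice", "laptop-1"))  # False (seen in baseline)
print(baseline.is_anomalous("bob", "laptop-1"))    # True  (new user/device pair)
```

A production system would of course weigh many more signals (time of day, authentication protocol, privilege level) and correlate flagged events over time, which is what the Attack Timeline described above is for.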

“It’s a bit like Facebook — it gives the story of what the user has been doing, establishes a context for that behavior, and recognizes when there are anomalies,” Plotnik says.


Article source: http://www.darkreading.com/end-user/startup-tackles-security-through-microso/240165583