STE WILLIAMS

5 Steps To Managing Mobile Vulnerabilities

On the second Tuesday of every month, information technology and security groups rush to fix vulnerabilities in their desktop systems, reacting to the regularly scheduled Patch Tuesday implemented by Microsoft and Adobe.

Yet, in most cases, the plethora of smartphones and tablets carried by employees, and the hundreds of applications on those devices, are not managed, and fixing vulnerabilities on those systems is left up to the user. While the software ecosystem surrounding mobile devices means that mobile applications are regularly updated, the risk those programs pose is typically unknown to most companies.

Businesses need to start paying attention to the mobile software coming in the front door to make sure their data is not headed out that same portal, says Chris Wysopal, chief technology officer for application-security firm Veracode.

“Mobile application management is becoming as important as mobile device management,” he says. “The app layer is where all the risky behavior is happening.”

While mobile applications are relatively new vectors of attack, security researchers and application developers have shown that vulnerabilities do exist. The Master Key and SIM card vulnerabilities demonstrated at the Black Hat security conference show that platform issues can lead to exploitable flaws. Yet more common are nominally legitimate applications that use aggressive advertising frameworks or tactics to collect a disproportionate amount of information on the user.

[At Black Hat USA, a team of mobile-security researchers show off ways to circumvent the security of encrypted containers meant to protect data on mobile devices. See Researchers To Highlight Weaknesses In Secure Mobile Data Stores.]

Currently, Veracode and other companies are seeing interest in managing mobile vulnerabilities and risk mainly from the largest enterprises, those with the most at risk. Yet, with the proliferation of mobile devices, more companies will have to worry about vulnerable and risky apps, Bala Venkat, chief marketing officer at application-security firm Cenzic, said in an e-mail interview.

“The explosion of mobile devices, growing number of new applications on devices and the access of data anywhere from any device or platform poses a very challenging security environment for organizations.”

For companies that want to tame the risk from their mobile applications, Venkat and other security experts recommend the following five steps.

1. Focus on the apps, not the device
While many companies have mobile-device management (MDM) systems to help them deal with their fleet of devices, the bring-your-own-device (BYOD) movement has left a gap in their coverage. The devices are no longer owned by the businesses, so managing them can be a policy problem. In addition, the threat is less about the device and more about the applications, says Domingo Guerra, founder and president of Appthority.

With businesses having thousands of employees and hundreds of applications on the devices, managing the applications should be the focus for most companies, he says.

“There are a lot of different points of possible data breaches,” Guerra says.

2. Catch vulnerabilities at development
While the vulnerabilities in mobile applications are not handled in the same way as with desktop systems, one area of commonality exists. Companies that develop their own in-house applications need to adopt a secure development lifecycle to catch and root out vulnerabilities.

“It is important for companies to ensure its application developers and administrators have a thorough knowledge of the common application attacks, the tools available for detecting vulnerabilities and the procedures for fixing them,” says Cenzic’s Venkat.

Vetting third-party code used in the development process is also important. The advertising frameworks used by many mobile developers typically take actions of which the developer may not be aware. Other frameworks should be checked out as well, says Appthority’s Guerra.

“Because it is not all internal code, companies have to be wary,” he says.

3. Measure app reputation
Another way to assess the risk of third-party applications is to use one of the application reputation services. These services, such as Appthority and Veracode’s Mobile Application Reputation Service (MARS), assess mobile applications using runtime and static analysis and create a risk profile for each.

“It is the applications that are purported to be legitimate, but are being monetized through information harvesting that are the bigger risks,” says Veracode’s Wysopal.

In many cases, companies can apply their own policies to the assessment results and generate whitelists and blacklists of mobile applications that are allowed to access business data or to be installed on devices managed by MDM solutions.
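That policy step can be sketched in a few lines. This is a minimal illustration with invented field names, behaviors and thresholds, not any vendor's actual API:

```python
# Hypothetical app-reputation policy check. The fields ("risk_score",
# "behaviors") and the threshold are invented for illustration only.
MAX_RISK = 6
BANNED_BEHAVIORS = {"sends_contacts_off_device", "uses_aggressive_ad_sdk"}

def classify(app):
    """Return 'blacklist' or 'whitelist' for one assessed app."""
    if app["risk_score"] > MAX_RISK:
        return "blacklist"
    if BANNED_BEHAVIORS & set(app["behaviors"]):
        return "blacklist"
    return "whitelist"

assessments = [
    {"name": "flashlight", "risk_score": 9, "behaviors": ["sends_contacts_off_device"]},
    {"name": "notes", "risk_score": 2, "behaviors": []},
]
for app in assessments:
    print(app["name"], classify(app))  # flashlight -> blacklist, notes -> whitelist
```

A real MDM integration would feed the resulting lists into enrollment policy rather than printing them, but the decision logic is this simple at its core.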

4. Encrypt data on the device and in transit
A key consideration for many companies is whether the mobile devices employees use for work encrypt stored data. Mobile containerization technology can wrap applications in code that enforces encryption and lets the company manage the keys.

Companies should also worry about unencrypted communications to cloud services, says Cenzic’s Venkat.

“Storing unencrypted sensitive data on often-lost mobile devices is a significant cause for concern, but the often unsecured Web services commonly associated with mobile applications can pose an even bigger risk,” he says.
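The client side of Venkat's point can be illustrated with Python's standard `ssl` module (standing in here for any mobile SDK's TLS stack): a properly configured client refuses unverified connections by default, and the real danger is code that switches verification off.

```python
import ssl

# Python's default TLS client context enforces both certificate validation
# and hostname checking out of the box.
ctx = ssl.create_default_context()
print(ctx.verify_mode == ssl.CERT_REQUIRED)  # True: server cert must validate
print(ctx.check_hostname)                    # True: cert must match the host

# The anti-pattern seen in careless apps (accepts ANY certificate,
# enabling man-in-the-middle reads of "encrypted" traffic):
#   ctx.check_hostname = False
#   ctx.verify_mode = ssl.CERT_NONE
```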

5. Make security easy to use
Finally, employees will get around security measures unless they are easy to use. To retain productivity gains, businesses should support the way that employees work, says Veracode’s Wysopal.

“People want to be able to grab a file off of Dropbox,” he says. “If people cannot interact between a corporate environment and the personal environment, then users will complain and reject the monolithic corporate apps and security.”

Article source: http://www.darkreading.com/vulnerability/5-steps-to-managing-mobile-vulnerabiliti/240164636

Patch Tuesday December 2013 – TIFF exploit patched, XP kernel flaw not fixed yet

The updates for Microsoft’s December 2013 Patch Tuesday are out.

As promised, there are eleven bulletins, with six of them fixing remote code execution holes.

Five of those are rated by Microsoft as critical.

Fortunately, only one of them gets the most severe rating from SophosLabs – a level we also denote with the word “Critical,” as a way of noting an exploit that is either already being used by malware, or about to be used.

That rating was given to bulletin MS13-096, which patches a hole known as CVE-2013-3906, a bug in how Windows handles TIFF files.

In-the-wild abuse of this vulnerability was reported just before November’s Patch Tuesday, and anyone who isn’t a cybercrook hoped that Microsoft would be able to rush out a fix back then.

That didn’t happen, perhaps because Microsoft had already published a Fix it tool that prevented the bug from showing its face, but the TIFF fix did make it into this month’s patches.

Not fixed yet is the recently-announced zero-day in the Windows XP (and Server 2003) kernel driver NDPROXY.SYS, part of the telephony API.

That hole doesn’t itself allow crooks to break into your computer, but if they are already in (or find a way in), this bug allows what’s called an Elevation of Privilege, or EoP.

It looks as though patching this XP kernel hole will have to wait until next month – after which, of course, there will only be three official monthly updates to go before XP is put out to pasture forever.

As we mentioned before, this Patch Tuesday affects:

  • Windows end-user operating systems
  • Windows server operating systems
  • Office
  • Lync
  • Internet Explorer
  • Exchange
  • Microsoft Developer Tools

Server Core installs need patching too, along with all other versions of Windows, and a reboot is required.

Time, therefore, to get busy!

Happy patching, and don’t forget that if you still have XP, you’re running out of patches and ought already to have prepared for the end of XP in April 2014.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/ZcI-7Rmnvps/

How’s it going, Microsoft users? Patching your PCs? You SHOULD be

Patch Tuesday Brace yourselves, users and administrators: Microsoft and Adobe have released another monthly batch of critical security updates for their products.

The December edition of Patch Tuesday will fix five critical vulnerabilities in Microsoft software, two of which are being exploited in the wild by miscreants.


The first of the critical flaws lies within the handling of TIFF image files in Windows Vista, Server 2008, Lync and Office 2010, 2007 and 2003. If exploited, an attacker could use the bug to remotely execute code on the targeted system with full administrative rights.

The second critical fix addresses a flaw in the WinVerifyTrust security component which could be exploited to bypass code-signing protections in the operating system, thereby allowing an attacker to inject malicious code into a trusted executable that’s run when the tainted program is unwittingly launched. This affects all supported versions of Windows and Windows Server. Microsoft said the bug has been exploited in the wild in targeted attacks.
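A toy illustration of why this class of signature flaw is dangerous (the partial-hash scheme below is invented, not the real Authenticode format): if verification covers only part of the file, bytes appended outside that region change the program without changing what the verifier sees.

```python
import hashlib

# Invented "partial hash": only the first authenticated_len bytes are
# covered by the signature check, mimicking the class of bug described.
def partial_hash(blob, authenticated_len):
    return hashlib.sha256(blob[:authenticated_len]).hexdigest()

signed = b"LEGIT-PROGRAM-CODE"
tampered = signed + b"<injected payload>"

h_original = partial_hash(signed, len(signed))
h_tampered = partial_hash(tampered, len(signed))
print(h_original == h_tampered)  # True: verifier sees an 'unchanged' trusted file
```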

Of the remaining updates from Microsoft, three are rated critical but have not yet been exploited in the wild. Those bulletins include fixes for remote-code execution flaws in the Scripting Runtime on all supported versions of Windows, in Internet Explorer, and in Exchange.

An additional six patches will address flaws that have been rated by Microsoft as “important”. One of these bugs has been exploited in the wild and is a security bypass hole in Microsoft Office. Other fixes squash an information-disclosure bug in Office, the ability to elevate privileges on Windows using driver-level programming blunders, and a remote-code execution flaw in SharePoint.

You can find a summary of the updates over at Microsoft’s security response blog.

Adobe, meanwhile, has issued its own monthly updates to remedy security vulnerabilities in Flash Player and Shockwave. The company said that both updates will close holes that, if exploited, could allow an attacker to remotely execute code on a targeted machine. Adobe recommends that all Windows, OS X and Linux users update their copies of Flash, Air and Shockwave in order to protect against attack.

Adobe made a point to emphasize that neither of the patches concern issues related to its customer database breach in October, which resulted in the leaking of sensitive account information. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2013/12/10/microsoft_and_adobe_deliver_critical_fixes_on_patch_tuesday/

Thought your Android phone was locked? THINK AGAIN

Android has taken another step to cement its place behind Java in the world of repeatedly-vulnerable software, with German group Curesec discovering that an attacker can get past users’ PINs to unlock the phone.

In fact, the Curesec post states, the bug – present in Android 4.0 to 4.3 but not 4.4 – exposes any locking technique: PINs, passwords, gestures or facial recognition.


“The bug exists on the ‘com.android.settings.ChooseLockGeneric’ class. This class is used to allow the user to modify the type of lock mechanism the device should have,” Curesec writes.

A user changing the type of lock they’re using should have to enter their previous lock – so if you swap from PIN to gesture, you would have to provide your PIN before Android allows the change.

The problem is that the program flow in the ChooseLockGeneric class lets an attacker bypass the confirmation:

“We can control the flow to reach the updatePreferencesOrFinish() method and see that IF we provide a Password Type the flow continues to updateUnlockMethodAndFinish(),” the post states. “Above we can see that IF the password is of type PASSWORD_QUALITY_UNSPECIFIED the code that gets executed and effectively unblocks the device.

“As a result, any rogue app can at any time remove all existing locks.”
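The flow Curesec describes can be modeled in a few lines. This is a deliberately simplified Python sketch, not Android code; the names merely mirror the method and constant mentioned above:

```python
PASSWORD_QUALITY_UNSPECIFIED = 0  # mirrors the role of the Android constant

class ChooseLockModel:
    """Toy model of the flaw: the 'confirm existing lock' step can be
    skipped entirely on the path that clears the lock."""

    def __init__(self):
        self.locked = True
        self.confirmed = False  # would be set after the old PIN is entered

    def update_unlock_method_and_finish(self, quality):
        # Flawed check: UNSPECIFIED quality removes the lock without ever
        # requiring self.confirmed to be True.
        if quality == PASSWORD_QUALITY_UNSPECIFIED:
            self.locked = False

device = ChooseLockModel()
device.update_unlock_method_and_finish(PASSWORD_QUALITY_UNSPECIFIED)
print(device.locked)  # False: lock removed, no confirmation ever happened
```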

Curesec has proof-of-concept code in its post. The bug has been designated CVE-2013-6271, and Curesec says it decided to go public after the Android Security Team stopped responding to its e-mails about the issue. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2013/12/10/android_has_lockbypass_bug/

Poker ace’s vanishing hotel laptop WAS infected by card-shark – F-Secure

A laptop apparently stolen from a top-flight poker pro’s hotel room and mysteriously returned while he played in a card tournament was infected by spyware.

That’s according to security firm F-Secure, which today said it had analyzed the computer, owned by ace player Jens Kyllönen. The Java-written malware on the machine could allow an attacker, perhaps a card-shark, to remotely view screenshots and log activity on the PC.


While such spyware is hardly uncommon, the F-Secure researchers were intrigued by the way in which the software nasty was apparently installed.

Kyllönen, who rocked up at the antivirus biz’s HQ in an Audi R8 with the laptop for inspection, believes the infection occurred while he played in a poker tournament at a resort in Barcelona. He said during a break he returned to his room and found his laptop missing, only for it to be returned later with signs of a possible infection.

According to F-Secure, the notebook was in fact infected with a remote monitoring tool that activated upon system startup. Researchers believe that the malware was installed via a USB device and that a similar infection was introduced to the computer of another player staying in the same room.

That poker aces, who win big both on and offline, would be subjected to a spyware installation is no accident, say the researchers. By installing tools to covertly snoop on the screen of high-stakes online players, a rival could gain the upper hand in a game by spying on his opponent’s hand.

“This is not the first time professional poker players have been targeted with tailor-made trojans,” F-Secure said in its report.

“We have investigated several cases that have been used to steal hundreds of thousands of euros. What makes these cases noteworthy is that they were not online attacks. The attacker went through the trouble of targeting the victims’ systems on site.”

Such well-targeted, “spear phishing” operations rely on detailed reconnaissance to gather information about the individual which can be exploited to carry out an attack.

It’s possible Kyllönen’s machine was infected in some other way, but that doesn’t marry with his claim that the laptop went missing.

In any case, F-Secure suggests that anyone who could be subject to such an attack, be they a poker pro or an executive on a business trip, consider real-world protections for their systems, such as device locks and room safes. If you trust the safe and hotel staff, of course. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2013/12/11/poker_pros_call_shenanigans_over_hotel_malware_infections/

‘Mystery’ Malware Files Often Missed In Cleanup

The newly disrupted ZeroAccess botnet was previously spotted putting a new spin on infecting a user: injecting itself into the download process of Adobe Flash. It used a new variant of the infamous Trojan that the victim’s anti-malware program didn’t yet recognize.

“It was pretty clever because it was combining social engineering with technical prowess. Sometimes you see attacks based solely on tricking users, so it’s weird to see both together in one attack,” says Zulfikar “Zully” Ramzan, principal engineer of the Security Business Group at Cisco’s Sourcefire.

Ramzan says the Flash application was legitimate, but ZeroAccess quietly injected itself into the Flash download, thus infecting the user. The malware-laden file was then able to remain under the radar, and the AV program didn’t catch it.

ZeroAccess’s nifty trick of hiding from anti-malware and other tools is just an example of how many malware cleanup processes today miss some elements of the malware. Leftover infected files that appear legit and don’t get detected often remain behind after a malware cleanup, causing the machine to become reinfected over and over, Ramzan says.

“We see that kind of behavior about 20 percent of the time: seeing the thing that got dropped by the original malware, without seeing the original malware right away. ZeroAccess is an example of where the actual initial threat goes undetected, but we see the stuff that gets on after that point,” he says. “It happens very frequently that we see the detection taking place, and there’s actually a broader infection under that initial detection.”

And most malware creates new files, seven-eighths of which are deemed unknown, Ramzan says. “We don’t know if the file is good or bad,” he adds.

Anti-malware programs in those cases don’t have a signature for those files, he says.

Ramzan says three-quarters of the time his group sees new malware on a corporate system, the malware was created by an unknown file. “Often times, these unknowns should have been marked as malicious, but they just weren’t. The key is really looking at the unknowns that are created and that created something.”

[Microsoft, FBI, and Europol say they have disrupted ZeroAccess, a botnet that infected more than 2 million machines. See Microsoft Teams With Law Enforcement, Disrupts ZeroAccess Botnet.]

These residual malicious files don’t get detected, and the machine ends up infected all over again. “If you don’t clean up that mystery file, there’s a good chance you’ll stay in a persistently infected state,” Ramzan says. The files may do nothing more than bring in other files, but the bottom line is the machine remains in an infected state, he says.

Anti-malware software typically misses those related files, which are designed to evade AV software. “You have to know what the file did, and all the files around it. Is there a guilt-by-association happening?”

Where does such an undetected file typically reside? “It can be all over the place. Sometimes it’s directly on the file system. Some systems of malware will create a hidden system file layer,” he says. “It’s not completely invisible, but it’s invisible to simple checks. Once something is on your system and compromises it, there’s a good chance that it’s going to embed itself so deeply that it will be hard to find except by really deep inspection.”
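The "guilt by association" idea Ramzan raises can be sketched as a walk over a file-creation graph. The graph below is invented sample data; the point is that files reachable from a known-bad root become suspect even when no signature fires on them.

```python
# Invented creation graph: which files each file dropped onto the system.
created_by = {
    "dropper.exe": ["svch0st.dll", "update.bin"],
    "update.bin": ["helper.sys"],
    "notepad.exe": ["notes.txt"],
}
known_bad = {"dropper.exe"}

def suspects(graph, bad):
    """Everything reachable from a known-bad file is suspect, which is how
    residual 'mystery' files get swept up in the cleanup."""
    found, stack = set(), list(bad)
    while stack:
        for child in graph.get(stack.pop(), []):
            if child not in found:
                found.add(child)
                stack.append(child)
    return found

print(sorted(suspects(created_by, known_bad)))
# ['helper.sys', 'svch0st.dll', 'update.bin'] -- none were directly detected
```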

At the heart of the problem is that malware writers continue to raise the bar in the way their code infects, hides, and spreads, security experts say.

“It’s smarter, shadier, and stealthier,” says John Shier, senior security adviser for Sophos, which published a new report today that shows how malware is getting better at hiding and persistence. “There’s been an evolution of malware techniques.”

Shier says the ZeroAccess botnet is a good example of how botnets are also becoming more resilient to takedowns. “Some 500,000 nodes were taken down in a sinkholing [operation] in the summer. Then they responded … and increased the number of droppers, so within weeks it was back up again,” he says.

Meanwhile, technology alone isn’t enough to ensure malware is completely eradicated, Cisco’s Ramzan says: “You cannot detect it using traditional techniques. You can look for [related] behaviors to ZeroAccess,” for example, in other files.

“It’s a paradigm shift because people typically focus on detection, which is really about saying if something is good or bad based on what you’re able to see in the content,” he says. “But you need to look at the file and the overall context around it, and make sure you have that visibility as your overall foundation.”

Article source: http://www.darkreading.com/attacks-breaches/mystery-malware-files-often-missed-in-cl/240164631

7 Habits Of Highly Secure Database Administrators

Whether database administrators, information security professionals or a combination of both, the team tasked with safeguarding the information held within a company’s databases must establish certain habits to accomplish their security goals. These practices lie at the foundation of a solid data security program, but according to the results of the Independent Oracle Users Group (IOUG) 2013 Enterprise Data Security Survey released this week, enacting many of them is still a reach goal for the majority of organizations.

This year’s IOUG data security survey in particular looked at the database security landscape with a lens towards what leaders versus laggards have accomplished in 2013 with their database security programs. The survey defined leaders as those organizations that have accomplished three baseline activities (all of which are included on this list): awareness of databases containing sensitive or regulated information, encryption of data at rest or in motion, and monitoring production database for unauthorized access or changes. Meanwhile, laggards were those organizations that did none of those things. According to survey results, approximately 22 percent were classed as leaders, 20 percent as laggards and the rest were just in the middle.

Leaders, unsurprisingly, reported that they were three times less likely to experience a data breach than laggards. Examining how these groups perform common database security practices can offer a valuable lesson in how to improve a database security program.

1. They know where sensitive data resides

Unless an organization knows where its most sensitive data resides, it will have a difficult time placing the appropriate controls around that data. According to the IOUG study, 70 percent of organizations today report that they know exactly where all the databases are that contain sensitive or regulated information. There’s marked improvement on this front since three years ago: back in 2010, just a little over half of organizations could say the same.

Not only is this important in setting up controls, but once those controls are in place it ensures an organization is better apprised of a breach when it happens rather than having it reported to them by someone else.

“Most folks who have a data breach really don’t know until they find out about it from a third party,” says Roxana Bradescu, director of database security product management at Oracle. “You don’t want to find out about your data breaches in the media or from a third party. Having those controls in place and at least being able to know whether you’ve been breached or not is a huge first step [in data security].”
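The discovery step behind this first habit can be approximated by scanning schema metadata for likely-sensitive column names. A minimal sketch, with invented patterns and a made-up schema sample; a real scan would read the data dictionary (e.g. ALL_TAB_COLUMNS on Oracle) instead of a literal dict:

```python
import re

# Invented name patterns; tune these to your own data classification policy.
SENSITIVE = re.compile(r"ssn|credit_card|salary|dob|passport", re.I)

schemas = {
    "hr.employees": ["id", "name", "SSN", "salary"],
    "sales.orders": ["id", "sku", "qty"],
    "crm.contacts": ["id", "email", "dob"],
}

def sensitive_columns(schemas):
    """Map each table to the columns whose names look sensitive."""
    hits = {}
    for table, cols in schemas.items():
        flagged = [c for c in cols if SENSITIVE.search(c)]
        if flagged:
            hits[table] = flagged
    return hits

print(sensitive_columns(schemas))
# {'hr.employees': ['SSN', 'salary'], 'crm.contacts': ['dob']}
```

Name-based scanning is only a first pass; sampling actual values catches sensitive data hiding in generically named columns.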

2. They audit frequently

Organizations are increasingly auditing the way databases are accessed, but the frequency could still use improvement. Back in 2010, the percentage of organizations performing data security audits at least once a month was just 15 percent. Today that number is 23 percent.

This is an area where leaders have the clear advantage over laggards, with 33 percent of leaders reporting monthly or better audits and just 8 percent of laggards reporting the same figures.

Joseph McKendrick, the research analyst for Unisphere Research who conducted the IOUG study, warns that auditing may actually only be skin deep, though.

For example, one unnamed respondent to the survey told him, “We do audit access of privileged users but not the specifics of what they are doing. We could tell when and who accessed a database but not what elements within on most instances. We are in process of implementing additional auditing in this area.”

3. They monitor database activity and system changes
Audits are good, but continuous monitoring is even better for spotting potential problems early on and even preventing disastrous breaches. Unfortunately, only a very small percentage of organizations have monitoring practices and technology in place that allow them to achieve the kinds of results they need to detect unauthorized activity. The survey shows that just 37 percent of organizations can detect and correct unauthorized database access or changes in less than 24 hours.

“The number of folks that don’t have safeguards in place is really huge,” says Bradescu. “We want to have policies within the database itself, we want to be monitoring activity coming to the database.”

While the number of organizations that have taken on monitoring things like privileged user activities, failed log-ins and sign-in activity has tipped over the 50 percent mark, other more specific monitoring is less prevalent. For example, only 37 percent of organizations keep track of writes to sensitive tables or columns and just 31 percent watch over reads of sensitive tables and columns.
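At its simplest, the monitoring the survey asks about is a filter over the audit trail for events touching designated sensitive tables. A sketch with invented log records; a real deployment would consume the DBMS's own audit facility rather than a hand-built list:

```python
# Tables flagged as sensitive by the inventory step (habit 1).
SENSITIVE_TABLES = {"payroll", "customers"}

# Invented audit-trail records for illustration.
audit_log = [
    {"user": "app_svc", "action": "SELECT", "table": "customers"},
    {"user": "jdoe", "action": "UPDATE", "table": "payroll"},
    {"user": "jdoe", "action": "SELECT", "table": "inventory"},
]

def sensitive_events(log):
    """Keep only reads and writes that touched a sensitive table."""
    return [e for e in log if e["table"] in SENSITIVE_TABLES]

for e in sensitive_events(audit_log):
    print(e["user"], e["action"], e["table"])
```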

[Are you using your human sensors? See Using The Human Perimeter To Detect Outside Attacks.]

4. They encrypt to prevent database bypass
Even if a database has all of the most advanced controls and monitoring in place, without solid encryption all of that investment could be for naught. The trouble is that without some kind of masking or encryption to obfuscate data stored in the database, it could be possible for an attacker to completely bypass the database platform itself and instead find ways to open the files that the database uses to store data, Bradescu says.

“So unless we’ve got encryption for data, then we can’t prevent database bypass,” she says. “Data encryption is really the foundation of database security because you’re only going to be able to put in effective security controls within the database if you’ve got this foundation in place.”
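The bypass Bradescu describes is easy to demonstrate: a value stored unencrypted in a database can be read straight out of the underlying file, with no SQL, credentials or audit trail involved. A small demonstration with SQLite, which stands in here for any unencrypted datastore:

```python
import os
import sqlite3
import tempfile

# Store a sensitive value unencrypted, then read it back WITHOUT the
# database engine: straight from the file on disk.
path = os.path.join(tempfile.mkdtemp(), "payroll.db")
con = sqlite3.connect(path)
con.execute("CREATE TABLE employees (name TEXT, ssn TEXT)")
con.execute("INSERT INTO employees VALUES ('Alice', '078-05-1120')")
con.commit()
con.close()

with open(path, "rb") as f:   # no SQL, no credentials, no audit trail
    raw = f.read()
print(b"078-05-1120" in raw)  # True: the value is visible in the raw file
```

With transparent data encryption in place, the same byte scan would yield only ciphertext, which is exactly the point of habit 4.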

The IOUG survey shows that there’s been steady improvement in the area of database encryption over the past five years. Back in 2008, only 57 percent of organizations reported encrypting data at rest in some or all of their databases. Today that figure is 70 percent.

5. They institute measures to prevent application bypass
In a similar vein, organizations that execute strong database security understand the importance of ensuring that there’s no end-around to accessing information stored in a database beyond the application that was meant to connect to it.

“We want to make sure people can’t access the database unless they’re going through the application,” Bradescu says.

According to the IOUG survey, there’s a ten point differential between leaders and laggards in this area. Only 28 percent of leader organizations allow users to access data directly from the database using ad hoc tools or spreadsheets, while 38 percent of laggards allow the same activities.

6. They manage privileged user access
The super-user accounts that offer up the proverbial keys to the database kingdom must be managed well to ensure the integrity of a database’s contents. This includes not only the administrative accounts used by DBAs to manage databases, but also the application accounts that are typically machine-controlled but which have been given inordinate amounts of database privileges in order to make the lives of developers easier when programming connections to the database.

“More organizations are monitoring their data assets and are taking measures to keep tabs on super-users,” wrote McKendrick. “However, most are still not in a position to monitor the online activities of privileged users.”

This is one area of stark differences between leaders and laggards in database defense. Nearly half of leaders report that they have measures in place to prevent privileged users from tampering with sensitive information, while only 22 percent of laggards can say the same. The percentage across all organizations is 34 percent, which is up 10 percentage points since 2010, when fewer than a quarter of organizations said they could thwart privileged user tampering.

7. They keep production data in production databases
The sloppy spread of production data into areas like staging, QA and development has long been an Achilles heel of database security programs. Strong database security programs depend on production data remaining in database environments with all the controls in place, rather than in other environments that lack the same level of security.

According to the IOUG survey, half of respondents still use live production data outside the data center.

“In addition, despite any heightened sense of data security exhibited in recent years, there has been a surge in the shipping of live production data off-site since the first time this question was asked in 2008,” McKendrick reports.

Article source: http://www.darkreading.com/database/7-habits-of-highly-secure-database-admin/240164634

French gov used fake Google certificate to read its workers’ traffic

A French government agency has been caught signing fake SSL certificates in order to impersonate Google.

The bogus certificates were endorsed by the certificate authority of the French Treasury, DG Trésor. And the Treasury’s own CA certificate was, in turn, vouched for by IGC/A (Infrastructure de Gestion de la Confiance de l’Administration) and ultimately ANSSI, the French equivalent of the CESG assurance wing of GCHQ.
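That chain of endorsements is precisely what makes the forgery effective: a client accepts any certificate that walks up, issuer by issuer, to a root in its trust store. The toy model below (names simplified, real validation also checks signatures, validity periods, and revocation) shows why a forged leaf signed by a trusted intermediate validates like any legitimate certificate:

```python
# Toy model of X.509 chain-of-trust validation, for illustration only.

TRUST_ANCHORS = {"IGC/A"}  # roots shipped in the client's trust store

# issued_by maps each certificate to the authority that signed it
issued_by = {
    "fake-google-cert": "DG Tresor CA",
    "DG Tresor CA": "IGC/A",
}

def chain_is_trusted(cert, issued_by, anchors, max_depth=8):
    """Walk from the leaf certificate up toward a trust anchor."""
    for _ in range(max_depth):
        if cert in anchors:
            return True
        if cert not in issued_by:
            return False
        cert = issued_by[cert]
    return False

# The rogue intermediate chains to a trusted root, so the
# forged leaf is accepted:
print(chain_is_trusted("fake-google-cert", issued_by, TRUST_ANCHORS))  # True
```

This is the trust model’s structural weakness: any one of the hundreds of trusted roots (or their intermediates) can vouch for any domain on the internet.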


It seems the French Treasury department created the counterfeit certificate in order to monitor employee traffic that would otherwise pass through its network wrapped in encryption. The dodgy certificate allowed man-in-the-middle SSL interception, a heavily frowned-upon practice that violates the trust model of internet security. The practical upshot was that any email or other data sent between French Ministry of Finance officials and Google was wide open to snooping by the French government and perhaps others.

In a blog post, Google security engineer Adam Langley said the company acted immediately, updating its Chrome browser to block all certificates issued by the intermediate CA, before alerting ANSSI and other browser vendors and requesting an explanation.

ANSSI has found that the intermediate CA certificate was used in a commercial device, on a private network, to inspect encrypted traffic with the knowledge of the users on that network. This was a violation of their procedures and they have asked for the certificate in question to be revoked by browsers. We updated Chrome’s revocation metadata again to implement this.

This incident represents a serious breach and demonstrates why Certificate Transparency, which we developed in 2011 and have been advocating for since, is so critical.

ANSSI issued a statement (English language version here) blaming “human error” for the creation of the dodgy certificates and downplaying the significance of the issue by arguing it had “no consequences on security for [the] general public”.

As a result of a human error which was made during a process aimed at strengthening the overall IT security of the French Ministry of Finance, digital certificates related to third-party domains which do not belong to the French administration have been signed by a certification authority of the DGTrésor (Treasury) which is attached to the IGC/A.

The mistake has had no consequences on the overall network security, either for the French administration or the general public. The aforementioned branch of the IGC/A has been revoked preventively. The reinforcement of the whole IGC/A process is currently under supervision to make sure no incident of this kind will ever happen again.

Security experts have raised a quizzical eyebrow at claims that “human errors” might “accidentally” lead to spoofed digital certificates, an explanation that would have seemed a tad unlikely even before the Snowden revelations cast a harsh light on the sneaky practices of signals intelligence agencies. The incident has prompted an indignant missive to the EU Commission requesting a privacy investigation.

The incident is not without precedents. Last year SSL certificate authority Trustwave admitted it issued a digital “skeleton key” that allowed an unnamed private biz to spy on SSL-encrypted connections within its corporate network. Trustwave came clean without the need for pressure beforehand. Even so, its actions split security experts and prompted calls on Mozilla’s Bugzilla security list to remove the Trustwave root certificate from Firefox. Ultimately, Trustwave avoided the “death penalty”, escaping with a strong rebuke over the SSL skeleton key incident.

One difference is that Trustwave owned up voluntarily while in the latest case Google spotted something was amiss thanks to the use of pinning technology.
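Pinning catches exactly this case: instead of trusting any chain that validates, the client also checks the key the server actually presents against a hard-coded expected fingerprint. A simplified sketch, with made-up key material standing in for real public keys:

```python
import hashlib

# Hypothetical pinned fingerprints for a domain; in practice Chrome
# ships SHA-256 hashes of the expected public keys for Google domains.
PINNED_FINGERPRINTS = {
    hashlib.sha256(b"google-real-public-key").hexdigest(),
}

def pin_check(presented_key: bytes) -> bool:
    """Accept a connection only if the presented key matches a pin."""
    fingerprint = hashlib.sha256(presented_key).hexdigest()
    return fingerprint in PINNED_FINGERPRINTS

print(pin_check(b"google-real-public-key"))     # True  -- genuine key
print(pin_check(b"dg-tresor-mitm-public-key"))  # False -- interception detected
```

A man-in-the-middle proxy must present its own key, so even a chain the CA system considers valid fails the pin check, which is how Chrome flagged the rogue certificates.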

Back in 2011 hackers broke into the systems of Comodo and DigiNotar, granting themselves the right to issue fake digital credentials. The fraudulent DigiNotar certificates1 were later used in a man-in-the-middle attack on ordinary internet users in Iran. Users in the Islamic Republic who thought they were talking directly to Gmail, Skype and other services were actually going through an intermediary who would have been able to read their traffic, logs at DigiNotar revealed.

Audits of DigiNotar revealed systemic security failures that prompted browser developers to revoke its trusted status, the same sanction a minority would like to see applied against Trustwave and now (to a lesser extent) ANSSI.

A technical discussion of the issue can be found on the Bugzilla mailing list here. A primer on how the chain of trust in SSL certificates works can be found in a blog post by Paul Ducklin of Sophos. ®

Rootnote

1Recent revelations from NSA whistleblower Edward Snowden related to hacks against Brazilian oil company Petrobras “implies that the 2011 DigiNotar hack was either the work of the NSA, or exploited by the NSA,” according to noted cryptographer Bruce Schneier. Other specialists dispute this interpretation. However, what’s not in dispute is that using false digital certificates to run man-in-the-middle attacks is a common tactic of intelligence agencies and other capable attackers, one that underlines the fragility of the system that underpins e-commerce and secure communication on the net.


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2013/12/10/french_gov_dodgy_ssl_cert_reprimand/

Exploits no more! Firefox 26 blocks all Java plugins by default


The latest release of the Firefox web browser, version 26, now blocks Java software on all websites by default unless the user specifically authorizes the Java plugin to run.

The change has been a long time coming. The Mozilla Foundation had originally planned to make click-to-run the default for all versions of the Java plugin beginning with Firefox 24, but decided to delay the change after dismayed users raised a stink.


Beginning with the version of Firefox that shipped on Tuesday, whenever the browser encounters a Java applet or a Java Web Start launcher, it first displays a dialog box asking for authorization before allowing the plugin to launch.

Users can also opt to click “Allow and Remember,” which adds the current webpage to an internal whitelist so that Java code on it will run automatically in the future, without further human intervention.

Mozilla’s move comes after a series of exploits made the Java plugin one of the most popular vectors for web-based malware attacks over the past few years. So many zero-day exploits targeting the plugin have been discovered, in fact, that the Firefox devs have opted to give all versions of Java the cold shoulder, including the most recent one.

[Screenshot: the Firefox click-to-run dialog box. You can whitelist certain pages if you want – just be sure you know what you’re doing]

Generally speaking, Mozilla plans to activate click-to-run for all plugins by default, although the Adobe Flash Player plugin has been given a pass so far, owing to the prevalence of Flash content on the web.

In addition to the change to the default Java plugin behavior, Firefox 26 includes a number of security patches, bug fixes, and minor new features. The official release notes are available here and a full list of changes in the release can be had here.

As usual, current Firefox installations can be upgraded to version 26 using the internal update mechanism, and installers for the latest release are available from the Firefox homepage. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2013/12/10/firefox_26_blocks_java/