
How To Train Your Users

Let’s face it: getting users to understand and practice good security is hard. Really hard. It would be difficult enough if the technology environment remained constant for a while, but we all know how often that happens. That’s why it’s especially important that we focus on raising user awareness of basic security concepts that are independent of specific technologies. One example is helping people understand what needs to be protected and why. I have encapsulated the basics of this in a mnemonic I call “The Four Cs.”

The Four Cs are computers, credentials, connections, and content. If you can get your users into a mindset of thinking about protection in these four areas, it will be one small but important step toward a secure chair-to-keyboard interface.

Let’s review each of the Cs in a bit more detail. I use the term “computers” here as a shorthand, but it also includes smartphones (C is also for “cell”) and tablets. Users have to understand that sensitive data is only as secure as the device used to access it. Ask them to imagine an attacker who can see everything they do on their home PCs — can the attacker see customer data or trade secrets? If so, what are the users doing to make sure this doesn’t happen?

Credentials include passwords, security tokens, and anything else users need to log in to company or work-related systems. Again, ask your users to imagine the worst. What are users trusted to do or see on behalf of the company? What might a motivated enemy do if she had the same access? What would happen to the user whose credentials were used to carry out a successful attack?

With all the publicity recently around the NSA leaks, it should be easy to get users interested in connection security. Encourage them to shift their thinking from government snoops to garden variety criminals and industrial saboteurs. Would your users (deliberately) leave customer data or proprietary information sitting unattended on a table in a hotel lobby? If not, they shouldn’t leave it floating around unencrypted on the hotel Wi-Fi, either.

Content protection is about which data goes where. Remind users how easy it is to forward an email or send a file to someone, sometimes even accidentally. Perhaps they shouldn’t make it that easy for a colleague or third party to do the same with patient records or customer credit card numbers. Content protection also ties into the previous three areas: should important information be stored someplace that requires only a single shared password to access it? Do users trust the security of their computers as much as the security of the company’s servers?

Too often, user education about security starts with the how, skipping right over the what and the why. The Four Cs don’t cover every important aspect of user behavior; resistance to social engineering, for example, is notably absent. They do, however, offer a solid base of understanding for how users contribute to an organization’s collective security. Once this idea is in users’ heads, the questions about how to protect the computers, credentials, connections, and content will inevitably follow. And those are the questions that any security professional should be happy to hear.

Article source: http://www.darkreading.com/sophoslabs-insights/how-to-train-your-users/240161089

Keep Calm, Keep Encrypting — With A Few Caveats

Encryption remains a key security tool despite newly leaked documents revealing the National Security Agency’s efforts to bend crypto and software to its will in order to ease its intelligence gathering, experts say. But these latest NSA revelations serve as a chilling wake-up call for enterprises to rethink how they lock down their data.

“The bottom line is what Bruce Schneier said: for all of these [NSA] revelations, users are better off using encryption than not using encryption,” says Robin Wilton, technical outreach director of the Internet Society. “But if you’re a bank [or other financial institution] and you rely on the integrity of your transactions, what are you supposed to be doing now? Are you compromised?”

The New York Times, The Guardian, and ProPublica late last week reported on another wave of leaked NSA documents provided by former NSA contractor Edward Snowden, revealing that the agency has been aggressively cracking encryption algorithms and even urging software companies to leave backdoors and vulnerabilities in place in their products for the NSA’s use. The potential exposure of encrypted email, online chats, phone calls, and other transmissions has left many organizations reeling over what to do now to keep their data private.

[Concerns over backdoors and cracked crypto executed by the spy agency are prompting calls for new, more secure Internet protocols – and the IETF will address these latest developments at its November meeting. See Latest NSA Crypto Revelations Could Spur Internet Makeover.]

[UPDATE 9/11/13 7:30am: The New York Times reported last night that the Snowden documents “suggest” the NSA “generated one of the random number generators used in a 2006 N.I.S.T. standard — called the Dual EC DRBG standard — which contains a back door for the N.S.A.”]

Still a mystery is which encryption specifications, if any, were actually weakened under pressure from the NSA, and which vendor products may have been backdoored. The National Institute of Standards and Technology (NIST), which heads up crypto standards efforts, today issued a statement in response to questions raised about the encryption standards process at NIST in the wake of the latest NSA program revelations: “NIST would not deliberately weaken a cryptographic standard. We will continue in our mission to work with the cryptographic community to create the strongest possible encryption standards for the U.S. government and industry at large,” NIST said.

NIST reiterated that its mission is to develop standards and that it works with crypto experts from around the world – including experts from the NSA. “The National Security Agency (NSA) participates in the NIST cryptography development process because of its recognized expertise. NIST is also required by statute to consult with the NSA,” NIST said in its statement.

The agency also announced today that it has re-opened public comments for Special Publication 800-90A and draft Special Publications 800-90B and 800-90C, which cover random-bit generation methods. These specifications have come under suspicion from some experts because the NSA was involved in their development, and NIST says that if any vulnerabilities are found in them, it will fix them.

The chilling prospect of the NSA building or demanding backdoors in encryption methods, software products, or Internet services is magnified by concerns that such backdoors would also give nation-states and cybercriminals pre-drilled holes to exploit.

“There’s a strong technological argument that putting backdoors in encryption is just a foolish thing to do. Because if you do that, it’s just open to abuse” by multiple actors, says Stephen Cobb, security evangelist for ESET. “This makes it very complicated for businesses. I would not want to be a CSO or CIO at a financial institution right now.”

So how can businesses ward off the NSA or China and other nation-states or Eastern European cybercriminals if crypto and backdoors are on the table?

Use encryption
Encryption is still very much a viable option, especially strong encryption such as the 128-bit Advanced Encryption Standard (AES). “Don’t stop using encryption; review the encryption you’re using, and potentially change the way you’re doing it. If you’ve got a Windows laptop with protected health information, at least be using BitLocker,” for example, says ESET’s Cobb.
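As a concrete illustration of that advice, here is a minimal sketch of authenticated encryption with 128-bit AES in GCM mode using the third-party Python cryptography package. The data, file name and library choice are my own stand-ins for the example, not anything the article prescribes.

```python
# Minimal sketch: authenticated encryption with AES-128-GCM, using the
# third-party "cryptography" package. The data and file name are invented
# stand-ins.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=128)     # 128-bit AES, as cited above
aesgcm = AESGCM(key)

plaintext = b"patient-id,diagnosis\n1001,example-record\n"   # stand-in sensitive data
nonce = os.urandom(12)                        # must be unique per message under this key
ciphertext = aesgcm.encrypt(nonce, plaintext, None)

with open("records.enc", "wb") as f:          # store the nonce alongside the ciphertext
    f.write(nonce + ciphertext)

# Decryption reverses the steps; any tampering with the file raises InvalidTag.
blob = open("records.enc", "rb").read()
assert aesgcm.decrypt(blob[:12], blob[12:], None) == plaintext
```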

David Frymier, CISO and vice president at Unisys, says even the NSA would be hard-pressed to break strong encryption, making it the best bet. Even Snowden has said as much, Frymier notes.

Still unclear is whether the actual algorithms the NSA has cracked will ever be publicly identified.

“Most algorithms are actually safe,” says Tatu Ylonen, creator of the SSH protocol and CEO and founder of SSH Communications Security.

Beef up your encryption key management
Frymier is skeptical of the claims that the NSA worked to weaken any encryption specifications. “I just don’t find that [argument] compelling. All of these algorithms are basically published in the public domain and they are reviewed by” various parties, he says.

Even so, the most important factor is how the keys are managed: how companies deploy the technology, store their keys, and allow access to them, experts say. The security of the servers running and storing that code is also crucial, especially since the NSA is reportedly taking advantage of vulnerabilities much in the way hackers do, experts note.

Dave Anderson, a senior director with Voltage Security, says it’s possible for the NSA to decrypt a financial transaction, but probably only if the crypto wasn’t implemented correctly or the keys weren’t properly managed. “A more likely way that the NSA is reading Internet communications is through exploiting a weakness in key management. That could be a weakness in the way that keys are generated, or it could be a weakness in the way that keys are stored,” Anderson says. “And because many of the steps in the lifecycle of a key often involve a human user, this introduces the potential for human error, making key lifecycle management never as secure as the protection provided by the encryption itself.”
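One common mitigation this advice points toward is envelope encryption: wrapping each data key under a separately held key-encryption key (KEK) so data keys never sit in the clear next to the data they protect. The sketch below is a minimal illustration using the Python cryptography package; the in-memory KEK is a stand-in for a key held in an HSM or key-management service, an assumption of this example rather than anything the article specifies.

```python
# Minimal sketch of envelope encryption: wrap (encrypt) a per-dataset data key
# under a key-encryption key (KEK) so the data key is never stored in the clear
# beside the data it protects. The KEK below is generated in memory purely for
# illustration; in practice it would live in an HSM or key-management service
# with tightly restricted, audited access.
import os
from cryptography.hazmat.primitives.keywrap import aes_key_wrap, aes_key_unwrap

kek = os.urandom(32)                    # stand-in for an HSM/KMS-held key
data_key = os.urandom(32)               # fresh 256-bit key that encrypts the data

wrapped = aes_key_wrap(kek, data_key)   # RFC 3394 AES key wrap
# ... persist `wrapped` alongside the ciphertext; discard data_key after use ...

assert aes_key_unwrap(kek, wrapped) == data_key   # only KEK holders can recover it
```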

Keep your servers up-to-date with patches, too, because weaknesses in the operating system or other software running on the servers that support the crypto software are other possible entryways for intruders or spies.

One of the most common mistakes: not restricting or knowing who has access to the server storing crypto keys, when, and from where, according to SSH’s Ylonen. “And that person’s access must be properly terminated when it’s no longer needed,” he says. “I don’t think this problem is encryption: it is overall security.”

Ylonen says it’s also a wakeup call for taking better care and management of endpoints.

Not having proper key management is dangerous, he says. One of SSH Communications’ bank customers had more than 1.5 million keys for accessing its production servers, but the bank didn’t know who had control over the keys, he says.

“There are two kinds of keys–keys for encryption and keys for gaining access that can give you further access to encryption keys,” he says. And access-granting keys are often the worst-managed, he says. “Some of the leading organizations don’t know who has access to the keys to these systems,” he says.

“If you get the encryption keys, you can read [encrypted data]. If you get the access keys, you can read the data, and you can modify the system … or destroy the data,” he says.

Conduct a risk analysis on what information the NSA, the Chinese, or others would be interested in

Once you’ve figured out what data would be juicy for targeting, double down on protecting it.

“Whatever that is, protect it using modern, strong encryption, where you control the endpoints and you control the keys. If you do that, you can be reasonably assured your information will be safe,” Unisys’ Frymier says.

In the end, crypto-cracking and pilfered keys are merely weapons in cyberspying and cyberwarfare, experts say.

“The NSA wants access to data … they want access to passwords and credentials to access the system so it can be used for offensive purposes if the need arises, or for data collection,” Ylonen says. “They want access to modern software and applications so they are later guaranteed access to other systems.”

Article source: http://www.darkreading.com/authentication/keep-calm-keep-encrypting-with-a-few/240161105

Men are twice as likely to spy on their partner’s phone

A study has found that men are almost twice as likely to snoop on their partner’s mobile phone, peeking without permission to read “incriminating messages or activity” that might point to infidelity.

The study, which came out of www.mobilephonechecker.co.uk and was written up by The Telegraph, found that out of 2,081 surveyed UK adults currently in a relationship, 62 percent of men said they’ve peeked at a current or former partner’s mobile phone without permission.

That compares to 34 percent of women who admitted to doing so.

Most of those – 89 percent – who admitted to mobile snooping said they did so to check on conversations that might stray into the romantic or sexual and thereby indicate signs of infidelity.

How did the snoops crack the phone’s passcode or password? Easy enough, no hacking involved: 52 percent said that they already knew the credential.

Out of those who mobile-spied, 48 percent confessed that they did, in fact, find incriminating evidence of unfaithfulness.

And out of that lot, 53 percent said they got tipped off by reading text messages, while 42 percent got wind of it through direct Facebook messaging – the two most common means of ferreting out infidelity.

Many of us, evidently, cherish fidelity over our loved ones’ rights to privacy. So how do the survey participants feel about all that?

The study found that 31 percent of those surveyed said they’d end their relationship if they found that their partner had rifled through their messages.

Another 36 percent said that they wouldn’t wind up in that situation to begin with, given that they’d never find themselves in a position where their partner could conduct a highly personalized, mini-NSA surveillance campaign against them.

But how exactly do you prevent your partner – or random strangers who find your lost phone, or thieves for that matter – from reading your private text or Facebook messages?

(None of this is to condone sneaking around on your partner, mind you. Better off signing up with the polyamorous camp and being honest about it all, if you ask me.)

Privacy is privacy and deserves to be protected, I submit, whether we’re talking about covering philandering tracks, avoiding data abused in recrimination after a bad breakup, or protecting political activists.

How do you keep your partner from prying? Fortunately, it’s safe to say that most of us are not dating the NSA, so we can assume that the goal is achievable.

An obvious step to take, of course, is to avoid sharing your mobile phone passcode or password with your partner.

And regardless of protecting one’s dalliances, it’s important to protect mobile phone data, given what’s at stake if you lose your phone.

As Sophos found in an October 2012 study, 42 percent of devices that were lost or left in insecure locations had no active security measures to protect data.

Dire as the consequences of discovered infidelity might be, lost devices point to a much wider world of jeopardized privacy.

We’re talking here about mobile data that could give access to work email, potentially exposing confidential corporate information; sensitive personal information such as national insurance numbers, addresses and dates of birth; payment information such as credit card numbers and PINs; and access to social networking accounts via apps or web browser-stored cookies.

Sophos offers mobile protection for businesses, as well as free protection for personal Androids.

It’s your data, and it’s no-one else’s business.

So keep it safe!

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/DO1AcMuOx0o/

Brit and Danish boffins propose NSA-proof crypto for cloud computing

It’s more likely that the NSA has devoted its efforts to key capture and side-channel attacks rather than brute-forcing its way through ciphertext en masse – but it’s also true that our crypto maths won’t last forever.

Which draws attention to projects like this one (PDF), which is looking at protection of multi-party computation (MPC) activities.


According to Phys.org: “The idea behind Multi-Party Computation is that it should enable two or more people to compute any function of their choosing on their secret inputs, without revealing their inputs to either party. One example is an election; voters want their vote to be counted but they do not want their vote made public.”
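To make the voting example concrete, here is a toy sketch of additive secret sharing, the basic building block underneath MPC protocols such as SPDZ: each voter splits their input into random-looking shares, the parties combine the shares they hold locally, and only the final tally is ever reconstructed. This illustrates the concept only; it is not the SPDZ protocol itself.

```python
# Toy sketch of additive secret sharing over a prime field: two voters learn
# the tally without either seeing the other's vote. Illustrative only; real
# MPC protocols such as SPDZ add MACs and preprocessing on top of this idea.
import secrets

P = 2**61 - 1  # prime modulus for the toy field

def share(value, parties=2):
    """Split `value` into `parties` additive shares that sum to it mod P."""
    shares = [secrets.randbelow(P) for _ in range(parties - 1)]
    shares.append((value - sum(shares)) % P)
    return shares

vote_alice, vote_bob = 1, 0                  # secret inputs
a_shares, b_shares = share(vote_alice), share(vote_bob)

# Each party adds the shares it holds -- no party ever sees the other's vote.
partial_0 = (a_shares[0] + b_shares[0]) % P
partial_1 = (a_shares[1] + b_shares[1]) % P

tally = (partial_0 + partial_1) % P          # only the sum is ever revealed
assert tally == vote_alice + vote_bob
```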

As The Register understands the system, this might also be useful in cloud-based collaboration, since it would protect Average Joe’s data against the rest of the world, including Average Joe’s boss, if it so happened that her machine were compromised.

The aim of the work by a UK-Danish collaboration is to strap the supercharger onto a protocol called SPDZ – pronounced Speedz – to give it real-world performance.

In SPDZ, two machines working on a multi-party computation problem can do so without revealing their data to each other. The researchers describe SPDZ as: “secure against active static adversaries in the standard model, is actively secure, and tolerates corruption of n-1 of the n parties. The SPDZ protocol follows the preprocessing model: in an offline phase some shared randomness is generated, but neither the function to be computed nor the inputs need be known; in an online phase the actual secure computation is performed.”

Let’s unpick this a little. The claims of security aren’t remarkable, and the protocol is designed so that your data will remain secure even if everybody else is compromised (“n-1 of the n parties”).

The protocol relies on a message authentication code (MAC – not to be confused with Media Access Control), and this is what made it computationally demanding. The MAC is partly shared between the parties, who had to reveal their shares of the code in order to communicate.

The problem with this is that revealing the code meant it had to be renegotiated for every communication – hence the protocol’s slow performance. Key generation was also demanding, and covert security was considered weak; the proposed new system claims to be more secure “in the offline phase”.
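As a rough illustration of the MAC idea described above (a simplification, not the paper’s actual check procedure): each shared value x carries additive shares of a tag alpha·x under a shared key alpha, and verifying the tag when x is opened requires the parties to publish combined share values, which is where the communication cost creeps in.

```python
# Rough sketch of an additively shared information-theoretic MAC, the device
# SPDZ-style protocols use to catch tampering. A shared value x carries shares
# of the tag alpha*x; when x is opened, each party publishes
# sigma_i = mac_share_i - alpha_share_i * x, and the sigmas must sum to zero.
# Heavily simplified for illustration -- not the paper's actual protocol.
import secrets

P = 2**61 - 1   # toy prime field

def share(value, parties=2):
    """Split value into additive shares that sum to it mod P."""
    shares = [secrets.randbelow(P) for _ in range(parties - 1)]
    shares.append((value - sum(shares)) % P)
    return shares

alpha = secrets.randbelow(P)            # global MAC key (itself shared in a real run)
alpha_shares = share(alpha)

x = 1234                                # a value the parties are about to open
mac_shares = share(alpha * x % P)       # shares of the tag gamma(x) = alpha * x

sigmas = [(mac_shares[i] - alpha_shares[i] * x) % P for i in range(2)]
assert sum(sigmas) % P == 0             # passes only if x and its tag are consistent
```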

The system as a whole is described on Slashdot this way:

“MPC is similar in concept to the “zero knowledge proof” – a set of rules that would allow parties on one end of a transaction to verify that they know a piece of information such as a password by offering a different piece of information that could be known only to the other party. The technique could allow secure password-enabled login without requiring users to type in a password or send it across the Internet. Like many other attempts at MPC, however, SPDZ was too slow and cumbersome to be practical.”

If the paper – which will be presented at this week’s ESORICS 2013 conference – holds up, it’ll eventually add a new string to the bow of those that want to protect information, rather than snoop on it. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2013/09/10/boffins_propose_more_spookproof_crypto/

New Apple iPhone 5s to feature “Touch ID” fingerprint authentication

After months of speculation, Apple has unveiled the latest iteration of its iPhones, with the usual fanfare and drama from Apple, and obsessive queuing from fans.

Catching most headlines have been the usual details of improved camera and battery life, and the availability of an “affordable” model, the 5c, a plastic affair in a wide range of colours. The main metal model will offer only gold, silver or grey.

Of most interest from a security viewpoint is a fingerprint-based authentication system in the top-of-the-line 5s, referred to as “Touch ID”.

The authentication system, based on a new material for the home button and a metal sensor ring around it, has been the subject of numerous rumours and leaked photos and specs already.

Speculation about Apple’s interest in fingerprints goes back at least as far as 2009, resurfaces each time a new version of the iPhone is launched, and has grown steadily ever since Apple’s pricey acquisition of fingerprint tech firm AuthenTec last summer.

Today’s confirmation at the iPhone 5s/5c launch ceremony makes it all official at last.

According to Apple’s promotional material, the sensor:

uses advanced capacitive touch to take, in essence, a high-resolution image of your fingerprint from the sub-epidermal layers of your skin. It then intelligently analyses this information with a remarkable degree of detail and precision.

As well as unlocking the phone, the sensor will be able to approve purchases at the Apple store.

Fingerprint authentication has been a common sight in laptops for some time, with major vendors including Dell, Lenovo and Toshiba pushing their own built-in variations, usually available as an option alongside more traditional login methods.

There are also a range of other implementations available, including many smartphone apps and external readers supported by the Windows Biometric Framework and some leading password managers.

Fingerprints thus probably rank a little above facial recognition as the most widely-deployed biometric authentication technique at the moment.

In the past, however, they have proved rather unreliable and plagued with security worries, although suspected flaws are not always confirmed. Nevertheless, many fingerprint scanners seem to be open to spoofing.

Fingerprints are not secret: we leave copies of them wherever we go, even if we’re trying hard not to, as cop show aficionados will be well aware.

Once someone devious has got hold of a copy, purely visual sensors can be fooled by photographs, while more sophisticated techniques which measure textures, temperatures and even pulses are still open to cheating using flesh-like materials, or even gelatin snacks.

Just how hard it will be to defeat Apple’s recognition system remains to be seen, but as crypto guru Bruce Schneier has pointed out, there’s a big danger in using fingerprints to access online services: the temptation to store the fingerprint info in a central database.

Unlike passwords, of course, if your fingerprint data is lifted from a hacked database, you can’t simply change it, short of getting mediaeval on your hands with acid, sandpaper or some other hardened-gangster technique.

So, as expected, Apple has opted to keep all information local to the iPhone – indeed, it is apparently kept in a “secure enclave” on the new A7 chip and can only be accessed by the print sensor itself.

Expect this storage area and the connections to it to become the subject of frenzied investigations by hackers of all persuasions.

Of course, Apple is not alone in looking into fingerprints, with arch-rivals Samsung also rumoured to be making moves in that direction. (Samsung was a major customer of AuthenTec before it was acquired.)

In the long term, how similar their approaches are may be a significant issue for all of us, whatever our smartphone affiliation and whether or not we worry much about privacy, and not just thanks to the inevitable legal rumpus.

There are two basic approaches to security: either the way things work is kept proprietary and secret, as far as possible, or it’s made open for general consumption, and more importantly for verification.

A cross-vertical group, the FIDO Alliance, was set up earlier this year to develop open specifications for biometric authentication standards, with members including Google, PayPal, hardware makers like Lenovo and LG, and a raft of biometrics and authentication specialists. Beleaguered phonemaker BlackBerry is the latest big-name inductee.

The alliance’s aim – to create a universal approach to implementing biometrics in combination with existing passwords and two-factor dongles – is a noble one.

Sadly, given Apple’s history of playing well with others, it’s pretty likely that, as with their connector cables and DRM systems, their fingerprint setup will remain aloof from any attempts to build a truly universal consensus.

Even if a two-culture system prevails, widespread deployment in mass-market handhelds may well be a gamechanger for the adoption of biometric authentication. Touch ID and its inevitable followers could be a major part of all our futures.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/oQtSTqHTIdw/

Torvalds shoots down call to yank ‘backdoored’ Intel RdRand in Linux crypto

Linux supremo Linus Torvalds has snubbed a petition calling for his open-source kernel to spurn the Intel processor instruction RdRand for generating random numbers – feared to be nobbled by US spooks to produce cryptographically weak values.

Torvalds branded the England-based bloke who created the petition, Kyle Condon, “ignorant”. The head Penguista said anyone who backed the call to remove RdRand from his operating system kernel should learn how crypto works.


The fiery Finn wrote at the bottom of Condon’s call to action on Change.org: “Where do I start a petition to raise the IQ and kernel knowledge of people? Guys, go read drivers/char/random.c. Then, learn about cryptography. Finally, come back here and admit to the world that you were wrong.

“Short answer: we actually know what we are doing. You don’t.”

Torvalds argued in his mild outburst that the values from RdRand are combined with other sources of randomness, which would thwart any attempts to game the processor’s output – but it’s claimed that mix is trivial (involving just an exclusive OR) and can be circumvented by g-men.

Posted on 9 September, the petition drew just five signatures and now features an update message reading “petition closed”. Condon ignited Torvalds’ ire by calling for signatures to his motion that Torvalds, as Linux Kernel maintainer: “Please remove RdRand from /dev/random, to improve the overall security of the Linux kernel.”

The catalyst for the petition seems to be the belief that the RdRand instruction in Intel processors has been compromised by the NSA and GCHQ, following the latest disclosures from whistleblower Edward Snowden.

The pseudo-device /dev/random generates a virtually endless stream of random numbers on GNU/Linux systems, which are crucial for encrypting information in a secure manner. RdRand is an instruction [PDF] found in modern Intel chips that stashes a random number from a “high-quality, high-performance entropy” source in a given CPU register. These (hopefully) unpredictable values are vital in producing secure session keys, new public-private key pairs, and padding in modern encryption technology. It’s feared that spooks within the US intelligence agencies have managed to persuade Intel to hobble that instruction or otherwise ensure its output produces values that weaken the strength of encryption algorithms relying on that random data.

According to the latest clutch of Snowden documents published by ProPublica, The New York Times and The Guardian last week, the NSA and GCHQ have broken basic encryption on the web – mostly by cheating rather than defeating the mathematics involved: unnamed chipsets are believed to have been compromised at the design stage so that encrypted data generated on those systems is easier to crack by spooks armed with supercomputers.

The details are short, but the implication is that American and British spies can crack TLS/SSL connections used to secure HTTPS websites and virtual private networks (VPNs), allowing them to harvest sensitive data such as trade secrets, passwords, banking details, medical records, emails, web searches, internet chats and phone calls, and much more.

Given that RdRand is present in quite a few PCs and servers powering or using chunks of the internet, conspiracy theorists are terrified that the instruction is compromised. And given that Linux on Intel also runs large parts of the internet and of the systems talking to each other online, the reasoning seems to be that traffic running on Linux boxes can also be seen by spooks thanks to RdRand. QED: it should be disabled in Linux.

However, as Torvalds pointed out in response to the petition, RdRand is one of many inputs used by the Linux kernel’s entropy pool to generate random data.

The kernel chieftain wrote: “We use rdrand as _one_ of many inputs into the random pool, and we use it as a way to _improve_ that random pool. So even if rdrand were to be back-doored by the NSA, our use of rdrand actually improves the quality of the random numbers you get from /dev/random. Really short answer: you’re ignorant.”
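As a toy illustration of that mixing argument (not the kernel’s actual algorithm, which lives in drivers/char/random.c and goes well beyond a plain XOR): XOR-combining independent sources leaves the output at least as unpredictable as the best single source, so a rigged source cannot weaken the pool on its own.

```python
# Toy illustration of the mixing argument: XOR-combining independent entropy
# sources leaves the result at least as unpredictable as the best single source,
# so a back-doored source cannot weaken the pool by itself. The real kernel pool
# (drivers/char/random.c) uses hashing and entropy accounting far beyond this.
import os
import secrets

def backdoored_source(nbytes):
    """Stand-in for a hypothetically rigged hardware RNG: fully predictable."""
    return bytes(nbytes)                  # all zeros, the worst case

def good_source(nbytes):
    return secrets.token_bytes(nbytes)    # assumed-good entropy

def mixed(nbytes):
    a, b, c = backdoored_source(nbytes), good_source(nbytes), os.urandom(nbytes)
    return bytes(x ^ y ^ z for x, y, z in zip(a, b, c))

print(mixed(16).hex())    # unpredictable as long as any one input is unpredictable
```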

Random-number generation for the kernel was implemented in 1994 by Theodore Ts’o using secure hashes instead of ciphers. As Ts’o wrote here following the latest selectively released information by journalists allied to Snowden:

I am so glad I resisted pressure from Intel engineers to let /dev/random rely only on the RDRAND instruction… Relying solely on the hardware random number generator which is using an implementation sealed inside a chip which is impossible to audit is a BAD idea.

®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2013/09/10/torvalds_on_rrrand_nsa_gchq/

Exostar Introduces Remote Identity Proofing Offering With Experian

HERNDON, VA, September 10, 2013 – Exostar, whose cloud-based solutions enable secure, cost-effective business-to-business collaboration, today announced it has added a standards-compliant remote identity proofing option to its existing identity and access management solution suite. The new service offering allows organizations with large and/or geographically dispersed communities of employees, partners, and other third parties to:

– Simplify and streamline the identity vetting and credentialing process;

– Reduce the cost of authenticating remote users, without sacrificing security;

– Support growing communities seamlessly, regardless of scale; and

– Enhance productivity and security across the community by facilitating adoption of stronger authentication requirements for access to applications and data.

“With organizations increasingly relying on global communities, providing and managing access for individuals throughout the community becomes more complex, costly, and critical. Our new remote identity proofing capability makes it easier for our customers and their communities to collaborate effectively while mitigating risk,” said Vijay Takanti, Exostar’s Vice President of Security and Collaboration. “By combining strong identity verification with equally-strong identity assurance and authentication, we are delivering a compelling single sign-on experience that lets individuals access the information, applications, and supply chain data they need, while simultaneously meeting the security and compliance obligations enterprises demand.”

Exostar’s remote proofing service significantly reduces the cycle times needed to on-board individuals into a community while balancing identity proofing requirements. The service includes live video-based and online credit bureau-based identity vetting options. The latter leverages the Precise ID℠ platform from Experian, one of the leading identity proofing solutions in the public sector. Precise ID has achieved Federal Identity Credential Access Management (FICAM) recognition at Assurance Level 3 for identity proofing. Through Precise ID, Exostar conducts a knowledge-based verification process where an individual must answer questions posed by Experian drawn from the credit-related information it maintains in its databases about that individual.

“Our wins with large government agencies and our FICAM recognition at Assurance Level 3 prove we are clearly moving ahead of the competition when it comes to granting online access to services while also protecting individual users’ identities and safeguarding agency and constituent data,” said Barbara Rivera, President and General Manager of Experian’s public sector business. “By working with Exostar to extend its proven identity and access management cloud-based solutions, we are excited to raise Precise ID’s profile in highly-regulated industries such as Aerospace and Defense and Life Sciences where collaboration and protection of sensitive data and intellectual property are paramount.”

Exostar’s identity and access management suite aligns with levels of assurance 1-4 defined by the National Institute of Standards and Technology (NIST). The new remote proofing service expands Exostar’s portfolio of identity proofing solutions, which today satisfy NIST Level 2 to Level 4 identity assurance requirements. Exostar currently is deploying its remote offering to validate identities and issue credentials for a customer with a partner community of nearly 50,000 individuals worldwide.

About Exostar

Exostar powers secure business-to-business information sharing, collaboration and business process integration throughout the value chain. Exostar supports the complex trading needs of many of the world’s largest companies in aerospace and defense, life sciences, and other industries. Exostar’s cloud-based identity assurance products and business applications reduce risk, improve agility and strengthen trading partner relationships and profitability for nearly 100,000 companies worldwide, including BAE Systems, Bell Helicopter, The Boeing Company, Computer Sciences Corporation, Lockheed Martin Corp., Merck, Newport News Shipbuilding, Northrop Grumman, Raytheon Co. and Rolls-Royce. For more information, please visit www.exostar.com.

About Experian

Experian is the leading global information services company, providing data and analytical tools to clients around the world. The Group helps businesses to manage credit risk, prevent fraud, target marketing offers and automate decision making. Experian also helps individuals to check their credit report and credit score, and protect against identity theft.

Experian plc is listed on the London Stock Exchange (EXPN) and is a constituent of the FTSE 100 index. Total revenue for the year ended March 31, 2013, was US$4.7 billion. Experian employs approximately 17,000 people in 40 countries and has its corporate headquarters in Dublin, Ireland, with operational headquarters in Nottingham, UK; California, US; and São Paulo, Brazil.

For more information, visit http://www.experianplc.com.

Article source: http://www.darkreading.com/end-user/exostar-introduces-remote-identity-proof/240161096

Wombat Security Technologies Unveils Integrated Anti-Phishing Assessment And Education Solution

PITTSBURGH, PA–(Marketwired – September 10, 2013) – Wombat Security Technologies (Wombat), a leading provider of cyber security awareness and training solutions, today announced their anti-phishing training suite. The anti-phishing training suite enables security officers to assess vulnerabilities via simulated phishing attacks and provide in-depth anti-phishing education to change user behavior. It gives corporate security officers the ability to auto-enroll employees in follow-up training after they fall for a simulated attack. All of Wombat’s training solutions utilize learning science principles to engage the user, deliver practical knowledge, and ensure employees retain information they are taught in brief 10-minute training sessions. When combining simulated phishing attacks with Wombat’s interactive training, customers have experienced a greater than 80% reduction in employee susceptibility to attack.

“Anyone can send mock phishing attacks to employees, but education is critical and our combination of simulated attacks coupled with innovative in-depth training delivers effective education and behavior change,” said Joe Ferrara, President and CEO of Wombat Security Technologies. “Our anti-phishing training suite enables customers to provide positive reinforcement to end users, for example those who did not fall for a simulated attack, those who scored high on training modules, etc.”

Wombat’s new anti-phishing training suite is released just in time as security breaches are making headlines daily and phishing attacks, the most common form of cyber-crime, continue to rise globally. According to a recent report by security firm Kaspersky Lab, 37.3 million users around the world were subjected to phishing attacks in the last year, which represents a whopping 87% increase over the same period in the previous year.

High-profile breaches like the one at The New York Times are being driven by crafty forms of spear phishing used by cyber gangs and criminals. The bad guys are pulling out all the stops to profile executives and their employees, then crafting an alluring e-mail carrying a Web link to get them to click on the payload.

“As evidenced by the recent high profile breaches, organizations need to empower their employees to defend against attacks,” said Eric Ogren, founder and principal analyst of The Ogren Group. “Effective security awareness training for the entire employee base has become a necessity for any company that wants to take a proactive stance against the growing threat of phishing and other cyber security attacks.”

Wombat’s anti-phishing training suite enables security officers to assess and reduce their company’s vulnerability to cyber threats by integrating simulated phishing attacks and two interactive follow-up training modules. Wombat’s customers who have already implemented this solution have found a significant increase in non-mandatory security awareness training completion when combining simulated phishing attacks and training.

Highlights of the new anti-phishing suite include:

– New and improved reporting and additional functionality in the award-winning PhishGuru

– Enhanced reporting capabilities, including detailed campaign summaries, event and repeat-offender reports, and a network map that pinpoints the IP addresses where an employee’s request originated

– Follow-up campaign scheduling, which enables security officers to automatically re-assess the employees who fall for attacks

– Multiple administrator-defined fields for managing contact groups

– A new URL training module that teaches employees how to determine which URLs are safe and which are fraudulent

– A new version of the Email Security training module, with a new look and updated content added to this already effective and interactive module

– Training auto-enrollment, which ensures employees who fall for a mock phishing attack are assigned training modules at a time convenient for them

The latest version of Wombat’s anti-phishing suite is available today. For pricing and/or more information about this award-winning product, please visit www.wombatsecurity.com.

About Wombat Security Technologies

Wombat Security Technologies helps organizations combat cyber security threats with uniquely effective software-based training solutions for employees. Wombat offers fully automated, highly scalable solutions, built on learning science principles. They offer mock attacks with brief embedded training, as well as a full complement of 10-minute software training modules. Wombat’s training solutions have been shown to reduce employee susceptibility to attack by over 80%. Wombat is helping Fortune 1000 customers, large government agencies and small to medium businesses in segments such as finance, banking, higher education, retail, technology, energy, insurance, and consumer packaged goods strengthen their cyber security defenses.

Article source: http://www.darkreading.com/vulnerability/wombat-security-technologies-unveils-int/240161094

BitSight Technologies Launches Information Security Risk Rating Service

Cambridge, MA – September 10, 2013 – BitSight Technologies, a startup that recently secured a $24M Series A funding round, today launched the first in a series of new cybersecurity offerings that deliver accurate and timely ratings on the information security effectiveness of organizations around the world. The ratings, which are based on externally visible network behavior, are generated daily to keep track of the continuously shifting nature of an organization’s security state.

BitSight’s new service offering – the BitSight Partner SecurityRating – provides objective and up-to-date ratings on the information security health of a company’s partner ecosystem so it can better protect sensitive business and customer data shared with third-party vendors. The information security ratings, which range from 250 to 900, are similar to consumer credit scores, with higher ratings indicating better security postures.

According to a February 2013 Ponemon Institute survey, 65% of organizations transferring consumer data to third-party vendors reported a breach involving the loss or theft of their information. In addition, nearly half of organizations surveyed did not evaluate their partners before sharing sensitive data.

“Traditional approaches to measuring and mitigating partner security risk, including network security audits and assessments, have fallen short,” said Stephen Boyer, co-founder and CTO of BitSight. “These methods fail to deliver an objective and simple way to understand the effectiveness of an organization’s network security practices. BitSight Partner SecurityRating delivers a single, daily rating that encapsulates the information security integrity of any third-party network, allowing customers to make data-driven, risk-based decisions. ”

How the BitSight Platform Works

Using online sensors placed at strategic points around the Internet, the BitSight platform collects and analyzes publicly available Internet traffic flowing to and from an organization. Suspicious behaviors, such as participation in a DDoS attempt or communication with a known botnet, are analyzed for severity, frequency, duration and confidence to create an overall rating of the organization’s current security health. Ratings are derived entirely from the outside; no special disclosures are required and no intrusive testing is conducted on the rated company.
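BitSight does not publish its scoring model, but to illustrate the kind of aggregation the description above implies – weighting each observed event by severity, frequency, duration and confidence, then mapping cumulative risk onto the 250-900 scale – here is a purely hypothetical sketch. Every weight, cap and formula below is invented for illustration and should not be read as BitSight’s actual method.

```python
# Purely hypothetical sketch of the kind of aggregation described above: weight
# each observed event by severity, frequency, duration and confidence, then map
# the cumulative risk onto a 250-900 scale. The weights, the saturation cap and
# the formula are all invented; BitSight's actual model is not public.
from dataclasses import dataclass

@dataclass
class Event:
    severity: float       # 0..1, how damaging the observed behavior is
    per_day: float        # observed frequency, events per day
    duration_days: float  # how long the behavior persisted
    confidence: float     # 0..1, confidence in the attribution

def rating(events, floor=250, ceiling=900):
    risk = sum(e.severity * e.per_day * e.duration_days * e.confidence
               for e in events)
    penalty = min(1.0, risk / 100.0)      # saturate at an arbitrary cap
    return round(ceiling - penalty * (ceiling - floor))

# Example: sustained botnet chatter plus a brief spell of DDoS participation.
print(rating([Event(0.9, 3.0, 14.0, 0.8), Event(0.7, 1.0, 2.0, 0.9)]))
```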

“BitSight’s unique, data-driven approach to information security rating provides organizations with valuable insight to more confidently mitigate risk,” said Charles J. Kolodgy, Research Vice President of Security Products for IDC. “On a broader scale, it should also help the industry reduce the overall number of third-party data breaches.”

“Throughout my career, organizations have always wanted a better way to protect themselves against the weak links in computer networks that are not their own,” said Shaun McConnon, CEO of BitSight. “BitSight tackles that problem in a unique and more effective way, ensuring that information sharing between partners is protected, yet remains open.”

Currently, Fortune 1000 companies in the healthcare, financial services and retail industries use BitSight Partner SecurityRating to protect the sensitive data they share. Delivered as a SaaS offering, key features of the service include:

– Up-to-Date Partner Ratings – BitSight processes and analyzes terabytes of data daily to rate thousands of organizations, including the world’s most popular data and outsourced service providers in the hosting, storage, manufacturing, advertising, HR and legal sectors. New ratings are presented daily via the Customer Portal.

– Timely Alerts – BitSight customers are alerted of significant changes to their partner ratings so they can quickly and proactively take steps to mitigate and prevent possible data breaches. In addition, BitSight delivers detailed information on individual risk vectors so that the sources of risk can be identified and shared with partners.

– In-depth Analytics – BitSight provides customers with analytical tools that assess trends, compare individual ratings against industry benchmarks, and rank ratings within their portfolio. Partner groups can be created based on size, industry, type of data being shared, or business objective in order to help organizations better manage partner risk.

For more information on the BitSight Partner SecurityRating service, visit www.bitsighttech.com.

About BitSight Technologies

BitSight Technologies is transforming how companies manage information security risk with objective, evidence-based security ratings. The company’s SecurityRating Platform continuously analyzes vast amounts of external data on security behaviors in order to help organizations make timely risk management decisions. Based in Cambridge, MA, BitSight is backed by Commonwealth Capital Ventures, Flybridge Capital Partners, Globespan Capital Partners, and Menlo Ventures. For more information, please visit www.bitsighttech.com or follow @BitSight on Twitter.

Article source: http://www.darkreading.com/management/bitsight-technologies-launches-informati/240161097