
Who’s to blame for security problems? Surveys say, EVERYONE

Last week a cluster of surveys was released, showing some contrasting views of the main sources of IT security risk, and some revealing overlaps.

The studies all asked professional IT workers what their main worry points were, and who they thought were the main causes of security incidents in their organisations.

The biggest study was conducted by forensics and risk management firm Stroz Friedberg. They covered businesses across the US, and found that most were pretty worried about cyber dangers.

Their main highlight was the risky behaviour of senior management. 87% of top brass send work files to personal email or cloud accounts so they can work on them from home or the road, while 58% have sent sensitive data to the wrong person and more than half admit taking company files or data with them when leaving a post.

60% of those questioned gave their firm a “C” grade or worse when asked how well they were prepared to combat cyber threats.

Stroz Friedberg provide an executive summary of the survey’s findings, plus an infographic-style full report.

A second study, this time from Osterman Research and again speaking mainly to mid-sized businesses (averaging 10,000 users) across the US, also found high levels of anxiety about how people behave on their work computers.

Employees introducing malware into company networks was cited as a serious concern by more than half of respondents – 58% for web browsing and 56% for personal webmail use. 74% said their company networks had been penetrated by malware introduced via surfing, and 64% through email, in just the last 12 months.

Backing this up is another study conducted by SecureData, which found that 60% of those questioned thought the biggest risk to their firm’s security was simple employee carelessness.

It also found security matters were given a worryingly low priority in some organisations, with 44% saying the main responsibility for security decision-making rested on the shoulders of junior IT managers.

This all cycles back into the Stroz Friedberg stats, in which around half of C-level management admitted they themselves should be taking more of a leading role in pushing for better security, while a similar level of lower-grade employees thought the responsibility really lay with specialist IT security staff rather than themselves or their corporate leaders.

Stroz Friedberg - On the Pulse

As always with mass studies, the sample size is pretty important, and all of these are on the small side – 764 people were questioned for the Stroz Friedberg report, 157 for the Osterman study and just 110 for the SecureData survey.

The choice of questions is also a major factor in surveys, as answers can vary wildly with just a minor tweak in wording.

But despite these opportunities for inaccuracy and bias, these overlapping studies all seem to be coming to similar conclusions. We’re all very concerned about malware and other security risks, but for the most part we tend to hand off responsibility for avoiding them to others, and continue to indulge in risky behaviours ourselves.

We’re just not getting the message about how risky it can be to do personal stuff on our work systems, or to take sensitive work files home to our own, less well secured machines.

We’re not being cautious enough with our web browsing, email and social sharing, with phishing continuing to be a problem despite years of alerts and user education.

This is especially true with sensitive accounts some people have to use in their jobs – as the ongoing success of the Syrian Electronic Army in embarrassing the social media arms of large firms shows.

Even after multiple breaches all over the place, which you’d think would put most people in similar positions on their guard, it’s still possible to fool people into handing their login details over.

So perhaps it’s time to stop worrying about who’s the most to blame and who needs to take charge, and face up to the fact that IT security only works if we all do our bit.

We can’t rely on software or policies to combat our own stupidity, laziness or desires – we need to take some responsibility, pay some attention and put some more effort into making sure we’re not the weakest link.


Image of people and mouse on mousetrap courtesy of Shutterstock.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/yUdpgpzevbk/

Security warnings do better if they use scammers’ tricks, research finds

There’s been much fiddling around with security warnings to see which versions work best: should they be passive and not require users to do anything? (Not so good.) Active? (Better.) Where should the dialogue boxes be positioned? What about the amount of text, message length or extent of technical gnarliness?

Now, in a systematic attempt to determine what gets people to comply with warning messages, two researchers at the University of Cambridge’s Computer Laboratory actually modeled their security warnings on scammers’ messages in their research.

In their recently published paper, “Reading This May Harm Your Computer: The Psychology of Malware Warnings”, Professor David Modic and Professor Ross Anderson describe their efforts to figure out what aspects of computer security warnings are effective.

The researchers assumed that crooks are doing something right, given that their messages are skillfully crafted to lure potential victims into clicking on bogus security messages that lead them to malware downloads.

From the paper:

[W]e based our warnings on some of the social psychological factors that have been shown to be effective when used by scammers. The factors which play a role in increasing potential victims’ compliance with fraudulent requests also prove effective in warnings.

According to prior research into persuasion psychology, these factors influence decision making:

  • Authority. People tend to comply with warnings if they think they’re coming from a trusted source.
  • Social influence. People tend to comply if they think that’s what other members of their communities are doing.
  • Risk preference. The researchers figured that giving “a concrete threat [that] clearly describes possible negative outcomes” would increase compliance more than a vague one.

The researchers wrote security warnings for five different conditions and used the same number of participants for each condition:

  • Control Group: The researchers used real anti-malware warnings that are currently used in Google Chrome.
  • Authority: “The site you were about to visit has been reported and confirmed by our security team to include malware.”
  • Social Influence: “The site you were about to visit includes software that can damage your computer. The scammers operating this site have been known to operate on individuals from your local area. Some of your friends might have already been scammed. Please, do not continue to this site.”
  • Concrete Threat: “The site you are about to visit has been confirmed to include software that poses a significant risk to you. It will try to infect your computer with malware designed to steal your bank account and credit card details in order to defraud you.”
  • Vague Threat: “We have blocked your access to this page. It is possible the page contains software that may harm your computer. Please close this tab and continue elsewhere.”

Anderson and Modic recruited 583 men and women through Amazon Mechanical Turk to take their survey.

Some of the findings:

  • Respondents said they were more likely to click through if their friends or – even more so – their Facebook friends told them that it was safe. In spite of this, the power of social influence was actually “much less effective than it is fashionable to believe,” the researchers found.
  • The warning messages that worked the best were clear and concrete – for example, messages that informed users that their computers would be infected with malware or that a malicious website would steal the user’s financial information.

Anderson and Modic advised software developers who create warnings to follow this advice:

  • The text should include a clear and non-technical description of the possible negative outcome.
  • The warning should be an informed, direct message given from a position of authority.
  • The use of coercion (i.e., threatening people so that they feel like they have no option but to do as told) tends to be counterproductive, whereas persuasion (i.e., getting people to voluntarily change their beliefs or behaviour) tends to get better results.

In short, knowledge is power, the researchers said:

When individuals have a clear idea of what is happening and how much they are exposing themselves, they prefer to avoid potentially risky situations.

Professor Modic’s work was funded by Google and by the Engineering and Physical Sciences Research Council (EPSRC), United Kingdom.

Image of virus alert courtesy of Shutterstock.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/1mKOk4XdG2Q/

Target issues apology letter

A Naked Security reader just emailed us to say, “I received a message from Target about the breach. It talks about customers, and people who shopped at the company’s stores, and names me in the breach. But I’ve never actually shopped at Target.”

The concerned reader also pointed out that the statement was published on Target’s website back on 13 January 2014, but the email she received only arrived on 16 January 2014.

She admitted that the email didn’t look dangerous: it had no links to login pages and no suspicious attachments, so it didn’t seem to be anything to worry about.

Except for the fact that she received it at all – and apparently three days late at that.

Giving bad news in a good way

That’s always a problem for a company faced with delivering tough security news by email: it’s hard to make the message look obviously different from an email sent by crooks trying to capitalise on the disaster.

Let’s see how well Target did, and if there is anything they might have done differently.

Here is the web version in full:

The message is short, it doesn’t try to pretend the breach didn’t happen, and it offers a sincere apology.

All those things are good.

But there are two things I’d definitely have done differently.

Make it absolutely clear who had what stolen

Firstly, I’d have avoided mixing the words “guests”, “customers” and those “who shopped,” since it’s not clear whether they are intended as synonyms, or merely as different, possibly overlapping, groups of victims.

That, in turn, means it isn’t clear which group Target thinks each victim was in.

It certainly seems, from our reader’s confusion, that “guests” (who lost details like name, address and phone number) include people who have had something to do with Target, somewhere, somehow, but who have never actually bought any products there recently, or even at all.

So here’s what I think Target is trying to say:

  1. We have a large database of people who have interacted with us, called “guests”, including customers (who have bought something from us at some stage), and others (who have shared personal information with us, but never actually purchased anything). If you are in this group, your name, address, phone number and email address were stolen by the crooks.
  2. We have a subset of the above database, consisting of customers who actually bought something from one of our stores (not online), using a payment card, between 2013-11-27 and 2013-12-15 inclusive. If you are in this group, your payment card details, such as number and expiry date, were stolen as well.

If you are in group 2, you can have a year of credit monitoring for free, but if you’re only in group 1, you can’t.

Target really needs to clarify who was in which group (or both), and which recipients of the email qualify for the free monitoring.

It’s obvious that there are people whose names and addresses were stolen but who didn’t buy anything recently, who didn’t lose any payment card details in this breach and who therefore don’t qualify for the free credit monitoring.

Target has mixed together the warning sent to people who lost non-payment information only and the warning to those who lost payment card data as well.

That, in my opinion, is a recipe for confusion.

Don’t trust the caller

Secondly, if I were Target, I would not have said this:

Never share information with anyone over the phone, email or text, even if they claim to be someone you know or do business with. Instead, ask for a call-back number.

If you don’t know and trust someone who calls you, why would you trust any phone number or web URL they might give you?

Just bear in mind how successful the so-called fake support call scammers have been in recent years.

Those are the guys who phone you out of the blue, falsely claim that there is a virus on your computer, dishonestly use the Event Viewer to show you errors that “prove” their claim, and then fraudulently charge you $300 for cleanup.

If you ask for a number to call those guys back, or simply use the number that came up on your CallerID/CLI display, which amounts to the same thing, you will almost certainly be given a legitimate-sounding local landline number.

If you’re in Sydney, Australia, for example, the number will probably be something like +61.2.8xxx.xxx; if in Oxford, England, +44.1865.7xx.xxx; and so on.

And if you call that number back, guess what?

You will get through to the company that just called you.

Having an honest-looking local phone number doesn’t mean the caller is an honest, local person; the same applies to website domain names.

Domain names can, and frequently do, redirect anywhere; the same is true of phone numbers these days.

Don’t rely on information that could have come from a scammer to help you determine if that person is a scammer.

Use some objective, independent means of finding out the contact details of anyone who claims to represent a company you do business with and who asks you to disclose any personal information.

In the case of your bank, for example, you can probably find the number to call to dispute a payment card transaction on the card itself, or on a recent statement, or (as I notice many banks are doing now) on-screen at any ATM, without needing to insert a card.

And if you are an organisation that finds itself needing to call customers to look into suspicious activity on their account, try to do the right thing from your side.

Please avoid opening the call, as many companies still seem to do, by saying:

Hello, I am X from Y, investigating a possible problem with your account Z. Before we start, for security purposes, I need you to give me your date of birth and your security codeword.

Try a refreshingly different approach, such as this:

Hello, I am X from Y, investigating a possible problem with your account Z. But because you don’t know me, don’t tell me anything yet. I’m going to hang up; I’d like you to take out your {payment card, last statement, contract} and look in the top right corner where it gives an emergency contact number. Then call us back!

As for Target, let’s hope the company distinguishes the various parts of the breach a bit more clearly.

If the crooks got at your payment card data, did they get your home address and phone number, too?

If they got your address and phone number, should you cancel your cards even if you didn’t shop at Target recently?

Clarification along those lines would be very handy.

In the meantime, here’s a short podcast in which we offer some advice on dealing with suspicious callers who try to scam you or your friends and family over the phone:

(Audio player not working? Download to listen offline, or listen on Soundcloud.)

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/HUCMHxo8Zv4/

Businesses are building shopper profiles based on sniffing phones’ WiFi

In May 2013, retailer Nordstrom said that yes, it was sniffing customers’ WiFi to track their movement through 17 US stores.

Nordstrom was collecting anonymised, aggregate information, a spokeswoman said at the time, and wasn’t identifying personal information tied to a phone’s owner.

Therefore, Nordstrom said, it wasn’t using the WiFi data to market specific products at specific individuals.

Tracking customers and not using the data to target marketing and advertising might sound innocuous, but there are those who question the privacy ramifications as the technology begins to proliferate.

As the Wall Street Journal reports, location analytics companies are now creating portraits of some 2 million people’s habits as they go about their daily lives, traveling from yoga studios to bars to sports stadiums to nightclubs and everywhere else in between.

The WSJ says that one of those analytics companies, Turnstyle Solutions, is at the forefront of the trend.

The company’s about a year old. One of the uses of its location-tracking technology has been to place sensors at some 200 businesses in downtown Toronto, to track shoppers as they move around the city.

One of Turnstyle’s customers is Fan Zhang, owner of the Asian restaurant Happy Child in downtown Toronto. He told the WSJ that he knows that 170 of his customers went to nightclubs in November, 250 went to gyms, and 216 came from the upscale neighborhood of Yorkville.

What has he done with that information? He ordered in workout tank-tops with his restaurant’s logo, catering to his customers’ tracked gym visits.

Turnstyle’s not the only one doing this, of course.

Verizon Wireless, for its part, last year began to run location analytics on its own rich store of data to help retailers see information such as what neighborhoods their clients arrived from or what restaurants they drove past to get there, the WSJ reports.

The weekly reports Turnstyle sends to clients rely on anonymised, aggregate numbers, though the company does collect names, ages, genders and social media profiles of some people who log in with Facebook to a free WiFi service it runs at some restaurants and coffee shops.

It’s becoming increasingly common for data firms to collect information about shoppers based on cellphone location and how they use their phones while in stores or other businesses – information they could use to better target their marketing and advertising.

This is yet another reminder of how much information our smartphones leak about us.

Smartphones with WiFi turned on regularly broadcast their MAC address (a more-or-less unique device ID) as they search for WiFi networks to join. That address acts like a unique cookie that can be used to identify an individual over repeated visits. In an area with a few detectors in place it can also be used to determine a device’s location.

The only way to stop a phone broadcasting its MAC address is to turn off WiFi and only use it when you need it.
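To see how repeat-visit tracking of this kind might work in practice, here is a minimal, hypothetical Python sketch. It shows how a location-analytics firm could turn a broadcast MAC address into a stable pseudonymous identifier by salting and hashing it; the function name and salt are illustrative assumptions, not a description of any vendor’s actual system. Note that because the MAC address space is small enough to brute-force, hashing alone does not make such data truly anonymous – which is exactly the concern privacy advocates raise.

```python
import hashlib

def anonymised_id(mac: str, salt: str = "per-deployment-secret") -> str:
    """Derive a stable pseudonymous ID from a device MAC address.

    The same phone yields the same ID on every visit, which is enough
    to count repeat visits without storing the raw MAC. The salt keeps
    IDs from being comparable across deployments, but does not prevent
    brute-forcing the (small) MAC address space.
    """
    # Normalise formatting so "AA-BB-..." and "aa:bb:..." hash identically.
    normalised = mac.lower().replace("-", ":")
    return hashlib.sha256((salt + normalised).encode()).hexdigest()[:16]

# The same device, however its MAC is formatted, maps to one ID...
assert anonymised_id("AA:BB:CC:DD:EE:01") == anonymised_id("aa-bb-cc-dd-ee-01")
# ...while different devices get different IDs.
assert anonymised_id("AA:BB:CC:DD:EE:01") != anonymised_id("AA:BB:CC:DD:EE:02")
```

A detector that logs `anonymised_id(probe.source_mac)` for every WiFi probe request it overhears can build exactly the kind of repeat-visit profile described above, without ever keeping the raw address.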

Privacy advocates are concerned about this tracking being done without consumers’ permission.

For its part, the City of London in August told a trashbin company to stop its practice of collecting MAC addresses broadcast by the phones of passersby.

The company, Renew, had rigged the bins with gadgets to sniff passing mobile phones, and was selling advertising space on the internet-connected bins.

The collection of anonymous data through MAC addresses is legal in the UK, though it exists in a grey area.

That’s because the UK and the EU have strict laws about mining personal data using cookies – small bits of data sent from a website that can be used to uniquely identify people and then monitor their behaviour across different websites.

Under UK and EU law, companies that want to use cookies to track us in the virtual world must gain our consent to do so.

However, no such consent is required by UK and EU law to track us in the real world using our devices’ MAC addresses.

As far as the US goes, privacy groups, along with New York’s US Senator Charles Schumer and a number of location analytics companies, in October unveiled a new code of conduct so that shoppers will clearly know when they’re being tracked through their phones in stores and will receive instructions for opting out, according to TechCrunch.

Some don’t like the notion of being tracked even if the data is anonymised. As AOL and others have shown, making data truly anonymous is hard and leaked data that isn’t quite anonymous enough cannot be un-leaked.

But Schumer’s code of conduct isn’t going that far. As far as the Schumer code is concerned, data collection without opt-in can continue if it’s not tied to specific users.

It’s the targeted data collection that should be opt-in, the Schumer code stipulates.

Turnstyle and other location analytics companies are OK with this approach.

But as TechCrunch points out, there was a notable absence among the groups who came with Schumer to sign on to the code: namely, the retailers who would use targeted data for marketing purposes.

Until all the parties involved sign on to a code that requires an opt-in model for targeted marketing, our choices are either to turn off our mobile phone’s WiFi when we step foot in or drive by a retailer, or brace ourselves for a future of some eerily well-informed advertisements.

Images of smartphone and WiFi courtesy of Shutterstock.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/2Ft9UiGpGU8/

Apple slapped over shabby sales security in the App Store

Apple is understandably proud of its App Store.

Firstly, it’s been a runaway commercial success, making bucket-loads of money for Apple.

Small buckets, by Apple’s standards, to be sure, but bucket-loads nevertheless.

Secondly, for all that Apple extracts an impressive 30% from paid apps just for brokering their sale and download, the App Store has been a fruitful (sorry!) source of largesse to the developer community.

App developers, in fact, took home a collective 70% of the $10,000,000,000 that the App Store turned over in 2013.

Thirdly, Apple’s unilateral control of what gets into the App Store has kept it as good as free from iOS malware.

→ Apple’s unyielding regulation of the App Store has not been universally popular. But as a side-effect it has left the mobile malware problem almost exclusively to Android. Google’s more liberal approach to alternative software markets has gone hand-in-hand with widespread malware, and therefore, Apple might argue, has made Android a much riskier platform for work or play.

But not everyone has been entirely happy with Cupertino’s acumen in application delivery.

According to the US Federal Trade Commission, Apple was a bit too keen – sneaky is the word the FTC didn’t use, but probably could have done – in the way it allowed applications and their accoutrements to be sold to children.

Apple facilitates not only the sale of iOS apps, but also the processing of in-app purchases.

A game creator might decide to give his game away for free, for example, to encourage new players to try it out.

That helps him build a community; he makes his money later by charging during gameplay for stuff that helps make keen players keener still.

Power-up pills, for example, swashbuckling swords, invisibility cloaks, even battle ostriches.

With all of these things costing real money, it’s easy to see why customers wouldn’t want Apple to make it easy for their children to acquire artificial objects in imaginary worlds, merely by clicking a button labelled [Buy].

The FTC’s complaint against Apple is that the company did, indeed, go some way down that road.

Here’s a neat and very useful mini-infographic prepared by the FTC that explains the two main things it didn’t like about Apple’s in-app purchasing system:

Firstly, the process didn’t make it clear to parents, at the final password entry screen, what they were actually buying, or even that they were proceeding with an in-app purchase at all.

Did you merely authorise a configuration change? Or did you just purchase a new gameplay level for 99c? A Big Bag of Bravado for $9.99? Perhaps even a Heroic Hobbit Helmet, one careful owner, for a lofty $99.99?

That lack of clarity didn’t go down well with the FTC.

Secondly, the Commission argued, the authorisation dialog didn’t make it clear that you might be activating an “open slather” purchasing window that would stay open for 15 minutes, allowing your children ample time to rack up purchases without asking.

Of course, you can argue that parents ought to have familiarised themselves with on-line purchasing in iOS before letting their kids loose in the App Store, especially when one complainant didn’t seem to notice until her daughter had blown $2600 in the Tap Pet Hotel.

And you can argue that parents ought to be stricter with themselves about typing in their passwords at a dialog box for which they have no context.

But you can also argue that Apple ought to favour clarity throughout the purchasing process, not least because the company was happy to accept 30% of that $2600 blowout at the aforementioned Tap Pet Hotel.

And that is exactly the argument that the FTC has made.

Apple has settled – remember, that means that officially this isn’t a fine, or a conviction, or a negative judgement, merely an agreement to make the complaint go away – and will pay back at least $32,500,000.

If consumers don’t come at Apple for the full amount, the difference will be paid over to the FTC.

Reducing the risk

If you’re the sort of parent who lets your children use your personal iPad or iPhone for games, you can manage the financial risk in two ways, as recommended by the FTC.

You can turn in-app purchases off altogether, so that you’ll never face one of those out-of-context “it’s asking for your password, Mummy/Daddy” requests.

Go to Settings | General | Restrictions, and toggle the In-App Purchases setting in the ALLOW section:

Or go to the ALLOWED CONTENT section and set the Require Password option to Immediately, so that entering the password once doesn’t open up a 15-minute pre-approved purchasing window:

If you choose the Immediately option, you’ll need to approve each purchase one-by-one, thus avoiding an unexpected bill from the pet hotel.

Let’s hope that this settlement reminds us all of the risks of sharing mobile devices, whether between individuals (such as parents and children) or between functions (such as work and home).

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/aYszLn85V1k/

5 Security Services To Consider In 2014

With security expertise continuing to be in short supply, managed and cloud services will play a greater role in securing companies in 2014.

Benefiting from the knowledge of managed security service providers — or the built-in expertise in existing cloud security services — can help nontechnical companies build the infrastructure needed to stay secure. For more security-savvy companies, service providers can take over the day-to-day security drudge work and allow internal security teams to focus on bigger security issues that may be affecting the company, says Neil MacDonald, a vice president and fellow at business-intelligence firm Gartner.

“If I’m an organization with limited resources, I would rather free up my security team’s time to focus on more advanced threats rather than the more routine things like log monitoring, firewall management, and vulnerability management,” he says.

Whether a company pursues a managed security service, a cloud security service, or some hybrid with its existing capabilities depends largely on its own expertise and whether the organization already uses the cloud for existing business processes, says Rob Ayoub, research director for NSS Labs, a security consultancy.

“A lot of it depends on how they are using the cloud,” he says. “Are they using the cloud as an extension of their existing infrastructure? Or are they using the cloud and consuming services from the cloud as a way to expand their security capabilities or maybe because they do not have the in-house expertise?”

Whatever may be the case for your company, the following services could be in your future this year.

1. Cloud Asset Control
Most companies do not know how much they rely on the cloud, frequently underestimating the number of cloud services being used by employees. From its own customer data, for example, cloud-management provider Skyhigh Networks has found that the average firm uses approximately 550 cloud services.

In the past few years, a number of startups — such as CloudPassage, Netskope, and Skyhigh Networks — have focused on the problem of taming the wild and varied adoption of cloud services. These cloud-application visibility services allow companies to discover what services they are using, the risk those services pose, and then manage the threat, says Jim Reavis, co-founder and CEO of the Cloud Security Alliance.

“These types of services give you a pretty good visibility into what cloud services are in use, and allow companies to take the next step and implement controls,” he says.

2. Log Management To Incident Detection
Many companies already use a service provider to collect and manage logs, archiving the data for compliance purposes. With an increasing focus on network and business visibility, companies need to turn those logs into information on what is happening in the network.

The category actually covers a spectrum of services, from log management to security information and event management (SIEM) systems to big data analytics. Once companies have their log monitoring in the cloud, there is no reason not to look at analyzing the data, Gartner’s MacDonald says.

“They can essentially tell you if you have been compromised,” he says. “That can be intensely interesting, especially if you are a smaller organization and you don’t have the resources to build a security operations center.”

[Companies need cloud providers to delineate responsibilities for the security of data, provide better security information, and encrypt data everywhere. See 5 Ways Cloud Services Can Soothe Security Fears In 2014.]

Eventually, a focus on detection will turn into a focus on response and shutting down attackers, making incident-response services — such as what may come from FireEye’s purchase of Mandiant — likely to grow significantly over the next few years.

3. Identity Management
As companies rely on an increasing number of cloud providers, managing access to those services has become more complex. Identity and access management in the cloud makes a lot of sense for firms that use a large number of cloud services, CSA’s Reavis says.

“There is a real risk that employees duplicate their identities out on the Internet, and that raises the risk of a lateral attack, where a breach at one provider allows attackers to breach the employee’s other accounts,” he says.

4. Encryption
The revelations that the U.S. National Security Agency is collecting massive amounts of data from the Internet have caused more companies to pay attention to how their data is secured in the cloud. While locking down data at rest with encryption is a good idea, especially when it is outside the firewall, many companies had been relying on the security of their storage providers to protect the data.

While a number of cloud services focus on encrypting data in cloud services, such as CipherCloud and Voltage Security, the market is still nascent. That will likely change this year, as cloud services focusing on encryption and access-management grow, NSS Labs’ Ayoub says.

“I think identity and encryption are the two areas where we will see a lot of adoption this year,” Ayoub says. “We need to focus on protecting who’s accessing the data, and we need to focus on protecting the data.”

5. Security Testing In The Cloud
Many companies have to focus on securing their software, not just their networks, whether the software is internally developed or comes from third parties. Outsourced application testing, or application testing in the cloud, can find the most common bugs, help train developers, and hold third-party software firms to a standard security assessment.

“Application security testing is more difficult work, but it is becoming better understood,” Gartner’s MacDonald says. “By using one of these vendors to test their applications, or requiring that their supply-chain partners test their applications, they can enhance their security.”

A number of companies offer application testing and assessment services in the cloud, including Cenzic, Cigital, Veracode, and Whitehat Security.


Article source: http://www.darkreading.com/services/5-security-services-to-consider-in-2014/240165414

New Head For Panda Security

Bracknell, Jan 16, 2014.

Panda Security, The Cloud Security Company, today announced the appointment of Diego Navarrete as CEO of the multinational computer security company.

Navarrete joins Panda Security from IBM and has extensive experience in the software and security sectors. For the past two years he has been director of IBM's Security Systems Division in Europe. He joined IBM in 1998 and has since occupied various executive positions at the American multinational, including three years (2009-2012) as director of the company's Cloud and Server Infrastructure business division for Southwest Europe.

Diego Navarrete holds a Bachelor’s Degree in Business Administration from the Universidad Complutense de Madrid (Spain), and completed different senior executive programs at INSEAD (France), London Business School (United Kingdom) and Boston University (USA).

“Being part of Panda Security’s team is a great opportunity I approach with the utmost passion and enthusiasm. I am truly convinced that 2014 will be a pivotal year for both Panda and the computer security industry. Cloud Computing, Big Data and mobility will continue to be the market’s driving forces, and Panda holds a strong position as one of the top players in these areas”, explained Diego Navarrete, CEO of Panda Security. “I have a good knowledge of the company’s strategy and portfolio and I believe it is on the right track. In any event, as I mentioned before, we are just at the start of what will undoubtedly be a very important year for the entire security industry”.

About Panda Security

Founded in 1990, Panda Security is the world’s leading provider of cloud-based security solutions, with products available in more than 23 languages and millions of users in 195 countries around the world. Panda Security was the first IT security company to harness the power of cloud computing with its Collective Intelligence technology. This innovative security model automatically analyzes and classifies thousands of new malware samples every day, guaranteeing corporate customers and home users the most effective protection against Internet threats with minimal impact on system performance. Panda Security has 56 offices around the globe, with US headquarters in Florida and European headquarters in Spain.

Panda Security collaborates with The Stella Project, a program aimed at promoting the incorporation into the community and workplace of people with Down syndrome and other intellectual disabilities, as part of its Corporate Social Responsibility policy.

For more information, please visit http://www.pandasecurity.com

Article source: http://www.darkreading.com/management/new-head-for-panda-security/240165424

Saviynt Releases SAP HANA Security Solution

LOS ANGELES, Jan. 15, 2014 /PRNewswire/ — To help businesses running SAP applications better protect their data, Saviynt, an SAP partner, has announced HANA Security Manager, the industry’s first security product built natively for SAP HANA. The company says it can reduce implementation time by as much as 75% and improve SAP HANA security by over 80%.

SAP HANA security is extremely flexible and hence very complex: there are exponential relationships between users, roles and privileges. Implementing HANA security manually is cumbersome and can result in users getting inappropriate access to critical data. Ongoing management of the HANA security model as new data sources are added can also be extremely resource intensive and difficult, and often requires costly security redesign.

Saviynt HANA Security Manager integrates natively with SAP HANA and provides a business layer for both security and business teams. Business analysts can easily define business rules and controls which are then converted by Saviynt into an optimized Role and Privilege Model. Saviynt creates users, roles and privileges in HANA and provides real time monitoring against business and compliance rules.

“It is important to understand that in HANA implementations, security is implemented at the HANA Database level. There is little to no authorization done at the Application layer. This is a major difference between HANA security and traditional database security. Also, companies are planning on storing their most critical data in HANA. Therefore, a serious focus on security is required during HANA implementations, and should be core to the overall business requirements and project strategy,” says Sachin Nayyar, CEO Saviynt.

Saviynt HANA Security Manager provides the following capabilities:

— Business Layer: Translates a very complex HANA data and security model into business views for security and business teams. This is important because out-of-the-box HANA security requires a deep understanding of database object models and does not cater to security teams, business teams or data stewards

— Automated Role / Privilege Design and Management: Based on the business rules defined, Saviynt creates and manages an optimized Role and Privilege model and provisions it to HANA

— Real Time Monitoring: When an access that violates an existing business or compliance rule is modified directly in HANA, Saviynt notifies security and business teams in real time

— Automated User Management: Saviynt enables preventative business and compliance checks during user creation and management processes

— Reports and Dashboard: Saviynt provides real time and historical security health and compliance reports
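Saviynt has not published its internals, but the real-time monitoring idea above can be sketched generically: compare each granted privilege against a set of compliance rules and flag anything forbidden. Everything here (the rule format, the `find_violations` helper, the role and object names) is hypothetical and purely illustrative, not Saviynt's actual product:

```python
# Illustrative toy compliance check. A "rule" forbids a particular
# (role, privilege, object) combination on a data source.
FORBIDDEN = {
    ("analyst", "DELETE", "finance.ledger"),
    ("analyst", "ALTER", "finance.ledger"),
}

def find_violations(grants):
    """Return every grant that matches a forbidden rule."""
    return [g for g in grants if g in FORBIDDEN]

grants = [
    ("analyst", "SELECT", "finance.ledger"),   # allowed
    ("analyst", "DELETE", "finance.ledger"),   # violates a rule
]
violations = find_violations(grants)
print(violations)  # only the DELETE grant is flagged
```

A real product would run such checks whenever access is modified directly in HANA and notify security and business teams, per the capability described above.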

SUPPORTING RESOURCES

— Saviynt HANA Security Manager

— Request Live Demo

For further information: visit www.saviynt.com, or email [email protected]

About Saviynt, LLC

Saviynt is an Application Security Management company that focuses on providing simple, fast and cost-effective solutions for enterprises to manage the security of their critical applications. Saviynt has a major focus on SAP Business Suite and HANA security, with several customers relying on Saviynt’s products and services for their SAP and application security and compliance needs.

Article source: http://www.darkreading.com/applications/saviynt-releases-sap-hana-security-solut/240165471

Security warnings do better if they use scammers’ tricks, research finds

There’s been much fiddling around with security warnings to see which versions work best: should they be passive and not require users to do anything? (Not so good.) Active? (Better.) Where should the dialogue boxes be positioned? What about the amount of text, message length or extent of technical gnarliness?

Now, in a systematic attempt to determine what gets people to comply with warning messages, two researchers at the University of Cambridge’s Computer Laboratory have modeled their security warnings on the scammers’ own messages.

In their recently published paper, “Reading This May Harm Your Computer: The Psychology of Malware Warnings”, Professor David Modic and Professor Ross Anderson describe their efforts to figure out what aspects of computer security warnings are effective.

The researchers assumed that crooks are doing something right, given that their messages are skillfully crafted to lure potential victims into clicking on bogus security messages that lead them to malware downloads.

From the paper:

[W]e based our warnings on some of the social psychological factors that have been shown to be effective when used by scammers. The factors which play a role in increasing potential victims’ compliance with fraudulent requests also prove effective in warnings.

According to prior research into persuasion psychology, these factors influence decision making:

  • Authority. People tend to comply with warnings if they think they’re coming from a trusted source.
  • Social influence. People tend to comply if they think that’s what other members of their communities are doing.
  • Risk preference. The researchers figured that giving “a concrete threat [that] clearly describes possible negative outcomes” would increase compliance more than a vague one.

The researchers wrote security warnings for five different conditions and used the same number of participants for each condition:

  • Control Group: The researchers used real anti-malware warnings that are currently used in Google Chrome.
  • Authority: “The site you were about to visit has been reported and confirmed by our security team to include malware.”
  • Social Influence: “The site you were about to visit includes software that can damage your computer. The scammers operating this site have been known to operate on individuals from your local area. Some of your friends might have already been scammed. Please, do not continue to this site.”
  • Concrete Threat: “The site you are about to visit has been confirmed to include software that poses a significant risk to you. It will try to infect your computer with malware designed to steal your bank account and credit card details in order to defraud you.”
  • Vague Threat: “We have blocked your access to this page. It is possible the page contains software that may harm your computer. Please close this tab and continue elsewhere.”
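The setup described above, with equal numbers of participants per condition, amounts to a simple between-subjects assignment. A sketch of how such an experiment might allocate participants, with hypothetical condition names and a round count chosen for clean arithmetic rather than the study's actual 583 respondents:

```python
import random

CONDITIONS = ["control", "authority", "social influence",
              "concrete threat", "vague threat"]

def assign(participants, conditions, seed=0):
    """Shuffle participants, then deal them out round-robin so every
    condition ends up with the same number of people."""
    rng = random.Random(seed)
    shuffled = list(participants)
    rng.shuffle(shuffled)
    groups = {c: [] for c in conditions}
    for i, p in enumerate(shuffled):
        groups[conditions[i % len(conditions)]].append(p)
    return groups

groups = assign(range(500), CONDITIONS)
print({c: len(g) for c, g in groups.items()})  # 100 participants each
```

Randomising the order before dealing ensures no systematic relationship between recruitment order and condition, which is what lets differences in click-through rates be attributed to the warning text itself.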

Anderson and Modic recruited 583 men and women through Amazon Mechanical Turk to take their survey.

Some of the findings:

  • Respondents said they were more likely to click through if their friends or – even more so – their Facebook friends told them that it was safe. In spite of this, the power of social influence was actually “much less effective than it is fashionable to believe,” the researchers found.
  • The warning messages that worked the best were clear and concrete – for example, messages that informed users that their computers would be infected with malware or that a malicious website would steal the user’s financial information.

Anderson and Modic advised software developers who create warnings to follow this advice:

  • The text should include a clear and non-technical description of the possible negative outcome.
  • The warning should be an informed, direct message given from a position of authority.
  • The use of coercion (i.e., threatening people so that they feel like they have no option but to do as told) tends to be counterproductive, whereas persuasion (i.e., getting people to voluntarily change their beliefs or behaviour) tends to get better results.

In short, knowledge is power, the researchers said:

When individuals have a clear idea of what is happening and how much they are exposing themselves, they prefer to avoid potentially risky situations.

Professor Modic’s work was funded by Google and by the Engineering and Physical Sciences Research Council (EPSRC), United Kingdom.

Image of virus alert courtesy of Shutterstock.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/RApL17g_od4/

Target issues apology letter – but includes some awful security advice

A Naked Security reader just emailed us to say, “I received a message from Target about the breach. It talks about customers, and people who shopped at the company’s stores, and names me in the breach. But I’ve never actually shopped at Target.”

The concerned reader also pointed out that the statement was published on Target’s website back on 13 January 2014, but the email she received only arrived on 16 January 2014.

She admitted that the email didn’t look dangerous: it had no links to login pages and no suspicious attachments, so it didn’t seem to be anything to worry about.

Except for the fact that she received it at all – and apparently three days late at that.

Giving bad news in a good way

That’s always a problem for a company faced with delivering tough security news by email: it’s hard to make the message look obviously different from an email sent by crooks trying to capitalise on the disaster.

Let’s see how well Target did, and if there is anything they might have done differently.

Here is the web version in full:

The message is short, it doesn’t try to pretend the breach didn’t happen, and it offers a sincere apology.

All those things are good.

But there are two things I’d definitely have done differently.

Make it absolutely clear who had what stolen

Firstly, I’d have avoided mixing the words “guests”, “customers” and those “who shopped,” since it’s not clear whether they are intended as synonyms, or merely as different, possibly overlapping, groups of victims.

That, in turn, means it isn’t clear which group Target thinks each victim was in.

It certainly seems, from our reader’s confusion, that “guests” (who lost details like name, address and phone number) include people who have had something to do with Target, somewhere, somehow, but who have never actually bought any products there recently, or even at all.

So here’s what I think Target is trying to say:

  1. We have a large database of people who have interacted with us, called “guests”, including customers (who have bought something from us at some stage), and others (who have shared personal information with us, but never actually purchased anything). If you are in this group, your name, address, phone number and email address were stolen by the crooks.
  2. We have a subset of the above database, consisting of customers who actually bought something from one of our stores (not online), using a payment card, between 2013-11-27 and 2013-12-15 inclusive. If you are in this group, your payment card details, such as number and expiry date, were stolen as well.

In other words, if you’re in group 2, you’re also in group 1. (Not all guests are paying customers, but all paying customers are guests. That, surely, must be the case?)

And if you are in group 2, you can have a year of credit monitoring for free, but if you’re only in group 1, you can’t.
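That subset relationship is easy to state precisely in code. A sketch, with made-up names throughout, of how the two groups and their entitlements relate under my reading of Target's statement:

```python
# Hypothetical victims, under the reading above:
#   group 1 = all "guests" (contact details stolen)
#   group 2 = in-store card payers, 2013-11-27 to 2013-12-15 (card data too)
guests = {"alice", "bob", "carol"}      # group 1
card_customers = {"alice", "bob"}       # group 2

# Every paying customer is also a guest: group 2 is a subset of group 1.
assert card_customers <= guests

def stolen_data(person):
    stolen = []
    if person in guests:
        stolen.append("name/address/phone/email")
    if person in card_customers:
        stolen.append("payment card details")
    return stolen

def gets_free_credit_monitoring(person):
    # Only group 2 qualifies, per Target's offer.
    return person in card_customers

print(stolen_data("carol"))                  # contact details only
print(gets_free_credit_monitoring("carol"))  # False
```

Spelling the logic out like this is exactly what the breach notice fails to do: someone in group 1 but not group 2 lost contact details, qualifies for nothing, and can't tell any of that from the letter.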

Target really needs to clarify this.

It’s obvious that there are people whose names and addresses were stolen but who didn’t buy anything recently, who didn’t lose any payment card details in this breach and who therefore don’t qualify for the free credit monitoring.

Target has mixed together the warning sent to people who lost non-payment information only and the warning to those who lost payment card data as well.

That, in my opinion, is a recipe for confusion.

Don’t trust the caller

Secondly, if I were Target, I would not have said this:

Never share information with anyone over the phone, email or text, even if they claim to be someone you know or do business with. Instead, ask for a call-back number.

If you don’t know and trust someone who calls you, why would you trust any phone number or web URL they might give you?

Just bear in mind how successful the so-called fake support call scammers have been in recent years.

Those are the guys who phone you out of the blue, falsely claim that there is a virus on your computer, dishonestly use the Event Viewer to show you errors that “prove” their claim, and then fraudulently charge you $300 for cleanup.

If you ask for a number to call those guys back, or simply use the number that came up on your CallerID/CLI display, which amounts to the same thing, you will almost certainly be given a legitimate-sounding local landline number.

If you’re in Sydney, Australia, for example, the number will probably be something like +61.2.8xxx.xxx; if in Oxford, England, +44.1865.7xx.xxx; and so on.

And if you call that number back, guess what?

You will get through to the company that just called you.

Having an honest-looking local phone number doesn’t mean the caller is an honest, local person; the same applies to website domain names.

Domain names can, and frequently do, redirect anywhere; the same is true of phone numbers these days.

Don’t rely on information that could have come from a scammer to help you determine if that person is a scammer.

Use some objective, independent means of finding out the contact details of anyone who claims to represent a company you do business with and who asks you to disclose any personal information.

In the case of your bank, for example, you can probably find the number to call to dispute a payment card transaction on the card itself, or on a recent statement, or (as I notice many banks are doing now) on-screen at any ATM, without needing to insert a card.

And if you are an organisation that finds itself needing to call customers to look into suspicious activity on their account, try to do the right thing from your side.

Please avoid opening the call, as many companies still seem to do, by saying:

Hello, I am X from Y, investigating a possible problem with your account Z. Before we start, for security purposes, I need you to give me your date of birth and your security codeword.

Try a refreshingly different approach, such as this:

Hello, I am X from Y, investigating a possible problem with your account Z. But because you don’t know me, don’t tell me anything yet. I’m going to hang up; I’d like you to take out your {payment card, last statement, contract} and look in the top right corner where it gives an emergency contact number. Then call us back!

As for Target, let’s hope the company distinguishes the various parts of the breach a bit more clearly.

If the crooks got at your payment card data, did they get your home address and phone number, too?

If they got your address and phone number, should you cancel your cards even if you didn’t shop at Target recently?

Clarification along those lines would be very handy.

In the meantime, here’s a short podcast in which we offer some advice on dealing with suspicious callers who try to scam you or your friends and family over the phone:


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/gvI1Xmrv0Tw/