STE WILLIAMS

Microsoft releases latest Law Enforcement Requests Report

Microsoft has published its second Law Enforcement Requests Report, covering the first half of 2013.

The quick summary: not much increase over last year’s numbers.

A total of 37,196 requests were received, covering 66,539 separate users.

20.6% of the requests did not result in data being handed over, the bulk of the rejections being due to “no information found” rather than Microsoft resisting the demands of the police.

The US is well ahead of the field, with 7,014 requests affecting 18,809 users, but this is unsurprising given that the number of US citizens online far outstrips that of most other countries.

Other countries with lots of hits include the major economies of Western Europe, with France, the UK and Germany all scoring fairly highly.

Brazil and Australia come in at just over 1000 requests each; Spain and Italy just under.

A somewhat surprising second place goes to Turkey though, not far behind the US with 6,226 requests. Given Turkey’s estimated 36 million internet users, compared to over 250 million in the US, this may be a little worrying for any privacy-loving Turks out there.

However, those 6000-odd requests cover just 7,333 separate users, putting Turkey in the lower end of the table if sorted by the number of people covered by each request. The US is near the top by this measure, with an average of over 2.5 users covered by each request.
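
A quick back-of-the-envelope check of those ratios, using the figures quoted above:

```python
# Users-per-request ratios, from the report figures quoted above.
figures = {"US": (7014, 18809), "Turkey": (6226, 7333)}  # (requests, users)

for country, (requests, users) in figures.items():
    print(f"{country}: {users / requests:.2f} users per request")
    # US comes out at roughly 2.68, Turkey at roughly 1.18
```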

Skype data has been rolled into the main report, but is also published separately to allow comparison with previous numbers.

There was a slightly lower fail rate in the Skype requests than in the overall figures, with a notable rise in the ratio of requests rejected for “not meeting legal requirements” – 7.3% compared to 2.4% for Microsoft services in general.

As in 2012, no Skype content was handed over, with positive responses limited to user metadata such as names, regions and IP address information.

In most regions no content was provided for any Microsoft service, but the US is a significant outlier in this area, with over 10% of requests earning the cops access to email text, stored photos or documents.

This figure is a little down on the 2012 stats, which show that 13.9% of US requests led to actual content.

The only other regions with notable percentages in this area are Brazil, with 5.8%, and Canada on 4.3%, although the small overall number of requests from Canada means that only three actual instances of data were handed over. Both Brazil and Canada were among the very small number of countries that also got their hands on content in 2012.

In the absence of content, the metadata provided may include billing information and “IP connection history”, which can reveal a fair amount about what people are doing across different services, and indeed where they are over time.

But accessing actual content seems considerably more intrusive, especially in the case of emails and other personal communications. It is somewhat cheering to know that Skype conversations and chats have not been subject to disclosure.

At least, not so far, and not to the “law enforcement” agencies covered in the report.

Noticeably absent is data on national security-related requests from other government agencies such as the NSA: detailed information on requests from a range of agencies could apparently not be made available for legal reasons.

All we have is a vague summary of the numbers of “National Security Letters” issued, showing figures so far this year of under a thousand covering between 1000 and 2000 people.

This again suggests little change over last year, when MS saw somewhere between 1000 and 2000 “Letters” affecting 3000-4000 “identifiers” in the full twelve month period.

These Letters apparently only allow access to metadata and must come from “senior FBI officials.”

Microsoft stresses its commitment to providing more complete data, with a court case pending, but for now can say no more than that data passed to security agencies only affects “a tiny fraction” of users.

Following similar transparency reports from web giants Google, Facebook and Yahoo, Microsoft’s latest data, limited as it is, provides some food for thought on the extent to which law enforcement is using internet usage history to track down miscreants.

It would be good to know more about what those non-law enforcement types are snooping on, though.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/PlP9FzWDxh0/

HD Moore seeks crowd help for vuln scanning

Security outfit Rapid7 has decided that there’s just too much security vulnerability information out there for any one group to handle, so its solution is to try to crowd-source the effort.

Announcing Project Sonar, the company is offering tools and datasets for download, with the idea that the community will provide input into the necessary research.


The brainchild of Metasploit creator HD Moore, the aim of Project Sonar is to scan publicly-facing Internet hosts, compile their vulnerabilities into datasets, mine those datasets, and share the results with the security industry.

Even though there’s widespread insecurity across the Internet, Rapid7 says “at the moment there isn’t much collaboration and internet scanning is seen as a fairly niche activity of hardcore security researchers.

“We believe that the only way we can effectively address this is by working together, sharing information, teaching and challenging each other. Not just researchers, but all security professionals.”

None of the tools HD Moore’s blog post lists are brand-new: they’re familiar names like ZMap (led by the University of Michigan), Nmap and MASSCAN. The first three datasets Rapid7 collected for the project cover IPv4 TCP banners and UDP probe replies; reverse DNS PTR records; and SSL certificates.
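
To give a flavour of what mining such a dataset involves, here is a hypothetical sketch; the inline sample and its `ip,port,banner` layout are assumptions for illustration, not the actual Sonar file format:

```python
import csv
from collections import Counter
from io import StringIO

# Hypothetical sample in an assumed "ip,port,banner" layout; the real
# Project Sonar dumps are far larger and use their own formats.
sample = StringIO(
    "192.0.2.1,80,Apache/2.2.3\n"
    "192.0.2.2,80,nginx/1.4.1\n"
    "192.0.2.3,80,Apache/2.2.3\n"
)

# Tally how often each server banner appears across scanned hosts.
banner_counts = Counter(row[2] for row in csv.reader(sample))
print(banner_counts.most_common(1))  # [('Apache/2.2.3', 2)]
```

Scaled up to billions of records, even a tally this simple becomes a research project, which is Moore's point about needing the crowd.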

Moore told SecurityWeek it’s the size of the datasets that demands a crowd approach: “If we try to parse the data sets ourselves, even with a team of 30 people, it would take multiple years just to figure out the vulnerabilities in the data set,” he said. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2013/09/30/hd_more_seeks_crowd_help_for_vuln_scanning/

Commerce in a World Without Trust

Trust is kind of a squishy concept. If you refer back to the definition from our pals at Merriam-Webster, trust is the “belief that someone or something is reliable, good, honest, effective, etc.” Reliable? Honest? Sounds great, right?

Our world of increasingly frequent online commerce is based on trust. Your merchants need to trust that you are who you say you are. You trust that you’re dealing with the legitimate merchant or vendor you think you are. Ultimately the entire process depends on trust that your transaction will be accepted and that at some point you’ll receive goods or a service in exchange for your payment.

Of course, there has been fraud since the beginning of time. Identity theft makes it difficult for merchants to know who is actually buying something. Site scraping and phishing make it difficult for consumers to know if the site they are using is legitimate. A third party emerged to bridge the gap and provide financial protection to both sides of the online transaction — credit card brands (and their associated issuers) vouch for a consumer to the merchant and protect the consumer from a fraudulent merchant. For their 2-3.5% transaction fees, both merchants and consumers are _protected_ from fraud. As long as the card brands don’t suffer more loss than they make in transaction fees, the system works.
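
That economics can be sketched in a few lines; every rate and volume below is invented purely for illustration:

```python
# Toy model: the card network's margin is fee income minus fraud losses.
# All rates and volumes here are invented for illustration.
def network_margin(volume, fee_rate, fraud_rate):
    """Return fee income minus fraud losses on a transaction volume."""
    return volume * fee_rate - volume * fraud_rate

print(network_margin(1_000_000, 0.025, 0.010))  # positive: fees cover fraud
print(network_margin(1_000_000, 0.025, 0.040))  # negative: fraud overwhelms fees
```

The system holds together only while the second case stays hypothetical.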

But what happens when we hit the tipping point, when we don’t know who is who and online fraud is so rampant that the models the financial institutions use to make sure they don’t lose money on transactions become obsolete? If those models break down, transaction fees could skyrocket. Or maybe they would bottom out as aggressive financials look to gain market share (we’ve seen that movie before). No one knows what would happen.

After reading Brian Krebs’ totally awesome investigative piece on “Data Broker Giants Hacked”, we may be closer to that point than we wanted to believe. I mean, we always knew fraud was rampant, but reading about the SSNDOB service that traded in personal data takes it to another level, given the recent trends in authentication technology.

I know, you’re probably thinking: what’s the big deal? ChoicePoint got popped over 10 years ago and this is the same thing, right? Well, not so much. It turns out that many organizations (especially financial organizations) use adaptive authentication to reduce the risk of their transactions, which involves asking personal questions to validate a consumer’s identity depending on what they are trying to do.

If the attackers have access to the answers to many (if not all) of these standard questions, then you can be as adaptive as you want, but you still can’t be sure who is on the other end of a connection. Even better, many of the new healthcare insurance exchanges rolling out in the US heavily use this kind of adaptive authentication to validate citizens and offer services. Soon enough your dog may be online buying health insurance from one of these exchanges. Though I’m not sure if there will be a checkbox for ringworm on the medical history page.
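
The flaw is easy to see in a sketch of knowledge-based ("adaptive") authentication; the questions and answers below are invented, and the point is that the check passes for anyone who knows the answers, however they obtained them:

```python
# Hypothetical knowledge-based authentication check. The on-file answers
# are invented for illustration.
ON_FILE = {"mother_maiden_name": "smith", "previous_street": "elm st"}

def kba_check(answers):
    """Pass anyone whose answers match the personal data on file."""
    return all(answers.get(q, "").lower() == a for q, a in ON_FILE.items())

genuine_user = {"mother_maiden_name": "Smith", "previous_street": "Elm St"}
fraudster = dict(genuine_user)  # same answers, bought from a data broker

print(kba_check(genuine_user), kba_check(fraudster))  # True True
```

Once the "secret" answers are for sale, the genuine user and the fraudster are indistinguishable to the check.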

If we live by the old adage that the Internet is as secure as it needs to be, we need to ask whether we’re getting to the point where we have to reset our expectations of security. Do we have to fundamentally rethink our dependence on personal information for authentication, knowing full well that this data is easily accessible and not really a secret? Remember the old days when the social security number was a primary unique identifier and something you had to protect at all costs? Pete Lindstrom was early to point out the misplaced reliance on the SSN, since it’s neither unique nor hard for an attacker to get. It turns out he was right, and now we should be asking the same questions about all of this other personal information. Are your previous addresses and mother’s maiden name becoming as useless as the SSN?

If you think about alternative technologies, we’ve learned that biometrics will be a tough sell, as evidenced by Apple’s TouchID technology, so we’ll need to expect push back about centrally storing biometric information. Do the financial institutions just jack up their shrinkage estimates and adjust transaction fees accordingly? Do consumers become more aware and go back into brick and mortar stores? Although it’s not like personal data captured in the physical world has proven any more secure.

Some days I wish my crystal ball was back from the shop. If I had to bet, I’d bet on Mr. Market gradually adjusting transaction fees until it’s too expensive to do online commerce and that will result in a wave of new security/authenticity technology to make the Internet once again “as secure as it needs to be” and restore balance to the Force that is online commerce. Until then, monitor the crap out of your financial accounts because you can’t trust anyone or anything nowadays.

Mike Rothman is President of Securosis and author of the Pragmatic CSO

Article source: http://www.darkreading.com/management/commerce-in-a-world-without-trust/240161994

£1.01 billion kept out of cybercrooks’ hands, claim UK e-cops

The UK’s Police Central e-crime Unit (PCeU) has released its final “Financial Harm Reduction and Performance Report” – a breakdown of the cybercrime cases the unit has investigated, and how much money has been kept out of the hands of cybercrooks by their investigations.

The headline figures boast of an epic total of £1.01 billion (approx €1.2 billion, or $1.6 billion) over the last two-and-a-half years.

The PCeU is run by the London Metropolitan Police but, through several regional “hubs”, covers the whole of the UK for much of its cybercrime policing needs, alongside the “Cyber Unit” of the Serious Organised Crime Agency (SOCA).

It was set up in 2008, and in 2011 was given a target of reducing the financial harm done by cybercrime by £500 million within the next four years, a target it now reckons to have beaten resoundingly in just over half of that time.

The report is something of a farewell for the PCeU, whose work will be taken over by the new National Cyber Crime Unit of the National Crime Agency, coming online next month.

The advent of the new force in cyber policing has already been pumped by the publication of details of an arrest several months ago related to the “biggest ever” DDoS attack, and the report goes into more detail on a range of similar successes.

During the two-and-a-half year period the report covers, for example, 255 “persons” have been arrested, 126 suspects have been charged, with 89 convictions and 30 more people awaiting trial.

61 crooks have been jailed, for an average of 3 years apiece, disrupting the activities of 26 different “Organised Crime Networks” (aka “gangs”).

The financial damage mitigated by these investigations is estimated at £58 for every £1 spent on the PCeU, again well above the expected targets.

These monetary values are pretty tricky to figure out, though.

It is hard enough to pin down how much money has been netted by cybercrimes that have actually happened, with the associated costs of cleaning up and shoring up defences adding an extra layer of vagueness, and that’s without considering things like the financial impact of reputational damage.

Putting a value on crimes that might possibly have been committed were it not for the swift action of the boys in blue takes the art of prediction several steps further, risking a detour into straight-out guesswork.

To be fair to the cops, they have made quite some effort to work out their stats on a scientific basis, using a “Threat Reduction Matrix” developed in conjunction with academia and beancounting giant PwC, apparently making the PCeU a “beacon of excellence” in estimating the costs of crime.

The workings of the Matrix are not detailed thoroughly, but they go beyond the obvious things such as how much cash was in a compromised bank account.

Other softer factors are taken into account, using rough estimates of their value: for example, the cost of applying software upgrades to prevent an attack is put at £0.01 (1p) per company employee.

Other vague fudges seem to be included to cover things like the emotional damage done to victims of cybercrime, the cost of time lost investigating, cleaning up from and preventing future risk from cybercrime (including “crowding-out” costs which cover things you could have been doing during this time which might have earned you money), and even “Social intangible costs” – the damage done to society as a whole by the fear of cybercrime.

Even the actual money that might be stolen can only be vaguely estimated. In most cases, figures are worked out based on probabilities and past experience.

For example, in the case of a bank info phishing campaign, they might look at how many email addresses the gang had got hold of in a given period, the percentage of those addresses likely to match up with the banks being phished, how many of those users are likely to fall for the phish and how much on average those people might have in their accounts.

From that they would work out how much would be made if the gang carried on working at the same pace and with the same success rate for a further year, and use that as the figure for how much they stopped them from stealing.
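
The estimation method described above can be made concrete with a worked example; every input number here is invented purely for illustration:

```python
# Worked example of the phishing-loss estimation method described above.
# Every input number is invented purely for illustration.
addresses = 100_000       # email addresses the gang harvested this month
match_rate = 0.30         # share likely to bank with the phished banks
fall_rate = 0.05          # share of those likely to fall for the phish
avg_balance = 800.0       # average amount (in GBP) at risk per victim
months = 12               # extrapolate the observed month to a full year

prevented = addresses * match_rate * fall_rate * avg_balance * months
print(f"estimated harm prevented: £{prevented:,.0f}")
```

Each factor is itself an estimate, which is why the article suggests salting the headline figure to taste.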

It’s also worth noting that the figures are intended to cover only UK residents and businesses, and as the report points out, it can be hard, if not impossible, to work out from a list of email addresses which ones belong to residents of which country.

As we’re now basing estimates on probabilities calculated using estimates which are in turn based on estimated probabilities, it’s pretty clear that the £1.01 billion figure should be adjusted to fit your personal taste in saltiness.

Even if they’ve overestimated the figure hugely, though, it still looks as though the PCeU has done a decent job.

It would be interesting to see how this would look compared to the overall financial impact of cybercrime – have they prevented half of all the potential damage, or only 1%? – but that would mean even more wild guesswork.

Putting the potential iffiness of the numbers aside, the PCeU has assembled some fascinating case studies of a wide range of cybercrimes that have been planned, committed and investigated over the last few years.

In short, the full report is well worth a look.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/Clw3ZOa5YaQ/

Tech Insight: Top 4 Problem Areas That Lead To Internal Data Breaches

External data breaches from groups like Anonymous and internal data leaks from insiders such as Edward Snowden have enterprises questioning and rethinking their security programs. Are they doing enough to protect their data? Are their security controls effective? Would they be able to respond appropriately to a data breach and contain it quickly?

Much of the questioning and confusion has to do with executives not understanding where their critical assets are and how they need to be protected. Their sense of security is skewed by the fact that they’ve passed their compliance requirements, causing them to think they are safe. For most companies, if they were truly targeted by a sophisticated and determined attacker, they would fail miserably.

Why would they fail? Traditionally, security was focused on protecting the perimeter. Based on my experience with penetration testing organizations from all different industries, companies are doing a great job of locking down their externally exposed assets, with the exception of Web servers. There are fewer devices exposed and even fewer ports open that could provide an avenue for attack.

That sounds great, right? So, why would these companies fail at protecting their critically important data and business systems?

The first problem area is not knowing where all the critical assets are located inside the network, and thus not protecting them appropriately. All too often, when I ask during a penetration test what the critical systems are, I get several different answers depending on who is answering the question. The CIO will have a different answer than the security team leader, and both will differ from the various business unit owners.

Then once the testing begins, we find that there is little to no true network segmentation between the various organizational units, the servers, and general network devices. Most logical network separation is done because of physical separation between buildings, floors and geographic locations. It is not done from a security standpoint, and there are usually very few, if any, firewall rules between those networks.

To combat the problem, a risk assessment and a full inventory of all systems, including the types of data handled by each system, need to be completed. That information can then guide proper network segmentation. Of course, this can’t be done completely without also looking at the business processes and how users use and access the data. When those two processes are combined, access control for users can then be properly architected and implemented, which leads us to the next problem area.

The second issue that plagues many enterprises is that they don’t have a solid concept of what the “principle of least privilege” and “need to know” mean. Users regularly have a great deal more access and privilege than necessary to complete their jobs; this goes for secretaries and systems administrators alike (e.g., Snowden the snooping sysadmin). A company may take the proactive step of removing local administrator rights from users’ desktops, but not bother with the level of access granted in various internal applications and network file shares.

Properly designing those access controls can be difficult without already having the inventory and understanding of the business as mentioned above.

The third major area is security training and awareness for users. Having developed a security awareness program for a large university and worked with many different enterprise organizations, I’ve found the best way to get traction is to make it personal. Teach users easy, practical concepts that carry over between home and work. Many of the same protective behaviors they should be practicing at home can also help protect their corporate desktops and laptops.

The fourth issue, and one that is compounded by several of the others, is the presence of shared credentials and password reuse. Password reuse across local system accounts is one of the biggest problems we encounter during penetration tests. It allows us, and the bad guys, to easily move laterally within a company’s network once we compromise one system.

Or, once we compromise a user’s password, it is often the gateway to getting access to other systems and applications because users commonly reuse passwords across multiple company systems. You think single sign-on sounds great? It’s even more useful to an attacker with a valid username and password, because they can now get into everything with that one set of credentials.

User education and technical controls are needed to address both of these problems. The education piece needs to explain the problem and the impact to help instill a sense of responsibility and ownership. Being able to explain to a user exactly what could happen if their username and password were compromised, such as theft of corporate trade secrets that could result in their losing their job or the company going out of business, opens a few eyes.

Article source: http://www.darkreading.com/insider-threat/tech-insight-top-4-problem-areas-that-le/240161952

Schoolboy arrested over Spamhaus DDoS, world’s biggest cyber attack

In March 2013, a distributed denial of service (DDoS) attack of unprecedented ferocity was launched against the servers of Spamhaus, an international non-profit dedicated to battling spam.

A DDoS is an attack in which the servers of a targeted online service are flooded with so much junk traffic, such as pointless requests sent from many machines at once, that they slow to a crawl or stop responding altogether.

The March Spamhaus attack peaked at 300 gigabits per second, Spamhaus CEO Steve Linford told the BBC at the time – the largest ever recorded, with enough force to cause worldwide disruption of the internet.

In April, one suspect was arrested in Spain.

Now, it’s come to light, another suspect was also secretly arrested in April – this one being a London schoolboy.

The 16-year-old was arrested as part of an international dragnet against a suspected organised crime gang, reports the London Evening Standard.

Detectives from the National Cyber Crime Unit detained the unnamed teenager at his home in southwest London.

The newspaper quotes a briefing document on the British investigation, codenamed Operation Rashlike, about the arrest:

The suspect was found with his computer systems open and logged on to various virtual systems and forums. The subject has a significant amount of money flowing through his bank account. Financial investigators are in the process of restraining monies.

Officers seized his computers and mobile devices.

The boy’s arrest, by detectives from the National Cyber Crime Unit, followed an international police operation against those suspected of carrying out the massive cyber attack, which slowed down the internet worldwide.

The briefing document says that the DDoS affected services that included the London Internet Exchange.

The boy has been released on bail until later this year, the London Evening Standard reports.

The arrest follows close on the heels of two other London-based arrests resulting from international cyber-policing:

  • Last week’s arrest of eight men in connection with a £1.3 million ($2.08 million) bank heist carried out with a remote-control device they had the brass to plug into a Barclays branch computer, and
  • The arrest of 12 men in connection with a scheme to boobytrap computers at Santander, one of the UK’s largest banks, by rigging the same type of remote-control device found in Barclays – devices that enable remote bank robbery.

Truly, the UK isn’t fooling around when it comes to cybercrime – a fact it’s making clear with the robust work of the National Cyber Crime Unit, which itself will soon be rolled into the even more comprehensive National Crime Agency.

The National Crime Agency, due to launch on 7 October, is going to comprise a number of distinct divisions: Organised Crime, Border Policing, Economic Crime, and the Child Exploitation and Online Protection Centre, on top of also housing the National Cyber Crime Unit.

If the recent arrests are any indication, it would seem that the UK’s on the right track with cyber crime.

May cyber crooks, both the seasoned and the schoolboys, take heed.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/hX_zjQDtOdQ/

How to avoid being one of the “73%” of WordPress sites vulnerable to attack

A recent investigation has concluded that 73% of the 40,000 most popular websites that use WordPress software are vulnerable to attack.

The research, carried out by vulnerability researchers EnableSecurity and reported by WordPress security outfit WP WhiteSecurity, was conducted between Sept 12 and Sept 15, shortly after the release of the WordPress 3.6.1 Maintenance and Security Release.

WordPress is the most popular blogging and Content Management System (CMS) in the world and, according to WordPress founder Matt Mullenweg, it powers one in five of all the world’s websites.

As with any research of this kind we should apply a big pinch of salt.

In fact in this case we don’t need to supply our own salt because the research actually comes self-salted thanks to this hilarious rider at the bottom of the article:

The tools used for this research are still being developed therefore some statistics might not be accurate.

You have been warned.

So if the numbers might be wrong why am I bothering to reproduce them here? Because (in my opinion) they are probably true (well true-ish) and even if they aren’t they still highlight an important security issue which isn’t diminished one iota by their sketchiness.

As long as we go into this with our eyes open we’ll be fine.

The research did no more than set out to discover what versions of the popular CMS are in use by the top 1 million websites.

This singular focus is with good reason: the first rule of WordPress security is always run the latest version of WordPress.

If you aren’t running the very latest version of WordPress then the chances are you are running a version with multiple known vulnerabilities – bugs that criminals can use to gain a foothold on your system.

EnableSecurity’s scan of Alexa’s Top 1,000,000 discovered that 41,106 websites were running WordPress, a little over 4%.

They then determined that of those websites at least 30,823 were running versions of WordPress that have known vulnerabilities. From this they concluded that

73.2% of the most popular WordPress installations are vulnerable to vulnerabilities which can be detected using free automated tools.

Add your salt now.

Even if we take it as read that 73% of the sites are running vulnerable versions of WordPress we still can’t conclude that 73% are in fact vulnerable. There are common security strategies that the researchers didn’t test for, not least using a Web Application Firewall (WAF) that can put up a protective shield in front of vulnerable websites.

By the way, the first rule of WordPress security, always run the latest version of WordPress, holds true even for sites running behind a WAF. They are not mutually exclusive and should be considered as separate parts of a strategy of defence in depth.

In addition to skipping over reasons why the 73% might be a little on the high side the study also leaps acrobatically past a totally different set of reasons why it might be a bit on the low side.

The limited scope of the research meant that it didn’t account for other forms of automated attacks against WordPress installs such as targeting weak passwords or flaws in popular plugins.

As diaphanous as the study’s precision might be, the broad thrust is correct and it contains a useful message: users of WordPress need to be diligent about security because they are using software that is popular enough to be of interest to criminals who conduct large-scale automated attacks.

10 ways to keep your WordPress site secure

If you are running a website that uses WordPress here are 10 suggestions to help you avoid ending up in the 70% (or whatever large number it is) of vulnerable sites.

  • Always run the very latest version of WordPress
  • Always run the very latest versions of your plugins and themes
  • Be conservative in your selection of plugins and themes
  • Delete the admin user and remove unused plugins, themes and users
  • Make sure every user has their own strong password
  • Enable two factor authentication for all your users
  • Force both logins and admin access to use HTTPS
  • Generate complex secret keys for your wp-config.php file
  • Consider hosting with a dedicated WordPress hosting company
  • Put a Web Application Firewall in front of your website
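
As a footnote to the secret-keys item in the list above, here is one way to generate them locally (WordPress also provides an official online generator for this); the constant names are the standard wp-config.php keys, and this is a sketch rather than a full hardening recipe:

```python
# Generate random values for the standard wp-config.php secret keys.
import secrets

KEY_NAMES = (
    "AUTH_KEY", "SECURE_AUTH_KEY", "LOGGED_IN_KEY", "NONCE_KEY",
    "AUTH_SALT", "SECURE_AUTH_SALT", "LOGGED_IN_SALT", "NONCE_SALT",
)

for name in KEY_NAMES:
    value = secrets.token_urlsafe(48)  # 48 random bytes -> 64 characters
    print(f"define('{name}', '{value}');")
```

Paste the output over the placeholder defines in wp-config.php; rotating these keys also invalidates all existing login cookies.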

For more on the subject of patching WordPress have a listen to Sophos Security Chet Chat 117, the latest 15-minute installment in our regular podcast series.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/GdMSV9EkDH4/

Copying fingerprints, Firefox trusted, Facebook not, Yahoo recycles – 60 Sec Security [VIDEO]