
Cyber criminals have no borders, so neither should we

It’s great to be a part of the US’s National Cyber Security Awareness Month (NCSAM) again this year.

Back in the good old days of 2006, the Australian government was becoming increasingly aware of the growing risk of cybercrime, and in conjunction with like-minded organisations such as Sophos, the Australian Cyber Security Awareness Week was born.

What began with only a handful of partners and schools has grown into a serious event: in 2013 the week had over 1,400 partners, including around 700 schools.

Then, in 2011, the New Zealand government launched its Cyber Security Awareness Week to coincide with its Australian counterpart. Sophos was again a member of the steering committee.

One great component of the Australian week has been the distribution of the Budd:e Cyber Security education package, which helps students from years 3 to 9 adopt safe and secure online practices.

It is interesting to contrast the US ‘month’ with the ANZ ‘week’. A very serious but amusing colleague here at Sophos pointed out that “awareness just for the week or month is a bit like a ‘quit smoking afternoon’ being just a few hours when you don’t smoke, when it should be the point in time where you start never smoking again.”

Earlier this month I was fortunate to be able to present at the Singapore government’s conference – Govware – which has a strong affinity to Cyber Security Awareness Month. Governments all over the world are now engaged in similar events.

Given the global nature of the problem, with criminals potentially based in every country and utilising hijacked computers (zombies, often organised into botnets) scattered around the world, it is clear why no single country can own this.

Bringing together all the fine work of governments will be increasingly important to us all. No one entity owns the problem but we all own elements of the solution.

I believe that a key set of players in this fight must be the global providers of security products as, by our nature, we are not constrained by borders in the way national governments are.

Every day, SophosLabs finds over 250,000 new examples of malware from all over the world and then provides this protection to our global customers – this is a clear example of the span of reach that multinational companies have.

So it seems to me that cooperation between public and private resources is a sensible way to ensure we’re not outflanked by the bad guys.

Cross-country prosecution is often difficult due to a general fragmentation of effort, the challenge of borders and uncertain legislation. Allowing this to continue will increase the power of the criminals and make the future problem even greater.

To again draw on the smoking analogy – every day you continue smoking makes giving up harder.

No one country alone can win this battle, nor can one security provider. We all need to play our part.  Wherever we are, let’s get behind our governments’ efforts.

Image of passport stamps courtesy of Shutterstock.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/mg0BJ9i3h7A/

Apple releases iOS 7.0.3 – fixes yet more lockscreen holes, including a call

Soon after iOS 7 came out, a pair of holes in the lockscreen were outed and then quickly fixed in iOS 7.0.2.

It turns out that Apple didn’t fix future problems of this sort proactively, because the just-announced iOS 7.0.3 closes three more locked-phone holes.

The three bugs this time deal with similar problems to those patched in 7.0.2:

  • Another flaw in the emergency call feature, where hitting the call button at a carefully-planned moment lets you call any number, not just 911 or your local equivalent.
  • A passcode lockout bypass, so that crackers can continue trying passcodes even after the phone decides they’ve had too many goes and locks them out.
  • Access to the Contacts pane even when the phone is locked.

Interestingly, the bug fix for the emergency call problem is described as follows:

A NULL dereference existed in the lock screen which would cause it to restart if the emergency call button was tapped while a notification was being swiped and while the camera pane was partly visible. While the lock screen was restarting, the call dialer could not get the lock screen state and assumed the device was unlocked, and so allowed non-emergency numbers to be dialed. This issue was addressed by avoiding the NULL dereference.

If you are experiencing déjà vu, you should be, because you’ve seen this before, in the iOS 7.0.2 security notes:

A NULL dereference existed in the lock screen which would cause it to restart if the emergency call button was tapped repeatedly. While the lock screen was restarting, the call dialer could not get the lock screen state and assumed the device was unlocked, and so allowed non-emergency numbers to be dialed. This issue was addressed by avoiding the NULL dereference.

As we explained last time, NULL pointers (references to memory addresses) can’t be dereferenced – that makes no programmatic sense, since a NULL pointer is, as a matter of definition, one that doesn’t point anywhere.

When a program tries to dereference NULL, it’s almost impossible to determine what the programmer intended – who knows what memory location was supposed to be used instead? – so the operating system has little choice but to terminate it.

→ A NULL pointer usually means an uninitialised variable, or an ignored memory allocation error (a failure denoted by the special value NULL). In the former case, you’re trying to use memory without even trying to allocate it first; in the latter, you’re trying to use memory that you requested but never actually received.
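To make those two failure modes concrete, here is a minimal sketch in Python (iOS is not written in Python, of course – the point is the logic, not the language; Python’s None plays the role of NULL and raises an exception rather than crashing the whole process):

```python
# A minimal Python analogue of the two NULL-pointer causes described in the note above.
# Python's None stands in for NULL; "dereferencing" it raises an exception rather than
# crashing the process, but the underlying programming mistakes are the same.

class Buffer:
    def write(self, data):
        print("wrote", len(data), "bytes")

def allocate_buffer(size):
    # Hypothetical allocator: returns None (the "NULL" result) when it fails.
    return Buffer() if size <= 1024 else None

# Case 1: using a variable without ever initialising it with a real object.
buf = None
# buf.write(b"hello")        # AttributeError: 'NoneType' object has no attribute 'write'

# Case 2: an allocation failed and returned None, but the result was never checked.
buf = allocate_buffer(4096)  # too big, so we get None back
# buf.write(b"hello")        # same crash - we asked for memory but never received it

# The fix is simply to check before "dereferencing":
buf = allocate_buffer(4096)
if buf is None:
    print("allocation failed - handle the error instead of crashing")
else:
    buf.write(b"hello")
```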

So, correcting the NULL dereference wasn’t the wrong thing for Apple to do, but it clearly wasn’t enough to deal generically with this sort of lockscreen flaw.

When iOS 7.0.2 came out, we offered the following observations:

  • You can argue that Apple should make other software wait while the lockscreen is restarting, because of the key security function it performs.
  • You can argue that Apple should code things to fail closed: if the lockscreen software doesn’t know or can’t tell you whether the phone is locked or unlocked, treat it as locked, for security’s sake.

Of course, that’s easier said than done, because mobile phone regulators pretty much mandate some sort of bypass mechanism in a phone’s lockscreen.

That’s so emergency calls can be made any time the phone is powered up and in contact with the network. (You can even make 911 calls without a SIM card, for example).

That makes it hard to implement a lock screen “in reverse” – in other words, so that the phone is only unlocked when the lockscreen software is running, not the other way around – and it probably explains Apple’s reluctance to make big changes in the way the lock screen works for what is just a point release of iOS.
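As an aside, here is a minimal sketch of what “failing closed” might look like in a dialer. The function, the number list and the state values are hypothetical illustrations of the principle, not Apple’s actual code:

```python
# A sketch of "failing closed": if the dialer cannot determine the lock state
# (for example because the lockscreen is restarting), it treats the phone as locked.

EMERGENCY_NUMBERS = {"911", "112", "999"}

def may_dial(number, lock_state):
    """lock_state is 'locked', 'unlocked', or None when it cannot be determined."""
    if number in EMERGENCY_NUMBERS:
        return True                      # regulators require this path to always work
    if lock_state != "unlocked":
        return False                     # unknown (None) gets the same treatment as locked
    return True

print(may_dial("911", None))             # True  - emergency calls always allowed
print(may_dial("555-0199", None))        # False - state unknown, so fail closed
print(may_dial("555-0199", "unlocked"))  # True  - explicitly unlocked
```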

The flip side of that, if it’s true, is that iOS 7.0.3 ought to be uncontroversial, due to making only modest code changes inside the operating system.

In other words, if you are keen on security, you may as well make sure you grab this update as soon as you can, if your phone hasn’t done it for you already.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/q07Iz7pIlD0/

Security begins at home

My last post about two-factor authentication (2FA) got me thinking about another post for National Cyber Security Awareness Month (NCSAM).

While the last one dealt mostly with the ‘S’ in NCSAM, this one will also bring in a good measure of ‘A’.

My wife recently went back to work after spending a considerable amount of time away to look after our children.

With her work and home IT needs now converging on our family network, this got me thinking about security in a whole new way.

For over a decade now I’ve been responsible for maintaining security resources and advising Sophos customers and partners about security best practices.

I also do a fair bit of public speaking for Sophos on emerging threats and protection strategies and am always in contact with IT professionals and end users.

What I haven’t done so well is make sure that those closest to me get the same benefit from my experience.

While I practice what I preach, it occurred to me that my family doesn’t get the equivalent level of attention.

The old adage about the cobbler’s kids came surging to mind.

So here’s a checklist of what I did.

Getting started

The first step was to get a laptop and configure it with all the necessary tools.

My wife works for a company that provides online services and is fortunate to work from home most of the time.

It also means that she spends a considerable amount of time online and handling potentially sensitive information.

The company is a small start-up, so she is mostly on her own when it comes to providing and securing these tools.

The basics

Since she is comfortable with computers, but by no means an expert, I went with the sensible option of Windows 7 Pro with Microsoft Office and Chrome.

→ This isn’t an endorsement for the security, usability or performance of Chrome over any other browser. It was simply the browser she was most accustomed to and I didn’t want to change too many things all at once.

This combination makes my job much easier when it comes to off-the-shelf hardware, general availability of tools, patching and compatibility of software.

And of course, I also made sure that the laptop was running up-to-date anti-virus software.

Encryption

With all the software installed, it was time to think about disk encryption.

I chose BitLocker because it gives me full disk encryption built into the operating system.

(Linux and Mac users have similar built-in options in the form of cryptsetup and FileVault2.)

If you plan on having any sensitive information on a portable device, I highly recommend that you encrypt it.
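If you’re on Windows and curious whether BitLocker is already protecting a drive, one quick way to check is to query the built-in manage-bde tool. This is a convenience sketch of mine, not part of the setup described above; run it from an elevated prompt:

```python
# Query BitLocker status for the C: drive via the built-in manage-bde tool (Windows only).
import subprocess

result = subprocess.run(
    ["manage-bde", "-status", "C:"],
    capture_output=True,
    text=True,
)
print(result.stdout)
# The output should include lines such as "Conversion Status" and
# "Percentage Encrypted" that show whether the drive is fully encrypted.
```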

File storage and sharing

Next we looked at ways to securely share and store files in the cloud.

I’ve been using ownCloud for some time so I created an account on my server for my wife.

The benefit of ownCloud is that it allows me to control how and where the files are stored.

It also serves as a handy way to back up her files automatically by using the sync client, and works equally well on a smartphone.

If you prefer to use some of the available free cloud services for file storage and portability, make sure you understand how it’s all secured and consider adding your own layer of encryption as well.
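As an example of what “your own layer of encryption” can look like, here is a minimal sketch using the third-party cryptography package (pip install cryptography). The file names are made up and the key handling is deliberately simplistic:

```python
# Encrypt a file locally before it ever reaches a cloud sync folder, using the
# cryptography package's Fernet recipe (AES-based authenticated encryption).
from cryptography.fernet import Fernet

key = Fernet.generate_key()              # keep this key safe, and NOT in the cloud folder
fernet = Fernet(key)

with open("tax-return.pdf", "rb") as f:  # hypothetical sensitive file
    ciphertext = fernet.encrypt(f.read())

with open("tax-return.pdf.enc", "wb") as f:
    f.write(ciphertext)                  # this is the file you let the sync client upload

# Later, to recover the original:
# plaintext = fernet.decrypt(ciphertext)
```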

Awareness

Then came the end-user training.

This is where we talked about the benefits of complex passwords and using different passwords for every site you interact with.


Like many users, my wife at first balked at the concept of different (and complex) passwords for every site.

However, she’s been using a password manager, in her case, LastPass, for some time, so choosing new and secure passwords was easy.

The password manager also made adding two-factor authentication relatively painless.

Securing the network

Let’s not forget about the network.

At home, we use the free Sophos UTM Home Edition which looks after our firewall needs as well as providing web and email filtering, intrusion prevention and a VPN (virtual private network) for secure remote access.

Wi-Fi

Since we’re talking networks, I should mention that our home wireless network is also set up with security in mind.

I have nearly 20 devices that require connectivity, and although I still use wired Ethernet for some devices, for others, Wi-Fi is my only choice.

With that in mind, I selected WPA2 Personal as my security mode, with a 20-character passphrase.

Sure, it’s long and complex, but I only had to enter it once on each device – the devices are good at remembering it so I don’t have to.
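If you’d rather generate a passphrase like that than invent one, Python’s standard-library secrets module will do it. This is just a quick sketch, and the character set is my own choice:

```python
# Generate a random 20-character Wi-Fi passphrase using the secrets module,
# which is designed for cryptographic randomness (unlike the random module).
import secrets
import string

alphabet = string.ascii_letters + string.digits
passphrase = "".join(secrets.choice(alphabet) for _ in range(20))
print(passphrase)
```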

Smartphone protection

I encrypted my wife’s smartphone too, and ensured she had better than a four-digit passcode to unlock it.

After all, she receives work and personal email on this device.

While I was at it, I installed the Google Authenticator app so we could add two-factor authentication to all of her social media sites – especially Facebook and Twitter, which she uses both for work and for play.
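For the curious, the codes Google Authenticator shows are time-based one-time passwords (TOTP, RFC 6238), and the whole scheme fits in a few lines of standard-library Python. This is a sketch for illustration; the secret below is a well-known example value, not a real account secret:

```python
# A standard-library sketch of TOTP (RFC 6238), the algorithm behind the six-digit
# codes in Google Authenticator.
import base64
import hashlib
import hmac
import struct
import time

def totp(base32_secret, digits=6, period=30):
    key = base64.b32decode(base32_secret, casefold=True)
    counter = int(time.time()) // period          # number of 30-second steps since epoch
    msg = struct.pack(">Q", counter)              # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                    # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))   # six digits that change every 30 seconds
```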

Was it worth the trouble?

This was an interesting exercise, and well worth the time I spent on it.

My wife will undoubtedly be safer and more secure online; her employer’s data will be safer, too, thus spreading the benefits well beyond our own network.

It also provided me with a good checklist to go out and evaluate the security posture of my friends and family.

After all, if I’m going to provide them with technical support, I might as well make sure they’re standing on a good foundation.

Now, time to go explain elliptic curve cryptography to the kids!


Image of Wi-fi antenna thingy courtesy of Shutterstock.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/ynyUgz_2gsU/

Young drivers are especially vulnerable to ‘ghost brokers’

From the moment I first passed my driving test, I realised how expensive motoring could be. Just buying a car and filling the tank with petrol took my entire childhood savings. Thankfully, the yearly road tax and insurance costs weren’t too high.

Now though, as my own kids approach the age of theory tests and driving tuition, the costs have rocketed up.

The largest expense these days appears to be insurance. Just the other day I was looking at quotes with my eldest and we discovered that the most basic third party cover he could obtain would cost him six times the value of the car he was thinking of buying.

It’s no wonder, then, that the vast majority of motorists will invest at least some time in acquiring the best value policy they can find.

In many ways, the advent of the internet has helped in this regard – comparison sites have sprung up, offering a multitude of insurance policies presented in a format that allows the cheapest deal to be quickly identified.

Likewise, social media posts and traditional classified adverts can also offer tempting deals for those who need cover.

Unfortunately, however, some youngsters have found that things are not always as they seem with such offers.

As the cost of car insurance rises against a backdrop of youth unemployment and stagnating wages, many young drivers have been tempted to pay large up front fees for policies which are significantly cheaper than other quotes they have been given.

Of course there is an adage that they should have considered before parting with their money – “if something sounds too good to be true… it probably is” – and that is very much the case with a form of car insurance fraud known as “ghost broking.”

The UK’s police insurance fraud unit says it is seeing an increasing number of such scams, which particularly target younger drivers: their annual premiums are guaranteed to be on the large side and therefore make the most profit for the criminals.

Offering significant savings, the ghost policies are actually worthless and could leave the purchaser open to six penalty points on their driving licence for driving without insurance.

Additionally, should a driver who wrongly believes they are insured be unfortunate enough to be involved in an accident, they would quickly discover that any claims for vehicle damage or personal injury would have to come out of their own pockets.

The victims of ghost broking rarely know that their insurance policies are not valid, only discovering the truth after an accident or when stopped by the police.

Talking to BBC Newsbeat, one victim, Peter Townsend, said:

I went online. I was just having a browse about and a website came up where you fill in a form and they call you back.

This company called me back with quite a good quote, just short of £1,600, where the others were about £2,000.

After making an initial payment of £750 the 19-year-old felt he had got himself such a good deal that he decided to return to the same website a month later to obtain a quote for his sister.

Instead of the page he was expecting to see, he saw a blog warning that the original site was a scam. He rang the DVLA (Driver and Vehicle Licensing Agency) who told him that the insurance policy he had purchased was bogus and that he was in fact driving around uninsured.

Estimates suggest that over 20,000 drivers in the UK may be blissfully unaware that their current insurance cover is worthless, though with most victims of ghost broking being unaware that they have been conned it’s hard to get an accurate number for how many such policies may have been sold.

Earlier this month, 27 people were arrested in stings across the UK, suspected of being involved in ghost broking.

DCI Dave Wood, head of the Insurance Fraud Enforcement Department (IFED), said at the time:

The consequences for innocent motorists who fall victims to ghost brokers can be dire, so it is absolutely vital that drivers shopping for car insurance online, or through other means, question what they are being offered to ensure they get a real deal.

While it is understandable that young drivers will wish to save as much money as possible, especially on such expensive and necessary insurance cover, this is one area where it is vital not to get ripped off – the financial and legal consequences are far too severe to risk it.

If you are looking to take out a new insurance policy then make sure you do your homework and only buy from a reputable company.

If buying through the web please ensure that you proceed with caution. Only buy a policy from a company’s official site and do not be tempted by deals that look too good to be true. There is every chance that they are.


Image of young driver courtesy of Shutterstock.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/ID---Ky-yD4/

DARPA slaps $2m on the bar for the ULTIMATE security bug KILLER


It’s a bad day for the vulnerability scanning industry: DARPA has announced a new multi-million-dollar competition to build a system that will be able to automatically analyze code, find its weak spots, and patch them against attack.

Mike Walker, DARPA program manager, said that the challenge was to start a “revolution for information security” and said that today’s detection software left much room for improvement.


“Today, our time to patch a newly discovered security flaw is measured in days,” he said in a statement. “Through automatic recognition and remediation of software flaws, the term for a new cyber attack may change from zero-day to zero-second.”

Teams have until January 14, 2014, to put themselves forward, then they’ll be expected to come up with tech that can scrutinize and patch a system without any human intervention. Up to $750,000 in funding will be available to teams that have plausible designs for fixing security holes in a basket of commercially available software; early trials will take place this December to weed out weaker applicants.

The competition’s final will be held in early to mid-2016. The submitted vulnerability scanners must automatically find and patch flaws in code in real-world conditions in order to win: a cash prize of $2m awaits the best-performing team, with $1m for the runner-up and $750,000 to console third place.

The agency hopes its Cyber Grand Challenge will encourage the development of systems that mimic the abilities of programmers skilled in reasoning their way to finding code flaws. The security industry is still basing much of its work on reactive signature-spotting tech, DARPA said, rather than building heuristic programs that identify a problem before it becomes one.

“The growth trends we’ve seen in cyber attacks and malware point to a future where automation must be developed to assist IT security analysts,” said Dan Kaufman, director of DARPA’s information innovation office.

DARPA likened the competition to its earlier Grand Challenge, which spurred the development of autonomous vehicles nearly a decade ago. While that certainly helped the self-driving car industry along, this new challenge may cause some problems for the vulnerability-scanning industry.

For the larger firms that have built a lucrative industry from signature-based scanning, the announcement is a warning of tough times ahead. If someone does build a system capable of finding and patching flaws far faster than anything on the market, that business model is doomed.

On the other hand, for independent security researchers, things could be looking very good indeed. The cash on offer gives a strong incentive for novel approaches, and maybe some good will come of casting bread upon the waters, as Robert Heinlein suggested.

“Automated patching within seconds? Sounds like a great idea, and I can imagine it working well on the Starship Enterprise,” security watcher and former Sophos specialist Graham Cluley told El Reg.

“However, in reality I suspect this would be very difficult to achieve in a way which would win the confidence and trust of large businesses. Good luck to them – but I’m not holding my breath.”

DARPA is not claiming any control over the technology demonstrated in the challenge, just the right to license it on reasonable terms. Non-US teams are invited to participate, subject to export laws and security controls.

Most of the self-driving car team that won that DARPA challenge ended up at Google on plush salaries, so some seriously talented security savant might face a seriously large payday that makes the agency’s cash prize look paltry in comparison. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2013/10/22/darpa_sets_2_million_cash_prize_for_the_ultimate_vulnerability_scanner/

Visualizing Security Analytics That Don’t Stink

When it comes to sifting through an inordinate amount of security data in order to make informed decisions, success depends not just on how one slices and dices that data via algorithms and analysis. Equally important is how that data is eventually presented, whether to IT operations staff making daily decisions, to IT leaders developing strategic initiatives, or to higher-level executives who hold the purse strings.

As with many other analytics programs, data visualization is about more than producing pretty charts. Good graphical interpretation of data, and an effective selection of which data to use to tell the relevant stories, can mean the difference between timely decision making and simply succumbing to an exercise in numerical futility.

“Data visualization is an important tool in security analytics, because you often don’t know exactly what you’re looking for,” says Dwayne Melancon, chief technology officer for Tripwire. “The human brain is very good at seeing anomalies in large groups of data, and interacting with the data visually taps into that strength. After all, a lot of security is finding small, suspicious occurrences within a sea of ‘normal’ events – and visualizations are a great way to do just that.”

According to data scientists, effective data visualization starts with choosing which numbers tell the story. One effective way to offer digestible visualization is to look for analytical means of reducing the dimensions of the data, says Ram Keralapura, data scientist for Netskope, a cloud apps analytics and policy creation company.

“So how do we actually show information in a compact form?” says Keralapura. “One of the ways we do that is by collapsing multiple dimensions into a single dimension or at least fewer dimensions so the end user can more easily understand what’s happening.”

For example, his company monitors dozens of different factors that go into how risky a cloud connection might be, including the types of security certifications an organization holds, its auditing policies, its notification policies and so on. Rather than just throwing those numbers at customers in a massive table for every possible cloud connection, it developed what it calls a Cloud Confidence Index, a single number that rolls all of those points up into one score.
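To make the roll-up idea concrete, here is a toy sketch. The factor names, weights and scoring scale are invented for illustration and are not Netskope’s actual index:

```python
# Collapse several per-vendor risk factors into one score, in the spirit of the
# roll-up index described above. Factors, weights and scale are invented.

factors = {                           # each factor scored 0.0 (poor) to 1.0 (good)
    "security_certifications": 0.9,
    "audit_policy":            0.6,
    "breach_notification":     0.4,
    "encryption_at_rest":      1.0,
}
weights = {                           # how much each factor matters to the final score
    "security_certifications": 0.35,
    "audit_policy":            0.25,
    "breach_notification":     0.20,
    "encryption_at_rest":      0.20,
}

score = sum(factors[k] * weights[k] for k in factors) / sum(weights.values())
print(f"confidence index: {round(score * 100)}/100")
```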

Obviously, that’s just a first step to good visualization. Even more important is establishing an effective graphical representation of a data set, so that a data user can take in the important points at a glance rather than scanning through pages and pages of raw numbers or Excel spreadsheets.

“Human beings tend to be good at perceiving patterns, especially visually; we learn to recognize faces at a young age, for example, and then spend the rest of our lives seeing them in clouds, wood grain, burn patterns in toast, and so on,” says Kevin O’Brien, enterprise solution architect for CloudLock. “What this reveals is that our brains are incredibly well tuned towards this type of behavior along a specific sensory axis — sight. By translating fairly esoteric text into visual information, we can tap into that “rapid response” mechanism more readily, and make decisions based on it.”

Unfortunately, many security tools today simply offer numbers in grid formats or spreadsheets, says Shawn Tiemann, solutions engineer for LockPath, explaining that running through a “pile of vulnerabilities” means reading through thousands of items.

“Visualization makes it more digestible and easier to consume so a CISO or director of security can make informed decisions about the business without losing 10 to 20 hours of their life going over little nitty gritty details of those items,” he says.

One example of this is the traditional heat map method of visualization, says Keralapura, who explains that this can be useful for something like monitoring source and destination IP addresses.


“If you’re looking at total number of connections that they’re using, a heat map is absolutely the right visualization in that context to be able to say, ‘these are the heavy hitters and these are the ones that exchange the most traffic and so on,'” he says.
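Here is a minimal sketch of that kind of heat map using matplotlib; the IP addresses and connection counts are made up for illustration:

```python
# Render connection counts between source and destination IPs as a heat map.
import numpy as np
import matplotlib.pyplot as plt

sources = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]
dests = ["203.0.113.5", "198.51.100.7", "192.0.2.9"]
counts = np.array([[120,   3,  0],       # row = source, column = destination
                   [ 15, 900,  2],
                   [  0,   4, 60]])

fig, ax = plt.subplots()
im = ax.imshow(counts, cmap="hot")
ax.set_xticks(range(len(dests)))
ax.set_xticklabels(dests, rotation=45, ha="right")
ax.set_yticks(range(len(sources)))
ax.set_yticklabels(sources)
fig.colorbar(im, ax=ax, label="connection count")
plt.tight_layout()
plt.show()                               # the bright cells are the "heavy hitters"
```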

Tiemann says he’s also a fan of tree mapping, which allows a “true drill-down experience.”

“Using that vulnerability security data as an example, you could start at a high level of how severe it is and then maybe click on high-ranking vulnerabilities and from there see what’s new versus what’s existing or drill into which scanner supplied the data and what business units those vulnerabilities exist in,” he says. “With a tree map you can distill that information down to see where the problem exists geographically all the way down to which assets they exist in.”
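A treemap with that kind of drill-down can be sketched with the third-party plotly package; the severities, scanner names and counts below are invented:

```python
# A drill-down treemap of vulnerabilities by severity, then by the scanner that
# reported them. Install dependencies with: pip install plotly pandas
import pandas as pd
import plotly.express as px

df = pd.DataFrame({
    "severity": ["High", "High", "Medium", "Medium", "Low"],
    "scanner":  ["Nessus", "Qualys", "Nessus", "Qualys", "Nessus"],
    "count":    [42, 17, 120, 88, 300],
})

fig = px.treemap(df, path=["severity", "scanner"], values="count")
fig.show()   # click a severity tile to drill down into the scanners beneath it
```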

As security departments look for tools that can do the heavy lifting of translating constantly changing data into visualizations, some might buy tools built specifically for data analysis, such as IBM Cognos or Maltego. They could also work with other departments, such as a business analytics team that might already have access to these tools and to data scientists who can tailor them for security applications. But security departments should also be leaning on their vendors to offer built-in visualization tools within their products, Tiemann says, explaining that they should look not only for good charting but also for easy ways to get charting tailored to the data user’s role in the organization, because the type of data and how it is presented should change between the CEO, CIO, CISO and IT operations staff.

But IT departments and security pros don’t necessarily need to invest in expensive tools to get started with better security storytelling through visualizations. Sometimes, particularly when you’re pitching for more budget or a change of process to higher-ups, it pays to invest the time in some manual design of data visuals, says J.J. Thompson, CEO and managing director of Rook Consulting. He says he’s gotten clients to make much quicker decisions about buying into projects or changing processes by switching from multiple-slide PowerPoint decks to a single infographic-like one-pager that tells the same story graphically.

“What we’ve found is if we can forward one thing that someone can glance at and understand what’s going on, what the value proposition is and what next steps look like, that tends to get approved quickly,” he says. “It’s not useful for everything, but it is useful for demonstrating progress in where you’re at, for capabilities overviews or for spotting anomalies in data.”

He recommends that security practitioners look at sites like visual.ly for ideas of how infographics work and then search online for template tools to help build out simple visualizations. He and his team also invested in Adobe tools to make more sophisticated graphics.


Article source: http://www.darkreading.com/visualizing-security-analytics-that-dont/240162973

Facebook re-allows the posting of decapitation videos with ‘WARNING’

Video clips depicting beheadings have been allowed back onto Facebook.

Facebook temporarily banned decapitation clips in May after receiving complaints about the potential of long-term psychological damage from watching such horrific material.

According to the BBC, the company has now relaxed its stance.

Violent content, including beheadings, is now allowed, as long as the intent is to raise awareness rather than to celebrate violence.

Earlier this week, Facebook told news outlets that it was considering adding warnings to such content. Here’s what it told the BBC:

Facebook has long been a place where people turn to share their experiences, particularly when they’re connected to controversial events on the ground, such as human rights abuses, acts of terrorism and other violent events.

People are sharing this video on Facebook to condemn it. If the video were being celebrated, or the actions in it encouraged, our approach would be different.

However, since some people object to graphic video of this nature, we are working to give people additional control over the content they see. This may include warning them in advance that the image they are about to see contains graphic content.

It didn’t take them long.

After facing a backlash since the news spread on Monday, Facebook moved quickly to implement such warnings, which will appear before graphic videos.

The message reads:

WARNING!
This video contains extremely graphic content and may be upsetting.

Regardless, Facebook’s decision to allow this content is still not sitting well. The BBC quoted one suicide prevention charity that condemned the move.

Dr Arthur Cassidy, a former psychologist who runs a branch of the Yellow Ribbon Program in Northern Ireland, told the BBC that such material can quickly leave scars:

It only takes seconds of exposure to such graphic material to leave a permanent trace – particularly in a young person’s mind.

The more graphic and colourful the material is, the more psychologically destructive it becomes.

The BBC reports that two of Facebook’s official safety advisers have also criticized the decision.

The backlash to the violent video is widespread: it includes at least one Facebook advertiser, charities that support kidnapping victims and their families, and the South Australia Police Force.

For its part, the car-sharing firm Zipcar denounced the beheading video and Facebook’s decision to publish the content.

The BBC quoted a statement Zipcar put out on the matter:

We want you to know that we do not condone this type of abhorrent content being circulated on Facebook.

We have expressed to Facebook in the past the critical need to block offensive content from appearing and we will continue to engage with them on this important matter.

Facebook has since disabled Zipcar and other firms’ ads from appearing on the page in question.

Facebook policy specifically prohibits “photos or videos that glorify violence or attack an individual or group.”

The BBC was alerted to the change in policy when a reader pointed out that Facebook had refused to remove a page showing a clip of a masked man killing a woman, believed to have been filmed in Mexico.

The BBC reports that the video was posted last week under the title, Challenge: Anybody can watch this video?

I couldn’t find the video on Facebook, but Gizmodo coverage features a still photo, apparently taken from the Facebook post, the caption for which implies that the beheading victim was killed for cheating on her husband.

A Facebook spokesperson explained to Gizmodo that the Boston Marathon bombing illustrates an instance wherein posting violent content serves the greater good:

There was a gentleman whose legs had been blown off. If we’d had a more conservative stance, that image would not have been allowed on the site.

What we want to do is give folks the right balance of being able to control what it is they’re seeing. We’re definitely aware that this is not the perfect policy. We’re always trying to improve it.

I can sympathize with the difficulties Facebook faces in both supporting the free exchange of ideas and in avoiding publishing material that appeals to the most base examples of human curiosity (as Gizmodo points out, its photo of the beheading victim shows that it was shared almost 18,000 times).

But as many have pointed out, Facebook’s policy has jarring contradictions.

Posting violent content – be it photos depicting a terrorist attack victim or a murder – passes muster, whereas nudity is verboten, including images of females’ breasts unless they depict an infant latched on and suckling.

Can we blame Facebook for its strange prudishness about the human breast?

I think not. As Facebook points out, it reviews photos almost exclusively based on Facebook members’ complaints about them being shared on Facebook.

We are, evidently, complaining to Facebook about the wrong things, given that Facebook is allowing beheadings but not the human breast.

Image of Facebook and censored female courtesy of Shutterstock.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/baN8um1uhro/

First Lavabit, now CryptoSeal pulls the plug: VPN service axed


VPN service CryptoSeal has followed Lavabit’s example and shuttered its consumer service, saying its CryptoSeal Privacy service architecture would make it impossible to comply with a government order without handing over the crypto keys to its entire system.

The company, which will continue offering business services, made the announcement via a notice to users trying to log into the service, which has also been posted to Hacker News (ycombinator).


“With immediate effect as of this notice, CryptoSeal Privacy, our consumer VPN service, is terminated. All cryptographic keys used in the operation of the service have been zerofilled, and while no logs were produced (by design) during operation of the service, all records created incidental to the operation of the service have been deleted to the best of our ability,” the notice states.

Referring to the pen register issues that drove Lavabit’s decision to close, the post continues: “Our system does not support recording any of the information commonly requested in a pen register order, and it would be technically infeasible for us to add this in a prompt manner. The consequence, being forced to turn over cryptographic keys to our entire system on the strength of a pen register order, is unreasonable in our opinion, and likely unconstitutional, but until this matter is settled, we are unable to proceed with our service,” it continues.

Founder Ryan Lackey confirmed the shutdown on Twitter.

Paid subscribers are offered a one-year subscription to a non-US VPN service or a refund of their balance. CryptoSeal says it’s looking at legal ways to relaunch a consumer service, and subscribers will also be offered a year of free service should it bring a new service to market. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2013/10/22/cryptoseal_shutters_consumer_vpn_service/
