
DARPA’s plan to make software security "the domain of machines"

The Defense Advanced Research Projects Agency (DARPA) is looking for a superhero who can take on one of the trickiest problems in computer security.

Human applicants need not apply.

The skunkworks team that brought you (amongst many, many other things) the Internet, Deep Web search engines, robot snakes, thought control, bionic exoskeletons and armed drones has set its sights on “making software safety the expert domain of machines.”

One of the fundamental challenges of cybersecurity is what’s known as the Fortification Principle; the economics are stacked in favour of an attacker because a defender must guard against every possible attack vector whilst an attacker only has to succeed once.

The patching problem is already significant – there aren’t enough bodies to go around, so bugs like Heartbleed and Shellshock can sit unobserved in critically important software for years.

According to DARPA, we’re “building [a] connected society on top of a computing infrastructure we haven’t learned to secure” and it’s about to get a lot worse thanks to the so-called Internet of Things (IoT).

As we add light bulbs, fridges, thermostats, cars and electricity grids to our global computer network, insecurity is “making its way into devices we can’t afford to doubt.”

DARPA’s answer to the Fortification Principle’s fundamental asymmetry is, unsurprisingly, automation and Artificial Intelligence (AI):

Today’s attackers have the upper hand due to the problematic economics of computer security. Attackers have the concrete and inexpensive task of finding a single flaw to break a system. Defenders on the other hand are required to anticipate and deny any possible flaw – a goal both difficult to measure and expensive to achieve. Only automation can upend these economics.

The robot-loving boffins want supercomputers that can analyse billions of lines of hitherto unseen code, find its most deeply hidden security flaws, and fix them without their creators having to so much as mop their AI’s sweatless brows.

We’re a long way from that point right now.

On the other hand, if we weren’t, then DARPA wouldn’t be getting its hands dirty – because that’s what DARPA does: try to close the gap between where we are and where we need to go.

DARPA’s plan to speed up this particular journey is called the Cyber Grand Challenge – a series of competition events that started last year and will culminate in the world’s first all-computer Capture The Flag contest in 2016.

The competitors will duke it out in each event with no human involvement.

The tournament “where automated systems may take the first steps towards a defensible, connected future” will be held alongside the 2016 DEF CON Conference in Las Vegas, and the winning system will net its creators a prize of $2,000,000.

And what then?

The DARPA cybersecurity program manager who’s running the contest gave an interview last year to the New York Times in which he made an analogy with the progress of computer chess.

Deep Blue became the first computer to defeat a world chess champion in a six-game series in 1997, forty-seven years after Claude Shannon first outlined a plan for a competitive chess program.

If automated cybersecurity is on the same path then right now it’s still somewhere very near the start line.

So we’re a long way from fully autonomous, adaptive cyber-defense, but DARPA is doing no harm to its reputation as the organisation most likely to usher in our new robot overlords by accident.


Image of Giant evil robot destroying the city courtesy of Shutterstock.

