
Dropbox’s tool shows how chatbots could be future of cybersecurity

Disillusionment with chatbots has set in across the tech industry, and yet Dropbox’s deep thinkers believe they have spotted the technology’s hidden talent: cybersecurity.

The company is so sure of the concept that it has announced plans to deploy something called Securitybot inside the Slack collaboration platform as a way of smoothing how its workforce interacts with a daily flow of security alerts and queries.

For security staff and employees alike, alerts have become a time-consuming hassle because users are often interrupted to verify what they are doing. Says Dropbox:

Alerts can lead to a deluge of information, making it difficult for engineers to sift through. Even worse, a large number of these alerts are false positives, caused by engineers arbitrarily running [Linux commands] sudo -i or nmap.

A year ago, someone at Slack suggested the answer: get a chatbot hosted on Slack to do the verification instead. Inspired, Dropbox built Securitybot.

For those not familiar with Slack, it is a collaboration platform that integrates channels such as IRC chat, file-sharing, direct messages and even Twitter feeds into one searchable system. Enthusiasts think the idea might one day be big enough to challenge email.

When an alert pops up from one of Dropbox’s security systems, Securitybot automatically sends the employee a message through Slack asking them to confirm the action, and collects the response. Employees must also authenticate themselves using SMS-based two-step verification, so anyone unable to do so immediately stands out.
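To make that flow concrete, here is a minimal sketch of how such a verification ping might be wired up in Python against Slack’s chat.postMessage Web API. This is not Dropbox’s actual Securitybot code: the SLACK_BOT_TOKEN environment variable, the ask_user_to_verify helper, and the example user ID and alert text are all assumptions made for illustration.

```python
import os

import requests

SLACK_API = "https://slack.com/api/chat.postMessage"
SLACK_TOKEN = os.environ["SLACK_BOT_TOKEN"]  # bot token with the chat:write scope (assumed)


def ask_user_to_verify(slack_user_id: str, alert_description: str) -> None:
    """Send the employee a Slack message asking them to confirm the flagged action.

    In a production bot you would typically open a DM conversation first
    (conversations.open) and then post into the returned channel ID.
    """
    message = (
        "Our monitoring flagged the following activity on your account:\n"
        f"> {alert_description}\n"
        "Reply *yes* if this was you, or *no* if it wasn't. "
        "You'll then be asked for a two-factor code to confirm."
    )
    resp = requests.post(
        SLACK_API,
        headers={"Authorization": f"Bearer {SLACK_TOKEN}"},
        json={"channel": slack_user_id, "text": message},
        timeout=10,
    )
    resp.raise_for_status()
    if not resp.json().get("ok"):
        raise RuntimeError(f"Slack API error: {resp.json().get('error')}")


# Example: an alert from a host-monitoring system triggers the DM.
if __name__ == "__main__":
    ask_user_to_verify("U0123456789", "interactive root shell (`sudo -i`) on prod-web-3")
```

In a real deployment the bot would presumably also listen for the reply (for example via Slack’s Events API) and escalate to the security team if the employee answers “no” or fails the two-factor check.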

After testing Securitybot for some months, Dropbox claims it can now more rapidly separate important alerts from the larger number of routine ones.

Responding to a polite chatbot is much easier than responding, in full sentences, to a member of the security team. It not only saves our security engineers time but also all of our employees.

A caveat is that organisations must invest in two-step verification, without which there is no way to authenticate that a user is who they say they are. But because Securitybot is open source, any organisation using Slack can benefit from it.

As far as we’re aware, this will be the only open-source project to automatically confirm and aggregate suspicious behaviour with employees on a distributed scale.
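Returning to the two-step verification caveat: the article does not detail how Dropbox handles the SMS step, so the following is only a generic sketch of a short-lived one-time-code flow. The send_sms function is a hypothetical stand-in for a real SMS gateway, and the in-memory store, six-digit code and two-minute expiry are illustrative choices, not anything Securitybot prescribes.

```python
import hmac
import secrets
import time

CODE_TTL_SECONDS = 120
_pending: dict[str, tuple[str, float]] = {}  # user_id -> (code, issued_at)


def send_sms(phone_number: str, text: str) -> None:
    """Hypothetical stand-in for a real SMS gateway; replace with your provider's client."""
    print(f"SMS to {phone_number}: {text}")


def issue_code(user_id: str, phone_number: str) -> None:
    """Generate a short-lived one-time code and text it to the employee."""
    code = f"{secrets.randbelow(1_000_000):06d}"
    _pending[user_id] = (code, time.time())
    send_sms(phone_number, f"Securitybot verification code: {code}")


def verify_code(user_id: str, submitted: str) -> bool:
    """Check the code the employee typed back into Slack, within the TTL."""
    entry = _pending.pop(user_id, None)
    if entry is None:
        return False
    code, issued_at = entry
    if time.time() - issued_at > CODE_TTL_SECONDS:
        return False
    # Constant-time comparison avoids leaking the code through timing differences.
    return hmac.compare_digest(code, submitted)
```

The one-time code is consumed on the first verification attempt and compared in constant time, two small details that matter more than the code’s length in practice.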

An intriguing question is whether a Securitybot, or something like it, could be used to verify and interact with any internet user and not just those working internally for companies.

Today, users are increasingly assailed by alerts (for example, when accessing Google on a new device) but these are universally static and informational. The communication channel is always one-way and taking action is a matter of user choice.

Superficially, security chatbots offer a way out of this impasse, giving security systems a simple way to verify users in real time without the expense of manual intervention – or the security risk of just letting things ride and hoping for the best.

It’s a compelling proposition but, as Securitybot hints, one that might come at the price of peace and quiet. In a future guarded by security chatbots working 24×7, it is machines that will be asking the important questions.


