
Concerns ignored as Home Office pushes ahead with facial recognition

Aug 21

Sure, automatic facial recognition (AFR) has its problems.

Those problems don’t seem to be troubling the UK government, though. As The Register reports, the Home Office is rushing full-speed ahead with the controversial, inaccurate, largely unregulated technology, having issued a call for bids on a £4.6m ($5.9m) contract for facial recognition software.

The Home Office is seeking a company that can set it up with “a combination of biometric algorithm software and associated components that provide the specialized capability to match a biometric facial image to a known identity held as an encoded facial image”. The main point is to integrate its Biometric Matcher Platform Service (BMPS) into a centralized biometric Matching Engine Software (MES) and to standardize what is now a fractured landscape of FR legacy systems.
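
In practice, "matching a biometric facial image to a known identity held as an encoded facial image" usually means comparing fixed-length face embeddings against an enrolled gallery and accepting the closest match above a threshold. Here is a minimal sketch of that comparison step in Python – the cosine metric and the 0.6 threshold are generic assumptions for illustration, not details from the Home Office tender:

    import numpy as np

    def cosine_similarity(a, b):
        # Compare two face "templates" (fixed-length embedding vectors).
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    def match(probe, gallery, threshold=0.6):
        # gallery maps identity -> enrolled template; return plausible matches,
        # best first. The threshold trades false matches against misses.
        scores = {name: cosine_similarity(probe, tmpl) for name, tmpl in gallery.items()}
        return sorted(((s, n) for n, s in scores.items() if s >= threshold), reverse=True)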

All this in spite of an AFR pilot having proved completely useless when London police used it last year to pre-emptively spot “persons of interest” at the Notting Hill Carnival, which draws some 2m people to the west London district on the last weekend of August every year. Out of 454 people arrested last year, the technology didn’t tag a single one as a prior troublemaker.

Failure be damned, and likewise for protests over the technology’s use: London’s Metropolitan Police plan to use AFR again to scan the faces of people partying at Carnival this year, in spite of the civil rights group Liberty having called the practice racist.

The carnival is rooted in the capital’s African-Caribbean community. AFR is insult added to injury: the community which police plan to subject to face scanning is still reeling from the horrific June 14 fire at Grenfell Tower, the blackened shell of which looms over the area where Carnival takes place. Out of at least 80 missing or dead victims, many were from this community.

It’s probably safe to say that no group likes to be treated like a bunch of criminals by law enforcement grabbing their mugshots via AFR.

But those with dark complexions have even more reason to begrudge the surveillance treatment from a technological point of view.

Studies have found that black faces are disproportionately targeted by facial recognition. They’re over-represented in face databases to begin with: according to a study from Georgetown University’s Center on Privacy & Technology, in certain US states, black Americans are arrested at up to three times their rate of representation in the population. A demographic’s over-representation in the database means that whatever error rate a facial recognition technology has will be multiplied for that demographic.
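
A toy calculation makes the multiplication concrete (every number below is invented for illustration, not taken from the study):

    # Invented numbers: a fixed per-search false-match rate produces wrongful
    # hits in proportion to each group's share of the database, not of the population.
    searches = 100_000            # assumed searches per year
    false_match_rate = 0.01       # assumed per-search error rate

    share_of_population = 0.13    # hypothetical demographic share
    share_of_database = 0.39      # the same group, three times over-represented

    print(searches * false_match_rate * share_of_database)     # 390.0 wrongful hits
    print(searches * false_match_rate * share_of_population)   # 130.0 if the database were proportionate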

Beyond that over-representation, facial recognition algorithms themselves have been found to be less accurate at identifying black faces.

During a recent, scathing US House oversight committee hearing on the FBI’s use of the technology, it emerged that 80% of the people in the FBI database don’t have any sort of arrest record. Yet the system’s recognition algorithm inaccurately identifies them during criminal searches 15% of the time, with black women most often being misidentified.

That’s a lot of people wrongly identified as persons of interest to law enforcement. According to a Government Accountability Office (GAO) report from August 2016, the FBI’s massive face recognition database has 30m likenesses.

The problems with American law enforcement’s use of AFR are replicated across the pond. The Home Office’s database of 19m mugshots contains hundreds of thousands of facial images that belong to individuals who’ve never been charged with, let alone convicted of, an offense.

Another commonality: in the US, one of the things the House committee focused on in its review of the FBI’s database was the bureau’s retention policy with regard to facial images. In the UK, controversy has likewise arisen over the police’s retention of images. According to biometrics commissioner Paul Wiles, the UK’s National Police Database holds 19m images – a figure that doesn’t even cover all police forces: most notably, it excludes the largest of them, the Metropolitan Police. A Home Office review was bereft of statistics on how those databases are being used, or to what effect, Wiles said.

How did we get to this state of pervasive facial recognition? It certainly hasn’t been taking place with voter approval. In fact, campaigners in the US state of Vermont in May demanded a halt to the state’s use of FR.

The American Civil Liberties Union (ACLU) pointed to records that show that the Vermont Department of Motor Vehicles (DMV) has conducted searches involving people merely alleged to be involved in “suspicious circumstances”. That includes minor offenses such as trespassing or disorderly conduct. Then again, some records fail to reference any criminal conduct whatsoever.

UK police have been on a similarly non-sanctioned spree. The retention of millions of people’s faces was declared illegal by the High Court back in 2012. At the time, Lord Justice Richards told police forces to revise their policies, giving them a period of “months, not years” to do so.

“Months”, eh? Let’s hope nobody was holding their breath, given that it took five years: the Home Office only came up with a new set of policies in February of this year.

The upshot of the new policies: police have to delete the photos. If, that is, the people in the photos complain about them. And if the photo doesn’t promise to serve some vague, undefined “policing purpose”.

In other words, police will delete the photos if they feel like it.

As The Register notes, there’s simply no official biometrics strategy in the UK, despite the government having promised to produce one back in 2013.

That sounds familiar to American ears. The FBI is also flying its FR technologies without being tethered by rules. For example, it’s required, by law, to first publish a privacy impact assessment before it uses FR. For years, it did no such thing, as became clear when the FBI’s Kimberly Del Greco – deputy assistant director of the bureau’s Criminal Justice Information Services Division – was on the hot seat, being grilled by that House committee in March.

The omission of a privacy impact assessment means that we don’t know the answer to questions such as: what happens if the system misidentifies a suspect and an innocent person is arrested?

Nobody knows, apparently. States have no rules or regulations governing the use of real-time or static facial data, or whether this data can be accessed for less serious crimes that don’t require a warrant.

It’s almost as if law enforcement in both countries have discovered a new tool to make their job easier but want to use it on the quiet, with as little fuss as possible, and hopefully without all these messy, inconvenient civil rights questions and all those tiresome protests.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/EChr_hvB-aA/

Return to sender: military will send malware right back to you

Aug 21

Planning to weaponize malware against the US? The US military will grab it, reprogram it and send it right back to you, warned Lieutenant General Vincent Stewart of the US Defense Intelligence Agency last week.

Once we’ve isolated malware, I want to reengineer it and prep to use it against the same adversary who sought to use it against us. We must disrupt to exist.

Stewart was speaking at the Department of Defense Intelligence Information System Worldwide Conference, which includes commanders from American, Canadian and British military intelligence.

Attendees included the FBI, the CIA, the National Security Agency, the National Geospatial-Intelligence Agency and the Office of the Director of National Intelligence, along with organizations such as Microsoft, Xerox, the NFL, FireEye, and DataRobot.

The meeting focused on the growing and international nature of cyberattacks. Commander William Marks of the US Navy explained why discussing cybersecurity is important for them:

Threats are no longer constrained by international borders, economics or military might; they have no borders, age limits or language barriers, or identity. The threat could be a large nation-state or a 12-year-old hacking our network from a small, isolated country.

Janice Glover-Jones, chief information officer of the DIA, added:

In the past, we have looked inward, focusing on improving our internal processes, business practices and integration. Today we are looking outward, directly at the threat. The adversary is moving at a faster pace than ever before, and we must continue to stay one step ahead.

There are concerns about the DIA’s strategy of retooling malware and sending it back like a boomerang to attackers. Sophisticated attacks make it even more difficult to determine an origin and specific attacker – what if the malware the DIA sends attacks a teenage script kiddie? What if the DIA ends up attacking people who are unaware that their computers are part of a botnet? There’s also the concern of the DIA’s counter-attacks damaging innocent bystanders such as ISPs and web hosts.

Is this a good tactic? What do you think?


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/GKMPvh5uTuQ/

The $500 gizmo that cracks iPhone passcodes – and how to stop it

Aug 21

A recent YouTube video shows a phone-sized hacking device recovering the passcode of an iPhone 7 or 7 Plus in just a few minutes.

Posted by an American YouTuber going by the name of EverythingApplePro, the video features a $500 “iPhone unlocker” apparently bought online and imported from China.

Rather than bypassing the passcode, the $500 gizmo (which can automatically try out passcodes on up to three iPhones at the same time) keeps trying codes in sequence – e.g. 0000, 0001, and so on – until it figures out that it just entered the right one, presumably from how the phone reacts.

You then read the code off the gizmo and you should be able to unlock the phone for yourself any time the lockscreen comes up.

According to the video, there are some special situations on some iPhone versions – notably partway through a firmware update – in which you don’t get locked out after making too many wrong guesses.

The gizmo, it seems, exploits these conditions so it can keep guessing pretty much for ever.

Sounds scary!

Fortunately – although we don’t have a spare iPhone or one of the $500 unlockers to verify any of this – the reality is less dramatic than you might at first think.

Firstly, you need to have changed your password very recently (TechCrunch says “within the last minute or so”) to be able to guess at a non-glacial rate.

Secondly, you need to force a firmware update to get the phone into a state where the repeated guesses will work.

Thirdly, you need to have a short passcode.

According to the video, the cracking device can only try out about six passwords a minute at best; according to TechCrunch, this guessing rate seems to be 20 times slower if your password was last changed more than about 10 minutes ago. The three phones cracked in the 12-minute video were deliberately configured with the passcodes 0015, 0016 and 0012 so they would fall to the gizmo – which started at 0000 on each phone.

So even if your iPhone falls into the wrong hands, a cracker using this gizmo is only likely to succeed if you have a very short passcode, or you have chosen one that is likely to be at the top of any “try these first” list, such as 123456, 111111 or 5683 (it spells out LOVE, in case you are wondering).

Apparently, only iPhone 7 and 7 Plus models (plus some iPhone 6 and 6s models) have this vulnerability, if that’s not an overstated way to describe it, and the bug will be eliminated anyway when iOS 11 comes out.

We’ve seen speculation that the vendor of the gizmo has started advertising it pretty openly – rather than just promoting it quietly to law enforcement or in underground forums – because it will be even less useful than it is now once iOS 11 ships.

Assuming TechCrunch is correct, if you have a six-digit passcode and haven’t changed your password in the past minute or so, you can expect to keep this gizmo guessing for about 10 years on average.

Presumably, all other things being equal, every extra digit in your passcode slows down the guessing time by another factor of 10, so a seven-digit passcode ought to hold out until the 22nd century – if your iPhone’s battery keeps going that long.
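
Those estimates are easy to sanity-check with a few lines of arithmetic (the rates are TechCrunch’s reported figures; this crude model ignores lockouts and other slowdowns, so treat the outputs as order-of-magnitude only):

    # Average time to brute-force an n-digit passcode at a given guessing rate.
    def average_crack_time_days(digits, guesses_per_minute):
        keyspace = 10 ** digits
        return (keyspace / 2) / guesses_per_minute / 60 / 24   # expect to try half the keyspace

    fast = 6        # ~6 guesses/minute right after a passcode change
    slow = 6 / 20   # ~20x slower otherwise, per TechCrunch

    print(average_crack_time_days(4, fast))   # well under a day for a 4-digit code
    print(average_crack_time_days(6, slow))   # years for a 6-digit code - the same
                                              # order of magnitude as the estimate above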

What to do?

Our suggestions, admittedly based only on hearsay so far, are:

  • Keep your phone close at hand immediately after you change the password. As far as we can see, the crook needs to pounce on it soon after you’ve done so for the attack to be even vaguely practicable.
  • Choose the longest passcode you can tolerate. Six digits is the minimum Apple will currently permit; try going longer than that.
  • Upgrade to iOS 11 as soon as you can when it comes out. There will almost certainly be dozens of other critical security bug fixes included in iOS 11, giving you plenty of good reasons to patch early anyway.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/jR0vbZJT8iM/

British snoops at GCHQ knew FBI was going to arrest Marcus Hutchins

Aug 21

Secretive electronic spy agency GCHQ was aware that accused malware author Marcus Hutchins, aka MalwareTechBlog, was due to be arrested by US authorities when he travelled to the United States for the DEF CON hacker conference, according to reports.

The Sunday Times – the newspaper where the Brit government of the day usually floats potentially contentious ideas – reported that GCHQ was aware that Hutchins was under surveillance by the American FBI before he set off to Las Vegas.

Hutchins, 23, was arrested on August 2 as he boarded his flight home. He had previously been known to the public as the man who stopped the WannaCry ransomware outbreak.

Government sources told The Sunday Times that Hutchins’ arrest in the US had freed the British government from the “headache of an extradition battle” with the Americans. This is a clear reference to the cases of alleged NASA hacker Gary McKinnon, whose attempted extradition to the US failed in 2012, and accused hacker Lauri Love, who is currently fighting an extradition battle along much the same lines as McKinnon.

One person familiar with the matter told the paper: “Our US partners aren’t impressed that some people who they believe to have cases against [them] for computer-related offences have managed to avoid extradition.”

Hutchins had previously worked closely with GCHQ through its public-facing offshoot, the National Cyber Security Centre, to share details of how malware operated and the best ways of neutralising it. It is difficult to see this as anything other than a betrayal of confidence, particularly if British snoopers were happy for the US agency to make the arrest – as appears to be the case.

American prosecutors charged Hutchins with six counts related to the creation of the Kronos banking malware. He faces a potential sentence of 40 years in prison. He pleaded not guilty to the charges last week.

Hutchins’ bail conditions are unusually lenient for an accused hacker, with the Milwaukee court hearing his plea more or less relaxing all restrictions on him – with the exception of not allowing him to leave the US and prohibiting him from visiting the domain that sinkholed the WannaCry malware.

The man himself has been active on Twitter again since his bail restrictions were relaxed.

Previously, FBI agents had tried claiming Hutchins might try obtaining firearms to commit crimes, based solely on his having tweeted about visiting a shooting range in Las Vegas – a common tourist pastime in Sin City. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/08/21/gchq_knew_marcus_hutchins_risked_arrest_fbi/

Bitcoin-accepting sites leave cookie trail that crumbles anonymity

Aug 21

Bitcoin transactions might be anonymous, but on the Internet, its users aren’t – and according to research out of Princeton University, linking the two together is trivial on the modern, much-tracked Internet.

In fact, linking a user’s cookies to their Bitcoin transactions is so straightforward, it’s almost surprising it took this long for a paper like this to be published.

The paper sees privacy researcher Dillon Reisman and Princeton’s Steven Goldfeder, Harry Kalodner and Arvind Narayanan demonstrate just how straightforward it can be to link cookies to cryptocurrency transactions:

Sorry Alice: we know who you are. Image: Arxiv paper.

Only small amounts of transaction information need to leak, they write, in order for “Alice” to be associated with her Bitcoin transactions. It’s possible to infer the identity of users even if they use privacy-protecting services like CoinJoin, a protocol designed to make Bitcoin transactions more anonymous by making it impossible to infer which inputs and outputs belong to each other.

Of 130 online merchants that accept Bitcoin, the researchers say, 53 leak payment information to 40 third parties, “most frequently from shopping cart pages,” and most of these on purpose (for advertising, analytics and the like).

Worse, “many merchant websites have far more serious (and likely unintentional) information leaks that directly reveal the exact transaction on the blockchain to dozens of trackers”.
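
The linkage step itself needs nothing exotic: a tracker holding your cookie plus a leaked payment amount and timestamp only has to filter the public ledger. A hypothetical sketch of that filter (the data structures are invented; the paper’s actual analysis is considerably more careful):

    # Hypothetical sketch: match a leaked (amount, timestamp) pair from a
    # shopping-cart tracker against transactions on the public blockchain.
    from dataclasses import dataclass

    @dataclass
    class Tx:
        txid: str
        btc_amount: float
        unix_time: int

    def candidate_txs(ledger, leaked_amount, leaked_time, window=600):
        # A payment-page leak pins the amount almost exactly; a ten-minute
        # window around the leak time usually leaves very few candidates.
        return [t for t in ledger
                if abs(t.btc_amount - leaked_amount) < 1e-8
                and abs(t.unix_time - leaked_time) <= window]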

It doesn’t help that, even for users running tracking protection, a substantial amount of personal information was passed around by the sites examined in the study.

A total of 49 merchants shared users’ identifying information with third parties, and 38 did so even when the user tried to stop them with tracking protection.

Users have very little protection against all this, the paper says: the danger is created by pervasive tracking, and it’s down to merchants to give users better privacy.

Since, as they write, “most of the privacy-breaching data flows we identify are intentional”, that seems a forlorn hope. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/08/20/bitcoins_anonymity_easy_to_penetrate/

Foxit PDF Reader is well and truly foxed up, but vendor won’t patch

Aug 21

The Zero Day Initiative (ZDI) has gone public with two Foxit PDF Reader vulnerabilities without a fix, because the vendor resisted patching.

The ZDI made the decision last week that the two vulns, CVE-2017-10951 and CVE-2017-10952, warranted release so at least some of Foxit’s 400 million users could protect themselves.

In both cases, the only chance at mitigation is to use the software’s “Secure Mode” when opening files, something that users might skip in normal circumstances.

CVE-2017-10951 allows the app.launchURL method to execute a system call from a user-supplied string, with insufficient validation.

CVE-2017-10952 means the saveAs JavaScript function doesn’t validate what the user supplies, letting an attacker write “arbitrary files into attacker controlled locations.”

Both are restricted to execution with the user’s rights.
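
Pending a vendor fix, one defensive option beyond Secure Mode is to screen inbound PDFs for the risky calls. A crude illustrative filter follows – naive string matching only, since real PDF malware can compress or encode its streams, so a serious scanner must actually parse the file:

    # Crude screen for the JavaScript calls behind CVE-2017-10951/10952.
    # String matching only; compressed or encoded streams will evade it.
    RISKY_TOKENS = [b"/JavaScript", b"app.launchURL", b"saveAs"]

    def flag_pdf(path):
        with open(path, "rb") as f:
            data = f.read()
        return [t.decode() for t in RISKY_TOKENS if t in data]

    # e.g. flag_pdf("invoice.pdf") -> ['/JavaScript', 'app.launchURL']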

No fix

ZDI went public after its usual 120-day disclosure window because Foxit made it clear no fix was coming, with this response:

“Foxit Reader PhantomPDF has a Safe Reading Mode which is enabled by default to control the running of JavaScript, which can effectively guard against potential vulnerabilities from unauthorized JavaScript actions.”

Foxit Software appears to be content to suggest users run its wares in Safe Mode, as its security advisories home page offers that advice for bugs identified in 2011.

The company did patch a dirty dozen bugs in 2016. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/08/21/foxit_reader_vulnerabilities/

Mirai copycats fired the IoT-cannon at game hosts, researchers find

Aug 21

The Mirai botnet that took down large chunks of the Internet in 2016 was notable for hosing targets like Krebs on Security and domain host Dyn, but research presented at a security conference last week suggests a bunch of high-profile game networks were also targeted.

Although Mirai’s best-known targets were taken out by the early infections, other ne’er-do-well types saw its potential and set up their own Mirai deployments, finishing up with more than 100 victims on the list.

That’s the conclusion suggested in a paper, Understanding the Mirai Botnet, presented at last week’s Usenix Security conference in Canada and penned by a group spanning Google, Akamai, Cloudflare, two universities and not-for-profit networking services provider Merit Network.

The authors confirm the kinds of infection targets seen by other Mirai researchers – digital video recorders, IP cameras, printers and routers – and observe that the devices hit were “strongly influenced by the market shares and design decisions of a handful of consumer electronics manufacturers.”

Helping matters out were known administrator passwords, such as “0000000” for a Panasonic printer and “111111” for a Samsung camera.
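
Those defaults also make a self-audit easy: check whether your own kit answers to the published credential lists before a botnet does. A minimal sketch (raw-socket telnet; it assumes a bare-bones IoT daemon that prompts for login and password without option negotiation, which is exactly the kind of device Mirai harvested):

    # Minimal default-credential check for your *own* devices. Assumes a
    # bare-bones telnet daemon with plain "login:"/"Password:" prompts.
    import socket

    DEFAULTS = [("root", "0000000"), ("admin", "111111")]   # abbreviated list

    def try_login(host, user, pw, port=23, timeout=5.0):
        with socket.create_connection((host, port), timeout) as s:
            s.recv(1024)                          # consume the login prompt
            s.sendall(user.encode() + b"\r\n")
            s.recv(1024)                          # consume the password prompt
            s.sendall(pw.encode() + b"\r\n")
            return b"incorrect" not in s.recv(1024).lower()   # crude success test

    # e.g. [cred for cred in DEFAULTS if try_login("192.0.2.10", *cred)]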

The authors were also surprised to find targets that previous Mirai research hadn’t revealed. They say the PlayStation Network – which Flashpoint hinted at last year but didn’t name – was a target, as was Xbox Live. Other groups operating Mirai botnets targeted “popular gaming platforms such as Steam, Minecraft, and Runescape.”

Even that’s a small slice of the overall attack distribution, since the total of more than 15,000 Mirai attacks that ended up in the researchers’ sample hit 5,046 victims: 4,730 individual IP addresses, 196 subnets, and 120 individual domain names. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/08/21/mirai_copycats_fired_the_iotcannon_at_game_hosts_researchers_find/

US DoD, Brit ISP BT reverse proxies can be abused to frisk internal systems – researcher

Aug 19

BSides: Minor blunders in reverse web proxies can result in critical security vulnerabilities on internal networks, the infosec world was warned this week.

James Kettle of PortSwigger, the biz behind the popular Burp Suite, has taken the lid off an “almost invisible attack surface” he argues has been largely “overlooked for years.” Kettle took a close look at reverse proxies, load balancers, and backend analytics systems, and on Thursday revealed his findings. For the unfamiliar, when browsers visit a webpage they may well connect to a reverse proxy, which fetches the content behind the scenes from other servers and then passes it all back to the client as if it were a normal web server.

Malformed requests and esoteric headers in HTTP fetches can potentially coax some of these systems into revealing sensitive information and opening gateways into victims’ networks, Kettle discovered. Using these techniques, Kettle was able to perforate US Department of Defense networks, trivially earning more than $30k in bug bounties in the process, as well as accidentally exploiting his own firm’s ISP, BT.

“While trying out the invalid host technique, I noticed pingbacks arriving from a small pool of IP addresses for payloads sent to completely unrelated companies, including cloud.mail.ru,” Kettle explained. A reverse DNS lookup linked those IP addresses to bn-proxyXX.ealing.ukcore.bt.net – a collection of systems belonging to BT, PortSwigger’s broadband ISP. In other words, sending malformed HTTP requests to Mail.ru resulted in strange responses from his ISP’s servers.

“Getting a pingback from Kent, UK, for a payload sent to Russia is hardly expected behaviour,” he added. This sparked his decision to investigate. The responses were coming back in 50ms, which was suspiciously fast for a request that’s supposedly going from England to Russia and back via a datacenter in Ireland.

A TCP trace route revealed that attempts to establish a connection with cloud.mail.ru using port 80 (aka HTTP) were intercepted by BT within the telco’s network, but traffic sent to TCP port 443 (aka encrypted HTTPS) was not tampered with. “This suggests that the entity doing the tampering doesn’t control the TLS certificate for mail.ru, implying that the interception may be being performed without mail.ru’s authorisation or knowledge,” Kettle explained.

Further digging by the researcher revealed that the system he’d stumbled upon was primarily being used to block access to stuff like child sex abuse material and pirated copyrighted material. Essentially, these were the boxes inspecting and filtering Brits’ internet traffic. “For years I and many other British pentesters have been hacking through an exploitable proxy without even noticing it existed,” according to Kettle.

Crucially, Kettle said he could reach BT’s internal control panels for its snooping tech via these proxy servers. “I initially assumed that these companies must collectively be using the same cloud web application firewall solution, and noted that I could trick them into misrouting my request to their internal administration interface,” he said.

Kettle added that, as well as this worrying security vulnerability, putting subscribers behind shared proxies is bad because if one of the boxes ends up on a blacklist, everyone gets blocked:

All BT users share the same tiny pool of IP addresses. This has resulted in BT’s proxy IPs landing on abuse blacklists and being banned from a number of websites, affecting all BT users. Also, if I had used the aforementioned admin access vulnerability to compromise the proxy’s administration panels, I could potentially reconfigure the proxies to inject content into the traffic of millions of BT customers.

Kettle reported the ability to access the internal admin panel to a personal contact at BT, who made sure it was quickly protected. The interception system is related to CleanFeed, which was built by BT in the mid-2000s to block access to images and videos of children being sexually abused. This technology was repurposed to target pirates illegally sharing movies, music, software and other copyrighted stuff. A Colombian ISP called METROTEL had a similar setup.

Later in his research, Kettle discovered that US Department of Defense proxies whitelist access to internal services using the Host header in HTTP requests, but forget that the hostname in the GET request takes precedence over the Host header. So a browser could connect to the external-facing proxy, set the Host header in the request to a public-facing site like “darpa.mil” but GET “some-internal-website.mil”, and get through to that intranet portal.

Essentially, he was able to route requests to servers intended to be accessible to US military personnel only.
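
In raw HTTP terms the trick is just a mismatch between the request line and the Host header. A sketch of the request shape Kettle describes – the proxy address is a placeholder, and “some-internal-website.mil” is his illustrative example, not a real hostname:

    # The proxy whitelists on the Host header but routes on the absolute URI.
    import socket

    PROXY = ("proxy.example.mil", 80)   # placeholder for the external-facing proxy

    request = (b"GET http://some-internal-website.mil/ HTTP/1.1\r\n"  # what gets routed
               b"Host: darpa.mil\r\n"                                 # what gets checked
               b"Connection: close\r\n\r\n")

    with socket.create_connection(PROXY, timeout=10) as s:
        s.sendall(request)
        print(s.recv(4096).decode(errors="replace"))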

“This flaw has since been resolved. It’s likely that other non-DoD servers have the same vulnerability, though,” Kettle told El Reg.

Kettle also discovered a system that enabled reflected cross-site scripting attacks to be escalated into server-side request forgeries.

On the back of his research, Kettle developed and released Collaborator Everywhere, an open-source Burp Suite extension that helps uncloak backend systems by automatically injecting non-damaging payloads into web traffic.

“To achieve any semblance of defence in depth, reverse proxies should be firewalled into a hardened DMZ, isolated from anything that isn’t publicly accessible,” Kettle concluded.

His research is summarized in this blog post; the short version of the defensive advice is to check that your own proxies can’t be coaxed into misrouting requests in this way. Kettle presented his work at BSides in Manchester, England, on Thursday. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/08/19/reverse_proxy_war/

Berkeley boffins build better spear-phishing black-box bruiser

Aug 19

Security researchers from UC Berkeley and the Lawrence Berkeley National Laboratory in the US have come up with a way to mitigate the risk of spear-phishing in corporate environments.

In a paper presented at Usenix 2017, titled “Detecting Credential Spearphishing in Enterprise Settings,” Grant Ho, Mobin Javed, Vern Paxson, and David Wagner from UC Berkeley, and Aashish Sharma of The Lawrence Berkeley National Laboratory (LBNL), describe a system that utilizes network traffic logs in conjunction with machine learning to provide real-time alerts when employees click on suspect URLs embedded in emails.

Spear-phishing is a social engineering attack that involves targeting specific individuals with email messages designed to dupe the recipient into installing a malicious file or visiting a malicious website.

Such targeted attacks are less common than phishing attacks launched without a specific victim in mind, but they tend to be more damaging. High-profile data thefts at the Office of Personnel Management (22.1 million people) and at health insurance provider Anthem (80 million patient records), among others, have involved spear-phishing.

The researchers are concerned specifically with credential theft since it has fewer barriers to success than exploit-based attacks. If malware is involved, diligent patching and other security mechanisms may offer defense, even if the target has been fooled. If credentials are sought, tricking the target into revealing the data is all that’s required.

The researchers focus on dealing with attacks that attempt to impersonate a trusted entity, which may involve spoofing the name field in emails, inventing a name that’s plausibly trustworthy, like [email protected], or sending messages from a compromised trusted account. Another means of impersonation, email address spoofing, is not considered because it can be dealt with through email security mechanisms like DKIM and DMARC.

The challenge in automating spear-phishing detection is that such attacks are rare, which is why many organizations still rely on user reports to trigger an investigation. The researchers note that their enterprise dataset contains 370 million emails – about four years’ worth – and only 10 known instances of spear-phishing.

So even a false positive rate of 0.1 per cent would mean 370,000 false alarms, enough to paralyze a corporate IT department. And the relative scarcity of spear-phishing examples ensures that machine learning techniques lack the volume of data to create a viable training model.
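
The base-rate arithmetic is stark when scripted out:

    # Why a "good" false-positive rate still buries 10 real attacks in noise.
    emails = 370_000_000          # ~4 years of enterprise email
    true_attacks = 10             # known spear-phishing instances in the corpus

    for fp_rate in (1e-3, 1e-5, 1e-7):
        print(f"FP rate {fp_rate:.0e}: {emails * fp_rate:,.0f} false alarms "
              f"vs {true_attacks} real attacks")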

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/08/18/spear_phishing_detector/