
Encrypt voice calls, says GCHQ’s CESG team … using CESG encryption

While the world was distracted by the UK Pry Minister’s ban-working-encryption, log-everything-online Investigatory Powers Bill, the civil service was urging government and enterprises to adopt better cryptography for voice calls.

CESG, “the information security arm of GCHQ, and the national technical authority for information assurance”, dropped new guidance (called “Secure voice at OFFICIAL”) about protecting voice calls, noting that the PSTN has been considered insecure (“suitable for UNCLASSIFIED calls only”) for some years.

It’s even got its very own nifty key exchange protocol it wants vendors to use.

Having decided in 2010 there wasn’t a security protocol that it liked, it put forward RFC 6509 (“MIKEY-SAKKE” – more on this in a minute) as its own proposal.

MIKEY-SAKKE is now incorporated into the CESG’s Secure Chorus product spec, and the body says that, as well as Cryptify Call for iOS and Android, it’s evaluating other products to see if they meet the spec.

Looking ahead, the spooks reckon VoLTE will open things up even further, creating an ecosystem of products suitable for “government and enterprise customers”.

All of which is fascinating, given that if the code exists, it’s certain to escape the control of “government and enterprise customers” and be used to – horrors! – let users create encrypted voice calls.

Is it tinfoil time?

A good question is “why did CESG think the world needed a new key exchange protocol?”, and El Reg is practically certain that question will exercise Snowdenistas around the world.

The surface explanation is that encrypting VoIP calls adds a new wrinkle to encryption, compared with email, web, or VPN communications.

When you hit an HTTPS:// Website, for example, the hard work is invisible: the server presents its certificate, and the browser makes sure it likes the cert, and if so, browser and server negotiate to set up an encrypted connection.
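That browser-side ritual is easy to reproduce. Here’s a minimal Python sketch, using only the standard library and example.com as a stand-in host, of the certificate check and negotiation described above:

```python
import socket
import ssl

# The default context verifies the server's certificate against the
# system's trusted CA store and checks the hostname, much as a browser does.
ctx = ssl.create_default_context()

with socket.create_connection(("example.com", 443)) as raw:
    with ctx.wrap_socket(raw, server_hostname="example.com") as tls:
        cert = tls.getpeercert()
        print("Negotiated:", tls.version(), "with cipher", tls.cipher()[0])
        print("Certificate subject:", dict(x[0] for x in cert["subject"]))
```

If the certificate is expired, self-signed, or issued for the wrong hostname, the handshake raises an exception instead of completing — exactly the "does the browser like the cert" step the article describes.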

It’s the business of certificate handling that the CESG decided was problematic for someone calling a friend on a smartphone, so it offers this rationale for the protocol: “no certificates need to be distributed. Instead, a user’s identity is their public key. Simply knowing a user’s phone number is enough to establish a secure communications link with them”.
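The actual SAKKE construction (RFC 6508) uses pairing-based elliptic-curve maths; the toy below is emphatically not that algorithm, just an illustration of the identity-based model it relies on, in which a key management server (KMS) holds a master secret from which it can derive the private key for any identity, with the phone number itself serving as the public identifier:

```python
import hashlib
import hmac

# TOY ONLY: real MIKEY-SAKKE (RFC 6509/6508) uses pairing-based
# elliptic-curve cryptography, not an HMAC. This sketch only shows the
# identity-based model: a key management server (KMS) holds a master
# secret and can derive the private key for any identity on demand --
# the property that lets "a user's identity be their public key".
MASTER_SECRET = b"held-only-by-the-key-management-server"

def user_private_key(identity: str) -> bytes:
    # Anyone who knows the phone number can address traffic to it;
    # only the KMS (and the user it provisions) holds the private half.
    return hmac.new(MASTER_SECRET, identity.encode(), hashlib.sha256).digest()

alice_key = user_private_key("+44 20 7946 0000")  # fictional number
```

Note the design consequence the toy makes obvious: whoever runs the KMS can derive anyone’s private key.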

Secure, of course, if the scheme itself is secure – something which will probably lead experts to take another look at the protocol, since its use of elliptic-curve cryptography is at odds with GCHQ’s pals the NSA, which is moving instead to “quantum resistant algorithms” (Bruce Schneier wrote about this in August here).

There is one discordant note that The Register is certain will tap a deep vein of paranoia in the outside world. CESG calls RFC 6509 a standard: “a new open cryptography standard – MIKEY SAKKE – was developed and standardised in the IETF”, its Secure voice at OFFICIAL document states (emphasis added).

Except the RFC describing it says the opposite: “This document is not an Internet Standards Track specification; it is published for informational purposes”. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2015/11/05/cryptography_is_bad_iandi_good_says_govuk/

UK cyber-spy law takes Snowden’s revelations of mass surveillance – and sets them in stone

IPB The encryption-bothering parts of the UK’s Investigatory Powers Bill have left IT security experts flabbergasted.

Introducing the draft internet surveillance law in the House of Commons on Wednesday, Home Secretary Theresa May presented it as consolidating and updating existing investigatory powers. She spun it as a break from measures in the ultimately unsuccessful Communications Data Bill of 2012, adding “it will not ban encryption or do anything to undermine the security of people’s data.” The reality is far more complex and less reassuring than this bland assurance might suggest.

The draft law [PDF] states it “will not impose any additional requirements in relation to encryption over and above the existing obligations in RIPA [the Regulation of Investigatory Powers Act, 2000]” before summarising what these entail:

RIPA requires CSPs [communications service providers] to provide communications data when served with a notice, to assist in giving effect to interception warrants, and to maintain permanent interception capabilities, including maintaining the ability to remove any encryption applied by the CSP to whom the notice relates.

Look, ma – no backdoors! (Because they won’t be called that)

El Reg understands the UK’s security and intelligence agencies are already talking to makers of popular messaging software – most of which are based in the US – about how best to tap into people’s chatter. This includes those providing strong end-to-end encryption for normal folks, such as WhatsApp and Apple’s iMessage; reliable encryption that’s easy to install and use, unlike tough-to-crack but infuriating-to-use PGP packages.

Truly secure end-to-end crypto systems allow only the two people chatting to decrypt each other’s messages, calls or other information exchanged. The app makers, network providers and any eavesdroppers along the line have no hope of cracking the ciphered bytes if intercepted.

One way to do this is to use the Diffie-Hellman protocol, which allows two people to create a shared secret known only to them using prime-number maths. No one in between the pair can figure out the secret, which can be used as a key to encrypt and decrypt data. There’s nothing communications providers can hand over when the g-men come knocking except useless scrambled bits.
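For the curious, here’s a toy Diffie-Hellman exchange in Python. The numbers are laughably small for the sake of readability; real deployments use 2048-bit-plus primes (for example, the RFC 3526 MODP groups) or elliptic-curve variants:

```python
import secrets

# TOY parameters: p = 23 and g = 5 are for illustration only and offer
# no security whatsoever.
p, g = 23, 5

a = secrets.randbelow(p - 2) + 1   # Alice's private value, never sent
b = secrets.randbelow(p - 2) + 1   # Bob's private value, never sent

A = pow(g, a, p)   # Alice sends this over the wire, in the clear
B = pow(g, b, p)   # Bob sends this over the wire, in the clear

# Each side combines its own secret with the other's public value and
# arrives at the same number -- without that number ever being transmitted.
shared_alice = pow(B, a, p)
shared_bob = pow(A, b, p)
assert shared_alice == shared_bob
```

An eavesdropper sees p, g, A and B, but recovering a or b from them is the discrete logarithm problem, which is what keeps the shared secret out of anyone else’s hands.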

There are all sorts of end-to-end encrypted communications tools available now, especially in the wake of the Edward Snowden revelations of NSA-GCHQ mass surveillance, but it’s the main providers the UK authorities are interested in, we hear.

That focus on the mainstream – Facebook-owned WhatsApp and Apple – may spark an exodus to software perceived as being off the radar of the UK authorities. Make sure whatever code you decide to use is verified and trusted to work as advertised.

Implementation flaws (such as weak keys or bugs in the programming) and slip ups by users (such as accidentally leaking private keys) are enough to break cryptographic systems. “The true security in ‘end-to-end’ encryption depends on how it’s implemented and how it is used. Key generation, management, forward secrecy all matter,” Professor Alan Woodward of the University of Surrey noted on Twitter.

What the security agencies really want is a backdoor in the cryptography: a way to forcibly decrypt messages and calls. Mathematically, it’s not possible to build such a system in a secure way. If the snoops can flick a switch and defeat the encryption, so can anyone else, in theory. Criminals, bored teenagers, you name it; everyone loses.

Critics charge that the UK government is trying to effectively ban secure cryptography, a suggestion ministers deny. Despite this, sections of the bill suggest that communications providers operating in the UK may be ordered to “provide technical assistance” and remove electronic protections, possibly under a gagging order along the lines of a US National Security Letter.

The UK government wants to promote the use of good crypto to further its established goal of making the UK the best place in the world to do e-commerce. Alongside this, GCHQ and MI5 still want to be able to decrypt communications and identify suspects in terrorist plots, child abuse, and other serious crimes.

The bill also provides a rationale for why police and intel agencies should be allowed to hack computers and network equipment to circumvent encryption:

Equipment interference plays an important role in mitigating the loss of intelligence that may no longer be obtained through other techniques, such as interception, as a result of sophisticated encryption.

Home Office fact sheets to accompany the draft bill on targeted interception [PDF] and equipment interference [PDF] provide further insights into Number 10’s thinking.

The proposed law is being spun as a means of ensuring there are “no ‘no go’ areas of the internet for law enforcement – so that the entirety of cyberspace can be policed in the face of technological advances” as well as giving the security services a “license to operate.”

Privacy advocates such as Liberty argue that the Investigatory Powers Bill contains “sweeping new powers for public bodies to track and hack British people’s communications – while failing to include the most basic privacy safeguards.”

“Far from attempting to create a more targeted and effective system, the bill places the broad mass surveillance powers revealed by Edward Snowden on a statutory footing, including mass interception, mass acquisition of communications data, mass hacking, and retention of databases on huge swathes of the population,” Liberty argues.


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2015/11/05/ipb_reaction/

US, UK big banks to simulate mega-hacker cyber-attack

A mock cyber attack will be staged this month to test how major banks respond, according to a newspaper report.

The joint UK and US initiative, Operation Resilient Shield, will be “the most sophisticated test … yet” of the way industry communicates and coordinates its efforts in response to cyber security incidents, the Telegraph reported.

The exercise has been in planning for months. The UK and US announced their intention to participate in a joint cyber security exercise in the financial services sector in January.

Previous cyber security exercises in the UK have been coordinated by the Bank of England.

The “Waking Shark II” desktop cyber attack simulation was carried out in November 2013. The exercise gathered approximately 100 people, representing around 30 financial services organisations, in one room, and was designed to assess the likely impact of a major cyber attack on the investment banking industry and financial market infrastructure, including payment systems. It also tested the lines of communication between companies, as well as their interaction with regulators, as the scenario unfolded.

In February 2014 the Bank revealed the results of the Waking Shark II test. It said the exercise had identified a lack of “central industry coordination” on sharing financial sector information and communicating to the public. Participants suggested that a single body could fulfil this role in future. The Bank released cyber security test materials based on the Waking Shark II exercise to help organisations practice how they would respond to a major cyber attack on the banking system.

The latest planned simulation comes as recently published minutes from a meeting of the Bank of England’s court of directors on 16 September provided detail on some of the efforts being taken to improve “cyber resilience” within the UK’s financial services sector, including by the Bank itself.

According to the minutes, directors at the Bank are concerned that banks, insurers and other financial service companies are not obliged to participate in the voluntary CBEST programme, a cyber security testing initiative. However, Andrew Gracie, executive director of resolution at the Bank, said that cyber security testing is “becoming close to mandatory” for big financial firms.

In July, the Bank reported that industry concerns about a potential cyber attack on the UK’s financial system were at their “highest recorded level”. In August the Prudential Regulation Authority asked UK insurers to provide it with details of their “cyber resilience”.

Last year the Financial Policy Committee (FPC) at the Bank said that cyber security is a board-level concern for UK banks, not merely a technical issue their directors can ignore.

Copyright © 2015, Out-Law.com

Out-Law.com is part of international law firm Pinsent Masons.


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2015/11/05/banks_to_face_cyber_security_test_this_month/

The spy in your pocket: Researchers name data-slurping mobe apps

Android app developers are more promiscuous with your personal data than iOS devs, according to research that examined more than 100 popular apps to sniff the way they handed data to third parties.

However, both iOS and Android developers are quite happy to scrape personal data and fire it off to third parties without asking permission.

The privacy boffins also found that 93 per cent of the Android apps they tested connected to “a mysterious domain, safemovedm.com, likely due to a background process of the Android phone”. (The Register’s quick search associates that domain with an app called “Hotspot Login Assistant”.)

The research, led by Harvard research analyst and Federal Trade Commission research fellow Jinyan Zang, with collaborators from MIT and Carnegie Mellon, is published at the open-access journal Technology Science.

“Our results show that many mobile apps share potentially sensitive user data with third parties, and that they do not need visible permission requests to access the data”, they write, something that needs to be changed.

The researchers focused their attention on the kinds of apps most likely to handle personal data, and sniffed what the apps were sending using HTTP/HTTPS proxies.
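The researchers don’t publish their proxy tooling, but the same kind of sniffing can be approximated with a short mitmproxy addon. This is a sketch under our own assumptions, not the study’s actual methodology; the “first-party” host below is an invented placeholder:

```python
# Sketch of app-traffic inspection via mitmproxy (run: mitmdump -s sniff.py,
# with the phone's proxy settings pointed at this machine).
# FIRST_PARTY is an assumption for illustration only.
from mitmproxy import http

FIRST_PARTY = {"api.example-app.com"}
seen = set()

def request(flow: http.HTTPFlow) -> None:
    # Log each new third-party domain the instrumented app contacts.
    host = flow.request.pretty_host
    if host not in FIRST_PARTY and host not in seen:
        seen.add(host)
        print(f"third-party domain contacted: {host}")
```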

On average, the Android apps they tested shared “potentially sensitive” data with 3.1 third-party domains, while iOS apps connected to 2.6 third-party domains.


Google and Facebook are the favourite recipients for data harvested from Android apps, the research found. The most promiscuous Android apps – those that connected to the most third-party domains – were Text Free (free calls and text over Wi-Fi, 11 domains), Glide (video messaging, 8 domains), Map My Walk (9 domains), and Drugs.com (7 domains).

In the iOS world, Apple, Yahoo! and the SalesForce Marketing Cloud-operated exacttargetapis.com were the big-three recipients of personal data.

Walgreens (online pharmacy) sent data to five domains, while other offenders included Map My Run, Nike+ Running, Fruit Ninja, Urgent Care, and Pinterest (four domains each). However, the stand-out offender in the iOS world was the Localscope iPhone location browser, which spaffs data to a stunning 17 third-party domains.

Mind you, things are just as bad over on the boring-old World Wide Web. In research published in October, University of Pennsylvania doctoral researcher Tim Libert found that 90 per cent of a million websites he tested leaked personal data to third parties without alerting users. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2015/11/05/dataslurping_smartphone_apps/

MacBooks are so hot right now. And so is Mac OS X malware

There’s been an unprecedented rise in Mac OS X malware this year, according to security researchers at Bit9 + Carbon Black, with the number of samples found in 2015 being five times that seen in the previous five years combined.

This year, there have been 948 OS X malware samples, compared with 180 in the years 2010-14 inclusive.

Cybercriminals have stepped up their efforts to hack Apple devices because MacBooks are rising in popularity, both in homes and the workplace. Nearly half of organisations (45 per cent) are offering Macs as an option to their employees, according to stats cited by Bit9 + Carbon Black.

OS X vulnerabilities and malware have grabbed the security community’s attention this year. One example is XcodeGhost, which inserts malicious components into applications made with Xcode (Apple’s official tool for developing iOS and OS X apps).

Additionally, it has emerged that OS X El Capitan, which launched in September, contains serious vulnerabilities in its Gatekeeper and Keychain features.

Flashback – the biggest Mac infection vector to date, which infected 700,000 devices on the back of a Java-based vulnerability – struck in 2012. What we’re getting this year is therefore a higher volume of less infectious nasties.

Malware authors targeting Macs are using OS X-specific mechanisms, rather than typical UNIX persistence methods commonly present in traditional malware samples, according to the security software vendor.

Hackers are adopting a targeted approach to Mac OS X systems, undermining the comforting notion that Macs are much more secure than their Windows counterparts in the process.

There may be a far greater volume of Apple-biting nasties this year but Mac OS X malware still isn’t that sophisticated. More than 90 per cent of the malware samples from 2015 analysed by Bit9 + Carbon Black were found to use an old load command that became redundant with the launch of OS X 10.8 in 2012.

Malware authors failed to begin using Apple’s new load command until 2014, and even then it was found in only a tiny percentage of malware samples.
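Bit9 + Carbon Black’s report is generally understood to be referring to LC_UNIXTHREAD, the entry-point load command superseded by LC_MAIN in OS X 10.8; assuming that reading, a quick triage check with the third-party macholib package might look like this:

```python
# Sketch: classify a Mach-O binary by its entry-point load command.
# Assumes the "old" command in the report is LC_UNIXTHREAD and the "new"
# one is LC_MAIN, which arrived with OS X 10.8. Requires: pip install macholib
from macholib.MachO import MachO
from macholib.mach_o import LC_MAIN, LC_UNIXTHREAD

def entry_style(path: str) -> str:
    for header in MachO(path).headers:
        for load_cmd, _cmd, _data in header.commands:
            if load_cmd.cmd == LC_MAIN:
                return "modern (LC_MAIN, 10.8+ toolchain)"
            if load_cmd.cmd == LC_UNIXTHREAD:
                return "legacy (LC_UNIXTHREAD, pre-10.8 toolchain)"
    return "no entry-point command found"

print(entry_style("/bin/ls"))
```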

Whilst there are 13 documented persistence techniques used by malware to remain on the targeted system, the research identified that just seven were present in the vast majority of OS X malware samples examined. This lack of variation gives threat detection teams an easier ride, as there are fewer places they need to check for malware in comparison with Windows systems.

The report (registration required), 2015 – The Most Prolific Year in History for OS X Malware, is based on more than 1,400 unique OS X malware samples, aggregated over ten weeks from independent research efforts, open sources, real-world Mac OS X incident response experience, peer research, blacklists, and contagion malware dumps, among other sources.

By comparison there have been more than one million samples of Android malware to date. Vendors largely stopped counting Windows nasties years ago, but where estimates exist, numbers exceed 20 million even on the more conservative counts. ®

Bootnote

Persistence means that malware stays on compromised systems after a reboot, a key goal for malware slingers whichever computing platform their creations infect.


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2015/11/05/mac_os_x_malware_explodes/

Microsoft may join Mozilla and retire SHA-1 in 2016

Microsoft has decided to follow Mozilla down the path to better security, bringing forward the end-of-life date for SHA-1 hashing.

SHA-1 has long been suspect, but in 2015 the ease and effectiveness of attacks against it have grown to the point where everyone with good sense is making their excuses and leaving the room.

Hashing converts sensitive data (like a password) into a string of characters using a one-way function. That way, if the hash is retrieved from a database by an attacker, they’re not supposed to be able to recover the original password. When a user presents their password, the hash function is applied, and the result is compared to the hash stored in the database to see if it matches.
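In code, the scheme described above looks something like the sketch below, using SHA-256 rather than the deprecated SHA-1; note that real password storage should also add a per-user salt and a deliberately slow KDF such as PBKDF2 or bcrypt:

```python
import hashlib
import hmac

def hash_password(password: str) -> str:
    # One-way: the database stores only the digest, never the password.
    # (Real systems should also salt and use a slow KDF, e.g. PBKDF2.)
    return hashlib.sha256(password.encode()).hexdigest()

def verify(password: str, stored_digest: str) -> bool:
    # Re-hash the candidate and compare; compare_digest avoids leaking
    # information through timing differences.
    return hmac.compare_digest(hash_password(password), stored_digest)

stored = hash_password("correct horse battery staple")
assert verify("correct horse battery staple", stored)
assert not verify("wrong guess", stored)
```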

SHA-1 is so ancient that attacks against it are a decade old, and it’s been on everybody’s EOL list for a few years. However, this year’s relatively cheap attack (using US$75,000 worth of kit) gave new impetus to its elimination.

In the blog post announcing the accelerated schedule, Microsoft Edge program manager Kyle Pflug says instead of January 2017, MS is considering deprecating SHA-1-signed TLS certificates in June 2016.

Since the deprecation goes way beyond the browsers (it affects code signing as well), Microsoft has published a detailed enforcement timeline here. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2015/11/05/microsoft_retiring_sha1_2016/

Backup software that cracks web servers? Yup. It’s a thing

Commvault’s Edge Server offers users the chance to view and access their backups from mobile devices, a trick it enables in part by using a web server.

But version 10 R2 of that server has a problem: it “deserializes untrusted, user-provided cookie data, resulting in arbitrary OS command execution with the web server’s privileges.”

This isn’t going to happen by accident: the CERT notice of the problem says “A remote, unauthenticated attacker can provide specially crafted cookie data” to compromise the web server.
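Commvault hasn’t published the vulnerable code, and Edge Server isn’t written in Python, but the anti-pattern CERT describes translates readily into Python’s pickle, which will happily execute logic embedded in an attacker-supplied byte stream. A sketch of the flaw and the safer alternative:

```python
import base64
import json
import pickle

def load_session_unsafe(cookie_value: str):
    # ANTI-PATTERN: pickle can execute code embedded in the byte stream,
    # so deserializing an attacker-controlled cookie can mean arbitrary
    # command execution with the web server's privileges.
    return pickle.loads(base64.b64decode(cookie_value))

def load_session_safe(cookie_value: str):
    # Safer: a data-only format cannot smuggle executable payloads.
    # (Also authenticate the cookie with an HMAC so it cannot be forged.)
    return json.loads(base64.b64decode(cookie_value))
```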

But there’s no fix as yet and the suggested workaround – “only allow connections from trusted hosts and networks” – is a nonsense because one usage scenario for Edge Server is allowing users of mobile devices to access backups. A stolen mobile device will look just like a trusted host and network, especially in the minutes or hours between a device’s disappearance and the loss being reported. Commvault’s site is currently silent on the matter. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2015/11/05/backup_software_that_cracks_web_servers_yup_its_a_thing/

AndroBugs: A Framework For Android Vulnerability Scanning

At Black Hat Europe next week, a researcher will present a framework he says is more systematic than the vulnerability scanners popping up on the market.

The Android ecosystem is a Wild West where vulnerabilities can run rampant, and go undiscovered, unchecked, and unfixed. It’s hard to corral, but a researcher at Black Hat Europe in Amsterdam next week will present a new framework to help make the process of locating vulnerabilities in Android apps and libraries more efficient and more accurate.

Yu-Cheng Lin, software engineer on the security team at MediaTek, will introduce the “AndroBugs Framework: An Android Application Security Vulnerability Scanner.”

“I found that the same mistakes are being made again and again by Android developers,” Lin says. “So I believe probably I can find many security vulnerabilities if I use a systematic way. And actually, the results (vulnerabilities found) from AndroBugs Framework exceeded my expectation.”

There are other tools that help Android developers write clean code. Google’s Android integrated development environment includes a tool called Lint — but mostly it finds coding errors, not security weaknesses. There are also some Android vulnerability scanners hitting the market.

Lin says his framework is different.

Instead of just scanning the code for weak spots, AndroBugs tries to emulate the operation of an app, and considers the attack vectors through which those weaknesses would be exploited.

“I think the biggest difference from those tools is the ways that AndroBugs Framework plays with the vulnerability vectors,” says Lin. “… The analysis report from AndroBugs Framework is prioritize[d], which means it tries not to print the useless information.”

Also, AndroBugs stores all its analyses in a NoSQL database and includes tools for users to query that database by vector. That can reduce the effort for developers and security pros looking for vulns.
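Lin doesn’t spell out the schema, but AndroBugs’s massive-analysis mode is built around MongoDB, so the by-vector query he describes might look roughly like the sketch below. The database, collection, and field names here are our guesses, purely illustrative:

```python
# Illustrative only: the database, collection and field names are
# assumptions, not AndroBugs' documented schema.
from pymongo import MongoClient

db = MongoClient("mongodb://localhost:27017")["androbugs"]

# Find every analyzed app flagged with a given vulnerability vector.
for report in db.analysis.find(
        {"vectors": "SSL_WEBVIEW", "severity": "Critical"},
        {"package_name": 1, "vectors": 1}):
    print(report["package_name"])
```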

“So if I want to find vulnerabilities among lots of applications (e.g. more than 100,000 apps),” he says. “I will do the massive scanning with AndroBugs Framework first. … It truly helps you reduce the efforts of finding vulnerabilities and you no longer need to review the code line by line.”

When AndroBugs analyzes an app, it scans for vulnerabilities in the third-party libraries/SDKs it uses, too. 

“If a vulnerability is found in a SDK, it’s a disaster,” says Lin. “First, it may be used by many applications. Second, unlike system upgrade, the Android developers may not upgrade the libraries they are using.”

The good news is that it’s actually easier for the AndroBugs Framework to find SDK vulns than app-specific ones. “If those vulnerable libraries are used by many applications, you actually can find those vulnerable libraries with AndroBugs Framework immediately because lots of applications will use them,” Lin says. “So I think the vulnerabilities in SDKs will get fixed faster.”

They may be fixed faster, he notes, but that doesn’t mean that the Android developers will update and upgrade appropriately, so security problems can still perpetuate themselves within the Android environment — something Lin doubts will ever change. 

“I think Google will never make [app security] an enforcement or rate the security of an app,” he says. “I think Google will never take down the vulnerable apps from Google Play. It’s a political problem. I think Google already knows some security problems in Android applications. But there are too many applications that use the vulnerable APIs.”

This is why Lin — who has also been an Android developer — uses an iPhone.

Lin says that he hopes AndroBugs will help improve Android security, but he also suggests that companies give mobile security the same attention they give Web security; that Android security researchers exercise responsible disclosure; and that companies acknowledge, not ignore, the vulnerability reports they receive.

The AndroBugs Framework will be released, open-source, on GitHub before Black Hat Europe. A Windows version is also forthcoming.

Black Hat Europe returns to Amsterdam, Netherlands, November 12-13, 2015.


Article source: http://www.darkreading.com/vulnerabilities---threats/androbugs-a-framework-for-android-vulnerability-scanning/d/d-id/1323000?_mc=RSS_DR_EDT

Drone Detection As The New ‘IDS’

ISS founder Noonan’s latest venture aims to detect drone-based cyberattacks, which so far have mostly been confined to the research domain.

Call it next-generation war driving: cyberspies and other nefarious attackers employing drones rigged with hacking tools that hover over a corporate data center or a power plant and break into the wireless network to steal information or sabotage operations.

Security researchers over the past couple of years have been all over the drone-borne hacking attack: take SensePost’s Snoopy, which hacks smartphones, and Samy Kamkar’s SkyJack, which finds, hacks, and hijacks other drones. These hacking research projects drove home some of the scarier scenarios with the increasingly popular and commercially available drone. But the drone isn’t really breaking any new hacking ground: it’s instead providing a new transport medium for the same type of remote hacks commonly exploiting weaknesses in WiFi and radio-frequency security. Think TJX hacker Albert Gonzalez piloting a drone in the retailer’s parking lot rather than sitting in his car with his laptop to crack into its WiFi.

Enter the drone-detection system. Tom Noonan, chairman of drone detection firm Dedrone, has launched his Germany-based company in the US amid the explosion of drone-based technology — some 4.3 million drones are expected to ship worldwide this year, and the Federal Aviation Administration estimates 1 million drones will be sold this holiday season. That’s heightening concerns that drones could be used for malicious purposes. “This is intrusion detection 2.0 for me,” Noonan says, a play on his heritage as co-founder of pioneering intrusion detection system firm Internet Security Systems (ISS), which he eventually sold to IBM.  

Dedrone’s flagship DroneTracker system, based on a cloud-based network of sensors, is all about detecting drones in the vicinity of a facility. “We learned this 20 years ago in IDS: if you can’t accurately detect a threat with multiple sensors on a 24/7 basis, everything else doesn’t matter. The ability to detect and categorize the threat in advance allows you to be preemptive,” he says. “Even before a drone is getting the accurate GPS of where [a data center is, for example], we are detecting it.”

DroneTracker (Source: Dedrone)

Like an IDS, DroneTracker uses a database of threat signatures. These signatures identify things like the type of drone and whether it’s carrying some sort of payload. That information is also preserved for incident response evidence, whether it was a physical threat or a cyber one, he says. For now, though, any response to a drone is done offline and not via the sensors, he says.
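Dedrone hasn’t published its signature format, so purely as an illustration of the IDS analogy, matching a sensor reading against a signature database might look like this (every field and value below is invented):

```python
# Illustrative sketch only: Dedrone's actual signature format is not
# public, so the signature fields and values are made up for the example.
SIGNATURES = [
    {"name": "DJI Phantom 3", "rf_mhz": 2400, "acoustic_hz": 190},
    {"name": "Parrot Bebop",  "rf_mhz": 2400, "acoustic_hz": 220},
]

def classify(detection: dict, tolerance_hz: float = 15.0) -> str:
    """Return the first signature consistent with the sensor reading."""
    for sig in SIGNATURES:
        if (detection["rf_mhz"] == sig["rf_mhz"]
                and abs(detection["acoustic_hz"] - sig["acoustic_hz"]) <= tolerance_hz):
            return sig["name"]
    return "unknown drone"

print(classify({"rf_mhz": 2400, "acoustic_hz": 185}))
```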

Noonan says the drone detection technology ultimately will integrate with network- and other types of cyberattack detection and prevention systems. “I think the easy way technologically is to first detect a drone in the area to update the network- and security monitoring system [about that], and then to detect and block its attempt to access a network. This is where we go with this,” he says.

“Architecturally, we can get our threat data into a security information management system very quickly: We just need a customer who wants to do it,” Noonan says. “We can send an alert with enough intelligence to the IR [incident response] system or the security information management system to indicate a threat has been detected and translate that into a cyber-mitigation.”

But don’t start scanning the sky around the data center for a nation-state drone attack just yet. It’s still much easier for an attacker to exploit vulnerabilities in a web server, or phish a user, than to fly a drone overhead. “Hackers always take the path of least resistance,” says Glenn Wilkinson, senior security analyst with SensePost and a creator of Snoopy. Piloting a drone to hack a target from afar is a lot more work than hacking into that organization via SQL injection holes or other readily available vulnerabilities, he says.

That’s not to say some attackers wouldn’t use drones. “What you might see going forward in the future, as current attacks become harder and harder, they will move to the next step,” Wilkinson says.

Drones could replace the physical social engineering attack to get inside the building or facility, he says. “In a government building where it’s super-hard to get in, maybe a drone attack would be feasible,” he says.

Like SensePost’s Snoopy, which exploited weaknesses in the manner in which mobile devices search for WiFi signals in order to spy on a mobile user’s physical and online activity as well as steal his data, a drone-based cyberattack is basically fairly straightforward. “There’s nothing too fancy using the drone. It’s the same as getting any proximity-style access,” Wilkinson says. “It’s largely going after WiFi.”

So preventing a drone-based cyberattack would basically be no different than locking down your WiFi and other wireless connections.

Here’s one way a drone cyberattack would work:  the devices, often Linux-based, would be rigged with or controlled by hacking tools that could attempt to break into the WiFi network of a data center over which it was hovering, for example. The attacker would get past the WiFi SSID with stolen credentials, for example, and ultimately siphon stolen information wirelessly back to his own server.

In addition to security researchers who have rigged cyber-attacking drones, the US military and Boeing are working on prototypes, Noonan notes. “The Italian defense contractor Hacking Team has been promoting this for a while,” he says.

So far, Noonan’s company is getting pings about its technology mainly from airports, government agencies, prisons, and utilities. Hotels are even looking at it to thwart drones spying on VIP suites, for instance, “hooked into automated curtain closers” when a drone is detected at the window, he says.

Tom Noonan, Chair, Dedrone

Drones may be relatively inexpensive these days, but drone detection technology is not. The cost for a sensor network for a small facility is about $100,000 a year, and $1 million per year for a large facility such as an airport.

As with the first generation of IDS, there’s a learning curve, according to Noonan. “We had to beg people to think about intrusion detection. They were afraid it would shut down their network. It was only after those early adopters finally [added IDS] and the SQL Slammer worm hit … That’s what put ISS on the map. Nobody saw it coming and they got hosed.”


Article source: http://www.darkreading.com/vulnerabilities---threats/drone-detection-as-the-new-ids-/d/d-id/1323005?_mc=RSS_DR_EDT

PageFair analytics hacked and used to distribute malware on Halloween


First, the trick: on Halloween night, PageFair got hit by a Trojan masquerading as an Adobe Flash update.

Then, the treat: the company managed to eschew non-apology mumbo-jumbo to issue a detailed, satisfyingly remorseful apology.

Beginning late Sunday night, the day after the company discovered the attack, PageFair CEO Sean Blanchfield published a series of updated posts about the 83-minute-long attack, which he said affected 501 publishers using the company’s free analytics service.

PageFair’s analytics enable online publishers to see how many of their visitors are blocking ads. It also offers an advertising system that displays “adblock-friendly” ads to adblockers.

PageFair’s mea culpa as of 21:30 GMT Sunday:

If you are a publisher using our free analytics service, you have good reason to be very angry and disappointed with us right now.

For 83 minutes last night, the PageFair analytics service was compromised by hackers, who succeeded in getting malicious javascript to execute on websites via our service, which prompted some visitors to these websites to download an executable file.

I am very sorry that this occurred and would like to assure you that it is no longer happening.

The malware (detected by Sophos as Mal/MSIL-PL) turned out to be a Trojan calling itself adobe_flashplayer_7.exe.

The attack started with a successful spearphishing attack against PageFair that gave the attackers access to a key email account.

They used that email account to reset the password on PageFair’s CDN (Content Delivery Network) and replaced PageFair’s analytics code with their own malicious JavaScript.

A CDN is a distributed website that mirrors content around the world to lots of different servers. PageFair customers embed code hosted on the CDN in their web pages.

Changing the code on the CDN changed the code embedded by PageFair customers, turning their pages from advertising channels into malware distribution channels.

Users visiting sites that used PageFair’s compromised analytics code were prompted to install a fake Adobe Flash update, and anyone who accepted it and wasn’t protected by up-to-date anti-virus software was at risk.

The company estimates that some 2.3% of visitors to the 501 affected publishers during the 83 minutes of the attack would have been placed at risk of infection, though more than that would have seen an alert dialog purporting to be a Flash update notice.

PageFair directly notified affected publishers and by Monday had completely resolved the breach, the company said.

It’s not looking like any core PageFair servers or databases were compromised.

That means that no publisher account information, passwords or personal information was apparently leaked.

It’s quite common for organisations to include JavaScript code from third parties in their websites; it’s how things like online advertising, Google Analytics, Facebook Like buttons and Twitter’s Tweet widgets work, for example.

Using third-party code is useful, easy and convenient (and often the only way to access a service) but it’s also a risk — your site is only as secure as the third-party organisations it pulls its code from.
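One standard defence against exactly this failure mode is Subresource Integrity (SRI): pinning the exact bytes of a third-party script, so that a compromised CDN breaks the page rather than infecting its visitors. Here’s a sketch of computing the integrity attribute for a script you’ve vetted (the filename is a placeholder):

```python
# Sketch: compute a Subresource Integrity (SRI) hash for a vetted copy of
# a third-party script. Embedding the result in the page as
#   <script src="..." integrity="sha384-..." crossorigin="anonymous">
# makes the browser refuse to run the script if the CDN's copy ever changes.
import base64
import hashlib

def sri_hash(script_bytes: bytes) -> str:
    digest = hashlib.sha384(script_bytes).digest()
    return "sha384-" + base64.b64encode(digest).decode()

with open("pagefair-analytics.js", "rb") as f:   # placeholder filename
    print(sri_hash(f.read()))
```

The trade-off is that the pinned script can no longer be silently updated by the vendor, which is precisely the point.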

In this instance, that sharing of code allowed a phishing attack against a single vendor to compromise 501 different websites with tens of millions of monthly visitors.



Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/vk80OJvf5K0/