STE WILLIAMS

Hacking train Wi-Fi may expose passenger data and control systems

Vulnerabilities on the Wi-Fi networks of a number of rail operators could expose customers’ credit card information, according to infosec biz Pen Test Partners this week.

The research was conducted over several years, said Pen Test’s Ken Munro. “In most cases they are pretty secure, although whether the Wi-Fi works or not is another matter,” he added.

But in a handful of cases Munro was able to bridge the wireless network to the wired network and find a database server containing default credentials, enabling him to access the credit card data of customers paying for the Wi-Fi, including the passenger’s name, email address and card details.

He said he was not aware of any incidents of networks being compromised but warned in the worst-case scenario it might be possible for miscreants to take control of the train. “It might be possible, and this is speculation, to lock the braking system.”

Munro declined to name the operators affected by the weak security set-up because the vulnerabilities still exist.

The security hole could be closed by strengthening login credentials, which often haven’t been changed from the default or are too simple.
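
Pen Test Partners hasn’t published its methodology, but this class of weakness is straightforward to audit for. A minimal sketch, assuming a simple inventory of service accounts and a short list of well-known default credential pairs (both hypothetical):

```python
# Flag service accounts that still carry well-known default credentials.
# The credential list and the inventory format are illustrative assumptions.
DEFAULT_CREDENTIALS = {
    ("admin", "admin"),
    ("root", "root"),
    ("admin", "password"),
    ("sa", ""),
}

def flag_default_accounts(accounts):
    """Return the accounts whose (user, password) pair matches a known default."""
    return [a for a in accounts if (a["user"], a["password"]) in DEFAULT_CREDENTIALS]

inventory = [
    {"host": "db01", "user": "sa", "password": ""},
    {"host": "db02", "user": "svc_wifi", "password": "N0t-a-Default!"},
]
for account in flag_default_accounts(inventory):
    print(f"{account['host']}: default credentials for user '{account['user']}'")
```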

Lateral thinking, lateral network movement

Munro said weak credentials mean a hacker can also change the routing, potentially allowing them access to more sensitive networks on the train.

Part of the problem is a lack of segregation between the Wi-Fi networks. This could be solved by ensuring passengers can only route traffic from their devices to the internet. The wireless router admin interface should not be accessible to passengers either.

Operators must also check that the admin interface, which is often exposed on the gateway IP address, cannot be reached by passengers. Completely isolated, physically separate hardware for passenger Wi-Fi is preferable.
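
Whether the admin interface answers from the passenger side can be verified with a basic connection test run from a device on the passenger Wi-Fi. This sketch (the function name and port are assumptions) simply reports whether a TCP connection to the gateway succeeds:

```python
import socket

def admin_interface_reachable(gateway_ip, port=443, timeout=2.0):
    """Return True if a TCP connection to the gateway's admin port succeeds."""
    try:
        with socket.create_connection((gateway_ip, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example: on a properly segregated passenger network this should print False.
# print(admin_interface_reachable("192.168.1.1", port=80))
```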

A separate survey recently found that the majority of consumers are potentially leaving themselves exposed to miscreants by failing to change the password and security settings on their routers.

Infosec bods noted that if an attacker could access the admin interface, they could probably log in by guessing the default password and then change settings, including the Wi-Fi password itself. ®

Sponsored:
Minds Mastering Machines – Call for papers now open

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/05/11/train_wifi_hackable_on_some_networks/

The New Security Playbook: Get the Whole Team Involved

Smart cybersecurity teams are harnessing the power of human intelligence so employees take the right actions.

It’s common to see the posters in airports, buses, and subways: “If you see something, say something.” Over the years, thousands of people have tipped off the police to physical security risks. It’s worked so well that New York City’s Metropolitan Transportation Authority has launched nine generations of the ad campaign.

Now, smart cybersecurity teams are stealing a page from that playbook, harnessing the power of human intelligence to write a brand-new playbook for their organizations, training users to recognize cyberthreats and take the right actions. It’s a collaborative approach to defense in depth, the yin to technology’s yang, and a way to turn your users into a layer of protection.

Step 1: Drill Users in the Basics
Many companies tend to cover the security fundamentals intermittently, during new-hire orientation or security awareness month. That’s hardly enough. Organizations need to educate users constantly.

Employees should know to verify links or attachments before clicking; it’s the simplest way to avoid being infected with malware. If an email recipient knows the sender but wasn’t expecting the attachment or link, he or she should contact the sender and ask about it. An ounce of inconvenience is worth a ton of pain.

Employees should learn to practice good cyber hygiene, starting with keeping the operating system and software applications current on any devices, in addition to downloading antivirus/anti-spyware software, configuring automatic updates, and securing their home Wi-Fi.

Before using e-commerce sites, employees should look for “HTTPS” in their browser’s URL field. If they don’t see these signs of encryption, they shouldn’t enter logins or personal information. When an email, say from a user’s bank, contains an e-commerce link, the user shouldn’t click but instead manually enter the bank’s URL.
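
That advice boils down to a rule simple enough to express in code. A sketch (the function name is made up for illustration) that treats only explicit https:// URLs as acceptable for logins or card details:

```python
from urllib.parse import urlparse

def ok_for_credentials(url):
    """Accept only explicit https:// URLs for logins or personal information."""
    return urlparse(url).scheme == "https"

print(ok_for_credentials("https://shop.example.com/checkout"))  # True
print(ok_for_credentials("http://shop.example.com/checkout"))   # False
```

Note that HTTPS only indicates the connection is encrypted, not that the site is legitimate; phishing sites can use HTTPS too, which is why typing the bank’s URL by hand still matters.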

Organizations should advise employees not to use public computers or Wi-Fi when they shop online, as public Wi-Fi is open and insecure. Also, employees should enable two-factor authentication when online shopping sites offer it.

Of course, security analysts already know these things, but plenty of users don’t. That’s why the first step is to drill them in the basics.

Step 2: Help Them Recognize Social Engineering
Social engineering comes in many flavors. Step 2 deals with good old-fashioned scams, such as someone in an airport coffee shop looking over your shoulder to steal your network login credentials.   

These days, most social engineering scams land in employees’ in-boxes, as email is the preferred attack vector. With most breaches starting out as malicious emails, organizations need to train users to recognize the tactic.

One way to start: help employees be aware of the emotions scammers take advantage of. When users receive emails and are tempted to click, what are they feeling? The thrill of some promised reward? The fun of social sharing? The fear of missing instructions from HR?

The answer could be any of those emotions. One study revealed that phishing motivators are a rich mix of personal messages and business communications — in other words, the contents of a typical in-box.

Here’s an example. An accounts payable specialist gets an urgent email that seems to come from a senior VP. The VP wants her to wire $100,000 to a vendor’s account, ASAP. An untrained employee might authorize the transfer. An employee trained in email security would ask a few questions, starting with, “Do we really respond to fund transfer requests via email?”

The biggest companies in the world — along with smaller and midsized firms, government agencies, schools, and more — run formal training to help users recognize and report the latest tactics. Some organizations have “bounty” programs to give employees rewards, cash, or free swag for reporting a verified scam.

All social engineering targets human beings. That’s why you need a strategy to harden your human assets.

Step 3: Help Users Help You Fight Malware
The easiest way to penetrate defenses is through employees. Conversely, employees are the last line of defense when technology fails, which happens all the time.

Imagine this subject line: “Free Coffee.” Think it would work? It has, at many organizations. So has “Holiday Party Pics” or “Your Package Delivery.” People are human. Unless they are trained to be aware of their emotions when reading an email, they’ll take a break from work to click on something fun and potentially malicious.

Before the incident response team can identify which users received an email loaded with malware and mitigate the threat, they have to know about it. Someone within the organization — an employee with the benefit of proper security training — has to report the email.

The kind of anti-phishing training explained in step two is a solid way to proceed with step three as well. Companies that have run these programs for years create scenarios for social engineering and malware delivery alike.

It’s kind of like in the real world, where employees face real threats. In the new security playbook, you need all of them to become field intelligence agents. To see something, report it, and join your security team.

Related Content:

John “Lex” Robinson has over 30 years’ experience in information technology with a focus on value innovation, strategic planning, and program delivery. In addition, he has consulted and managed product and service delivery teams for both small businesses and global Fortune 20 … View Full Bio

Article source: https://www.darkreading.com/threat-intelligence/the-new-security-playbook-get-the-whole-team-involved-/a/d-id/1331719?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

8 Ways Hackers Can Game Air Gap Protections

Isolating critical systems from connectivity isn’t a guarantee they can’t be hacked.

Image Source: Adobe Stock (Phongphan Supphakank)

The almighty air gap has long been critical systems’ go-to last resort – the idea being if you pull the plug of connectivity on these systems and don’t allow them any kind of access to the outside world, you’ll eliminate the bad guys’ ability to remotely carry out their attacks.

While it’s true that air gaps can drastically shrink attack surface, they’re far from infallible. Security researchers – particularly a few from Ben-Gurion University in Israel – have worked over the last five to 10 years to show that even the most meticulously isolated air gap can be overcome with some clever uses of side channels. Here are some of the most effective end-arounds of air gap defense.

 

Ericka Chickowski specializes in coverage of information technology and business innovation. She has focused on information security for the better part of a decade and regularly writes about the security industry as a contributor to Dark Reading. View Full Bio

Article source: https://www.darkreading.com/endpoint/8-ways-hackers-can-game-air-gap-protections/d/d-id/1331740?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Newly Released Russian Facebook Ads Show Scale of Manipulation

House Democrats this week released 3,500 Facebook ads demonstrating the extent of Russia’s influence on US citizens from 2015 to 2017.

Democrats on the House Intelligence Committee have shared more details of Russia’s interference in the 2016 US Presidential Election with the release of more than 3,500 Facebook ads. The ads, purchased by Russia’s Internet Research Agency (IRA), ran from 2015 to 2017.

Committee members this week released a total of 3,519 ads and stated more than 11.4 million Americans were exposed to them. The IRA also created 470 Facebook pages, which generated 80,000 pieces of organic content and were seen by more than 126 million Americans, the Committee reports. It plans to release this organic content at a later date.

Earlier this year, a federal grand jury indicted 13 Russian nationals and three Russian entities, including the IRA, for their participation in a scheme to interfere with the 2016 election. Special Counsel Robert Mueller alleges that they aimed to sow discord in the US political system. They posed as US citizens and businesses to buy political ads on social media and spread disinformation.

Now we have more details about what these ads included and who they targeted. While not all of them are pro-Trump, they depict controversial and high-profile issues — the Second Amendment, Black Lives Matter movement, immigration, LGBT rights among them — in a way designed to pit groups of Americans against each other.

In recent public statements, Facebook admits it was “too slow to spot this type of information operations interference” and the company says it plans to make changes with the intent of stopping threat actors from leveraging misinformation to change the democratic process. For example, Facebook is creating an archive so users can search back through issues and political ads for up to seven years and view ad impressions, spending, and demographic data like age, gender, and location. Advertisers will need to confirm their ID and location before running political ads in the US, and ads will say who paid for them.

Read more details here and view the ads here.

Dark Reading’s Quick Hits delivers a brief synopsis and summary of the significance of breaking news events. For more information from the original source of the news item, please follow the link provided in this article. View Full Bio

Article source: https://www.darkreading.com/vulnerabilities---threats/newly-released-russian-facebook-ads-show-scale-of-manipulation/d/d-id/1331779?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Hide and Seek Brings Persistence to IoT Botnets

The rapidly evolving Hide and Seek botnet is now persistent on a wide range of infected IoT devices.

IoT devices tend to be simple. So simple, in fact, that turning them off and back on again has historically been a reliable way to eliminate malware. Now, though, a new variant of the Hide and Seek bot can remain persistent on IoT devices that use a variety of different hardware and Linux platforms.

A research team at Bitdefender described the new variant of a botnet it first discovered in January, noting two important developments: one novel, and one in keeping with a broader trend in malware.

Persistence in IoT devices is novel and disturbing since it removes a common defense mechanism from the security team’s toolbox. In order to achieve persistence, Hide and Seek must gain access to the device via Telnet, using the protocol to achieve root access to the device. With root access, a file is placed in the /etc/init.d/ directory where it executes each time the device is rebooted. According to the Bitdefender researchers, there are at least 10 different versions of the executables that can run on 10 different system variants.
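
Bitdefender hasn’t published a detection recipe, but the persistence mechanism suggests an obvious check: diff the device’s init scripts against a known-good baseline. A sketch, with the directory path and baseline list as assumptions:

```python
import os

def unexpected_init_scripts(initd_dir, baseline):
    """Return init scripts present on the device but absent from the baseline."""
    return sorted(set(os.listdir(initd_dir)) - set(baseline))

# Hypothetical usage on a device whose stock firmware ships two scripts:
# unexpected_init_scripts("/etc/init.d", ["networking", "telnetd"])
```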

“Once this new botnet has been armed, it isn’t going to do anything but increase the availability of the already prevalent DDoS tools for those looking to launch such attacks,” says Sean Newman, director of product management at Corero Network Security. He points out that this is disturbing for technology advancement reasons, but it might not immediately make a huge impact on the DDoS environment. “With most IoT devices rarely rebooted and easily re-infected if they are, it feels like this may not make as much impact as you might think to the already burgeoning supply of botnets,” he says, “particularly those being used to launch damaging DDoS attacks.”

As part of a broader trend in malware, Hide and Seek shows considerable development and evolution in the code being deployed. Since its initial discovery in January of this year, “The botnet seems to undergo massive development as new samples compiled for a variety of architectures have been added as payloads,” according to the Bitdefender Labs blog post on the malware.

“This showcases the continued evolution of malware and how the internet continues to democratize access to information, malicious or otherwise,” says Dan Mathews, director at Lastline. He lists some of the ways in which the industry has seen botnet malware evolve since the days of Mirai, including, “…default expanded password guessing and cross-compiled code to run on multiple CPU architectures added, as well as exploits added to leverage IoT vulnerabilities, exploits added for peer to peer communications, and now exploits added for persistence.”

Hide and Seek’s original version was notable for using a proprietary peer-to-peer network for both command-and-control traffic and the coordination of new infections. Now that persistence has been added to the feature mix, the botnet has become a more pressing concern both for the owners of the 32,000+ devices already infected and for those whose IoT devices are vulnerable and still unprotected.

Related content:

Curtis Franklin Jr. is Senior Editor at Dark Reading. In this role he focuses on product and technology coverage for the publication. In addition he works on audio and video programming for Dark Reading and contributes to activities at Interop ITX, Black Hat, INsecurity, and … View Full Bio

Article source: https://www.darkreading.com/iot/hide-and-seek-brings-persistence-to-iot-botnets/d/d-id/1331783?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

iOS 11.4 to come with 7-day USB shutout

Mobile forensics researchers recently discovered a major new security feature while poking around in the beta version of Apple’s upcoming iOS 11.4 release, due soon.

It’s called USB Restricted Mode: a feature that popped up in the iOS 11.3 beta but didn’t make it to the final release. The feature snips the USB data connection over the Lightning port if the device hasn’t been unlocked for a week. The device can still be charged over USB, but after 7 days, it won’t give up data without a passcode, meaning that at least some backdoor ways to get at data won’t work anymore.
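
The behaviour as described reduces to a simple time window. This sketch models the stated policy only; it is not Apple’s implementation:

```python
from datetime import datetime, timedelta

USB_DATA_WINDOW = timedelta(days=7)

def usb_data_allowed(last_unlock, now):
    """USB data transfer stays enabled only within 7 days of the last unlock;
    charging over USB is unaffected either way."""
    return (now - last_unlock) < USB_DATA_WINDOW
```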

ElcomSoft researchers found this explanation of how it works in Apple’s documentation:

To improve security, for a locked iOS device to communicate with USB accessories you must connect an accessory via lightning connector to the device while unlocked – or enter your device passcode while connected – at least once a week.

If the device is unlocked with a passcode, the data transfer over USB will be re-enabled. But once the Lightning port has been disabled for a week, thieves or investigators won’t be able to get at data by pairing the device to a computer or USB accessory. Without a passcode to unlock the device, they won’t even be able to get into it using an existing iTunes pairing record, used to recognize PCs that are ‘trusted’ by the device, also known as a lockdown record.

As ElcomSoft researcher Oleg Afonin has explained, forensics experts have found pairing records to be “immensely handy” for extracting device data without having to first unlock it with a passcode, a fingerprint press or a trusted face.

Lockdown records aren’t foolproof when it comes to getting into phones without those unlocking techniques, but on the upside for police or thieves, you could use old records – Afonin mentioned using a year-old lockdown record. That is, you could do that up until recently. In iOS 11.3 beta Release Notes, Apple said it was adding an expiration date to lockdown records.

In a post published on Tuesday, Afonin said that it’s not clear yet whether the iPhone unlocking techniques developed by outfits such as Grayshift and Cellebrite will be blocked by the new USB Restricted Mode.

According to Grayshift’s reported marketing materials, its iPhone X and 8 unlock tool is called GrayKey. Grayshift claims its software works against disabled iPhones, which is one of the states an iPhone can enter if a passcode is entered incorrectly too many times.

Law enforcement agents using a tool like GrayKey have apparently only needed two things to get into an iOS device: physical access and enough time. As Forbes has reported, the tool might hack Apple’s Secure Enclave: the isolated chip in iPhones that handles encryption keys. Secure Enclave makes it time-consuming to brute-force a phone by incrementally increasing the time between guesses: up to an hour for the ninth attempt and onwards.
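
Those escalating delays make the economics of a naive brute force easy to model. The per-attempt figures below follow public reporting on iOS lock-out behaviour and should be treated as illustrative rather than authoritative:

```python
# Minutes an attacker must wait after each failed passcode attempt, per
# public reporting on iOS; from the ninth attempt onward the delay is an hour.
DELAY_MINUTES = {1: 0, 2: 0, 3: 0, 4: 0, 5: 1, 6: 5, 7: 15, 8: 15}

def delay_after(failed_attempts):
    return DELAY_MINUTES.get(failed_attempts, 60)

def hours_for(total_guesses):
    """Rough lower bound on wall-clock time for a naive brute force."""
    return sum(delay_after(n) for n in range(1, total_guesses + 1)) / 60

print(f"{hours_for(100):.1f} hours for 100 guesses")
```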

The new USB Restricted Mode will sharply curtail the time investigators have to break into an iOS device. As TechCrunch noted, the FBI milked its months-long access to the phone of the dead San Bernardino terrorist and mass murderer Syed Rizwan Farook before breaking into his iPhone 5C, dragging the matter through the courts and turning it into a major battle in the war against encryption.

Looks like the FBI, et al., are going to have to speed things up quite a bit with the upcoming seven-day deadline of this new security feature, assuming it makes it into the final release.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/AUhcIBmNq20/

Apple boots out apps that abuse location data collection

There are only two weeks to go before the European Union’s General Data Protection Regulation (GDPR) officially lands, on 25 May. Surely companies have all their data protection ducks in a row by now, one imagines…?

Or not. Or, at least, over at Apple, there’s still work being done to ensure that customers’ data is on extra strong lock-down, according to 9to5mac.

Namely, Apple is reportedly looking beyond its own data privacy/security toward that of its developers. Specifically, it’s been cracking down on those developers whose apps share location data, kicking them off the App Store until they cut out any code, frameworks or Software Development Kits (SDKs) that are in violation of Apple’s location data policies.

9to5mac has seen several cases of Apple having emailed developers to let them know that, “upon re-evaluation,” their applications are in violation of sections 5.1.1 and 5.1.2 of the App Store Review Guidelines. Those sections pertain to data collection, storage, use and sharing, as well as to letting people know what type of data an app requests (including location data).

One Twitter user sent out a screen capture of the notice he got:

9to5mac says that in the instances it’s seen, apps aren’t doing enough to let users know what’s happening with their data. Apple doesn’t want developers to just ask for permission – it’s telling them to explain what the data’s used for and how it’s shared.

If it’s to improve user experience, that’s OK. Otherwise, the apps are getting yanked.

You may not use or transmit someone’s personal data without first obtaining their permission and providing access to information about how and where the data will be used.

Data collected from apps may not be used or shared with third parties for purposes unrelated to improving the user experience or software/hardware performance connected to the app’s functionality.

Good for Apple for doing this type of due diligence on its developers.

You don’t have to look far to find instances where location data has been used in surveillance scenarios in which the information of scads of unintended targets gets caught up in dragnets. One of the most notorious such dragnets was revealed by Edward Snowden, when he released documents that showed that the National Security Agency (NSA) was collecting and storing data in a vast database that contained the locations of at least hundreds of millions of devices.

A more recent case, from November, was when Androids were caught secretly reporting location data regardless of opt-out.

The location data was never used, a Google spokesperson said, and therefore was never stored. Google was “taking steps to end the practice,” the spokesperson said at the time, “at least as part of this particular service.” Google didn’t say whether there were other Android services that do this.

What, exactly, is going to happen to companies that make this type of D’oh! mistake after the 25 May GDPR deadline?

The penalties could be huge: any business found not in compliance after that date could be hit with fines of up to €20m or 4% of its annual global turnover, whichever is greater.
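
As commonly interpreted, that upper tier is the greater of the two figures, which makes a quick sanity check easy:

```python
def max_gdpr_fine(annual_global_turnover_eur):
    """Upper-tier GDPR cap: the greater of EUR 20m or 4% of global turnover."""
    return max(20_000_000, 0.04 * annual_global_turnover_eur)

# For a EUR 100m-turnover firm, the EUR 20m floor applies;
# for a EUR 1bn-turnover firm, 4% (EUR 40m) is the larger figure.
```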

If you have any question about your own business’s compliance, you might want to have a peek at the Sophos GDPR compliance check for peace of mind.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/2P3DrMGotpU/

Firefox support for WebAuthn shows passwords the door

Something important happened in the world of passwords this week – Firefox 60 has become the first browser to support a new standard called Web Authentication (WebAuthn).

Developed as a joint effort by the FIDO Alliance industry group and the World Wide Web Consortium (W3C) on the back of the Universal Authentication Framework (UAF), WebAuthn is an API that uses public-key cryptography to let users log into websites without needing a password.

The point of WebAuthn is to turn today’s flawed authentication model on its head.

That model typically has users authenticating themselves with passwords and, in some cases, a second factor such as a one-time code.

Passwords are widely reused, bad ones are easy to guess, strong ones are hard to remember, and all passwords can be stolen by phishing attacks. The one-time codes that add so much extra protection are rarely used and can also be phished, although the window of time in which they remain valid is very small.

WebAuthn aims to change all of that:

Firefox 60 will ship with the WebAuthn API enabled by default, providing two-factor authentication built on public-key cryptography immune to phishing as we know it today.

For now WebAuthn relies on hardware keys, like YubiKeys, either on their own or alongside passwords. In future it could utilise any number of authentication methods including Windows Hello, face or fingerprint ID, or even a PIN terminal.

Once a user has authenticated at their end, no credentials leave their device – all a website sees is confirmation that authentication was successful – so there is nothing to steal.
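
The flow can be sketched structurally in a few lines. Python’s standard library has no asymmetric crypto, so an HMAC over a shared secret stands in for the authenticator’s private-key signature here; in real WebAuthn the device signs with a private key and the site verifies with the public key registered at enrolment, so the site never holds a secret at all:

```python
import hashlib
import hmac
import os

class Authenticator:
    """Models the user's device. registration_data() stands in for exporting
    a *public* key; in real WebAuthn the signing key never leaves the device."""
    def __init__(self):
        self._key = os.urandom(32)             # stand-in for a private key

    def registration_data(self):
        return self._key                       # stand-in for the public key

    def sign(self, challenge):
        return hmac.new(self._key, challenge, hashlib.sha256).digest()

def website_login(registered_key, authenticator):
    """Server side: issue a fresh random challenge, verify the signed
    assertion. A fresh challenge per login is what defeats replayed data."""
    challenge = os.urandom(32)
    assertion = authenticator.sign(challenge)
    expected = hmac.new(registered_key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(assertion, expected)
```

The property the sketch illustrates is the one above: the site stores only verification material, and every login signs a fresh challenge, so a phished or replayed response is useless.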

By relegating passwords, at a stroke WebAuthn also reduces the negative impact of password re-use, which engineers are still trying to figure out how to stop.

How long might WebAuthn take to establish itself?

The short answer is don’t throw away your passwords just yet. Support has started in browsers which, in addition to Firefox, will soon include Chrome and Edge. (Apple’s intentions for Safari are less clear).

Next will be mobile devices, which in the case of Android will take longer because the architecture for storing credentials securely, on which WebAuthn hinges, is still evolving.

WebAuthn must of course be supported by the big websites – Google, Facebook and Microsoft are keen while Dropbox is already there, but even the latter’s enthusiasm is qualified:

There are still many security and usability factors to consider in these scenarios before replacing passwords entirely, and we believe that enabling WebAuthn for two-step verification strikes the right balance for most users right now.

What Dropbox seems to be saying is that no matter how good an idea WebAuthn is, users still need to adopt security mechanisms such as biometrics or hardware tokens before passwords can be retired.

You don’t have to be an outright pessimist to think that might take years.

A side issue is where WebAuthn leaves users who’ve already invested in hardware tokens prior to new FIDO2 WebAuthn tokens appearing last month.

As far as we can tell, older U2F tokens (which lack the number ‘2’ on the front) are backwards compatible with WebAuthn, the only limitation being that they won’t support the Client to Authenticator Protocol (CTAP) used in a few scenarios when one device (the hardware token) is authenticating another (a phone).


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/Dl1RNo1NKfY/

Telstra warns cloud customers they’re at risk of malware or worse

UPDATE Telstra has advised users of its cloud who run self-managed resources that their “internet facing servers are potentially vulnerable to malware or other malicious activity.”

The company says that it spotted a weakness in its service on May 4th and is now telling users to “delete or disable” the “TOPS or TIRC account on your self-managed servers”.

The Register has asked Telstra what “TOPS” and “TIRC” accounts allow. But the note sent to customers suggests they’re privileged administrator accounts of some sort.

“We’ve also taken steps to access your account and remove the TOPS or TIRC accounts to minimise the risk on your behalf,” the note says. “We’re still encouraging you to check your account settings and remove/disable any unused accounts as we can’t confirm at this stage if we’ll be successful updating the accounts from our end.”

The letter was sent to users of self-managed servers and advises customers of Telstra-managed servers that they’re in the clear.

At a guess, this sounds like TOPS and TIRC accounts have standard passwords, which have become more widely known than is sensible. And because such accounts appear to be on by default, it is party time for any miscreants who have credentials to unlock them.

And whatever the opposite of party time is at Telstra cloud.

We’ve asked Telstra to detail the situation and will update this story if it offers pertinent information. ®

UPDATE: A Telstra spokesperson told us “Our customers’ security is our number one priority. We identified a weakness, moved quickly to address it and worked closely with our customers to ensure the necessary steps were taken to fully secure their systems.” The spokesperson did not elaborate on the nature of the security SNAFU.


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/05/11/telstra_self_managed_cloud_security_incident/

Shining lasers at planes in the UK could now get you up to 5 years in jail

The ban on shining lasers at cars and aeroplanes has been strengthened with a five-year prison sentence now available for those who train their laser pointers on ships, aircraft or air traffic control towers.

“Under the new law, it is a crime to shine or direct a laser beam that dazzles or distracts, or is likely to dazzle or distract, air traffic controllers, pilots, captains of boats and drivers of road vehicles,” said a UK government statement as the Laser Misuse (Vehicles) Act gained Royal Assent.

The Act builds on previous laws which made this illegal but did not include the need for prosecutors to prove that the person shining the laser intended to endanger a vehicle. It also lengthens the available prison sentence to five years or an unlimited fine. Existing legislation allowed for a maximum fine of £2,500 and potential prosecution for reckless endangerment, which carries potential prison time.

Under the updated law, a defence is available for those who took “all reasonable precautions” to stop their light beams from catching drivers, pilots, ships’ captains or air traffic controllers in the eye.

The problem of laser misuse stretches back to the introduction of cheap, readily available laser pointers in the UK. Though regulations did exist, limiting the power levels available for sale to the general public, these were (and are) virtually unenforced.

Naughty folk had an unfortunate tendency to point these lasers at passing or distant items (clouds, aircraft, cliffs, air traffic control towers, etc) with no thought for the effects on people at the receiving end. Relatively high-powered lasers are easily capable of causing permanent damage to eyesight.

“In 2017, UK airports reported 989 laser incidents to the Civil Aviation Authority. The most affected airport was Heathrow with 107 incidents, followed by Gatwick (70), Manchester (63) and Birmingham (59),” said the government’s statement on the successful passing of the Act.

Brian Strutton, general secretary of the British Airline Pilots’ Association (BALPA) trade union, added: “Shining a laser at an aircraft is extremely dangerous and has the potential to cause a crash that could be fatal to not only those on board, but people on the ground too.”

BALPA has led the charge for stricter laws on laser abuse for many years. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/05/11/laser_misuse_act_5_year_sentences/