STE WILLIAMS

Facebook Struggles in Privacy Class-Action Lawsuit

Facebook’s privacy disclosures “are quite vague” and should have been made more prominent, a federal judge argued.

Facebook, in the midst of a class-action privacy lawsuit, was dealt a blow last week when US District Judge Vince Chhabria argued its privacy policies and practices cause users harm.

In a motion-to-dismiss hearing held Feb. 1, Facebook asked Chhabria to throw out a 267-page complaint from a multidistrict case that sought billions in damages over the social giant’s alleged violations of state and federal law. Facebook’s attorney insisted the company had not broken the law because its users willingly let external parties collect data via their privacy controls.

However, Chhabria said Facebook’s disclosures informing users of its data-sharing practices “are quite vague,” as detailed in a Courthouse News Service report. Derek Loeser, an attorney representing a proposed class of Facebook users, argued that in order for the policy to be binding, people have to be properly informed before they consent to share their information.

“The injury is the disclosure of private information,” said Chhabria in Friday’s hearing.

Chhabria gave plaintiffs a chance to file an amended complaint within 21 days instead of first issuing a ruling in an effort to accelerate the litigation. Plaintiffs have agreed, saying they will add new data regarding a Facebook privacy settings change. The new complaint is due Feb. 22.

Read more details here.

 

 

 Join Dark Reading LIVE for two cybersecurity summits at Interop 2019. Learn from the industry’s most knowledgeable IT security experts. Check out the Interop agenda here.

Dark Reading’s Quick Hits delivers a brief synopsis and summary of the significance of breaking news events. For more information from the original source of the news item, please follow the link provided in this article.

Article source: https://www.darkreading.com/endpoint/facebook-struggles-in-privacy-class-action-lawsuit/d/d-id/1333786?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Researchers Devise New Method of Intrusion Deception for SDN

A team from the University of Missouri has taken the wraps off Dolus, a system of ‘defense using pretense’ that they say will help defend software-defined networking (SDN) cloud infrastructure.

Researchers at the University of Missouri hope to move the ball forward on cyber deception technology with a new form of intrusion deception they designed specifically to help defend software-defined networking (SDN) cloud infrastructure.

Their system, called Dolus, was designed using pretense theory from child-play psychology and machine learning to fool attackers: giving them a false sense of success that buys defenders time to thwart DDoS and targeted attacks while collecting valuable threat intelligence in the process.
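Dolus itself has not been released as code, but the core “pretense” loop the researchers describe can be sketched in a few lines: instead of dropping a suspect flow, the defender answers it convincingly while logging intelligence. Everything below – the detection heuristic, field names, and thresholds – is illustrative, not the team’s implementation.

```python
# Illustrative sketch of "defense using pretense" (not the actual Dolus code):
# suspect flows get a convincing fake response instead of a reset, buying
# defenders time and yielding threat intelligence in the process.

THREAT_LOG = []

def is_suspect(flow):
    # Toy heuristic: flag flows with an abnormally high request rate.
    return flow["requests_per_sec"] > 1000

def handle_flow(flow):
    if is_suspect(flow):
        # Pretend the attack is working: answer as the real service would,
        # while recording intel about the source for later coordination.
        THREAT_LOG.append({"src": flow["src"], "rps": flow["requests_per_sec"]})
        return {"status": 200, "body": "OK", "pretense": True}
    return {"status": 200, "body": "OK", "pretense": False}

attack = {"src": "198.51.100.7", "requests_per_sec": 50000}
normal = {"src": "203.0.113.5", "requests_per_sec": 3}

print(handle_flow(attack))   # served by the pretense handler
print(handle_flow(normal))   # served normally
print(THREAT_LOG)            # intel collected on the suspect source
```

The point of the pattern is that the attacker sees identical responses either way, so the defender’s detection is not tipped off.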

“With the time gained through effective pretense initiation in the case of DDoS attacks, cloud service providers could coordinate across a unified (software defined everything Infrastructure) SDxI infrastructure involving multiple (autonomous systems) ASes to decide on policies that help in blocking the attack flows closer to the source side,” they wrote in their research.

This is a classic sales pitch for intrusion deception methods, which can vary in sophistication from simple honeypots or honey nets all the way up to fully simulated systems and environments.

Research lead Prasad Calyam, associate professor of electrical engineering and computer science and the director of Cyber Education and Research Initiative in the MU College of Engineering, says the difference with Dolus is that it’s more fully simulating an SDN environment in production.

“Honeypots – they were more like pre-deployments of applications – so before something goes live you do a lot of this resilience testing and then once things are live, you don’t do much in terms of sophisticated defense,” says Calyam, who believes more sophisticated forms of intrusion deception are “under-explored.”

The question is whether the market has already beaten Calyam and his cohort to the punch when it comes to evolving deception technology. Players like Cymmetria, TrapX, and Attivo Networks are currently duking it out with commercial products moving in this direction, and a recent report from Market Insights Reports shows the market will grow by more than 15% annually through 2025.

So it’s no surprise that analysts like Rich Mogull and Adrian Lane of Securosis wonder whether this is much different than what practitioners already have access to on the market. 

Lane says he’s “highly skeptical” from what he’s seen skimming through the research. And Mogull notes that while he likes the concept of tricking attackers with deception, simply putting forward an advance using some social theory and AI/machine learning may not be enough to differentiate Dolus from existing deception products.

“It isn’t that hard to trick attackers,” Mogull says, also noting that the DDoS defense for SDN may still have a very limited market. “The market is somewhat limited to cloud providers. Very few enterprises are running SDN.”

Calyam, however, believes there is still room for the field to advance; his team’s next step is exploring how to coordinate policies across providers’ software-defined infrastructure to mount a defense that improves security across the whole ecosystem. They’re seeking a means of distributed trust, perhaps through the use of blockchain, to accomplish this.



Ericka Chickowski specializes in coverage of information technology and business innovation. She has focused on information security for the better part of a decade and regularly writes about the security industry as a contributor to Dark Reading.

Article source: https://www.darkreading.com/cloud/researchers-devise-new-method-of-intrusion-deception-for-sdn/d/d-id/1333781?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

6 Security Tips Before You Put a Digital Assistant to Work

If you absolutely have to have Amazon Alexa or Google Assistant in your home, heed the following advice.

Image Source: Adobe Stock: bht2000


Experienced security pros like Amy DeMartine simply won’t allow a digital assistant into their homes.

DeMartine, a principal analyst who serves security and risk professionals for Forrester, says last year’s reports of so-called “voice squatting” (aka “skill squatting”) – where attackers create malicious Amazon Alexa “skills” that appear to be legitimate applications – have her thinking twice about any of these digital assistants.

“People have to decide what their risk threshold is and configure Alexa or any other digital assistant accordingly,” DeMartine says.

According to Candid Wueest, senior principal threat researcher at Symantec, consumers should start by asking the following questions: Do I really need the device, and, if so, what do I need it for? Do I want the device to have a camera to check in on my dog, or am I OK with no camera? And since I already have several devices in my home, do I want to stick with one brand because it’s easier to integrate?

Another $64 million question, and this is the big one: Do I trust the vendor?

“This is not an easy one to answer, but, ultimately, you have to trust that the vendor will safeguard your data,” Wueest says. 

So if you absolutely have to have Alexa or Google Assistant in your home, heed the following advice from DeMartine, Wueest and Jessica Ortega, a website security research analyst at SiteLock. And if you’re a security pro, be sure to educate your customers, too.  

 

Steve Zurier has more than 30 years of journalism and publishing experience, most of the last 24 of which were spent covering networking and security technology. Steve is based in Columbia, Md.

Article source: https://www.darkreading.com/vulnerabilities---threats/6-security-tips-before-you-put-a-digital-assistant-to-work/d/d-id/1333783?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Selling fake likes and follows is illegal, rules New York

Last week – a year after the New York Times reported that an obscure company called Devumi was making millions by selling fake likes, followers and retweets to celebrities, businesses or anyone who wants to puff themselves up online – the New York Attorney General announced a groundbreaking settlement that, for the first time, declares fake social engagement from imposter accounts to be illegal.

The settlement, announced by New York Attorney General Letitia James, is the first in the US to find that selling fake followers and likes is illegal deception, and that fake engagement via stolen identities is illegal impersonation.

The settlement bars Devumi LLC and its offshoot companies, including Disrupt X Inc., Social Bull and Bytion – collectively referred to as Devumi – from ever again engaging in this type of business. According to the New York Post, Devumi owner German Calas Jr. pleaded no contest to the charges and agreed to a $50,000 fine.

Calas’s company had been grossing $15 million a year until it folded in August or September, following negative publicity in the wake of reports that the AG was investigating.

Bots and sockpuppets

James said that Devumi sold social media engagement that was generated by bot and sockpuppet accounts.

A former employee told the NYT that Devumi purchased the bots from various makers scattered around the web. As of last year, one such, Peakerr, was charging a little more than a dollar for 1,000 high-quality English language bots with photos. Devumi would turn around and sell that many for $17.

According to the New York Times’s investigation, there’s good money to be had in the business of bogus likes: Devumi sold about 200 million Twitter followers to at least 39,000 buyers over a few years, accounting for a third of the company’s sales, which came to more than $6 million for the period.

James said that the fake followers, likes and other engagement that Devumi sold came either from bot accounts or from one person pretending to be many others – what are known as sockpuppets. They’re found throughout social media land, including Twitter, YouTube, LinkedIn, SoundCloud, and Pinterest, where they pretend to be real people expressing genuine opinions.

They’re not real. They’re paid-for, they’re manufactured, and they’re out to deceive people, James said.

Devumi also sold social engagement that came from fake accounts that ripped off real people’s social media pictures and profiles without their knowledge or consent. The New York Times wrote about a Minnesota teenager who suffered that fate: Jessica Rychly, whose photo wound up being used for a whole mess of things that are definitely not Jessica-Rychly-endorsed, such as…

[promoting] accounts hawking Canadian real estate investments, cryptocurrency and a radio station in Ghana. The fake Jessica followed or retweeted accounts using Arabic and Indonesian, languages the real Jessica does not speak. While she was a 17-year-old high school senior, her fake counterpart frequently promoted graphic pornography, retweeting accounts called Squirtamania and Porno Dan.

When the NYT first revealed Devumi’s business last year, Twitter had nothing good to say about it, tweeting that …

The tactics used by Devumi on our platform and others as described by today’s NYT article violate our policies and are unacceptable to us. We are working to stop them and any companies like them.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/oMeHdLPSgkk/

FBI burrowing into North Korea’s big bad botnet

The US has infiltrated, mapped, and poked a stick into the spokes of Joanap: what it claims is a botnet of hijacked Microsoft Windows computers operated by botnet masters in North Korea.

The Feds are also continuing to mess with the globe-spanning network by notifying the owners of the commandeered systems Joanap still controls, years after it was first discovered and in spite of antivirus software being able to fend it off.

The US Department of Justice (DOJ) announced on Wednesday that the effort follows charges, unsealed in September 2018, against a North Korea regime-backed programmer, Park Jin Hyok.

The botnet behind some big baddies

The complaint against Park alleged that he and his co-conspirators used a Server Message Block (SMB) worm commonly known as Brambul to gain unauthorized access to computers, and then used those computers to carry out a mess of big, nasty cyberattacks.

Among them were the global WannaCry ransomware attack of 2017, the 2014 attack on Sony Pictures, and the $81m cyber heist from 2016 that drained Bangladesh’s central bank.

The complaint alleged that Park, a North Korean citizen, was a member of a government-sponsored hacking team known as the “Lazarus Group” and that he worked for a North Korean government front company, Chosun Expo Joint Venture (aka Korea Expo Joint Venture or “KEJV”), to support cyber actions on behalf of the Democratic People’s Republic of Korea (DPRK).

Lazarus Group, also known as Guardians of Peace or Hidden Cobra, is a well-known cybercriminal group. In June 2017, US-CERT took the highly unusual step of sending a stark public warning to businesses about the danger of North Korean cyberattacks and the urgent need to patch old software to defend against them.

It specifically called out Lazarus Group. The alert was unusual in that it gave details, asking organizations to report any detected activity from Lazarus Group/Hidden Cobra/Guardians of Peace to the US Department of Homeland Security (DHS).

Specifically, US-CERT told organizations to be on the lookout for DDoS botnet activity, keylogging, remote access tools (RATs), and disk wiping malware, as well as SMB worm malware like WannaCry.

Hidden Cobra, crouching warrants

As US-CERT detailed in a May 2018 alert, the Joanap RAT is a so-called “second-stage” malware that’s often spread by the “first-stage” Brambul malware.

Once installed on a system, Joanap allows what the US claims are its North Korean overlords to remotely access computers, gain root-level access to infected computers, and load additional malware.

Joanap-infected computers – known as peers or bots – then get lassoed into the botnet. The Joanap botnet uses a decentralized peer-to-peer (P2P) setup to communicate, rather than a centralized command-and-control domain. …

… A fact that came into play when getting a court order and search warrant granted by a California court in October, which gave the FBI and the US Air Force Office of Special Investigations (AFOSI) the go-ahead to operate servers that pretended to be peers in the botnet.

That way, the FBI’s imposter peers could collect what prosecutors said was “limited identifying and technical information about other peers infected with Joanap,” including IP addresses, port numbers, and connection timestamps.

The FBI and AFOSI used that information to build a map of the Joanap botnet’s infected computers.
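Mapping a peer-to-peer botnet from imposter peers is essentially a graph crawl: pose as a peer, ask each known bot for its neighbours, and search outward breadth-first. The sketch below is a toy model of that idea; `get_peers` and the simulated adjacency list stand in for the actual Joanap peer-exchange protocol, which is not public.

```python
from collections import deque

# Toy model of mapping a P2P botnet by crawling peer lists, in the spirit
# of the FBI's imposter-peer approach. get_peers() stands in for the real
# protocol exchange that returns a bot's known neighbours.

SIMULATED_BOTNET = {         # adjacency: which peers each infected host knows
    "10.0.0.1": ["10.0.0.2", "10.0.0.3"],
    "10.0.0.2": ["10.0.0.1", "10.0.0.4"],
    "10.0.0.3": [],
    "10.0.0.4": ["10.0.0.2"],
}

def get_peers(addr):
    return SIMULATED_BOTNET.get(addr, [])

def map_botnet(seed):
    """Breadth-first crawl from one known peer to enumerate the botnet."""
    seen, queue = {seed}, deque([seed])
    while queue:
        addr = queue.popleft()
        for peer in get_peers(addr):
            if peer not in seen:
                seen.add(peer)
                queue.append(peer)
    return sorted(seen)

print(map_botnet("10.0.0.1"))  # ['10.0.0.1', '10.0.0.2', '10.0.0.3', '10.0.0.4']
```

Each crawled peer also yields the “limited identifying and technical information” mentioned above – IP address, port, timestamp – which is what turns the crawl into a victim-notification list.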

The reason we’re hearing about this now, as opposed to when the warrant was granted in October, is that the court gave the FBI permission to delay service of the warrant until last Wednesday, because earlier disclosure would very likely have prompted flight from justice or tampering with or destruction of evidence.

At any rate, by monitoring the IP addresses of the infected computers that join the network, the Feds could also alert people whose systems have been infected. The victims were, or will be, tipped off via their ISPs or via personal notifications if their computers aren’t behind a firewall or router. For victims outside the US, the Feds are contacting their host countries’ governments, including by using the FBI’s Legal Attachés.

Old, well-known, and still a threat

Even though the botnet was discovered years ago, and even though antivirus software can detect it, there are still computers around the world that remain affected, Assistant Attorney General for National Security John Demers said in the DOJ’s release.

United States Attorney Nicola T. Hanna:

While the Joanap botnet was identified years ago and can be defeated with antivirus software, we identified numerous unprotected computers that hosted the malware underlying the botnet. The search warrants and court orders announced today as part of our efforts to eradicate this botnet are just one of the many tools we will use to prevent cybercriminals from using botnets to stage damaging computer intrusions.

We’re going to patch-party like it’s 2009

In fact, the second-stage Joanap botnet and the first-stage Brambul worm have been around since 2009, even though they’ll be mopped up by any good antivirus product.

So please, take advantage of the protections that are out there. Ten years later, they’re still necessary.

Paul Delacourt, Assistant Director in Charge of the FBI’s Los Angeles Field Office:

We urge computer users to take precautions, such as updating their software and utilizing antivirus, in order to avoid being victimized by this type of malware.

Sophos products like Sophos Home, Intercept X and XG Firewall will all prevent Joanap infections.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/APcktdQ0G84/

Chrome’s hidden lookalike detection feature battles URL imposters

Most of us have suffered from fat-fingered browsing before, mistyping website URLs and getting taken to the wrong place. Some of us have fallen victim to hyperlinks that look like legitimate websites at first glance but which are deliberately misspelled. Now, Chrome will try to save us from lookalike sites by detecting them and flagging up a warning.

Google has given its web browser a new feature that checks before it sends you to misspelled versions of popular sites. The feature, first called “Navigation suggestions for Lookalike URLs”, reportedly appeared in the Canary release of Chrome 70. Canary releases test new features on early adopter users so that Google can refine them before releasing them into the mainstream.

When activated, the security measure checks for misspelled sites where it’s likely that the user intended to visit a popular URL, and displays a link to the site that it thinks the user might have wanted to visit.

Sometimes, users unintentionally mistype website names. The letter o on your keyboard is close enough to the zero that typing g00gle.com could be a legitimate mistake. More often, criminals deliberately register misspelled versions of websites for phishing or malware attacks, in a process known as typosquatting. By substituting a 1 for an l, or by transposing characters, attackers can create domains – and sites – that look real, using them for phishing attacks.

The other danger is the IDN homograph attack. An attacker registers a domain name in ASCII that browsers convert to Unicode, which is a standard for displaying writing in many non-Latin alphabets such as Greek or Cyrillic.

IDN homographs enable someone to register a seemingly gibberish domain name and get the browser to display a domain that looks a lot like a regular site, in a conversion process known as punycode. So, xn--mxail5aa.com becomes αρριε.com. Chrome and other browsers each have their own rules when it comes to whether they convert punycode, and they can be pretty convoluted as browser vendors do their best to avoid security problems while respecting legitimate cultural usage.
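The conversion is easy to reproduce: Python’s built-in IDNA codec will decode the ACE (“xn--”) form back to the Unicode label a browser might render, and re-encode it the other way.

```python
# Decoding an ACE ("xn--") domain with Python's built-in IDNA codec shows
# the Unicode string a browser may choose to render.

ascii_form = b"xn--mxail5aa.com"
unicode_form = ascii_form.decode("idna")
print(unicode_form)  # αρριε.com - Greek letters that resemble "apple.com"

# Round trip: re-encoding the Unicode form recovers the ASCII domain.
print(unicode_form.encode("idna"))
```

Whether the browser shows the Unicode or the raw xn-- form is exactly the vendor-specific policy decision described above.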

Chrome’s new feature will use a site’s popularity along with any site engagement score that it has to help detect a misspelled site and recognize the right one. The browser gives a URL a site engagement score if it sees a user spending a lot of time on the site.
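Chrome’s actual ranking logic is internal to the browser, but the general shape of the check – compare the typed host against a list of popular or high-engagement sites by string similarity, and suggest the nearest one – can be sketched with the standard library. The site list and cutoff below are illustrative, not Chrome’s.

```python
import difflib

# Rough sketch of lookalike detection: compare the typed host against
# popular/high-engagement sites and suggest the closest one if it is
# similar but not identical. Not Chrome's actual algorithm.

POPULAR_SITES = ["paypal.com", "google.com", "facebook.com", "amazon.com"]

def suggest(typed, cutoff=0.8):
    """Return the likely intended site for a lookalike hostname, else None."""
    matches = difflib.get_close_matches(typed, POPULAR_SITES, n=1, cutoff=cutoff)
    if matches and matches[0] != typed:
        return matches[0]
    return None  # exact match, or nothing similar enough

print(suggest("paypai.com"))   # paypal.com
print(suggest("paypal.com"))   # None - already the real site
```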

I tested the browser’s lookalike detection in version 72 (now Chrome’s stable release channel), which still had to be turned on by entering this command into the address bar:

chrome://flags/#enable-lookalike-url-navigation-suggestions

The feature caught paypai.com, asking whether I wanted to visit paypal.com. It did the same for pay-pal.com and paypal.om. However, it missed paypa|.com and αρριε.com.

This will hopefully go some way towards introducing more security into the URL system, which Google has said in the past has major security problems. In September, one of its experts suggested that it might be time to replace URLs with something else that isn’t prone to problems like these. At the time, Wired quoted Google technical lead Emily Stark, who called the security issues “the URLephant in the room” – also the name of the presentation she gave describing the lookalike detection system at Usenix last week.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/sVX8TtqOnyw/

Security weaknesses in 5G, 4G and 3G could expose users’ locations

Fifth generation (5G) wireless test networks are barely off the ground, and already researchers say they’ve uncovered new weaknesses in the protocol meant to secure them.

5G security is built around 5G AKA (Authentication and Key Agreement), an enhanced version of the AKA protocol already used by 3G and 4G networks.

A big issue this was supposed to address was the ease with which surveillance of 3G and 4G devices can be carried out using fake base stations known as IMSI catchers (International Mobile Subscriber Identity-catcher, sometimes called ‘StingRays’).

Disappointingly, according to a research paper, New Privacy Threat on 3G, 4G, and Upcoming 5G AKA Protocols, made public late last year, 5G AKA might not solve this thanks to deeper issues with the AKA protocol on which it is based.

As the name suggests, IMSI catchers work by tricking devices into connecting to them instead of the real base station, exploiting the fact that under GSM (the Global System for Mobile Communication mobile phone standard), devices prioritise closer and stronger signals.

Luring a smartphone to connect to a fake base gives attackers the power to identify the device’s owner, track their physical location, and potentially execute a downgrade attack by asking it to remove security such as encryption.

In doing this, IMSI catchers are aided by the fact that while the device will authenticate itself via its unique subscriber identity, the base station isn’t required to authenticate in return.
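That selection rule is the whole trick, and it fits in a few lines. The toy model below (made-up cell IDs and signal strengths) shows a handset attaching to whichever station is loudest, with no authentication of the base station anywhere in the loop.

```python
# Toy model of why IMSI catchers work: a handset attaches to the strongest
# base station and, under GSM-era rules, never authenticates it in return.

base_stations = [
    {"id": "real-cell-042", "signal_dbm": -95, "rogue": False},
    {"id": "imsi-catcher", "signal_dbm": -60, "rogue": True},  # closer = stronger
]

def attach(stations):
    # Strongest (least negative) signal wins; identity is never verified.
    return max(stations, key=lambda s: s["signal_dbm"])

chosen = attach(base_stations)
print(chosen["id"])  # imsi-catcher - the rogue cell wins on signal strength
```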

That sounds like an open invitation to hackers but it seemed logical in the early days of mobile networks when interoperability with lots of different companies’ base stations was a priority.

Under 5G, fake base stations would still be possible, but the subscriber’s identity would be hidden using public key encryption managed by the mobile network.

Activity monitoring

Nevertheless, the researchers suggest that because some of 5G AKA’s architecture is inherited from standard 3G and 4G AKA, this encryption could be defeated by what the researchers call an “activity monitoring attack.”

Essentially, an attacker might use inference to identify an individual even when they can’t access that data directly by monitoring Sequence Numbers (SQNs), which are set every time a device connects to the mobile network.

By monitoring every occasion a target device enters the range of the IMSI catcher, the attackers can build up a picture of how that device is used, including when it is not in range. Specifically:

The attacker can relate the number of AKA sessions some UE [User Equipment] has performed in a given period of time to its typical service consumption during that period.

Although under 5G, an attacker can’t see the contents of communications or its metadata, the ability to model the pattern of a device’s connections might allow an eavesdropper to calculate the identity of a device.
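The arithmetic behind the attack is simple: the SQN advances with every AKA session, so two sightings of the same device bound how active it was in between, and that rate can be matched against a usage profile. The numbers and profiles below are entirely invented, purely to illustrate the inference.

```python
# Illustrative arithmetic behind the "activity monitoring attack": the SQN
# advances with each AKA session, so the delta between two sightings of a
# device leaks its activity level even though its identity stays encrypted.

def sessions_between(sqn_first, sqn_second):
    return sqn_second - sqn_first

# Usage profiles the attacker has built for candidate targets (sessions
# per day) - entirely made-up numbers for illustration.
profiles = {"alice": 40, "bob": 4}

observed = sessions_between(1200, 1280) / 2  # 80 sessions over 2 days -> 40/day

best_guess = min(profiles, key=lambda who: abs(profiles[who] - observed))
print(best_guess)  # the profile closest to the observed rate
```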

For anyone worried about privacy, two pieces of good news emerge from all of this.

First, a new generation of IMSI catchers will be needed to exploit these weaknesses, and these will require far more time and sophistication to do the sort of location tracking that seems quick and easy under 3G and 4G today – which buys time for defenders.

The second is that the researchers are scrutinising 5G security in its first phase of deployment, making it possible to do something about the issue in the second phase, hopefully before there are any exploits:

Our findings were acknowledged by the 3GPP and GSMA and remedial actions are underway to improve the protocol for next generation.

There’s little doubt that IMSI catchers have become a popular technique for police, intelligence services and criminals to monitor people they’re interested in.

They’re also popular for espionage, with the US Department of Homeland Security (DHS) confirming it had found rogue cell-site simulators in Washington suspected of having been planted by unfriendly nation states.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/ngN8ai-40XE/

IoT Security’s Coming of Age Is Overdue

The unique threat landscape requires a novel security approach based on the latest advances in network and AI security.

Security always lags behind technology adoption, and few technologies have seen growth as explosive as the Internet of Things (IoT). Despite the rapid maturation of the market for connected devices, security has been an afterthought until now, creating an unprecedented opportunity for hackers worldwide.

It’s 2019 and the industry is overdue for a new, comprehensive security model for connected devices — one that reflects the challenges of protecting IoT’s position at the confluence of software and device security. The unique threat landscape requires a novel security approach based on the latest advances in network and artificial intelligence (AI) security.

What’s at Stake
Cisco estimates the number of connected devices will surpass 50 billion by 2020. Enterprises are on pace to invest more than $267 billion in IoT tools during that same time. Attacks on IoT devices rose by 600% in 2017, reflecting both security vulnerabilities and the value of the targets. The NSA posted an advisory on smart furniture hacks, and the 2018 Black Hat and DEF CON conferences produced a stunning array of connected device attacks and security analysis.

The prevalence of connected devices and lack of comprehensive IoT security pose diverse risks for enterprises.

To start, altering or interrupting connected device performance alone can constitute a catastrophic breach — even one with life-or-death consequences. The Stuxnet attack famously sabotaged the Iranian nuclear program by causing as many as a thousand uranium enrichment centrifuges to malfunction and eventually fail. Attacks targeting power grid infrastructure have been detected in Ukraine and the United States. Interference with consumer devices such as vehicles and pacemakers puts their owners at risk. Inside the enterprise, tampering with smart mining, manufacturing, or farming equipment could cause millions of dollars in damage to goods and equipment. The growing trend toward corporate ransom and hacktivism has expanded the pool of potential targets beyond scenarios where attackers can profit directly from a breach.

In addition to service disruptions, IoT systems are susceptible to breaches resulting in data loss. Data from manufacturing and consumer sensors can be valuable intellectual property. Lost data from consumer or enterprise devices can constitute privacy violations, as in the case of connected toys or even office-entry badge logs. Regulatory experts anticipate a “feeding frenzy” of legal cases stemming from IoT attacks in the coming years.

Following Data from Sensors to the Cloud
The IoT threat landscape includes elements of both centralized and dispersed systems. A typical architecture involves a large number of sensors collecting data, which is then consolidated and analyzed. Practically, we can group the vulnerabilities of IoT systems into two categories: the security of sensors and the security of data repositories.

Connected devices create liabilities at all stages of the security life cycle, from prevention to detection to remediation. The challenge of securing sensors begins with taking an accurate inventory. Many companies will be hard pressed to evaluate the security posture of all connected devices in use, from strategic enterprise equipment to connected devices in regional offices. Many connected devices lack basic security features found on laptops or smartphones. Default passwords, unpatched operating systems, network trust issues, and unhardened devices with open ports are all vulnerabilities endemic in IoT security. Finally, hardware may not support the capability to register that it has been tampered with, limiting the security team’s ability to detect and respond to successful attacks.
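Some of that hygiene is straightforward to automate. As a minimal illustration – the device names and credential list below are hypothetical – an inventory sweep for vendor-default passwords might look like this:

```python
# Minimal sketch of one IoT inventory check: flag connected devices that
# still use a vendor-default password. Device list and defaults are
# hypothetical examples, not a real credential database.

DEFAULT_CREDS = {("admin", "admin"), ("admin", "1234"), ("root", "root")}

inventory = [
    {"name": "lobby-camera", "user": "admin", "password": "admin"},
    {"name": "hvac-sensor", "user": "svc", "password": "Xk9!f2qL"},
    {"name": "badge-reader", "user": "root", "password": "root"},
]

def flag_default_creds(devices):
    return [d["name"] for d in devices if (d["user"], d["password"]) in DEFAULT_CREDS]

print(flag_default_creds(inventory))  # ['lobby-camera', 'badge-reader']
```

The hard part in practice is not the check itself but building the accurate inventory the paragraph above describes.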

The Internet of Things is inherently intertwined with cloud security. Most sensors have relatively limited processing capabilities and rely on cloud hosting to analyze data. These consolidated repositories create risks around access control, data security, and regulatory compliance. Gartner warns that at least 95% of cloud security failures will be the customer’s fault, meaning misconfigured security settings will result in security incidents. Research on a sample of enterprise AWS S3 buckets found 7% with unrestricted public access and 35% unencrypted. Hundreds of millions of dollars in acquisitions for vendors dedicated to auditing and automating cloud security configurations attest to the breadth of this attack vector.

Leveraging the Strengths of IoT for Security
Companies have invested in IoT in the absence of robust security because of the business opportunities available from massive amounts of data and powerful analytics. Fittingly, IoT security solutions must lean on these same advantages.

First, IoT security fundamentally requires network-based enforcement. IoT sensors cannot support the same endpoint security solutions available for smartphones, and the sheer number of devices a typical enterprise uses makes security at the device level infeasible. Applying security at the network level gives the enterprise holistic visibility and enforcement across its IoT portfolio.

Second, companies can use the large quantities of data coming from IoT devices to implement behavioral security with neural networks. The AI approaches in use today with IoT are simple statistical deviation or anomaly detection. They may find the needle in the haystack, but they will also see needles where they do not exist. The massive traffic coming from IoT systems allows for the training of neural networks to accurately detect malicious intent with greater accuracy, lowering the rate of false positives and alleviating alert fatigue.
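The “simple statistical deviation” baseline the author contrasts with neural networks looks like this in miniature: a z-score test flags any reading far from the device’s historical mean – and, on bursty but benign devices, produces exactly the false positives described above. All numbers are made up.

```python
import statistics

# The simple statistical-deviation baseline in miniature: flag any traffic
# reading more than 3 standard deviations from the device's historical mean.

history = [100, 110, 95, 105, 98, 102, 97, 104]  # packets/min, made-up data

mean = statistics.mean(history)
stdev = statistics.stdev(history)

def is_anomalous(reading, threshold=3.0):
    return abs(reading - mean) / stdev > threshold

print(is_anomalous(500))  # True  - a clear deviation
print(is_anomalous(103))  # False - within normal variation
```

A trained model replaces the single threshold with learned structure over many features, which is where the claimed reduction in false positives comes from.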

Forcing existing enterprise security approaches onto IoT systems is doomed to failure. Securing the Internet of Things requires a combination of hardware and software security that contends with the unique risks and limitations of connected devices and data processing repositories. By tailoring security to the architecture of IoT systems in use, organizations can take advantage of all the benefits that technologies like the cloud and AI have to offer.


Saumitra Das is the CTO and Co-Founder of Blue Hexagon. He has worked on machine learning and cybersecurity for 18 years. As an engineering leader at Qualcomm, he led teams of machine learning scientists and developers in the development of ML-based products shipped in …

Article source: https://www.darkreading.com/attacks-breaches/iot-securitys-coming-of-age-is-overdue/a/d-id/1333756?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Mobile network Three UK’s customer details exposed in homepage blunder

Exclusive: Mobile operator Three UK’s website was showing visitors other customers’ names, postal addresses, phone numbers, email addresses and more – all without asking for a login.

Alarmed Reg reader Chris immediately tweeted at Three to ask what on Earth was going on, querying why Three’s site was displaying different people’s data to him every time he changed page.

Three UK data breach screenshot of other customer's details


The site was showing him as logged in even though he’d only gone to the mobile operator’s homepage.

“When you load their site over your mobile internet connection, it recognises you and automatically logs you in,” Chris told us. “I was doing this on my home Wi-Fi (which isn’t Three), so it should’ve required me to log in manually when I first went to their site. I guessed it might’ve either redirected me to a session for a valid user who was accessing at the same time, or some blip which didn’t recognise me and just assigned another user’s ID instead.”

Three UK data breach screenshot of other customer's details


“I wasn’t able to view any payment details – card or direct debit, and I wasn’t able to load any detailed bills to view itemised activity,” added Chris. Three claims to have around 10 million registered subscribers.

Three UK data breach screenshot of other customer's details


While our reader waited for a response from Three (it replied to him on Twitter an hour and a half after his initial tweet), he tipped off El Reg. As we investigated, we noticed the company website went down for a little while with the standard “under maintenance” page displayed – and came back up again after about an hour. Chris said other people’s data was no longer visible once the site returned.

The nature of the data breach suggests that potentially the entire customer database along with some of the personal data held on file may have been exposed.

Despite repeated contact with Three’s PR representatives, none of The Register‘s questions about the potential size or scale of the breach have been answered.

Judging by the URLs visible in some of the other screenshots Chris sent us, which included the letters /new, the company’s techies may have accidentally deployed an under-construction revamp of the site to the mobe firm’s production servers. This is merely speculation and Three has not responded to questions on this.

The Information Commissioner’s Office was unable to say, at the time of publication, if Three had reported the breach. ®

Updated to add at 1628 UTC:

An ICO spokesperson told us: “Three has made us aware of an incident and we will be making enquiries.”

A Three UK spokesperson told us: “A small number of customer[s] have reported an issue to us regarding my3. We have blocked access to my3 while we investigate the issue.”

Updated to add at 1825 UTC:

Three UK wanted to make it known that only four people had complained about being able to view any random Three customer’s personal data by simply visiting its website and not even needing to log in. El Reg is very happy to make this clear.

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/02/01/three_uk_data_breach_no_authentication_blunder/

Bug-hunter faces jail for vulnerability reports, DuckDuckPwn (almost), family spied on via Nest gizmo, and more

Roundup This was the week we saw GPS grumbles, shady speakers, and Yahoo! Losing! Again!

While all that was happening, a few other bits of news that hit our screens…

DuckDuck D’oh!

Drama in search engine land this week as Google-alternative DuckDuckGo disclosed a potentially nasty flaw in its server-side software.

Bug-hunter Michele Romano took credit for spotting and reporting an information-leaking vulnerability in backend servers that handled some user requests.

The XML External Entity (XXE) vulnerability would have allowed an attacker to feed maliciously crafted XML files, with local file paths embedded within them, into DuckDuckGo’s backend servers, causing those systems to cough up internal data. Because the server-side code was not properly examining XML content for things that shouldn’t be there (such as requests for local system files), miscreants could have downloaded sensitive files and documents from the servers using dodgy XML files.
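For the curious, the mechanics look something like this. The sketch below (illustrative, not DuckDuckGo’s actual code) shows a classic XXE payload and one common mitigation: since entity definitions live in the DTD, simply refusing any untrusted document that declares one blocks the trick outright.

```python
import re
import xml.etree.ElementTree as ET

# A classic XXE payload: a naive parser that resolves external
# entities would read /etc/passwd and echo it back to the attacker.
MALICIOUS = """<?xml version="1.0"?>
<!DOCTYPE foo [<!ENTITY xxe SYSTEM "file:///etc/passwd">]>
<lookup>&xxe;</lookup>"""

def parse_untrusted_xml(text: str) -> ET.Element:
    """Reject any document that declares a DTD before parsing.
    No DTD means no entity definitions, so no external entities."""
    if re.search(r"<!DOCTYPE", text, re.IGNORECASE):
        raise ValueError("DTDs are not allowed in untrusted XML")
    return ET.fromstring(text)
```

Here `parse_untrusted_xml("<lookup>ok</lookup>")` parses normally, while feeding it `MALICIOUS` raises a `ValueError` instead of touching the filesystem.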

Fortunately, the flaw has now been patched, and there are no reports of malicious actors targeting it.

Crook builds massive library of stolen credentials

Someone is making the rounds on cybercrime forums offering a massive collection of personal details built by aggregating a bunch of previous data breaches.

The collection of 2.2 billion records is apparently nothing new, just a fat collection of other data dumps, but you have to admire (and be a little scared by) the commitment of the crook to get so many pilfered pieces of information in one place.

Now would be a good time to make sure you aren’t re-using any old passwords.
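One way to check a password against collections like this without ever sending it anywhere in the clear – not mentioned in the story, but a common approach – is the k-anonymity scheme used by the Have I Been Pwned “Pwned Passwords” API: hash the password with SHA-1, send only the first five hex characters, and compare the returned hash suffixes locally. A rough sketch:

```python
import hashlib
import urllib.request

def sha1_split(password: str) -> tuple:
    """Return the 5-char prefix and 35-char suffix of the
    uppercase SHA-1 hex digest of the password."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

def suffix_breach_count(range_response: str, suffix: str) -> int:
    """Parse a range-API response ("SUFFIX:COUNT" per line)
    and return the breach count for our suffix, or 0."""
    for line in range_response.splitlines():
        candidate, _, count = line.partition(":")
        if candidate.strip() == suffix:
            return int(count)
    return 0

def pwned_count(password: str) -> int:
    """Query the Pwned Passwords range API. Only the first five
    characters of the hash ever leave this machine."""
    prefix, suffix = sha1_split(password)
    url = f"https://api.pwnedpasswords.com/range/{prefix}"
    with urllib.request.urlopen(url) as resp:
        return suffix_breach_count(resp.read().decode("utf-8"), suffix)
```

Any non-zero count means the password appears in known breach dumps and should be retired.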

S(o) S(crewed) 7

UK cyber-snoops are warning, via Vice, that criminals are abusing flaws in the SS7 text message protocol to steal two-factor login codes from banking websites, and then break into online bank accounts.

Apparently, criminals have been abusing the system to re-route messages around phone networks, eventually intercepting the messages. In the UK, this has taken the form of attacks on Metro Bank.

A criminal gets into the SS7 backbone, intercepts the text messages of the person they are targeting, and – using the intercepted 2FA code along with a username and password obtained by other means, such as phishing – has everything they need to access and drain a bank account.

Chrome and Firefox patched

While they may not get the attention of Microsoft’s Patch Tuesday, security fixes for the Chrome and Firefox browsers are something everyone should keep an eye on.

Earlier this week, security fixes were posted for both browsers on Linux, Windows and macOS. Among the vulnerabilities patched were remote code execution flaws, and US-CERT is advising users and admins to make sure the patches are installed and running.

This should be easy enough to do, as both browsers have built-in update mechanisms that will download and install the fixes, so just make sure you have the latest version installed.

Hungarian researcher faces jail time for vulnerability disclosure

No good deed goes unpunished, right?

A researcher in Hungary could be spending as long as eight years in jail simply for discovering and reporting a vulnerability in the network of one of the country’s largest telcos.

BleepingComputer reports that the unnamed researcher spotted and reported a vulnerability in the network of Magyar Telekom last April.

Rather than recognize the bug-hunter or pay out a bounty, the telco instead ratted out the white hat to the police. He could now get as many as eight years in jail if convicted on charges of hacking into the company’s network and database.

Hopefully cooler heads prevail, and this whole affair gets sorted out without anyone having to spend time behind bars.

Dumb problem in smart home

A smart home aficionado in Illinois, USA, saw his internet of things house meet the internet of trolls this week after hackers got into his home network and began manipulating both surveillance cameras and thermostats.

Telly news station NBC Chicago reports that for more than a week Arjun Sud and his family have been in a panic over strangers who apparently had access to their network of Nest devices, including two smart thermostats and 16 cameras placed around the home.

The hackers undertook such creepy activities as talking to Sud’s 7-month-old baby while the child was alone in the nursery, cranking the couple’s heating system up to 90 degrees Fahrenheit (32C) and shouting obscenities into the family’s living room.

“The moment I realized what was happening, panic and confusion set in, and my blood truthfully ran cold,” Sud was quoted as saying.

“We don’t know how long someone was in our Nest account watching us. We don’t know how many private conversations they overheard.”

Not exactly a ringing endorsement for smart home devices, is it?

Turbulence ahead for Airbus after mystery data theft disclosure

European plane-builder Airbus is fessing up to a potentially serious hack and data theft. Emphasis on the “potential,” because the biz isn’t revealing much information of use.

The French air giant says an unspecified “cyber incident” hit its commercial airliner operation, resulting in the loss of some employee data. What is that data? Your guess is as good as ours.

The disclosure was conspicuously short on details, omitting any sort of specifics on how many people were affected, what data was taken, or who might have taken it, but Airbus said the “incident” included unauthorized access to information that included “professional contact and IT identification details” for some of its workers. The number of employees affected is estimated to be somewhere between 1 and 129,000.

“This incident is being thoroughly investigated by Airbus’ experts who have taken immediate and appropriate actions to reinforce existing security measures and to mitigate its potential impact, as well as determining its origins,” Airbus said.

“Investigations are ongoing to understand if any specific data was targeted, however we do know some personal data was accessed.”

How forthcoming.

Airbus notes it is working with the “relevant regulatory authorities and the data protection authorities pursuant to the GDPR.” We imagine that EU authorities are going to want a slightly more detailed report than “a cyber incident occurred” when they look into the matter.

The aircraft builder also says it is advising its employees to “take all necessary precautions going forward”, though that might be hard to do if they have no idea what data was taken, who has it, and where they got it from.

So, to recap, something happened at Airbus. To someone. Resulting in the theft of something. By someone. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/02/02/security_roundup_010219/