
New Zombie ‘POODLE’ Attack Bred From TLS Flaw

Citrix issues update for encryption weakness dogging the popular security protocol.

Turns out a major design flaw discovered and patched five years ago in the old SSL 3.0 encryption protocol, which exposed secure sessions to the so-called POODLE attack, didn’t really die: A researcher has unearthed two new related vulnerabilities in the newer TLS 1.2 crypto protocol.

Craig Young, a computer security researcher for Tripwire’s Vulnerability and Exposure Research Team, found vulnerabilities in SSL 3.0’s successor, TLS 1.2, that allow for attacks akin to POODLE due to TLS 1.2’s continued support for a long-outdated cryptographic method: cipher block chaining (CBC). The flaws allow man-in-the-middle (MitM) attacks on a user’s encrypted Web and VPN sessions.

“Specifically, there are products out there that did not properly remediate the first POODLE issue,” says Young, who will detail his findings next month at Black Hat Asia in Singapore. He found the latest flaws while further researching, and then testing, just how an attacker could exploit the original POODLE MitM attack.

Among the affected vendors is Citrix, which is also the first to issue a patch for the flaw (CVE-2019-6485). The bug could allow an attacker to abuse Citrix’s Application Delivery Controller (ADC) network appliance to decrypt TLS traffic.

“At Citrix, the security of our products is paramount and we take all potential vulnerabilities very seriously. In the case of the so-called POODLE attack, we have applied the appropriate patches to mitigate the issue and advised our customers on actions needed to secure their platforms,” the company said in a statement given to Dark Reading. “We will continue to vigorously monitor our systems to ensure the integrity of our solutions and provide the highest levels of security for our customers around the world.”

Young declined to name other vendors currently working on patches, but he says the products include Web application firewalls, load-balancers, and remote access SSL VPNs.

Young has christened the two new flaws Zombie POODLE and GOLDENDOODLE. With Zombie POODLE, he was able to revive the POODLE attack against a Citrix load balancer with just a tiny tweak, targeting systems that hadn’t fully eradicated the outdated crypto methods. GOLDENDOODLE, meanwhile, is a similar attack but with more powerful and rapid crypto-hacking performance. Even if a vendor has fully eradicated the original POODLE flaw, it still could be vulnerable to GOLDENDOODLE attacks, Young warns.

Some 2,000 of the Alexa Top 1 Million websites are vulnerable to Zombie POODLE, with some 1,000 vulnerable to GOLDENDOODLE and hundreds still vulnerable to the nearly five-year-old original POODLE, according to findings from Young’s online scans.

It’s not just small sites that are vulnerable, he says: “It seems to be more prevalent in sites that are spending more money on running websites,” such as government agencies and financial institutions that run hardware acceleration systems like Citrix’s platforms, he notes.

“This [issue] should have been put to bed four or five years ago,” Young says, but some vendors either didn’t fully remove support for the older and less secure ciphers or didn’t fully patch for the POODLE attack flaw itself. Citrix, for instance, had not fully patched for the original POODLE, he says, leaving it open for the next-generation POODLE attacks.

The core problem, of course, is that HTTPS’s underlying protocol (first SSL, now TLS) hasn’t been properly purged of old cryptographic methods that are outdated and less secure. Support for these older protocols, mainly to ensure that older legacy browsers and client machines aren’t locked out of websites, also leaves websites vulnerable. Like its predecessor, TLS 1.2 is riddled with workarounds and countermeasures for protecting against abuse of the older crypto, such as CBC and RC4.

The new Zombie POODLE and GOLDENDOODLE attacks – like POODLE – allow an attacker to rearrange encrypted blocks of data and, via a side channel, get a peek at plaintext information. The attack works like this: An attacker injects malicious JavaScript into the victim’s browser via code planted on a nonencrypted website the user visits, for example. Once the browser is infected, the attacker can execute a MitM attack, ultimately grabbing the victim’s cookies and credentials from the secured Web session.
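To make the mechanics concrete, below is a minimal sketch of the CBC padding-oracle probe that POODLE-class attacks are built on. The `oracle` callback is hypothetical – it stands in for whatever accept/reject signal the targeted server leaks about record padding – and this illustrates the general technique only, not Young’s tooling.

```python
# Minimal sketch of a CBC padding-oracle probe (illustrative only; the
# `oracle` callback is a hypothetical stand-in for the server's behavior).

def recover_last_byte(prev_block: bytes, target_block: bytes, oracle) -> int:
    """Recover the final plaintext byte of `target_block` in CBC mode.

    `oracle(ciphertext)` should return True when the server accepts the
    record's padding; that accept/reject signal is the side channel.
    """
    for guess in range(256):
        forged = bytearray(prev_block)
        # In CBC, plaintext[-1] = Decrypt(target)[-1] XOR prev[-1].  XORing
        # our guess and 0x01 into prev[-1] makes the decrypted last byte
        # equal 0x01 (valid one-byte padding) exactly when the guess is right.
        # (Real tools also cross-check to rule out accidental longer padding.)
        forged[-1] ^= guess ^ 0x01
        if oracle(bytes(forged) + target_block):
            return guess
    raise RuntimeError("padding never accepted; no usable oracle")
```

Repeating that probe byte by byte and block by block is what ultimately lets an attacker lift something like an authentication cookie out of an otherwise encrypted session.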

The First POODLE
The original POODLE flaw (Padding Oracle On Downgraded Legacy Encryption), aka CVE-2014-3566, was initially discovered by researchers at Google. It wasn’t easy to execute, and neither are Zombie POODLE and GOLDENDOODLE. That’s because attackers must be able to set up a MitM attack on the victim’s network or via Wi-Fi.

“Every attack has to be rather targeted, and there are a lot of moving parts,” Young says. “From the attacker’s perspective, you have to know who you are targeting and what kind of system they are running so you can predict where the sensitive material is you are trying to steal. The goal of this attack is to steal an authentication cookie.”

An attacker could gain access to the victim’s SSL VPN and ultimately pose as that victim on the organization’s VPN and move around the network, for example. That would require the attacker, sitting on a public Wi-Fi network, to employ ARP spoofing or to lure the user’s client machine or phone onto a phony Wi-Fi hotspot, where the attacker then could discern the victim’s authentication cookie for his or her VPN session.

Young says it’s not likely the POODLE family of attacks is being exploited by cybercriminals, but even so, these attacks would be difficult to detect. Servers don’t typically log this type of activity, for example, he notes.

GOLDENDOODLE kicks it up a notch and executes the POODLE attack at a faster and more efficient rate, he explains. Why the seemingly silly name? It actually retrieves the key intel it needs: “[It’s] deterministic such that the attacker is able to test whether the byte being decrypted has a specific value,” Young explains.

Go TLS 1.3
The long-term fix for POODLE-based attacks is adoption of the latest version of the TLS encryption protocol, TLS 1.3, which removed the older crypto methods like CBC outright rather than layering on confusing and easily misconfigured workarounds. “It takes away all nonauthenticated ciphers” so attacks like POODLE and its successors can’t be executed, Young says.

While TLS 1.3 is available in popular browsers and networking products, website operators have been slow to deploy it mainly out of fear that the move will inadvertently “break” something.

Meantime, organizations not quite ready to go full TLS 1.3 can disable all CBC cipher suites in their TLS 1.2-based systems to protect themselves from the new attacks. Young says his recent scans show that some organizations he contacted about their sites’ vulnerability to the POODLE family are now all clear: “I have … noticed some websites that are able to remediate the flaw without disabling CBC or patching,” but it’s not clear what workarounds they employed, he says.
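For illustration, here is one way those two mitigations – jumping to TLS 1.3 outright, or keeping TLS 1.2 but refusing CBC suites – can be expressed with Python’s standard ssl module. It is a minimal sketch: the certificate file names are placeholders, and production appliances such as load balancers and VPN gateways will have their own configuration interfaces.

```python
import ssl

# Minimal sketch of hardening a TLS server context against CBC padding-oracle
# attacks; "server.crt"/"server.key" are hypothetical placeholder paths.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.load_cert_chain("server.crt", "server.key")

# Option 1: require TLS 1.3, which has no CBC (or RC4) cipher suites at all.
ctx.minimum_version = ssl.TLSVersion.TLSv1_3

# Option 2: keep TLS 1.2 for older clients, but offer only AEAD suites
# (AES-GCM / ChaCha20-Poly1305) so a CBC suite can never be negotiated.
# ctx.minimum_version = ssl.TLSVersion.TLSv1_2
# ctx.set_ciphers("ECDHE+AESGCM:ECDHE+CHACHA20")
```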

The challenge is that larger websites often must support older Web browsers, Android devices, and Windows systems connecting to them. “While I’d like these businesses to disable CBC ciphers, it would probably create business issues for them” if older client systems couldn’t reach their sites, he says.

At Black Hat Asia, Young plans to release the scanning tool he created for his research for vendors and security experts to test Zombie POODLE and GOLDENDOODLE attacks. Tripwire’s IP360 scanner also detects the flaws, he notes.

Meantime, researchers at NCC Group today published new research on an attack that would downgrade TLS 1.3 to the older, more vulnerable versions.



Article source: https://www.darkreading.com/vulnerabilities---threats/new-zombie-poodle-attack-bred-from-tls-flaw/d/d-id/1333815?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

6 Reasons to Be Wary of Encryption in Your Enterprise

Encryption can be critical to data security, but it’s not a universal panacea.

Encryption is the answer to every cybersecurity issue. That message seems to flow from countless articles and blog posts on the Internet — so why isn’t everything, everywhere, encrypted? As it turns out, issues with encryption make some data best left unencrypted where it sits.

Now, there’s no denying that encryption is an enormously valuable tool, and far more data should be encrypted than is currently protected by the technique. Under certain regulatory frameworks especially, full encryption can seem like a foregone conclusion as a strategy.

But encryption should be given the same level of scrutiny as any other technique before deployment. In particular, a half-dozen factors should be taken into account when an organization is considering encryption. Knee-jerk reactions seldom provide the best results, and that’s as true for encryption in cybersecurity as in any other endeavor.

What has your organization decided to do about encryption? Are you encrypting everything, drawbacks aside? Or are you proceeding cautiously and encrypting in stages? Let us know in the comment section, below.


Article source: https://www.darkreading.com/operations/6-reasons-to-be-wary-of-encryption-in-your-enterprise/d/d-id/1333821?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Facebook ordered to keep apps separate unless users opt in to sharing

A few weeks ago, the New York Times reported that Facebook CEO Mark Zuckerberg is planning to interconnect all of the company’s chat apps – Messenger, WhatsApp and Instagram – in spite of having promised to retain the independence of WhatsApp and Instagram when it bought them.

According to the NYT’s sources – four company insiders – the goal is to keep people’s attention focused on Facebook. The more time they spend on the platform, the better for its advertising revenue, the sources said.

Oh? Well, how about “Absolutely not,” Germany’s competition regulator suggested this week.

On Thursday, the Bundeskartellamt (FCO) issued an order saying that Facebook can keep collecting data from its apps, but it can’t combine that data with a user’s main Facebook account unless the member gives their “voluntary consent.”

The order comes following a probe, begun in March 2016, into how Facebook collects and combines data from all of its apps.

So much for Facebook’s dream of creating a multi-tentacled chat blob to advertise at. From the ruling:

Where consent is not given, the data must remain with the respective service and cannot be processed in combination with Facebook data.

As well, Germany has ruled that Facebook’s going to have to get consent before it’s allowed to collect data from third-party websites or to assign that data to a Facebook user’s account.

Andreas Mundt, President of the Bundeskartellamt:

Facebook will no longer be allowed to force its users to agree to the practically unrestricted collection and assigning of non-Facebook data to their Facebook user accounts. The combination of data sources substantially contributed to the fact that Facebook was able to build a unique database for each individual user and thus to gain market power. In future, consumers can prevent Facebook from unrestrictedly collecting and using their data.

Without user consent, Facebook won’t be allowed to merge data from its different sources, he said.

Mundt said that an “obligatory tick on the box” doesn’t cut it when we’re talking about agreeing to Facebook’s “intensive data processing.” Given that Facebook is the 500-lb. gorilla of social networks, users don’t have much choice but to go along with its terms of use, meaning that they haven’t really had what one might call “voluntary consent.”

Mundt:

The only choice the user has is either to accept the comprehensive combination of data or to refrain from using the social network. In such a difficult situation the user’s choice cannot be referred to as voluntary consent.

Tripping up tracking

Beyond Zuckerberg’s reported plans to commingle user data from Facebook’s apps, the ruling could also affect how Facebook tracks users and non-users alike across websites and apps.

One of the ways it does so is with the Like and Share buttons on external sites. Those buttons enable Facebook to track visitors’ IP addresses, what web browser (and version) they’re using, and other things that can help to identify who they are – even if they don’t actually click on the buttons. Facebook also collects device-identifying information via the Facebook Login, which lets users avoid having to type in a unique username and password for each service.

Facebook also has multiple tools to let advertisers target users and non-users when they’re not on its platform: Audience Network, for one, lets advertisers create ads on Facebook that show up elsewhere in cyberspace.

Advertisers can also target non-users with a tiny but powerful snippet of code known as the Facebook Pixel: a web targeting system embedded on many third-party sites. Facebook has lauded it as a clever way to serve targeted ads to people, including non-members, and uses the code to let third-party sites track whether the ads they run on Facebook succeed in converting visitors into buyers.

The ‘what, who, huh?!’ of the ads you see

In other, related news, Facebook users are going to see more information when they click “why am I seeing this?” – an option found in the top-right drop-down of any Facebook ad.

Up until now, users have been shown which brand paid for the ad, some of the biographical details it targeted, and whether their contact information had been uploaded. But starting on 27 February, Facebook’s going to also show:

  • when your contact info was uploaded,
  • if it was the brand that uploaded it or one of its agency/developer partners, and
  • when access to your information was shared between those partners.

It’s part of Facebook’s attempts to introduce more transparency into its ads business – efforts it started following criticism over its ads being used to meddle in the 2016 US presidential election. For example, over the past year, Facebook – along with Twitter and Google – has been working on boosting transparency around who buys electoral and political issue-based ads.

Before that, in February 2018, Facebook said it was going to verify election ad buyers by snail mail. As well, Facebook last year began attaching “paid for by” labels on political and issue ads on Facebook and Instagram in the US and launched an archive to look it all up.

Facebook must be a bit weary of making privacy splashes: at any rate, rather than publishing a post about the “why am I seeing this” changes on its newsroom feed, it announced it on the Facebook Advertiser Hub page.

At any rate, back to the FCO ruling:

Facebook: Popular ≠ dominant

Facebook has a month in which it can appeal the ruling, and appeal it shall. In a blog post published on Wednesday, Facebook said that the FCO has got it all wrong on three counts.

First, Facebook said, the FCO is underestimating the “fierce competition” the company faces in Germany. We’re really not all that popular in Germany, it said:

The Bundeskartellamt found in its own survey that over 40% of social media users in Germany don’t even use Facebook. We face fierce competition in Germany, yet the Bundeskartellamt finds it irrelevant that our apps compete directly with YouTube, Snapchat, Twitter and others.

Facebook says that the regulator also “misinterprets” its compliance with the EU General Data Protection Regulation (GDPR), by overlooking how Facebook actually processes data and what steps it’s taken to be compliant with the GDPR. At any rate, this should all be left up to the Irish Data Protection Commission, it said – the authorities who “have the expertise” to rule on these things.

In fact, Facebook says, the ruling “undermines the mechanisms European law provides for ensuring consistent data protection standards across the EU.”

Facebook defended the way it ties information together, pointing to positive outcomes such as “identifying abusive behavior and disabling accounts tied to terrorism, child exploitation and election interference across both Facebook and Instagram.”

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/mqPzFT2W1AU/

Police demand Waze stop pinpointing their checkpoints

The New York Police Department (NYPD) over the weekend sent a cease-and-desist letter to Google, demanding that it stop giving away the location of police driving while impaired (DWI) checkpoints.

According to Streetsblog, the letter came from Ann Prunty, NYPD’s Acting Deputy Commissioner for Legal Matters.

From the letter, which has since 404’ed from a number of news sites but which CNET still has up:

This letter serves to put you on notice that the NYPD has become aware that the Waze Mobile application, a community-driven GPS navigation application owned by Google LLC, currently permits the public to report DWI checkpoints throughout New York City and map these locations on the application.

Those checkpoints are part of New York’s Vision Zero initiative, the letter said: a program to eliminate traffic fatalities by, among other things, enforcing DWI laws. It’s putting “significant resources” into the effort, the letter said, and Waze users are gumming it up by giving away their unannounced road blocks and thereby helping drunk drivers evade them.

That interference could cross over into the criminal, the NYPD said:

Individuals who post the locations of DWI checkpoints may be engaging in criminal conduct since such actions could be intentional attempts to prevent and/or impair the administration of the DWI laws and other relevant criminal and traffic laws. The posting of such information for public consumption is irresponsible since it only serves to aid impaired and intoxicated drivers to evade checkpoints and encourage reckless driving. Revealing the location of checkpoints puts those drivers, their passengers, and the general public at risk.

The letter was sent following Google’s launch of a new feature on its Google Maps app, alerting drivers to the location of police speed cameras. The new speed camera alerts began showing up on Google Maps last week.

This isn’t the first time that police have tried to get Google to muzzle Waze: In 2015, US police asked Google to pull the plug on citizens using the mobile app to “stalk” police locations, regardless of whether they’re on their lunch break, assisting with a broken-down vehicle on the highway, or hiding in wait to nab speeders.

Acquired by Google in 2013, Waze describes itself as “the world’s largest community-based traffic and navigation app”.

The GPS navigation app relies on community-generated content from a user base that, as of June 2013, reportedly numbered nearly 50 million. It lets people report accidents, traffic jams, and speed and police traps, while its online map editor gives drivers updates on roads, landmarks, house numbers, and the cheapest nearby fuel.

In response to the NYPD’s letter, Google sent this statement to CBS2:

Safety is a top priority when developing navigation features at Google. We believe that informing drivers about upcoming speed traps allows them to be more careful and make safer decisions when they’re on the road.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/aY7BfM63694/

Student gets creative with data accidentally blasted out by university

When it comes to information security, Cal Poly Pomona (CPP) looks set to get a ‘could try harder’ on its report card this year. The polytechnic university, based in Pomona, California, inadvertently emailed personal information on all of its College of Science students to almost 1,000 people late last month.

According to a report in CPP’s student newspaper the Poly Post, an employee tried to send information to 940 computer science students on 28 January as part of an advising process to help students plan their courses. In the process, the employee accidentally attached a spreadsheet containing personal information on every one of the College of Science’s 4,557 students.

The information included the students’ names and addresses, their academic standing, their email, student ID, gender, ethnicity, and their grade point average (GPA).

A student who received the information tipped off the Office of Admission and Enrollment Planning 30 minutes after the mail went out, and staff deleted the email on CPP servers within ten minutes.

The ability to delete the mail from students’ email inboxes highlights the benefit of owning both the sending and receiving addresses in an email system. Still, it wasn’t enough to stop at least one enterprising computer science student from downloading the mail and getting creative.

Over on Reddit, ‘Billy Bronco The Spy’ crunched the numbers in the spreadsheet and used a throwaway account to produce a set of infographics drilling down into student statistics.

He said:

I am one of the students who received that email and was able to download it before the IT department was able to delete the email and the attachment. While I don’t intend to do anything nasty with the information, I did see this as an opportunity to see statistics that we normally would never be able to see, and so I created these infographics to show the information that wouldn’t invade the privacy of any individuals.

As well as an overview infographic in which he details a breakdown of students’ ethnicity, academic major, and gender, he also produced a separate slide, called the Bronco Awards, with some fun stats about specific majors.

He vowed not to reveal any individual student details, adding:

Invading the privacy of everyone to get this data is enough, that should be enough for you too.

CPP, which hasn’t posted anything about the incident on its website or social media accounts, didn’t immediately return our request for comment. However, the Poly Post quotes staff saying that the personal information cannot be used to log into students’ accounts.

The university won’t reveal its plans to address staff access privileges to information in the light of the breach, but it did tell the Post that it plans to introduce a system called CPP Connect. This will reportedly eliminate the need for mass emails during the advising process. However, it will still use listservs to communicate with the broader campus community.

When it comes to data breaches, this isn’t CPP’s first rodeo. In 2003, it mistakenly put information about 355 student applicants in a publicly accessible folder that went undetected for five years.

When the breach was discovered, then-CIO Stephanie Doda said:

We take the protection of personal information very seriously.

In 2011, a staff member put two files containing 38 faculty members’ personal details on a server without realising that it was publicly accessible.

At the time, university spokesperson Tim Lynch said:

We do see these as opportunities to address any issues out there.

With this third disclosure of information through employee error, Redditors didn’t seem especially impressed with the university’s assurances that it had things under control.

In 2015, hackers compromised nearly 20,000 CPP student records in another breach, which wasn’t the fault of CPP staff. The intruders infiltrated networks at We End Violence, a contractor that provided a state-mandated “Agent of Change” sexual assault prevention training class to students at CPP and other institutions.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/zSNUlTq5xKw/

Child abuse imagery found in cryptocurrency blockchain

For the second time in a year, illegal child abuse images have been spotted inside a blockchain.

According to a post by web blockchain payments system Money Button, on 30 January its service was abused to place “illegal content” inside the Bitcoin Satoshi Vision (BSV) ledger, a recent cryptocurrency hard fork from Bitcoin Cash (BCH).

Money Button offers no mechanism to view files or links on BSV but whoever posted the content was able to achieve this via BitcoinFiles.org, a service set up to make files and links posted to blockchains viewable.

After being told of the content by local authorities, BitcoinFiles.org removed it from its websites and gave Money Button the bad news that its service had been used as a conduit. Said Money Button:

We have confirmed that was the case and we have banned the user responsible for creating those transactions.

Money Button’s statement makes no mention of the type of illegal content, but the BBC confirmed that child abuse images were involved after talking to Money Button founder, Ryan Charles.

Why has this happened now?

The short answer is: because it can – the ability of BSV to store data recently expanded.

However, it’s also possible that someone was trying to make a point using shock tactics.

Blockchains are ledgers comprising blocks of data that are cryptographically linked to each other in succession, hence the chain.

For cryptocurrencies, the data in these blocks records transactions but in principle there is no reason why these can’t be used to encode other types of data as long as the user is willing to do a bit of work.

The dark arts of this vary from blockchain to blockchain, but one technique is to use the OP_RETURN opcode, normally used to mark a transaction output as unspendable. The catch with OP_RETURN is that the data limit is normally small – 233 bytes in BSV’s case – and it still costs money, because cryptocurrency blockchains understand everything in terms of transactions.

However, in December, BSV miners agreed to expand the OP_RETURN data limit to 100KB, which doesn’t sound huge but is obviously a big increase on 233 bytes (0.233KB). As Money Button itself noted:

We are happy to announce that as of today, Money Button supports giant OP_RETURN data sizes of up to 100 KB, making it possible to store files such as images, audio, video, documents, and any other type of data in a single transaction on the Bitcoin SV (BSV) blockchain.

And more cheaply too:

Money Button provides this service completely for free, and the only fees paid are to miners, which at present market prices are only about 7 US cents per 100 KB.
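As a rough illustration of the mechanism being described, the sketch below assembles the raw script bytes for an OP_RETURN output. It is a simplified, hypothetical construction: it stops short of building or broadcasting an actual transaction, and relay rules for large payloads (and conventions such as a leading OP_FALSE) vary between chains and node implementations.

```python
# Minimal sketch: serializing arbitrary data into an OP_RETURN output script.
OP_RETURN = 0x6a
OP_PUSHDATA1 = 0x4c
OP_PUSHDATA2 = 0x4d
OP_PUSHDATA4 = 0x4e

def op_return_script(data: bytes) -> bytes:
    """Build an OP_RETURN scriptPubKey carrying `data`."""
    n = len(data)
    if n <= 75:                              # small push: length byte, then data
        push = bytes([n])
    elif n <= 0xFF:
        push = bytes([OP_PUSHDATA1, n])
    elif n <= 0xFFFF:
        push = bytes([OP_PUSHDATA2]) + n.to_bytes(2, "little")
    else:                                    # e.g. a 100 KB payload needs PUSHDATA4
        push = bytes([OP_PUSHDATA4]) + n.to_bytes(4, "little")
    return bytes([OP_RETURN]) + push + data

print(op_return_script(b"hello, blockchain").hex())
# -> 6a1168656c6c6f2c20626c6f636b636861696e
```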

The implications

The problem with posting data to a blockchain is that by its nature it’s immutable and can’t be removed short of a currency hard fork (content can be marked to avoid it being rendered).

Whoever posted the images to the BSV ledger would have known this, which has raised the possibility that they were trying to prove exactly that point.

Given that BSV has pushed the idea of larger and more efficient block sizes (128MB, compared to Bitcoin’s original 1MB), perhaps it’s a strange form of politics.

Whatever its origins, it’s not the first time child abuse images have been found on a blockchain. In March last year, German researchers said they’d found links to this kind of material on Bitcoin, including one illegal image.

This will sound like a tiny fraction of the images that are available on the dark web and the internet more widely, but the blockchain community remains understandably touchy about bad publicity.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/wnrzvBXjhdE/

iPhone apps record your screen sessions without asking

What would you call it if your iOS apps were tracking your every tap and your every swipe, then sending what amounts to a running series of screen captures off to servers for scrutiny – sometimes to their own servers, sometimes to those of a third-party customer experience analytics firm?

… sometimes masking personal data such as passport numbers and credit card data, sometimes not, leading to at least one data breach?

… and not bothering to mention any of this in their privacy policies, hence not informing users what they’re up to and obviously offering no user opt-in?

The answer from one Twitterer: “unacceptable surveillance.” From another: “A perfect nightmare for humanity.”

Well, welcome to session replay services

After Air Canada leaked the personal data of up to 20,000 users of its mobile app – including passport numbers and expiration dates, passport country of issuance, NEXUS numbers for trusted travelers, gender, dates of birth, nationality and country of residence – a mobile app expert took a look at that app to see how it dropped all of those sensitive details.

The expert, who goes by the name The App Analyst, uses a man-in-the-middle tool called Charles Proxy to intercept data sent, and possibly spilled, by apps. In so doing, his analysis found that Air Canada’s mobile app, which allows customers to book and manage flights, tracks users with analytics from a company called Glassbox Digital.

The ‘oops!’ that shed light on these apps

As TechCrunch reports, Glassbox is one of many session replay services on the market. These services help companies to determine their users’ device characteristics, to collect precise location information, and to take screenshots of the devices so they can replay entire user sessions.

Other companies that market “user recording” technology include Appsee, which promises to let app developers “see your app through your user’s eyes,” while another, UXCam, says it lets developers “watch recordings of your users’ sessions, including all their gestures and triggered events.”

All of this bubbled to the surface earlier this week when one of the website analytics firms – Mixpanel – admitted to accidentally slurping up user passwords in its efforts to help web publishers improve user engagement with their apps.

Mind you, online sites have long been recording our keystrokes

We should all be accustomed to, or at least aware of, the fact that websites have always been able to log our keystrokes. That’s just plain old Web 1.0. JavaScript, the language that makes this kind of monitoring possible, is both powerful and ubiquitous. It’s not news, but it’s certainly worth repeating: anybody with a website can capture what you type, as you type it, if they want to.

That’s not intrinsically bad. Without the abilities to track the position of your cursor, track your keystrokes and call “home” without refreshing the page or making any kind of visual display, sites like Facebook and Gmail would be almost unusable, searches wouldn’t auto-suggest, and Google Docs wouldn’t save our bacon in the background.

‘Transparent black boxes’

Mobile apps need to be optimized, too. But when you’re talking about what mobile apps get up to when they want to analyze user interaction, tracking is one thing. Users might not like the notion of having one or more companies look over their shoulders as they type, but that’s not what’s leading to data spillage. Rather, it’s what The App Analyst calls “transparent black boxes.”

Glassbox captures many screenshots during a user session on the Air Canada mobile app. Some of those are of fields into which users enter sensitive data, including the passport numbers and other pieces of personal information that were breached in August. In order to shield that sensitive data from being captured in screen shots and stored in a screenshot database, Glassbox obfuscates them with black boxes. …

… that can be inappropriately configured. From The App Analyst’s analysis:

The configuration which Air Canada uses to specify placement of black boxes is not extensive enough and almost every black box used to cover sensitive data is captured in screenshots.

In early January, he posted a YouTube video stepping through the Air Canada app’s Glassbox-enabled screen captures.

The App Analyst told TechCrunch that the misconfigured, not-thoroughly-tested obfuscation means that Air Canada employees – and anyone else capable of accessing the screenshot database – can see unencrypted credit card and password information.

Air Canada attempts to cover the password form when logging in. However they do not obfuscate the initial setting of the password during account creation or resetting the password when forgotten.

Passwords and credit cards were not, however, involved in the August breach. The App Analyst said that he saw them when he used the “show password” functionality, which leads him to believe that users’ passwords may in fact be captured in screenshots, in plain text, which goes against industry standards.

Five months after the Air Canada breach in late August, TechCrunch asked The App Analyst to analyze some of the popular iPhone apps that use Glassbox’s session replay technology. Its clientele includes hoteliers, travel sites, airlines, cell phone carriers, banks and financiers, with such names as Abercrombie & Fitch, Hotels.com and Singapore Airlines.

Some of the apps he looked at sent session replays to Glassbox, while some sent them back to a server on their own domains. The App Analyst didn’t find a lack of obfuscation on par with the Air Canada app, though he did discover some instances of non-obfuscated email addresses and postal codes.

Not even a whisper in teeny-tiny type

As TechCrunch notes, it’s impossible to know if a mobile app is recording screen sessions as you use it. These companies certainly don’t seem to be disclosing it if they are. From Zack Whittaker’s TechCrunch article:

We didn’t even find it in the small print of their privacy policies.

A few examples of the privacy policies he squinted at in vain:

Expedia’s policy makes no mention of recording your screen, nor does Hotels.com’s policy. And in Air Canada’s case, we couldn’t spot a single line in its iOS terms and conditions or privacy policy that suggests the iPhone app sends screen data back to the airline. And in Singapore Airlines’ privacy policy, there’s no mention, either.

Out of the companies that responded to TechCrunch, none of them addressed the fact that their privacy policies don’t mention session replays. This is what Air Canada had to say after the TechCrunch article was posted on Wednesday:

Air Canada uses customer provided information to ensure we can support their travel needs and to ensure we can resolve any issues that may affect their trips. … This includes user information entered in, and collected on, the Air Canada mobile app. However, Air Canada does not – and cannot – capture phone screens outside of the Air Canada app.

Tell users what you’re doing, or face expulsion

TechCrunch had a response from Apple, in which a spokesperson said that Apple had notified developers that are in violation of the “strict privacy terms and guidelines” around recording user activity, and would be removing offending apps from the store if developers don’t properly disclose their antics to users:

Protecting user privacy is paramount in the Apple ecosystem. Our App Store Review Guidelines require that apps request explicit user consent and provide a clear visual indication when recording, logging, or otherwise making a record of user activity.

We have notified the developers that are in violation of these strict privacy terms and guidelines, and will take immediate action if necessary.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/_2QPG25vdWI/

Android vulnerabilities open Pie to booby-trapped image attacks

A trio of bugs caused by programming inconsistencies could have opened up Android 7, 8 and 9 to remote attackers wielding booby-trapped image files.

In Google’s own words:

[These bugs] could enable a remote attacker using a specially crafted PNG file to execute arbitrary code within the context of a privileged process.

If you want to track the bugs by number, they are handily sequential: CVE-2019-1986, CVE-2019-1987 and CVE-2019-1988.

The bugs were inherited from an open-source image handling programming toolkit called Skia that is bankrolled and managed by Google, and used as the graphics engine in many products, including Chrome, ChromeOS and Android.

(The bugs described in this article notwithstanding, Skia is worth looking at if you are after a free and liberally-licensed graphics library that runs well even on low-powered computer hardware.)

This crop of bugs involves uninitialised variables and imprecise error handling in the code responsible for processing PNG image files.

Feeding malformed PNG files into Skia’s image rendering code could cause it to access and use memory that it shouldn’t.

In theory, this sort of flaw means that attackers can almost certainly make Skia crash, and may even be able to trick it into failing in a way that they can control.

For example, if you can trick a program into consuming and trusting memory that it didn’t initialise itself, and you can find a way to manipulate the contents of that not-safe-to-use memory ahead of time…

…then you have a fighting chance of creating a remote code execution exploit, or RCE.

Typically, RCEs based on memory mismanagement involve persuading a program into downloading and using booby-trapped data – something that ought to be perfectly safe, if the error-checking in the program is correct – and then tricking the program into treating some of that data as code, executing it instead of merely processing it.

Safe by default?

Bugs that can be triggered by image files such as JPEGs and PNGs are particularly handy for attackers, because it’s not considered controversial for software such as your web browser to fetch and display images from remote websites by default.

We’re generally careful about opening files such as PDFs, Word documents and programs if we downloaded them from the internet or received them in email attachments, because we know they can contain risky add-in components such as macros, scripts and so on.

Indeed, PDFs, DOCs and EXEs can all be active carriers of malware, even if the files aren’t deliberately malformed and there are no bugs for them to exploit.

But images aren’t supposed to contain executable code, and even if they do, it’s not supposed to cause problems, so browsers and image viewers that fetch picture files from remote web servers generally just process and display them without so much as a by-your-leave.

That’s why potentially exploitable bugs in the software libraries that your operating system uses to display images are often greeted with great concern.

How bad is it?

The bad news in this case is that the bugs affect Android 7, 8 and 9, so just having a modern and supposedly safer flavour of Android won’t protect you.

The good news is that the bugs were patched in Google’s February 2019 update.

Of course, given the size and variability in the Android ecosystem, it’s anyone’s guess when, or even if, the relevant patches will filter down to your phone.

The other good news is that it doesn’t seem to be the Bad Guys who found these bugs first, because there’s no sign that they’ve been abused in the wild to mount attacks.

As a result, the RCE danger in this case is more theoretical than practical.

In fact, not all potential RCE vulnerabilities can be turned into working attacks, thanks to mitigations such as DEP and ASLR.

DEP is short for data execution prevention, and it’s a method by which the operating system will refuse to run as code anything that was originally presented as data.

ASLR is short for address space layout randomisation, and it’s the technique of loading system software at a different place in memory on every device, so that attackers can’t guess where to find the vital system functions or data they need to access to make their exploits work.

What to do?

If you’re an Android user on version 7 or later, make sure you’ve got your February 2019 updates.

If you haven’t, ask your handset vendor or carrier when you can expect them to arrive.

Keeping pressure on Android providers to move quickly when it comes to patches is one way you can help put an end to the inconsistency and confusion around updates that characterise the Android ecosystem.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/3mS7IXSepoc/

EE customer: Creepy ex used employee access to change my mobile number, spy on me

A staffer at BT Group’s EE has been accused of using his employee access to peek at his ex’s account details and change her phone number to spy on her texts.

His ex-partner, Francesca Bonafede, a customer of the phone provider, claimed he had accessed her personal data, including bank details, new address and images of her driver’s licence.

According to the BBC, the incident happened last February, when Bonafede found that her phone suddenly stopped working.

When she called EE, she was told that someone had visited a branch, asked for a new SIM and had the account switched to a new handset, and registered it to a new address, which Bonafede recognised as her ex’s.

If he did peek at her account details to make the switch, her ex would have received all Bonafede’s texts and calls during the period she was trying to resolve the problem.


But Bonafede said that EE’s response was lacklustre. Initially the agent “didn’t seem concerned at all”, she told the Beeb, then EE failed to keep her updated on the investigation into the data breach.

She eventually went to the police after her ex bombarded her with texts and calls asking her to drop her complaint, and even visited her new address.

“The only way he could have known about my new address was through the data breach, because we broke up quite a long time before that,” she claimed.

The man was arrested and given a harassment warning, the BBC reported, but the UK mobile firm was said to be unmoved until Bonafede took her grievances to social media.

EE told the BBC that its own internal policies weren’t followed in the case, but that its employee had been given the heave-ho.

The network added that it had “worked quickly to protect Francesca”, but apologised “for not keeping her informed”.

Staffers peeking at personal data held in their company’s accounts is a breach of data protection laws, and the UK’s privacy watchdog has prosecuted people for it.

In November 2018, a trainee secretary at a GP practice was fined £350 for snooping on the health records of colleagues, friends and strangers; a month later, a former headteacher was fined £700 for downloading personal info on the kids he used to teach.

And in January last year, fast-food-flinger Just Eat was forced to investigate after a driver allegedly sent a sea of WhatsApp messages to a customer after meeting her when he delivered a takeaway to her house. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/02/08/ee_customer_says_creepy_ex_used_employee_access_to_change_her_mobe_number/

Leaky child-tracking smartwatch maker hits back at bad PR

Kids’ smartwatch-pusher Enox, whose Safe-KID-One watch was pulled by the European Commission, has hit back against the bad PR – with some rather unusual arguments.

Citing an investigation by Icelandic infosec firm Syndis, the Commission this month outlined “serious” risks with the watch, which comes with GPS, a mic and a speaker so parents can make calls to, and receive calls from, their kid or kids and keep an eye on where they are.

However, multiple studies found that such devices are prone to security flaws, with hackers being able to seize control of them to make calls to previously unknown numbers, eavesdrop and track the wearers.


“A malicious user can send commands to any watch making it call another number of his choosing, can communicate with the child wearing the device or locate the child through GPS,” the Commission’s alert warned on Safe-KID-One.

When The Reg first reported the recall – which is thought to be the first of its kind – founder Ole Anton Bieltvedt unsurprisingly disputed Syndis’s analysis.

Instead, he pointed to a one-page assessment from the German federal agency Bundesnetzagentur that the watch didn’t violate that country’s Telecommunications Act.

However, rather than leaving it at that, Bieltvedt – who had to watch as a PR firm picked up on the story and pushed it out to multiple outlets – sought to put the watch’s glitches into some kind of context. And that’s when things got more than a little bizarre.

“If somebody wants to do harm to a kid, he will follow the kid, not a vague watch,” he told The Daily Star. Er, sure, that’s one argument.

His reference to “vague” is because, apparently, the watch’s GPS system “has an old construction from 2015” in which the antenna was “not strong” meaning the accuracy of the watch was only +/- 500m.

“The main function of the watch, was the clock and the telephone… The sales did stress this point, and the buyers didn’t expect much from the GPS,” he told The Reg. So that’s fine then.

Next up is the risk associated with hacking into the server, with the founder suggesting that “regular” people wouldn’t be able to do it, and if they did, it didn’t really matter.

“We feel convinced that no regular person can misuse or break into our watches,” Bieltvedt told The Register. “Besides, what will you find there? Just a few phone numbers, which can be mostly found in a telephone catalogue or on the internet.”

Well, doesn’t everyone put their family’s mobe digits online?

Another complaint was that the press had latched onto the announcement (Bieltvedt said he’d had to deal with “dozens or hundreds” of journos) when there are plenty of unsecured systems out there.

“Recently, a student in Germany broke into the smartphones and/or computers of 1,000 of the leading political and commercial people in Germany,” he said.

“This shows that the whole Internet system and the respective equipment is not safe, yet. So, we think this is a system problem more than an individual product problem.”

Similarly, he argued to The Reg that since the Icelandic firm is a “high tech company, specialising in trying to protect banks and high-grade internet and computer systems”, it could have broken into “any watch and any computer on the market, all brands and types included”.

The news here, though, isn’t that a security firm can hack into – as Bieltvedt put it – “a simple and cheap kids’ watch”; it’s that a product marketed to concerned guardians appears to be a privacy and security risk and has been recalled by the Commission.

More broadly, of course, he’s not wrong: there’s no end to reports of sophisticated hacks on even the supposedly most secure kit, and plenty of very dodgy connected devices at the opposite end of that scale.

And people concerned with security and privacy, especially in kids’ toys, will no doubt hope that the Commission slaps more bans on them.

But just because other systems can be hacked doesn’t mean it’s OK to market an insecure product to parents as a child safety device – and El Reg reckons most punters would hope that companies recognise that.

When this point was put to him, Bieltvedt responded: “Of course any internet device, also our watch, needs maximum security. But, at this stage, this security is not 100 per cent available, neither for us nor others.”

He added that Enox had “never experienced any misuse or problem with Safe-Kid-One”, asking: “What are you and your colleagues trying to blame us for?”

Well, y’know, just marketing a watch that could expose kids’ locations (even if only within 500m) and that miscreants could hack into to call any number they fancy as a kids’ safety device – and then failing to show it takes any criticisms seriously. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/02/08/privacy_fail_kids_smartwatchmaker_hits_back_at_bad_pr/