
Android November update fixes flaws galore

Studying Android’s November security bulletin, you’ll notice that there’s a fair amount to patch.

In total, there are 36 vulnerabilities assigned a CVE, and another 17 relating to Qualcomm components rather than Android itself.

Within Android itself, four are rated critical and 13 high. If there’s a standout it might be CVE-2018-9527, simply because it’s a Remote Code Execution (RCE) vulnerability affecting all versions from Android 7.0 (Nougat) onwards.

The other RCEs are CVE-2018-9531 and CVE-2018-9521, although both relate to version 9.0 (Pie), which mainly affects devices released since the summer.

CVE-2018-9531 turns out to be one of a clutch of CVEs arising from the Libxaac library, which Google says has been marked “experimental” and “is no longer included in any production Android builds.”

Leaving aside the extra flaws added to the mix this month by Qualcomm, November looks very similar to every other month this year – plenty of fixes, exactly what one might expect.

The complicated bit

However, this being Android, things are never that simple because when these patches appear on your device – indeed whether they appear at all – will depend on several factors.

One factor is that November’s patches are for Android versions 7.0 and later: devices that either shipped with 7.0 or newer after August 2016, or were subsequently upgraded to it from an earlier version.

In other words, if your device runs Android 6.x, the three years Google commits to support that device with security updates ended in September and now you’re on your own.

Another factor is how quickly the device maker or mobile network gets around to making the November update available to customers.

To speed things up from the glacial patching of the past, in 2017 Google initiated something called Project Treble that allowed vendors to apply security patches without having to refresh the entire OS.

Unfortunately, vendors other than Google can take anything from one to several months to apply these, while it’s even been claimed that some simply lie about the patch version.

It’s possible the delay has something to do with the difference between Android’s Framework updates (those managed by Google itself, increasingly through its own firmware over-the-air servers) and those relating to the components that are part of the vendor’s hardware and software for each device.

To that end, Android’s monthly updates work on two patch levels, one identified by the first day of the month (i.e. 1 November), and one by the fifth of the month (5 November).

If your phone mentions the fifth of the month (Settings > About Phone > scroll down to Android Patch Level), that means you have both the Framework updates and the vendor updates up to and including the current month.

If, however, you see the first day of the month, that means you have the Framework updates for that month but the vendor-specific updates only up to the previous month (we told you it was a bit complicated).
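Those patch-level strings are just ISO dates, so telling the two levels apart programmatically is straightforward. A minimal Python sketch, assuming the string comes from the Settings screen or from `adb shell getprop ro.build.version.security_patch` on a connected device (the helper name is ours):

```python
from datetime import date

def parse_patch_level(s: str):
    """Parse an Android security patch level string such as '2018-11-05'.

    Returns (date, complete): `complete` is True for a day-5 level,
    meaning both Framework and vendor patches are included; a day-1
    level covers Framework patches only.
    """
    d = date.fromisoformat(s)
    return d, d.day == 5

level, complete = parse_patch_level("2018-11-05")    # complete: True
level1, complete1 = parse_patch_level("2018-11-01")  # complete1: False
```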

Unlike Apple with its small family of devices designed by itself, Android devices are made by numerous vendors, each of which has different models running different versions of Android.

For now, the dream of every Android device getting a guaranteed monthly update for security vulnerabilities is getting nearer whilst appearing frustratingly just out of reach.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/AQpY2NtyAXg/

HSBC now stands for Hapless Security, Became Compromised: Thousands of customer files snatched by crims

HSBC has admitted miscreants have probably made off with personal details of thousands of its online-banking customers.

The bank submitted paperwork [PDF] to the California Attorney General’s office late last week outlining its plan to notify folks of the significant data theft. California law requires that the AG be notified whenever a computer security breach affects 500 or more residents in the US state.

HSBC would not give the exact number of online banking accounts crooks rummaged through, but it would say the hack affects “less than 1 per cent” of what reports estimate are 1.2 million US customers, meaning as many as 12,000 Americans could have had their personal information and account details fall into the hands of scumbags. Bear in mind, as we’ve seen with Equifax, that number may rise considerably.

The accounts were likely ransacked between October 4 and 14, this year, we’re told.

“We are reminding our customers to protect access to their banking accounts by regularly changing their passwords, and by using unique passwords they are not using elsewhere, including on any social media accounts,” an HSBC spokesperson told The Register.

That suggests the accounts were accessed using so-called credential stuffing, in which criminals exploit the fact people reuse the same usernames and passwords across many sites. The hackers may have obtained victims’ login details from one website, and used them to log into HSBC online banking accounts that reused the same credentials.
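One practical defence against credential stuffing is to check whether a password has already appeared in a known breach. Have I Been Pwned’s Pwned Passwords range API does this with a k-anonymity scheme: the password is hashed with SHA-1 and only the first five hex characters are sent, so the password itself never leaves your machine. A sketch of the client-side step (the function name is ours; the actual network call is omitted):

```python
import hashlib

def hibp_range_parts(password: str):
    """Split a password's uppercase SHA-1 hex digest into the 5-char
    prefix sent to the Pwned Passwords range API and the 35-char
    suffix matched locally against the API's response. Only the
    prefix ever leaves the machine (k-anonymity)."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

prefix, suffix = hibp_range_parts("password")
# prefix == "5BAA6"; query GET https://api.pwnedpasswords.com/range/5BAA6
# then check whether `suffix` appears among the returned hash suffixes
```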

The data likely swiped from the online accounts looks to be highly sensitive and, if put to use by cybercriminals and identity thieves, could be extremely harmful to HSBC and its customers.


HSBC says the hackers would have been able to siphon off customers’ full names, mailing addresses, phone numbers, email addresses, dates of birth, account numbers, account types, account balances, transaction histories, payee account information, and statement histories.

Phishing gold, in other words: basically, everything needed to hoodwink marks with carefully crafted emails, and nearly everything (minus the Social Security number) needed to steal someone’s identity.

“HSBC became aware of online accounts being accessed by unauthorized users between October 4, 2018 and October 14, 2018,” the bank will tell those whose details were likely nabbed during the cyber-raid.

“When HSBC discovered your online account was impacted, we suspended online access to prevent further unauthorized entry of your account.”

HSBC says that “out of an abundance of caution” it is going to offer one year of free credit monitoring and identity protection to those who were affected. “We have enhanced our authentication process for HSBC Personal Internet Banking, adding an extra layer of security,” it added.

It doesn’t take an abundance of caution to realize that, if you receive a letter from HSBC, you should take them up on the offer ASAP, ask for a credit freeze, and keep a very close eye on your bank statements in the future. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/11/06/hsbc_security_broken/

HSBC: Security Breach Exposes Account, Transaction Data

Unauthorized users accessed HSBC accounts between Oct. 4 and 14, the bank reports in a letter to customers.

HSBC Bank has informed account holders of a data breach affecting an undisclosed number of users, the organization reported this week. In a letter sent to customers and the California Attorney General’s Office, it states online accounts were compromised from Oct. 4 to 14.

The bank reports compromised information may include full names, mailing and email addresses, phone numbers, birthdates, transaction histories, payee account data, statement histories, and account numbers, types, and balances.

HSBC suspended access to affected accounts and is contacting victims about changing their online credentials. It says it has improved its authentication process for HSBC Personal Internet Banking and is offering customers a complimentary, year-long subscription to Identity Guard, which they can use to monitor accounts for credit fraud and malicious activity.

Data leaks caused by negligent third-party providers are increasingly common, says High-Tech Bridge founder and CEO Ilia Kolochenko. Oftentimes, large businesses deploy demo systems to production and forget about them, leaving data and systems vulnerable. Abandoned US-based Web systems containing customer data could be a possible attack vector.

HSBC’s response has been prompt and technically adequate, he explains, but there is still potential for consequences. “This will, however, unlikely exonerate them from private lawsuits and, perhaps, even a class action by disgruntled customers and privacy watchdogs,” Kolochenko says.


Black Hat Europe returns to London Dec. 3-6, 2018, with hands-on technical Trainings, cutting-edge Briefings, Arsenal open-source tool demonstrations, and top-tier security solutions and service providers in the Business Hall. Click for information on the conference and to register.

Dark Reading’s Quick Hits delivers a brief synopsis and summary of the significance of breaking news events. For more information from the original source of the news item, please follow the link provided in this article. View Full Bio

Article source: https://www.darkreading.com/attacks-breaches/hsbc-security-breach-exposes-account-transaction-data/d/d-id/1333208?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Critical Encryption Bypass Flaws in Popular SSDs Compromise Data Security

Vulnerabilities in Samsung, Crucial storage devices enable data recovery without a password or decryption key, researchers reveal.

The full disk hardware encryption available on some widely used storage devices is so poorly implemented there may as well not be any encryption on them at all, say security researchers at Radboud University in the Netherlands.

Hardware full disk encryption is generally perceived to offer protection as good as, or even better than, software encryption. Microsoft’s BitLocker encryption software for Windows even defaults to hardware encryption if the feature is available in the underlying hardware.

But when the researchers tested self-encrypting solid state drives (SSDs) from two major manufacturers — Samsung and Crucial — they found fundamental vulnerabilities in many models that make it possible for someone to bypass the encryption entirely.  

The flaws allow anyone with the requisite know-how and physical access to the drives to recover encrypted data without the need for any passwords or decryption keys.

“We didn’t expect to get these results. We are shocked,” says Bernard van Gastel, an assistant professor at Radboud University and one of the researchers who uncovered the flaws. “I can’t imagine how somebody would make errors like this” in implementing hardware encryption.

Together, Samsung and Crucial account for about half of all SSDs currently sold in the market. But given how difficult it is to get full disk encryption right at the hardware level, it wouldn’t be surprising if similar flaws exist in SSDs from other vendors as well, van Gastel says. “We didn’t look at other models, but it is logical to assume that Samsung and Crucial are not the only ones with the problems,” he notes.

Many of the problems have to do with how difficult it is for vendors to correctly implement the requirements of TCG Opal, a relatively new specification for self-encrypting drives, van Gastel says. The standard is aimed at improving encryption protections at the hardware level, but it can be complex to implement and easy to misinterpret, resulting in errors being made, he adds.

One fundamental flaw that van Gastel and fellow researcher Carlo Meijer discovered in several of the Samsung and Crucial SSDs they inspected was a failure to properly bind the disk encryption key (DEK) to a password. “Normally when you set up hardware encryption in an SSD, you enter a password. Using the password, an encryption key is derived, and using the encryption key, the disk is encrypted,” van Gastel says.

What the researchers found in several of the SSDs was an absence of such linking. Instead of the encryption key being derived from the password, all the information required to recover the encrypted data was stored on the drive itself. Because the password check was also performed within the SSD, the researchers were able to show how someone could modify the check so that it would accept any password entered, thereby making data recovery trivial.
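The difference between the two designs can be sketched in a few lines of Python. This is an illustrative toy, not the drives’ actual firmware logic: the XOR “wrap” stands in for a proper AES key wrap, and all names are ours. The point is that in the flawed pattern the password only gates access and contributes nothing cryptographically, whereas with key derivation (here PBKDF2) the stored blob is useless without it:

```python
import hashlib, os, secrets

def derive_wrapping_key(password: str, salt: bytes) -> bytes:
    """Derive a key-encryption key from the password (PBKDF2), so the
    DEK is cryptographically bound to it rather than merely gated."""
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)

# Flawed pattern (roughly what the researchers describe): the DEK sits
# on the drive in full, guarded only by a separate, patchable check.
dek = secrets.token_bytes(32)
drive = {"dek": dek, "password_ok": lambda p: p == "hunter2"}
drive["password_ok"] = lambda p: True        # patch the check...
assert drive["password_ok"]("anything")      # ...and the DEK is yours

# Sound pattern: without the password, the wrapped DEK cannot be
# recovered. (XOR is a stand-in for a real AES key wrap -- toy only.)
salt = os.urandom(16)
kek = derive_wrapping_key("hunter2", salt)
wrapped = bytes(a ^ b for a, b in zip(dek, kek))
unwrapped = bytes(a ^ b for a, b in zip(wrapped, derive_wrapping_key("hunter2", salt)))
assert unwrapped == dek
```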

Another fundamental flaw the researchers discovered allows for a disk encryption key to be recovered from an SSD even after a user sets a new master password for it. In this case, the vulnerability is tied to a property of flash memory in SSDs called “wear leveling,” which is designed to prolong the service life of the devices by ensuring data erasures and rewrites are distributed evenly across the medium, van Gastel says. Several of the devices that the researchers inspected stored cryptoblobs in locations that made it possible to recover the DEK even if a new master password is set for it.
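Wear leveling can be modelled as an append-only store: logical overwrites go to fresh flash cells, and the old physical copy lingers until garbage collection eventually reclaims it. A toy sketch (all names ours, purely illustrative) of why the old cryptoblob can survive a master-password change:

```python
class WearLeveledStore:
    """Toy model of flash wear leveling: a logical write appends a new
    physical copy instead of overwriting in place, so stale data lingers."""
    def __init__(self):
        self.cells = []          # physical flash cells, append-only
        self.mapping = {}        # logical address -> physical cell index

    def write(self, addr, data):
        self.cells.append(data)  # the old copy is NOT erased
        self.mapping[addr] = len(self.cells) - 1

    def read(self, addr):
        return self.cells[self.mapping[addr]]

store = WearLeveledStore()
store.write("cryptoblob", b"DEK wrapped under OLD password")
store.write("cryptoblob", b"DEK wrapped under NEW password")
current = store.read("cryptoblob")   # logical read returns the new blob...
stale = store.cells[0]               # ...but the old one is still in flash
```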

In total, the researchers discovered six potential security issues with hardware encryption in the devices they inspected. The impacted devices are the Crucial MX100 (all form factors); Crucial MX200 (all form factors); Crucial MX300 (all form factors); Samsung 840 EVO; Samsung 850 EVO; and the Samsung T3 and T5 USB drives.

The key takeaway for organizations is to not rely on hardware encryption as the sole mechanism for protecting data, van Gastel says. Where possible, it is also vital to employ software full-disk encryption. He recommends using open source software, such as VeraCrypt, which is far likelier to have been fully audited for security issues than a proprietary encryption tool.

Organizations using BitLocker should adjust their group policy settings to enforce software encryption in all situations. Such changes, however, will make little difference on already-deployed drives, van Gastel notes.

In a brief consumer advisory, Samsung acknowledged the issues in its self-encrypting SSDs. The company advised users to install encryption software in the case of nonportable SSDs and to update their firmware for portable SSDs. Crucial has so far not commented publicly on the issue.

For the industry at large, the issues that were discovered in the Samsung and Crucial drives highlight the need for a reference implementation of the Opal spec, van Gastel says. Developers need to have a standard way of implementing Opal that is available for public scrutiny and auditing, he says.


Jai Vijayan is a seasoned technology reporter with over 20 years of experience in IT trade journalism. He was most recently a Senior Editor at Computerworld, where he covered information security and data privacy issues for the publication. Over the course of his 20-year … View Full Bio

Article source: https://www.darkreading.com/vulnerabilities---threats/critical-encryption-bypass-flaws-in-popular-ssds-compromise-data-security/d/d-id/1333207?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Why the CISSP Remains Relevant to Cybersecurity After 28 Years

The venerable Certified Information Systems Security Professional certification has been around for a very long time — and for good reason.

I’m often asked why anyone should pursue and obtain a Certified Information Systems Security Professional (CISSP) certification and what advantages having the cert holds for an aspiring security professional. I’ve been enjoying helping others achieve this goal for almost three years, so I’m always happy to provide an answer. However, to provide a good answer, I need perspective — so I always reply with the qualifier, “It depends.”

Depends on what? Allow me to offer some common perspectives.

A significant portion of people looking to land their first cybersecurity job want to know how having a CISSP influences employer decisions during the hiring process. The remainder have been in the information technology or information security field for years and view the CISSP not as a hiring advantage but as a necessary benchmark in their career. In some instances, these experienced professionals seek certification to stay employed during an economic downturn or to switch jobs when there is an employer preference or requirement for the certification.

For those in the former camp, please know that the International Information System Security Certification Consortium — (ISC)2 — requires CISSP candidates to have a minimum of five years of experience within at least two of the eight Common Body of Knowledge (CBK) security domains or four years of experience and a college degree. These requirements are necessary for maintaining the credibility of the certification. Those not meeting these minimum requirements can still sit for the CISSP certification exam and will be granted associate status until they meet them. Since cybersecurity is such a dynamic career field, (ISC)2 additionally requires all certified professionals and associates to continuously learn and upgrade their knowledge and skills.

CISSP’s Storied History
Most newcomers are surprised that the CISSP has been around for a very long time. Created in 1994, the certification is currently held by more than 70,000 people worldwide, according to (ISC)2. A widely recognized standard of achievement, the CISSP holds the distinction of being accredited by major organizations, including ANSI, ISO/IEC, the Department of Defense, and the National Security Agency. For people in the DoD and NSA camps who are part of the Information Assurance (IA) workforce as defined by DoD Directive 8570.01, the CISSP is required; the same goes for US federal civilian employees and government contractors interfacing with these organizations. Similar requirements may apply to non-US candidates pursuing the CISSP for employment in non-US military, intelligence, and civilian government agencies.

To further enable employers, educators, employees and job seekers, recent NIST efforts have produced the August 2017 NICE (National Initiative for Cybersecurity Education) Cybersecurity Workforce Framework, which maps knowledge, skills, and abilities to standardized cybersecurity workforce roles and recommended certifications, like the CISSP, directly to those roles. Since a standard simplifies candidate selection during the hiring process, I predict that more employers will engage the NICE Framework to make informed candidate decisions in the future. As NICE is a NIST initiative, it’s also a given that current and future US federal agency employees will be held to these new standards to a greater degree. In addition, progressive learning institutions are also leveraging the Framework as a tool for curriculum development. These exciting changes within the industry should provide all potential certification seekers an additional rationale on why having the CISSP is still relevant now more than 20 years since its inception.

“CyberSeek” the CISSP
A practical application of the Framework is illustrated by the NICE CyberSeek project. CyberSeek is a useful website for employers, employees, educators, and students seeking statistics and career planning insight regarding the current US cybersecurity workforce landscape. One of the most interesting features of this site includes a cybersecurity supply-demand heat map focusing on the number of jobs filled and available based on each Framework role and cybersecurity certification type, including the CISSP. I recommend that everyone seeking a CISSP certification explore this site, particularly the heat map tool, which provides cyber workforce statistics at the national, state, and municipal levels. Motivated job seekers should note that the CISSP is the highest employer-requested certification of all those listed on CyberSeek.

Finally, some personal insight: I started my cybersecurity career in 2010 after serving in various IT roles for the previous 15 years. When I decided I wanted to focus on cybersecurity, I realized how much variety existed across roles and became increasingly aware of my own confusion regarding concepts and terminology. I did not have a mentor to guide me. Industry hype and product marketing were not helping. I decided to set a goal to study for and obtain my CISSP certification and slowly began to wrap my head around fundamentals.

Since obtaining my certification, I’ve learned one of the most important aspects of being a CISSP is living out the values embodied by the (ISC)2 Ethics Statement. I choose to actively pursue those values by seeking to advance the profession, mentoring, and teaching others about cybersecurity. Today, the greatest degree of satisfaction I have in being a CISSP is helping others realize their goal of advancing their own career by also becoming a CISSP.

If you wish to learn more about CISSP certification, check out the SANS MGT414: SANS Training Program for CISSP® Certification course or research this topic online.


A native Houstonian and proud Texan by birth, Steven’s cultural and technical roots are naturally and irreversibly intertwined within the oil and gas industry. His range of operations, engineering, and major capital project experience spans multiple sectors within this very … View Full Bio

Article source: https://www.darkreading.com/why-the-cissp-remains-relevant-to-cybersecurity-after-28-years/a/d-id/1333178?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Most Businesses to Add More Cloud Security Tools

Cloud adoption drives organizations to spend in 2019 as they learn traditional security practices can’t keep up.

Three quarters of organizations plan to buy more cloud security tools in 2019 as a means to better secure increasingly complex cloud environments, new research shows.

The data comes from Alcide, which today released its “2018 Report: The State of Securing Cloud Workloads.” Nearly 350 security, DevOps, and IT pros weighed in to share their cloud security plans. Most are struggling to secure complex cloud setups, and think more tools will help.

Results show cloud security workflows remain fragmented. Across all company sizes, about 53% of respondents distribute their cloud workloads across a hybrid infrastructure; 18% use multi-cloud. The larger the business, the higher the degree of fragmentation, researchers found.

More than 20% of organizations with more than 1,000 employees are using at least 10 cloud security tools, compared with 3% of medium-to-large businesses with fewer than 1,000 workers. Many respondents were unsure how many solutions were being used to secure their cloud workflows, a problem that experts point out can hold the entire business back.

“Fragmented stacks and poor visibility into deployed solutions are very often constraints of business velocity, due to difficulties in scaling securely and reliable,” the report said.

Despite the potential for business slowdown, 75% of respondents expect their cloud security stack to increase over the next year. One-quarter expect it will remain the same, and none expect to use fewer cloud security tools in 2019. The tools they’re looking to buy are “quite different than existing security tool stacks,” explains Alcide CTO Gadi Naor.

As it stands, organizations currently use cloud security controls for security groups (63%), host-based threat protection (59%), file integrity monitoring (44%), account compliance features (42%), and visibility tools (3%). Naor expects as they invest in security tools, they will more closely focus on microservices architecture, threat protection, and serverless architecture.

There seems to be a gap between the growth of serverless computes and the expertise needed to secure them, researchers report. While 60% of respondents say their business’ serverless computes are “very secure,” none were ready to admit they were “completely secure.” Despite some security concerns, 57% of serverless users are running it in production and development.

Part of the challenge in cloud security is the shared responsibility model, which dictates how cloud providers and customers divide security duties for applications deployed in the cloud, Naor says.

Who’s in Charge

So who handles all these purchases? While the responsibility for securing the cloud still largely falls to corporate IT (46%), specialized DevOps or DevSecOps teams are taking over the job within 34% of organizations. Alcide researchers say this indicates a trend toward specialization.

Most security professionals (73%) still manually configure their application security policies. Forty-four percent of medium-to-large businesses, and 74% of large enterprises, have at least three people involved in configuring security for any app. It’s a time-consuming process that can leave the company exposed to human error, which Naor calls “a weak link.”

He advises companies to drive their security awareness and understanding before they adopt new tech. “This is where I recommend enterprises take a step back and build their security stack before you build your applications on new technologies,” he explains.

Alcide is far from the only company to find holes in enterprise cloud security. In its 2018 Cloud Security Report, Crowd Research Partners found only 16% of businesses report their traditional security tools are sufficient to manage security across the cloud. Eighty-four percent say traditional security tools don’t work at all, or have only limited functionality, in the cloud.

Visibility of cloud data is also an issue. Only 7% of businesses have strong visibility of all critical data, ForcePoint found, and 58% say they only have slight control over information in the cloud. On top of that, data from RedLock shows nearly half of databases in the cloud aren’t encrypted.


Kelly Sheridan is the Staff Editor at Dark Reading, where she focuses on cybersecurity news and analysis. She is a business technology journalist who previously reported for InformationWeek, where she covered Microsoft, and Insurance Technology, where she covered financial … View Full Bio

Article source: https://www.darkreading.com/cloud/most-businesses-to-add-more-cloud-security-tools/d/d-id/1333214?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

CIA’s secret online network unravelled with a Google search

According to reports, the US government is still reeling from a catastrophic, years-long intelligence failure that compromised its internet-based covert communications system and left CIA informants vulnerable to exposure and execution worldwide.

In 2013, following the compromise, CIA experts worked feverishly to reconfigure their secret websites and try to move their informants to safety, but intelligence sources say that damage this severe probably can’t be wholly undone.

Yahoo published a report last week about the previously unreported intelligence disaster.

According to Yahoo, which relied on 11 former intelligence and national security officials for the report, the problem started in Iran and “spiderwebbed” out to countries that were friendly to Iran.

It wasn’t just one point of failure: it was a string of them. One of the worst intelligence failures of the past decade was in 2009, when the Obama administration discovered a secret Iranian underground enrichment facility. The Iranians, furious about the breach, went on a mole hunt, Yahoo reports, looking to dig out foreign spies.

Unfortunately for the US and its agents, it didn’t take long to find the moles. That’s due in large part to what one former official called an “elementary system” of internet-based communications – one never meant to withstand sophisticated counterintelligence efforts such as those of China or Iran, let alone to be entrusted with the extremely sensitive communications between the CIA and its sources.

That system had initially been used in war zones in the Middle East, and inertia kept it in use by far more people, for far longer, than originally intended. Part of the problem was that it was easy to use, tempting intelligence agencies to overlook its shortcomings. Yahoo quotes a former official:

It was never meant to be used long term for people to talk to sources. The issue was that it was working well for too long, with too many people. But it was an elementary system.

Another former official:

Everyone was using it far beyond its intention.

Two of Yahoo’s sources from the intelligence community said that the Iranians had cultivated a double agent who led them to the CIA’s secret communication system, which it was using in areas such as China and Iran, where in-person meetings can be dangerous. The CIA eventually learned from Israeli intelligence that Iran had likely identified some of its agents.

Finding out about Iran’s discovery of its secret communications system didn’t put an end to the intelligence breakdown, given that the Iranians used a simple method to take the single thread of the initial website and use it to unravel the far wider CIA network.

Namely, they Googled it.

A former intelligence official says that once the Iranians were shown the website where CIA handlers communicated with their sources, they began to search for other websites with similar digital signifiers or components. By using simple Boolean search operators – like “AND,” “OR,” as well as more sophisticated ones – the Iranians eventually came up with advanced search terms that would lead them to other secret CIA websites.
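The technique the report describes is essentially search-engine fingerprinting: take a few distinctive strings from a known page and search for other pages that share them. A sketch of the query-building step (the function name and example strings are ours, purely illustrative):

```python
def build_dork(signifiers, exclude_domain=None):
    """Combine distinctive strings from a known page into a quoted,
    AND-joined search query; optionally exclude the site already found.
    Illustrates the fingerprint-search idea, nothing more."""
    query = " AND ".join(f'"{s}"' for s in signifiers)
    if exclude_domain:
        query += f" -site:{exclude_domain}"
    return query

q = build_dork(["msgboard v1.3", "contact-handler.php"], "example.com")
# q == '"msgboard v1.3" AND "contact-handler.php" -site:example.com'
```

Feed such queries to a search engine, and any other site built from the same template surfaces; from there, as the article notes, it is just a matter of watching who visits.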

After that, it was just a question of tracking who was visiting the CIA’s sites, and from where.

By 2013, Iranian cyber experts had gone on the offensive, tracking CIA agents outside of Iran’s borders. “Iran was aggressively going out to hunt systems down,” a former intelligence official said. “They weren’t just protecting themselves anymore.”

It’s not clear whether Iran shared its findings with its counterparts in China or whether Chinese intelligence figured it out on its own, but between 2010 and 2012, China dismantled the CIA’s spying operations within the country.

This all may have been avoided if a whistleblower’s warnings had been heeded. In 2008 – well before Iran or China found and arrested CIA agents – John Reidy, who worked for CIA subcontractors helping to identify, manage, and report on human assets in Iran, had already warned about fraud involving a CIA subcontractor, and a “catastrophic intelligence failure” in which “upwards of 70% of our operations had been compromised” by hostile penetration of US intelligence computer networks.

Reidy’s disclosure is publicly available – here it is in an appeal he filed regarding a decision from an external review panel about his whistleblower report – though it’s heavily redacted.

According to that disclosure, by 2010 he’d been told, by multiple government employees, that the “nightmare scenario” he had warned about regarding the secret communications platform had, in fact, transpired.

According to Reidy, the communications system compromise became evident after “anomalies” began to surface in operations, including “sources abruptly and without reason ceasing all communications with us.”

Nobody did anything but brush it aside and cover it up, Reidy said, including congressional oversight committees. He was sidelined, and then he was fired. Yahoo spoke to his attorney, Kel McClanahan, who said that things could have turned out far differently if they’d listened and acted:

Can you imagine how different this whole story would’ve turned out if the CIA [inspector general] had acted on Reidy’s warnings instead of going after him? Can you imagine how different this whole story would’ve turned out if the congressional oversight committees had done oversight instead of taking CIA’s word that he was just a troublemaker?

Irvin McCullough, a national security analyst with the Government Accountability Project, a nonprofit that works with whistleblowers, said the failure of intelligence and government agencies turned it into an intelligence disaster of epic scale:

This is one of the most catastrophic intelligence failures since Sept. 11. And the CIA punished the person who brought the problem to light.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/KcmzmzBMV1c/

Children’s apps contain an average of 7 third-party trackers, study finds

When it comes to tracking mobile app users, internet advertising companies like to start them young, according to a new University of Oxford study.

Researchers analysed nearly one million Android apps downloaded from the US and UK Google Play Stores and found that those used by children now embed some of the highest numbers of third-party trackers of any app category.

Most of these fall into the ‘family’ category (8,930 apps), which had a median of seven trackers each, just ahead of the vast games and entertainment category (291,952 apps) on six.

Some family apps had even more trackers, with 28.3% exceeding 10. The only category that could match this was ‘news’ (26,281 apps), 29.9% of which had more than 10, with a median of seven trackers per app.
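The per-category medians and over-ten shares reported by the study are simple to compute from raw per-app tracker counts. A minimal sketch using Python’s `statistics` module – the counts below are invented for illustration, not the study’s data:

```python
from statistics import median

# Invented per-app tracker counts grouped by Play Store category; the real
# study derived figures like these from nearly one million Android apps.
tracker_counts = {
    "family": [7, 7, 12, 5, 11, 7, 6],
    "news":   [7, 15, 3, 7, 11, 7, 2],
    "games":  [6, 6, 4, 8, 6, 5, 9],
}

for category, counts in tracker_counts.items():
    med = median(counts)                               # typical app in category
    over_ten = sum(c > 10 for c in counts) / len(counts)  # share with >10 trackers
    print(f"{category}: median={med}, share over ten={over_ten:.0%}")
```

With real data the interesting part is exactly what the researchers found: which categories sit at the top of both measures.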

So, if you’re someone who gets their news from an app, chances are that what you’re doing is being watched very closely – something that’s at least as likely if you’re a child using a family app.

It’s no big reveal that advertisers are out to track people for commercial purposes, although the extent to which apps have become the front line in this endeavour is still quite surprising.

The extent to which children are being tracked through apps is even more unexpected given the wave of regulations that are supposed to limit how this is done, especially for anyone under the age of 13.

Join the dots

The study also looked at where the tracking is done from, finding that 100,000 of the million or so apps sent data back to more than one jurisdiction.

This is an obvious back door for data collection: the fact that data collection is restricted in one jurisdiction doesn’t mean the same data can’t end up somewhere else, which complicates any attempt at regulation.

Even so, it’s who was doing the tracking that proved the most interesting discovery, with a tiny handful of big internet companies and their subsidiaries embedded in a large percentage of apps.

Google, for example, had tracking in 88.4% of all apps, ahead of Facebook in 42.5%, and Twitter in 33.8%.

The study’s authors don’t say whether they think this ubiquity is connected to the issue of tracking children and the young specifically.

But whichever companies are doing it, the question is whether they should be, both morally and legally.

Given the relatively higher level of protection set in the law regarding profiling children for marketing, it seems that tracking is most rampant in the very context in which regulators are most concerned to constrain it.

The picture drawn by the research is of an unregulated free-for-all in which companies track whomever they want because, frankly, it’s easy to do and hard to stop.

That is starting to change, which raises the prospect that the tracking of children and the young (defined as those under 16 in general) might be a future data scandal-in-the-making.

Earlier this year, child advocacy organisations filed a complaint with the Federal Trade Commission (FTC) alleging that Google was making money by collecting data from children on YouTube.

A few weeks ago, New Mexico’s Attorney General filed a suit accusing Google, Twitter and mobile games company Tiny Labs of “commercial exploitation” along the same lines.

Internet companies find themselves facing a wave of cynicism for the way they wield power in advertising by profiling web users.

 

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/3cx9NC9-duE/

Is the US about to get a nationwide, privately owned, biometrics system?

Two US biometric companies, SureID and Robbie.AI, have partnered to research a private, nationwide biometrics system that could combine fingerprint and facial recognition data.

SureID runs a nationwide fingerprint collection system designed to make identity and background checks less painful. Users go to one of around 800 fingerprint collection stations around the US and scan their digits. A few hours later, SureID will deliver the user’s background check to their employer, landlord or whichever other authority they choose. Robbie.AI sells an AI-powered facial recognition technology.

By combining the two technologies, SureID hopes to create “the United States’ first nationwide biometrics gathering system for broad consumer-focused initiatives”. The idea is to use facial recognition to confirm that the person providing the fingerprints is legitimate.

Is it secure?

The worry with biometric authentication has always been that someone might crack it by replicating a person’s features. In the past, when companies have claimed high levels of security for their biometric systems, hackers have figured out a way past them.

For example, researchers pilfered publicly available photos online, created 3D-animated renditions that could be displayed on a smart phone, and then used them to fool facial recognition systems.

That approach wouldn’t have fooled Apple’s FaceID system. It uses projected dots and infrared imagery to create a point cloud that it translates into depth information. That means that it needs a real 3D face, rather than a 2D image of one, to grant access.

Still, that didn’t stop hackers from cracking FaceID anyway. Just a week after Apple released the device, Vietnamese security company Bkav fooled the system with a silicone mask wrapped around a 3D-printed frame, plus 2D infrared images of a person’s eyes.

The more sophisticated the system, the harder hackers must work to circumvent it. And SureID is especially confident about the security of Robbie.AI’s facial recognition because, it says, the technology is based on bone structure geometry, adding:

Even weight gain or loss, glasses, and darker rooms do not impact Robbie.AI’s results.

No matter how good the system is that scans your face, there’s always the possibility that a hack might be found in the future that fools the technology. In the meantime, though, there is another worry.

Storing your data

The system that performs the recognition is only one part of the identity management system. The other part has to store that data, and do so securely. While every organization that ever stored any biometric data anywhere will claim that it’s secure, we know that nothing is ever 100% secure, and there’s always the danger that the database itself could be compromised.

This has happened before. One of the most worrying examples came in 2014, when hackers from China compromised the Office of Personnel Management’s computers and stole the fingerprints of 5.6 million people.

Another came more recently in January this year, when one journalist purchased access to India’s Aadhaar national ID database, and another acquired access to an admin account.

Nevertheless, the companies are steaming ahead with their research. They said:

In the future, we hope to use this innovation to respond to customer issues immediately, alert people of fraud in real time, and potentially provide instant authentication to vehicles, IoT devices and smart homes.

 

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/cfmkCiYo1KA/

Facebook wants to reveal your name to the weirdo standing next to you

Not entirely unlike dogs socializing via their nether regions, Facebook’s latest idea is to wirelessly sniff out people around you and make friend suggestions based on what it finds. Only it’s slightly more intrusive than how dogs do it.

The patent, which got the go-ahead last month, is like the current People You May Know feature sprouting legs and trotting up to random strangers who have the awesome good luck of finding themselves in your proximity.

Does Facebook need yet more technology for this? It’s not as if it’s not already adept – to put it lightly – at rummaging through our everything to find ties that bind.

Take, for example, the interview published by Fusion editor Kashmir Hill a few years ago: it was with a father who attended a gathering for suicidal teens. The father was shocked to discover that following the highly sensitive meeting one of the participants duly appeared in his People You May Know feed.

The only thing the two people seemed to have in common was that they’d been to the same meeting.

According to Hill:

The two parents hadn’t exchanged contact information (one way Facebook suggests friends is to look at your phone contacts). The only connection the two appeared to have was being in the same place at the same time, and thus their smartphones being in the same room.

Hill said that Facebook’s response gave her “reportorial whiplash”: first, it suggested that location data was used by People You May Know if it wasn’t the only thing that two users have in common, then said that it wasn’t used at all, and then finally admitted that it had been used in a test late in 2015 but was never rolled out to the general public.

Introduced in 2008, People You May Know has been both remarkably accurate and extremely opaque about how it makes friend suggestions. As in, “the networks that you are a part of, mutual friends, work and education information, contacts imported using the Friend Finder,” and the murky kitchen junk drawer of “many other factors.”

The feature is designed to help users discover new connections, be they long-forgotten school chums or colleagues. Of course, besides helping people to build out their own networks, it’s also darn handy when it comes to enabling Facebook to build a treasure trove of valuable data about us and the people with whom we associate.

That kind of daisy-chaining analysis has enabled National Security Agency (NSA) agents to pull the communications of innocent people into far-reaching surveillance dragnets that snare friends of friends of actual targets, as leaked documents from Edward Snowden showed.

In 2016, Germany actually said no to all that, with the Federal Court of Justice ruling that Friend Finder constituted advertising harassment.

Patterns of movement

At any rate, to make its friend-suggesting, data-vacuuming technologies all the more data-grabby, the new patent describes a method of using the devices of Facebook app users to identify wireless signals – including Bluetooth, Z-Wave or Zigbee, NFC or PAN communications – from other users’ devices.

The patent says that the Facebook mobile app might be designed to make suggestions based on how physically close the new “friend” might be, plus how often the two people have met and how long the meetings have lasted. Or even, say, patterns of when users have likely had meetings. Imagine the possibilities: if you take the subway at a given time each day, for example, the guy who always sits across from you could pop up in your suggested friends list.

Outfitted with the technology described in the patent, the Facebook app could record not only how often devices are close to one another and meeting time and duration, but also their movement patterns. Relying on a device’s gyroscope and accelerometer to analyze movement patterns could, for example, help Facebook determine whether the two users went for a run together, strolled down the street together, or are habitually two sardines packed into that subway car together.
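The patent’s list of movement patterns (quoted below) suggests a classifier fed by the accelerometer. Here’s a toy sketch of that idea: classifying a pattern from the spread of accelerometer magnitude readings. The thresholds and the whole approach are invented for illustration, not taken from Facebook’s patent:

```python
from statistics import pstdev

# Toy illustration of the patent's idea: guess a movement pattern from
# accelerometer magnitude samples (m/s^2). Thresholds are invented.
def classify_movement(samples):
    spread = pstdev(samples)  # how much the readings vary over the window
    if spread < 0.5:
        return "stationary"   # near-constant gravity reading
    if spread < 3.0:
        return "walking"      # regular, moderate gait bounce
    if spread < 8.0:
        return "running"      # stronger periodic impacts
    return "vehicle-riding"

print(classify_movement([9.80, 9.81, 9.79, 9.80]))       # barely moving
print(classify_movement([8.0, 12.0, 9.0, 11.5, 8.5]))    # gait-like variation
```

Two phones producing matching classifications over the same window is the kind of signal that would let the app infer shared jogs, strolls or subway rides.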

We’ll make them all blips on our radar, Facebook says:

In one embodiment, the movement pattern can include at least one of a stationary pattern, a walking pattern, a running pattern, or a vehicle-riding pattern.

In one embodiment, a graphical element representing the second user can be presented on a display element of the computing system. The graphical element can be moved on the display element based on a locational proximity between the computing system and a source of the second wireless communication. The locational proximity can be determined using the signal strength data associated with the second wireless communication.

Facebook’s algorithm would crunch all this data to figure out the likelihood of two users having actually met, even if they’re not Facebook friends already and have no other virtual connections. If that algorithm finds people’s patterns of “meeting” are “sufficiently significant,” they could receive nudges about possibly becoming friends.
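The patent doesn’t publish a formula, but the inputs it names – meeting frequency, duration and regularity – lend themselves to a weighted score with a suggestion threshold. A hedged sketch in which every weight and threshold is invented:

```python
# Toy sketch of the kind of scoring the patent implies: combine how often two
# devices were near each other, for how long, and how regular the pattern is.
# All weights and the threshold are invented for this illustration.
def meeting_score(n_meetings, avg_minutes, regularity):
    """regularity: 0.0 (random encounters) to 1.0 (same time every day)."""
    return (0.5 * min(n_meetings / 10, 1.0)
            + 0.3 * min(avg_minutes / 30, 1.0)
            + 0.2 * regularity)

def suggest_friend(n_meetings, avg_minutes, regularity, threshold=0.6):
    return meeting_score(n_meetings, avg_minutes, regularity) >= threshold

# Daily commuters sharing a 25-minute subway ride at the same hour:
print(suggest_friend(n_meetings=20, avg_minutes=25, regularity=0.9))  # True
# Two strangers who passed each other once in a shop:
print(suggest_friend(n_meetings=1, avg_minutes=2, regularity=0.0))    # False
```

Capping each input keeps any single factor from dominating, which is presumably why the patent talks about patterns being “sufficiently significant” rather than any one encounter.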

This all would come in handy when you meet somebody at a cocktail party or convention, say, forget to ask for their contact information, and don’t apparently share mutual connections, Facebook suggests:

If, for example, the first user meets the second user but forgets to obtain the second user’s contact information and does not apparently share any mutual connections with the second user, it can be challenging or inefficient for the first user to search for and find the second user within the social networking service. These and other similar concerns can reduce the overall user experience associated with using social networking services.

You know who else might appreciate sniffing out people wirelessly? Cyberstalkers.

Oh, and police. This could be a good thing: a few years ago, a robbery victim identified the armed robber after Facebook suggested him as a friend. Nothing like being held at knifepoint to get “proximity” bells ringing!

It could also be yet another investigative tool brought to law enforcement courtesy of Facebook, willingly or otherwise – the platform recently scolded police for using fake accounts to snoop on citizens.

It could be any or all of those things. For now, it’s just a patent. But given Facebook’s history with suggesting friends and what it’s already admitted about trialling such proximity-based technologies, it sounds like we’ll likely see it rolled out sooner, rather than later.

Time for somebody to invent a Faraday handbag!

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/0hJEy12l8V8/