STE WILLIAMS

Stalker attacks Japanese pop singer – after tracking her down using reflection in her eyes

A Japanese man indicted on Tuesday for allegedly attacking a 21-year-old woman last month appears to have found where his victim lived by analyzing geographic details in an eye reflection captured in one of her social media photos.

According to Japanese broadcaster NHK, Hibiki Sato, 26, identified a train station the woman frequented by matching its reflection in her eyes to Google Street View imagery. He then waited at the station, followed her, and found where she lived.

Later, when the woman, identified as Ena Matsuoka, a member of a Japanese idol group, returned home after a concert on September 1, she was reportedly ambushed, assaulted, and injured by Sato, said to be a fan of her group.

According to Tokyo Reporter, Sato waited for her inside her building. He’s alleged to have located her specific apartment by analyzing videos she’d posted for the positioning of her curtains and light patterns.

Finding telling details in photos used to be the stuff of science fiction, as depicted in the famous zoom and enhance clip from Blade Runner where detective Rick Deckard scans and enhances a photograph to help his investigation. Now it’s part of the playbook for sleuths and stalkers alike.

Basic digital image forensics tools are available online at no cost. And companies like Amped Software and FDI market digital forensic tools for professional investigators, to say nothing of photo applications offered by Adobe and the like.


Speaking with the BBC, Eliot Higgins, founder of investigative website Bellingcat, said even the smallest details can reveal where photos were taken and other information.

Bellingcat made a name for itself by using open source intelligence – online photos, public data sources, crowdsourcing, and so on – to investigate events like the downing of Malaysia Airlines Flight 17 in 2014 and the poisoning of Sergei Skripal in 2018.

Many online services, including Facebook, Instagram, and Twitter, strip the Exif metadata – a potential privacy risk – that digital cameras add to photo files, though some, like Google Photos and iCloud Photos, preserve it.
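For context on what that metadata can reveal: an Exif GPS tag stores latitude and longitude as degree/minute/second values plus a hemisphere reference, which converts to map-ready decimal coordinates in a couple of lines. A minimal sketch (the coordinates here are invented for illustration):

```python
def exif_gps_to_decimal(degrees: float, minutes: float, seconds: float, ref: str) -> float:
    """Convert Exif-style GPS degrees/minutes/seconds to decimal degrees.

    ref is 'N'/'S' for latitude or 'E'/'W' for longitude; southern and
    western hemispheres become negative values.
    """
    value = degrees + minutes / 60 + seconds / 3600
    return -value if ref in ("S", "W") else value

# A hypothetical geotag roughly matching central Tokyo:
print(round(exif_gps_to_decimal(35, 40, 52.0, "N"), 4))  # 35.6811
```

Paired with a mapping service, a single preserved tag like this pinpoints where a photo was taken.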

Telling details captured in reflections won’t be so easily suppressed. But there’s an opportunity for some large cloud photo service to come up with a machine learning algorithm capable of blurring the identifying details that show up in the reflections captured in digital images. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/10/10/stalker_japan_eyes/

Check Out New Cybersecurity Tools in the Black Hat Europe Arsenal

Black Hat Europe returns to ExCeL London December 2-5, bearing a cornucopia of intriguing cybersecurity tools in its Arsenal.

Need some new cybersecurity tools? Check out the Arsenal at Black Hat Europe in London this December, where you can see live demonstrations of the latest open-source security tools!

Taking place once again at ExCeL London, this year’s Black Hat Europe event will feature an Arsenal fully loaded with tools to help you tackle everything from repelling malware to cracking passwords.

For example, you could catch a demonstration of CrackQ: Intelligent Password Cracking, a Python-based queuing system for managing hash cracking with Hashcat. CrackQ was born from the frustration of using similar tools, and it adds new features designed to address those pain points. So swing by and check it out.

AVCLASS++: Yet Another Massive Malware Labeling Tool will introduce you to AVCLASS++, an improved open-source successor to the AVCLASS malware-labeling tool that might be just what you need to stay on top of your work. AVCLASS++ is built to label malware more accurately than the original, and even if you have never used AVCLASS, it is designed to be easy to pick up and to support both practitioners (such as SOC operators, CSIRT members, and malware analysts) and academic researchers.

Speaking of research, don’t overlook the more education-focused tools in the Black Hat Europe Arsenal. Haaukins: A Highly Accessible and Automated Virtualization Platform for Security Education, for example, lets you try out ethical hacking and penetration testing with Kali Linux through a browser. It makes it possible to run training for even large groups without installing virtual environments or other tools; participants work from their own laptops in the web browser of their choice and have access within a couple of minutes.

Find further details on these and many other cool tools in the Arsenal lineup for Black Hat Europe, which returns to ExCeL London December 2-5, 2019. For more information on what’s happening at the event and how to register, check out the Black Hat website.

Article source: https://www.darkreading.com/black-hat/check-out-new-cybersecurity-tools-in-the-black-hat-europe-arsenal/d/d-id/1336057?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

How to Think Like a Hacker

In the arms race of computer security, it’s never been more important to develop an adversarial mindset that can identify assumptions and determine if and how they can be violated.

Computer security is a unique field. Unlike fields in which the challenge is the scale of a problem or the complexity of an algorithm, in computer security the challenge is the wit of another human being who is trying to carry out an attack in order to compromise and disrupt a computing infrastructure.

Because of its adversarial nature, computer security is in continuous evolution. As in many game-theoretic models, every move by either an attacker or a defender changes the state of the game and might invalidate current defenses or foil future attacks. In this arms race, everything evolves, all the time, and anticipating possible threats becomes of paramount importance.

Therefore, security practitioners need to always think as an adversary, or, essentially, “think like a hacker.”

This mindset is necessary during the response to an actual attack, in order to understand the tools, techniques, and goals of the attacker, based on the information collected in the field. But it’s also important for security pros to continuously work on the skills they need to anticipate possible new attacks in the future.

But can someone actually learn how to think like a hacker? The answer is absolutely, “yes.”

To start, a security professional needs to study past attacks in order to understand the common patterns attackers follow during the compromise of a network. There are many lessons to be learned from understanding even rather old attacks. For example, the book “The Cuckoo’s Egg,” which was published 30 years ago (before the advent of the Web), describes several techniques that we see today in many sophisticated, state-sponsored attacks. These include the creation of backdoors, lateral movement, and intelligence gathering.

In addition, thanks to the Internet, there is now an enormous amount of information about the tools and techniques used by cybercriminals. Nowadays, this information is collected and shared among organizations using a number of different tools and standards — among them, MITRE’s ATT&CK framework.

What’s even more important is the need for security professionals to develop vulnerability analysis skills. Vulnerability analysis is the process of analyzing a networked system to identify possible security problems. While there are a number of scanning tools that can be used for network analysis, an in-depth analysis requires a more holistic approach that takes into account the design of the network, its goals, and its actual configuration. Given this information, it is then necessary to identify the underlying assumptions of the system’s design, especially the undocumented ones.

For example, the developers of a web-accessible service that uses a back-end component, providing a functional API, might have assumed that the API endpoints will always be invoked through the Internet-facing web application, following the workflow defined by the user interface. However, a misconfiguration might provide direct access to the back-end server, giving the attacker the ability to invoke the API endpoints that implement the service directly, without necessarily following the workflow enforced by the web application. This could result in an authentication bypass.
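The scenario above can be sketched in a few lines. Everything here is hypothetical and invented for illustration – the service, the method names, and the toy “authentication” – but it shows the undocumented assumption being violated:

```python
class Backend:
    """Hypothetical back-end service whose API assumes every call has
    already passed through the web front end's login workflow."""

    def get_account_balance(self, user_id):
        # No authentication check here: the designers assumed the
        # front end would never forward unauthenticated requests.
        return {"user": user_id, "balance": 1000}


class WebFrontend:
    """The Internet-facing application that enforces the workflow."""

    def __init__(self, backend):
        self.backend = backend
        self.sessions = set()

    def login(self, user_id, password):
        if password == "correct-password":  # stand-in for real auth
            self.sessions.add(user_id)

    def get_account_balance(self, user_id):
        if user_id not in self.sessions:
            raise PermissionError("not logged in")
        return self.backend.get_account_balance(user_id)


backend = Backend()
front = WebFrontend(backend)

# Through the intended workflow, an unauthenticated request is refused...
try:
    front.get_account_balance("alice")
except PermissionError:
    print("front end blocks the request")

# ...but if a misconfiguration exposes the back end directly, the same
# request succeeds, bypassing authentication entirely.
print(backend.get_account_balance("alice"))
```

Spotting that the back end, not the front end, is the real trust boundary is exactly the kind of assumption-hunting the article describes.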

In general, security professionals need to develop “oblique thinking,” which is an adversarial mindset that focuses on identifying assumptions and determining if and how these assumptions can be violated. One way to develop this mindset is by participating in hacking competitions – or capture-the-flag (CTF) competitions, as they are often called. These competitions, which were once few and incredibly selective – if not secretive – have become mainstream and have diversified to support the training of security professionals at any experience level.

By trying to solve security challenges in a variety of settings and topics — from binary analysis to memory and file system forensics to web security and cryptography — security professionals acquire skills and mindsets that are useful in the vulnerability analysis of real-world systems. Even though a systematic, well-structured learning experience cannot be replaced by playing CTFs, these competitions provide a motivation for acquiring new security skills in an entertaining, competitive setting.

Since security professionals need to think like hackers, leveraging hacking challenges is a fun way to acquire this new mindset.

Related Content:

This free, all-day online conference offers a look at the latest tools, strategies, and best practices for protecting your organization’s most sensitive data. Click here for more information and to register.

Dr. Giovanni Vigna leads technology innovation at Lastline. He has been researching and developing security technology for more than 20 years, working on malware analysis, web security, vulnerability assessment, and intrusion detection. He is a professor in the Department of … View Full Bio

Article source: https://www.darkreading.com/vulnerabilities---threats/how-to-think-like-a-hacker/a/d-id/1335989?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Works of Art: Cybersecurity Inspires 6 Winning Ideas

The Center for Long Term Cybersecurity recently awarded grants to six artists in a contest to come up with ideas for works with security themes and elements. Check ’em out.

(Image: kirasolly, via Adobe Stock)

Powerful art can cause upset as easily as it can soothe. At its most effective, art fosters conversation and changes the ways we think about challenging topics – like cybersecurity.

That, in a nutshell, is the driving force behind a series of grants the Center for Long Term Cybersecurity (CLTC) recently awarded to six artists in a contest to come up with ideas for works with security themes and elements.

“Cybersecurity works like an emergency room, where you fix it up and send it back out,” observes Ann Cleaveland, executive director of CLTC, part of UC Berkeley. “We want the world to take a longer-term view.” The winning artists’ proposals are intended to invite viewers into that perspective, she adds.

Projects were considered for their potential to illuminate the human impacts of security and provoke critical dialogue about important issues, such as privacy, surveillance, cyberattacks, and malware, according to the center. Winning artists will receive anywhere from $5,000 to $25,000, and CLTC will help the artists show and publicize their works, which will debut sometime in 2020.

The Cybersecurity Arts Contest was launched as part of the Daylight Security Research Lab, a new initiative and offshoot of CLTC that aims to shift how people identify and understand technology’s potential ills. “The imagery we have — the hacker and the hoodie, or green screens of code streaming by — doesn’t really resonate with security professionals,” says Nick Merrill, director of the lab.

And outside the world of security, the images don’t foster understanding or insight. “Our biggest hope is that the artists will create works that change the way people think about security — and how decision-makers approach it,” Merrill adds.

Contest entries showed tremendous creativity and will challenge norms and assumptions about security, Merrill adds. “It’s an experiment … no one’s done anything like this, so we don’t know what will come of it,” he says.

Browse on for the six winning projects — and make sure to add your own artsy reactions and high-falutin’ interpretations in our “Comments” section, below. We may just award you your own grant.

Terry Sweeney is a Los Angeles-based writer and editor who has covered technology, networking, and security for more than 20 years. He was part of the team that started Dark Reading and has been a contributor to The Washington Post, Crain’s New York Business, Red Herring, … View Full Bio

Article source: https://www.darkreading.com/works-of-art-cybersecurity-inspires-6-winning-ideas/d/d-id/1336055?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Akamai Snaps Up ChameleonX to Tackle Magecart

The Israel-based ChameleonX aims to protect websites from cyberattacks targeting payment data.

Akamai has confirmed plans to acquire ChameleonX, an Israel-based security startup with a strong focus on securing websites against attacks targeting payment and personal information.

The goal is to accelerate development of a tool to protect websites from Magecart attacks, an increasingly common threat in which adversaries inject malicious code into a target website so they can steal credentials and payment card data at checkout. ChameleonX’s platform is designed to detect this code on a victim system early on, before significant damage is done.

By acquiring ChameleonX’s team and technology, Akamai plans to develop a product that detects and blocks active attacks without interfering with the user experience, officials state.

ChameleonX was founded in 2016 and has raised $1.9 million over two rounds of funding. Terms of the deal were not disclosed; however, sources familiar with the matter have told Nasdaq it’s valued at nearly $20 million. Closing is expected in the fourth quarter of 2019.

Read more details here.

Check out The Edge, Dark Reading’s new section for features, threat data, and in-depth perspectives. Today’s top story: “Can the Girl Scouts Save the Moon from Cyberattack?”

Dark Reading’s Quick Hits delivers a brief synopsis and summary of the significance of breaking news events. For more information from the original source of the news item, please follow the link provided in this article. View Full Bio

Article source: https://www.darkreading.com/cloud/akamai-snaps-up-chameleonx-to-tackle-magecart/d/d-id/1336058?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Imperva Details Response to Customer Database Exposure

The cloud security’s CEO and CTO lay out the timeline of events and the steps customers should take to protect their accounts.

Imperva today released details about an October 2018 intrusion into a database containing records on customers of its cloud Web application firewall (WAF), formerly known as Incapsula. According to a blog post from CEO Chris Hylen, a database snapshot created for testing was left on an internal compute instance that was accessible from outside the company. When the compute instance’s Amazon Web Services API key was compromised, a malicious actor was able to copy the database.

Within the blog post, CTO Kunal Anand noted that emails and hashed and salted passwords for a subset of WAF customers were exposed. The incident was discovered by a third party and then verified by Imperva, which announced the attack Aug. 27, 2019.

A number of new protection steps have since been taken, Hylen said, including decommissioning inactive compute instances, rotating credentials, strengthening credential management processes, and putting all internal compute instances behind a VPN by default.

The blog post also offers recommendations to Imperva customers, including changing cloud WAF passwords, enabling two-factor authentication, and resetting API keys.

Read more here.

This free, all-day online conference offers a look at the latest tools, strategies, and best practices for protecting your organization’s most sensitive data. Click here for more information and to register.

Dark Reading’s Quick Hits delivers a brief synopsis and summary of the significance of breaking news events. For more information from the original source of the news item, please follow the link provided in this article. View Full Bio

Article source: https://www.darkreading.com/cloud/imperva-details-response-to-customer-database-exposure/d/d-id/1336063?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

AppSec ‘Spaghetti on the Wall’ Tool Strategy Undermining Security

At many organizations, the attitude to securing software appears to be throwing a lot of technology at the problem, a new study finds.

New research suggests that the strategy for many companies to reduce application security risk is to simply stack up on multiple tools and hope they do the job.

Radware recently surveyed some 300 senior executives, security researchers, app developers, and IT professionals from organizations with worldwide operations. The survey focused on the types of application security technologies organizations are deploying, who is responsible for the AppSec function, and the most prevalent threats, among other topics related to Web application security.

The security vendor discovered that a high percentage of organizations are using an array of technologies – not always optimized for interoperability – to try to keep AppSec risks low.

Seventy-five percent of those surveyed had a Web application firewall (WAF), 63% a cloud WAF service, 59% did code reviews, and 53% were using tools for dynamic application security testing (DAST), static application security testing (SAST), and runtime application self-protection (RASP). More than half of those using containers also had container security tools, including tools specific to Docker.

“While this may sound promising, it feels like organizations are taking the ‘spaghetti on the wall’ approach to application security,” Radware said in a blog post this week. “They hope that having multiple solutions in place will do the job.”

And Radware’s data showed that for a majority of companies, that strategy is not working especially well, at least from a breach-mitigation standpoint. Ninety percent of the organizations Radware surveyed had experienced an application security-related data breach, and nearly as many – 88% – reported application-level attacks throughout the year. The most common security issues included access violations, SQL injection, DoS and protocol attacks, session/cookie poisoning, API manipulation, and cross-site request forgery.

“We found that while embracing emerging technologies and concepts — and following all security practices — attacks happen [because] organizations struggle to adjust the required structures, roles, processes, and skillsets,” says Ben Zilberman, senior product marketing manager at Radware.

Containers, Microservices Proliferate

Radware found a majority of organizations have moved away from a predominantly monolithic application model toward architectures oriented around microservices, containers, and serverless infrastructures.

More than two-thirds (67%) had deployed microservices/containers, and 90% had a DevOps or DevSecOps team in place. Some of the organizations with DevSecOps teams said they had at least one DevSecOps professional for every six software developers. Others pegged the ratio at one for every 10.

The data suggests that many organizations are embracing new technologies and approaches to keep up with broader digital transformation goals. But attitudes toward security have yet to catch up. DevOps teams focused on agility often settle for a “good enough” or even a “hell no” approach to security, Radware said.

Not surprisingly, while it’s the CISO or CSO who’s primarily responsible for enterprise security, at many organizations they are not the ones calling the shots on application security. Radware found that the broader IT department is still the main influencer for security tools selection, policy definition, and application security implementation. When it comes to tool selection, in fact, Radware found the CISO has less of a say than IT, the business owner, and the DevOps team.

“The fact that DevOps and security are still equal powers in terms of influence and [that] it’s still IT that has the most weight in decision making” is surprising, says Zilberman.

False confidence in technology is another issue. Sixty-seven percent of respondents in the Radware survey described open-source code as more secure – though many also identified it as one of the primary sources of security vulnerabilities in software. Sixty-eight percent felt that microservices provide better security, and 77% believed that going serverless would help improve proactive defense capabilities.

The biggest mistake that organizations are making is assuming that technology itself can solve all the problems, Zilberman notes. “To make the best use, they should engage security professionals better and let them be business enablers rather than pushing them off,” he says.

Related Content:

Check out The Edge, Dark Reading’s new section for features, threat data, and in-depth perspectives. Today’s top story: “Works of Art: Cybersecurity Inspires 6 Winning Ideas”

Jai Vijayan is a seasoned technology reporter with over 20 years of experience in IT trade journalism. He was most recently a Senior Editor at Computerworld, where he covered information security and data privacy issues for the publication. Over the course of his 20-year … View Full Bio

Article source: https://www.darkreading.com/application-security/appsec-spaghetti-on-the-wall-tool-strategy-undermining-security/d/d-id/1336064?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

iTunes Zero-Day Exploited to Deliver BitPaymer

The ransomware operators targeted an “unquoted path” vulnerability in iTunes for Windows to evade detection and install BitPaymer.

Ransomware operators have been seen exploiting a zero-day vulnerability in iTunes for Windows to slip past security tools and infect victims with BitPaymer, researchers report.

Back in August, the Morphisec team noticed attackers targeting the network of an enterprise in the automotive industry. The researchers shared their discovery with Apple, and a patch is now available. Businesses and consumers should take note: Apple will sunset iTunes for Mac with the release of macOS Catalina this week, but Windows users will continue to rely on iTunes.

BitPaymer operators are sophisticated and savvy in launching attacks. A month before Morphisec discovered the iTunes zero-day, its researchers saw the group creating new variants of the ransomware before planting them on a target network, making detection much more difficult. This group carefully chooses its victims and sits on the network for a while before it strikes.

Now the same attackers are taking advantage of an “unquoted path” vulnerability in the Bonjour updater that comes bundled with iTunes for Windows. This is a well-known flaw that has been identified by vendors for more than 15 years but is rarely seen in active attacks.

“I had never seen this used in the wild,” says Morphisec CTO Michael Gorelik of the bug, which is usually mentioned in the context of privilege escalation because it exists in a service or other process with administrative execution rights. It’s so well-documented that programmers should be aware of it; however, as researchers say in a blog post on their findings, this is not the case.

The “unquoted path” vulnerability stems from a simple programming mistake: developers assume that assigning a path to a string variable is enough, when in fact a path containing spaces must be surrounded by quotes, researchers report.

Bonjour, a mechanism Apple uses to deliver software updates, includes one of these unquoted paths. It has its own installation entry in the installed software section and a scheduled task to execute the process. Most people who uninstall iTunes are unaware they also need to uninstall Bonjour. As a result, they continue to run the updater task. Morphisec researchers note they were surprised by the number of computers across enterprises that continue to run Bonjour.

These factors combine to create an opportunity for attackers to break in and bypass security tools, many of which are based on behavior monitoring. Because Bonjour is signed and known, its execution of a new malicious process will generate an alert with a lower confidence score. Further, as the malicious “Program” file didn’t have an extension, security tools may not scan it.

In the attack Morphisec observed, Bonjour was attempting to run from the “Program Files” folder. Because of the unquoted path vulnerability, it instead ran the BitPaymer ransomware, which was hidden under the name “Program,” effectively sneaking past security products.
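A simplified sketch of why the missing quotes matter: when Windows receives an unquoted command line whose path contains spaces, it treats each space as a possible end of the executable name and tries the candidates left to right, so a file planted at C:\Program can run instead of the intended binary. The helper below only simulates that lookup order; it is an illustration, not Windows’ actual implementation, which has additional rules (extension handling, search paths, and so on):

```python
def createprocess_candidates(command_line: str) -> list:
    """Simulate the order in which Windows interprets an unquoted
    command line: each space is a possible end of the executable
    path, tried left to right. A quoted path is unambiguous."""
    if command_line.startswith('"'):
        # Everything up to the closing quote is the program path.
        return [command_line[1:command_line.index('"', 1)]]
    tokens = command_line.split(" ")
    return [" ".join(tokens[: i + 1]) for i in range(len(tokens))]

unquoted = r"C:\Program Files\Bonjour\mDNSResponder.exe"
print(createprocess_candidates(unquoted))
# A file planted at C:\Program is tried before the intended binary.

quoted = r'"C:\Program Files\Bonjour\mDNSResponder.exe"'
print(createprocess_candidates(quoted))
# Only the intended binary is a candidate.
```

This is why the malicious file in the attack could be named simply “Program”: it sat exactly where the first candidate path pointed.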

“This is a very smart way to bypass security products without creating a fuss,” Gorelik says. Most antivirus products only scan specific file extensions so as to not limit device performance. The group must have done serious reconnaissance in order to stay ahead of the defenders, according to the blog post.

Related Content:

Check out The Edge, Dark Reading’s new section for features, threat data, and in-depth perspectives. Today’s top story: “Works of Art: Cybersecurity Inspires 6 Winning Ideas”

Kelly Sheridan is the Staff Editor at Dark Reading, where she focuses on cybersecurity news and analysis. She is a business technology journalist who previously reported for InformationWeek, where she covered Microsoft, and Insurance Technology, where she covered financial … View Full Bio

Article source: https://www.darkreading.com/endpoint/itunes-zero-day-exploited-to-deliver-bitpaymer/d/d-id/1336065?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

California outlaws facial recognition in police bodycams

On Tuesday, California signed into law a three-year ban on the use of facial recognition in police bodycams – technology that turns the cameras into biometric surveillance devices.

This isn’t surprising, coming as it does from the state with the impending, expansive privacy law – California’s Consumer Privacy Act (CCPA) – that’s terrifying data mongers.

In May, San Francisco became the first major US city to ban facial recognition. It may be a tech-forward metropolis, in a state that’s the cradle of massive data-gobbling companies, but lawmakers have said that this confers a responsibility to rein in the privacy transgressions of the companies headquartered there.

When facial recognition gets outlawed, lawmakers point to the many tests that have found high misidentification rates. San Francisco pointed to the ACLU’s oft-cited test that falsely matched 28 members of Congress with mugshots.

The ACLU of Northern California repeated that test in August, finding that the same technology misidentified 26 state lawmakers as criminal suspects.

One of the misidentified was San Francisco Assemblyman Phil Ting, the lawmaker behind the bill that passed and which was signed into law by Gov. Gavin Newsom on Tuesday: AB1215.

The law, which goes into effect on 1 January 2020 and which expires on 1 January 2023, prohibits police from “installing, activating, or using any biometric surveillance system in connection with an officer camera or data collected by an officer camera.”

The law cites the threat to civil rights posed by the pervasive surveillance of facial recognition bodycams:

The use of facial recognition and other biometric surveillance is the functional equivalent of requiring every person to show a personal photo identification card at all times in violation of recognized constitutional rights. This technology also allows people to be tracked without consent. It would also generate massive databases about law-abiding Californians, and may chill the exercise of free speech in public places.

…and noted the technology’s tendency to screw up:

Facial recognition and other biometric surveillance technology has been repeatedly demonstrated to misidentify women, young people, and people of color and to create an elevated risk of harmful ‘false positive’ identifications.

There are many cases in point when it comes to this error-prone technology. Here’s one: After two years of pathetic failure rates when they used it at Notting Hill Carnival, London’s Metropolitan Police finally threw in the towel in 2018. In 2017, the “top-of-the-line” automatic facial recognition (AFR) system they’d been trialling for two years couldn’t even tell the difference between a young woman and a balding man.

Facial recognition failure hasn’t stopped the UK from signing up with Singapore to collaborate on developing a digital identity, mind you. As part of its Gov.uk Verify scheme, the UK Government Digital Service launched a system of biometric payment for government services earlier this year. For its part, France is set to implement a nationwide facial recognition ID program next month, in spite of protests from privacy groups and from its independent data regulator, CNIL.

London’s history of failure with the technology is underscored by an oft-cited study from Georgetown University’s Center for Privacy and Technology that found that AFR is an inherently racist technology. Black faces are over-represented in face databases to begin with, and FR algorithms themselves have been found to be less accurate at identifying black faces.

In another study published earlier this year by MIT Media Lab, researchers confirmed that the popular FR technology they tested has gender and racial biases.

Ting said the ACLU’s recent test – the one that labelled him a potential suspect – confirmed all of these findings. Besides, pervasive, error-prone surveillance that could lead, and has led, to the arrest of innocent people isn’t what police bodycams were intended for, he said: they’re supposed to be a tool for police accountability. The San Francisco Chronicle quoted the lawmaker following the passage of his bill in the legislature:

Let’s not become a police state. [Police bodycams should be used] as they were originally intended – to provide police accountability and transparency.

Why just 3 years?

The bill originally called for a permanent ban, but Ting scaled it back to three years – a period after which it may be renewed – due to the protest of law enforcement groups. They argued that a full-out ban on the technology would rob them of a vital crime-solving tool: one that could be used to identify repeat offenders, to solve cold cases and old crimes, and to deter future crime.

Police groups said that facial recognition could help identify criminals at large events – similar to how China has used it to pick out suspects as they travel during the Lunar New Year, for example. San Francisco Chronicle quoted what the California Police Chiefs Association told lawmakers:

Prohibiting the use of biometric surveillance systems severely hinders law enforcement’s ability to identify and detain suspects of criminal activity.

…while critics of facial recognition say that the public is jeopardized by that same technology. From ACLU technology and civil liberties attorney Matt Cagle:

Unleashing this inaccurate and racially biased technology on police body cameras …would undoubtedly lead to unjust arrests and even death.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/bY53XoZiIlc/

Twitter used 2FA phone numbers for targeted advertising

Does Twitter know your email address and your phone number?

Depending on how long ago you started using Twitter, it’s a near certainty the company has at least one of these – the email address – because people often hand that over when registering.

As for phone numbers (usually mobile numbers) these are entered to enable Twitter’s two-factor authentication (2FA) security, Login Verification.

We mention this because Twitter this week made the you-have-to-be-kidding admission that it might have “inadvertently” handed this data from some users to advertisers as part of the company’s Tailored Audiences system, which targets users’ feeds with ads.

As apologies go, this one is unsatisfactory, particularly if you like Twitter but think ‘targeted’ ads sound intrusive:

We’re very sorry this happened and are taking steps to make sure we don’t make a mistake like this again.

Twitter glosses over some of the detail so let’s explain how Tailored Audiences is supposed to work.

Well-tailored

As many Twitter users will already know to their chagrin, Twitter posts ads to people’s feeds in the form of Promoted Tweets.

The advertiser logs into their ad account, chooses the Twitter demographic it wants to reach (country, language, device type, gender, and people who’ve tweeted about topics that interest the advertiser). The ad then appears in the feed of users meeting these criteria.

However, Twitter’s admission relates to a second type of targeting that sounds incredibly similar to what Facebook was accused of doing a year ago – allowing advertisers to match Twitter’s data to their own databases, not simply to target users but, hypothetically, to identify them too.

What Twitter describes as being “inadvertent” is in fact described quite explicitly on its website on a page for advertisers.

The advertiser logs into their ad account, this time uploading their own user list which is then matched to Twitter users with the same email addresses and mobile numbers (Android or iOS advertising IDs and Twitter handles can also be used).

So, when the ad appears in someone’s feed, it’s been put there because the advertiser already knows something about that person and believes the message will be better received.
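Audience matching of this kind typically works by normalizing and hashing identifiers on both sides, then intersecting the results. The sketch below is a generic illustration under that assumption – it is not a description of Twitter’s actual pipeline, and all identifiers are invented:

```python
import hashlib

def match_key(identifier: str) -> str:
    """Normalize an email or phone identifier and hash it for matching."""
    return hashlib.sha256(identifier.strip().lower().encode()).hexdigest()

# Platform side: identifiers users supplied (e.g. for 2FA), keyed by hash.
platform_users = {
    match_key("alice@example.com"): "@alice",
    match_key("+15551230001"): "@bob",
}

# Advertiser side: its own customer list, hashed the same way.
advertiser_list = [match_key("alice@example.com"),
                   match_key("carol@example.com")]

# The intersection decides whose feeds receive the targeted ad.
matched = [platform_users[h] for h in advertiser_list if h in platform_users]
print(matched)  # ['@alice']
```

Note that even though only hashes change hands, the advertiser learns which of its known customers are active on the platform – which is why such matching carries privacy risk.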

Owning up

Twitter said that as of 17 September, it no longer allowed access to mobile numbers or email addresses (the latter of which can still be used by other Twitter users to hunt for you unless you turn that feature off).

We cannot say with certainty how many people were impacted by this, but in an effort to be transparent, we wanted to make everyone aware. No personal data was ever shared externally with our partners or any other third parties.

Of course, the fact that Twitter didn’t let advertisers see phone numbers and email addresses is moot if advertisers can infer them by matching their own databases against Twitter’s.

The involvement of mobile numbers entered by users to enable security is unfortunate, but we wouldn’t advise removing this data in case it proves useful should an account recovery become necessary.

Twitter has no plans to tell users if they’re part of this mini-scandal. For now, users who want to know more should contact the company using its data protection query page.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/28JZN9DLRbI/