
US midterms barely over when Russians came knocking on our servers (again), Democrats claim

Russian hackers attempted to infiltrate the Democratic National Committee (DNC) just after the US midterm elections last year, according to a new court filing.

The attack in November 2018 was previously reported as targeting a number of organizations including law enforcement, defense contractors, and media companies, but the filing this week claims that the DNC was also a direct target.

“On November 14, 2018, dozens of DNC email addresses were targeted in a spear-phishing campaign, although there is no evidence that the attack was successful,” an amended complaint, filed late Thursday in New York, states.

The filing [PDF] is part of an ongoing lawsuit against Russia for hacking the DNC during the 2016 presidential election, in which emails from the Clinton campaign’s chairman were stolen and passed to WikiLeaks, which posted them online.

In this new filing, the DNC says that the content and the timing of the emails mean it was targeted as part of a wider phishing campaign. The hacking effort has previously been attributed to Cozy Bear, a hacking group connected to Russian intelligence and also thought to be behind the 2016 DNC hack.

“It is probable that Cozy Bear again attempted to unlawfully infiltrate DNC computers in November 2018,” the filing states. The new claim is the fourth addition to the lawsuit alleging that Russian hacking attempts have continued beyond the 2016 elections.

While the lawsuit does not claim that President Trump or his campaign team knew about either hacking attempt, it references the Trump campaign’s and the president’s repeated denials of links with Russian intelligence figures.

Those denials have increasingly rung hollow over time as various figures, including Trump campaign manager Paul Manafort, personal lawyer Michael Cohen, and national security advisor Michael Flynn, have admitted lying about their Russian contacts.

Kushner, Assange, Manafort

The lawsuit does not name Donald Trump personally but does include his son-in-law Jared Kushner, Paul Manafort and WikiLeaks founder Julian Assange. It strongly implies that the Trump campaign was part of a broader conspiracy with the Russian government to influence the US political system.


DNC chairman Tom Perez said previously of the lawsuit: “This constituted an act of unprecedented treachery: the campaign of a nominee for president of the United States in league with a hostile foreign power to bolster its own chance to win the presidency.”

In typically robust language, the Trump campaign has dismissed the case as a “sham lawsuit about a bogus Russian collusion claim filed by a desperate, dysfunctional and nearly insolvent Democratic Party.”

Meanwhile the Russian government has responded in now-familiar troll-like fashion, denying it was behind the hacking efforts but making a point of stating that even if it was behind them, it would be immune from prosecution because of sovereign immunity rules.

The logic behind bringing the lawsuit appears to follow a similar suit filed back in 1972 by the Democratic Party against President Nixon’s re-election campaign over the Watergate break-in. That lawsuit provided a legal route to raise accusations and introduce evidence against Nixon, and was a key factor in the president’s eventual resignation.

The filing in this case – a second amended complaint – is effectively a 111-page record of Russian efforts to disrupt the Democratic Party and of the shifting positions that the Trump campaign has taken with respect to the allegations. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/01/18/russia_hack_democrats/

The Rx for HIPAA Compliance in the Cloud

For medical entities, simply following HIPAA cloud service provider guidelines is no longer enough to ensure that your practice is protected from cyber threats, government investigations, and fines.

Securing healthcare IT is really hard! It’s akin to navigating a barbed-wire minefield surrounded by an endless sea infested with fast, hungry sharks. And it’s raining.

That may sound a bit flippant — and it is — but it’s also an insightful way to describe the frustrating level of complexity involved in making digital healthcare information more secure and legally compliant. At best, your chance of success is always questionable, and even small missteps come with severe penalties.

About five years ago, the Department of Health and Human Services (HHS) expanded the Health Insurance Portability and Accountability Act of 1996 (HIPAA), a set of laws aimed at providing continuous healthcare coverage and controlling the electronic transmission of healthcare data. The expansion laid out the responsibilities of covered entities (CEs), such as medical practices and insurance companies, and business associates (BAs), referring to technology providers and vendors.

In 2016, the law got even more detailed, with guidance on how BAs and cloud service providers (CSPs) should behave in order to be suitably “cyber secure.” The guidance was very thorough, and very stiff penalties were set up for things like data breaches, data loss, and data theft. Since that time, investigations and legal charges have been ongoing – and increasing in volume. If you are curious, you can take a look at the current list of open cases under investigation by the Office for Civil Rights, which has the job of policing HIPAA-covered cyber events.

Recently, a healthcare customer of mine described HIPAA cloud compliance as a big boat that’s been hastily built and not tested on the open water. Holes spring up everywhere and, when you plug one, another opens up at the same time.

With all these holes, simply following HIPAA CSP guidelines is not nearly enough to ensure that your company or practice is protected from cyber threats, government investigations, and fines. Each entity involved in handling electronic healthcare information must continuously take extra precautions to keep their risk levels as low as possible.

If those companies (or their IT providers) were to take a look at the cloud networks and client-accessible web portals where they store data and host services, they would often find some interesting and unexpected “meetings” going on between their networks and external, often high-risk, IP addresses and networks around the globe.

In other words, if you look at the servers your cloud apps and networks are talking to behind your back and assess them for their cyber threats, you’ll find you’re opening yourself up to all sorts of potential badness — and HIPAA-policed things like breaches — via vectors like Tor, ransomware, phishing, malware, and botnet-delivered attacks.

By using focused IP and network threat intelligence, healthcare companies and their technology providers can add simple but effective protection measures to more traditional firewall approaches that automatically block access to their networks, data, sites, and apps before the bad things occur.

How do you get started? Here’s a checklist to cover the fundamentals:

  • Make sure all your apps, websites, and other networked assets are sufficiently logging inbound and outbound network connections to grab IPs that attempt or make connections.
  • Ensure you have adequate tooling to search, alert, and report on the IP and network data you’re collecting with a SIEM or similar tool.
  • Use a quick and accurate service or tool to identify, score, and “rack and stack” the threat profiles of all the collected IP addresses by things like blacklist type and volume, type of threat with which the IPs have been associated, and recency of incidents.
  • Proactively and aggressively provide feedback about the worst and most threatening offenders to your network and web application firewalls for block-listing.
  • Monitor collected IP data over time to catalog baseline patterns and anomalies that warn of current or future incidents.

These methods will help develop a core process you can rely on to profile “normal” traffic patterns between your cloud and external services in order to spot suspicious communications patterns more quickly, which will lead to better threat response and mitigations downstream in your environments before data goes missing. For example, if your cloud all of a sudden starts talking to strange servers in China for a week, you can immediately start looking around for another hole in the boat. At the very least, it’s a sign of potential problems to come. 
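To make that loop concrete, here’s a minimal Kotlin sketch of the tally-score-alert step from the checklist above. It’s an illustration under stated assumptions, not a production tool: the file names (“outbound.log”, “threat-feed.txt”) are invented, the threat feed is assumed to be exported as one known-bad IP per line, and a real deployment would push the results to a SIEM and the firewall’s block list rather than print them.

```kotlin
import java.io.File

fun main() {
    // Hypothetical threat feed: one known-bad IPv4 address per line,
    // e.g. exported from an IP-reputation service.
    val threatFeed = File("threat-feed.txt").readLines()
        .map(String::trim)
        .filter(String::isNotEmpty)
        .toSet()

    // Tally every IPv4 address that appears in the connection log.
    val ipRegex = Regex("""\b(?:\d{1,3}\.){3}\d{1,3}\b""")
    val hitCounts = mutableMapOf<String, Int>()
    File("outbound.log").forEachLine { line ->
        ipRegex.findAll(line).forEach { match ->
            hitCounts.merge(match.value, 1, Int::plus)
        }
    }

    // "Rack and stack": keep only IPs on the feed, worst offenders first.
    hitCounts.filterKeys { it in threatFeed }
        .toList()
        .sortedByDescending { (_, count) -> count }
        .forEach { (ip, count) ->
            // In practice: send to the SIEM and the firewall's block list.
            println("ALERT: $ip seen $count times in outbound traffic")
        }
}
```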

Too often, medical practitioners and healthcare companies assume that technology providers bear all the burden (and legal risk) for protecting them from the dangers of HIPAA violations. In actuality, the burden to protect businesses and health IT practices is still very much on the non-technical covered entities.


Jason Polancich is co-founder, app designer and digital marketing lead for Musubu.io. Polancich is also a linguist, software engineer, data scientist, and intelligence analyst. He originally founded HackSurfer/SurfWatch Labs (pre-VC), a cyber analytics firm, in 2013.

Article source: https://www.darkreading.com/vulnerabilities---threats/the-rx-for-hipaa-compliance-in-the-cloud/a/d-id/1333657?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

PCI Council Releases New Software Framework for DevOps Era

The PCI Software Security Framework will eventually replace PCI PA-DSS when the latter expires in 2022.

This week the PCI Security Standards Council released a new software security standard designed to help validate the security of payment ecosystems in the face of newer software architectures and modern development methods like DevOps and continuous delivery. The new standard will ultimately replace the PCI Payment Application Data Security Standard (PA-DSS).

“Software development practices have evolved over time, and the new standards address these changes with an alternative approach for assessing software security,” says Troy Leach, chief technology officer for the PCI Security Standards Council, of the impetus for rolling out the PCI Software Security Framework. “The PCI Software Security Framework introduces objective-focused security practices that can support both existing ways to demonstrate good application security and a variety of newer payment platforms and development practices.”

Like many other standards and guidance documents from the council, the framework was developed with input from a range of industry experts across the payment technology and security communities.

“They’re really trying to make a standard that works for modern software development,” says Jeff Williams, co-founder and CTO of Contrast Security and a participant in the expert council that contributed to the new standard. 

Williams explains that the current PA-DSS standard is “very brittle.” It doesn’t offer enough flexibility, he says, to account for growing trends in DevOps adoption and software delivered in a world of microservices, hybrid cloud, containerization and so on.

“It said you had to do A, B, and C and it just didn’t work for a lot of different kinds of software,” Williams says. “So when you’re looking at DevOps projects that are releasing seven times a day and moving super fast and using tons of libraries, and building APIs, and deploying in the cloud, that old standard just didn’t work well.”

As part of the new standard, the council allows organizations greater freedom of choice in the security testing methods they use to find vulnerabilities in software. Notably, in addition to static, dynamic, and manual testing, the new framework adds interactive application security testing (IAST) as a viable method. This continuous testing approach is designed to monitor security amid the rapid development cycles seen in mature DevOps organizations, Williams says.

In developing the framework, the council needed to walk a line between validating security in payment software delivered via traditional software development methods while also accounting for newer methods. Whereas PA-DSS is meant to guide traditional payment software developers in securing the software development lifecycle (SDLC), the new framework expands beyond this to address overall software security resilience, Leach says.

“The framework provides a new methodology and approach to validating software security and a separate secure software lifecycle qualification for vendors with robust security design and development practices,” he says, comparing the framework to PCI PA-DSS. “In other words, they’re not mutually exclusive but offer a progressive approach that allows for additional alternatives to demonstrating secure software practices.”

The endgame is to retire PA-DSS and assess all applications under the new framework. A validation program is expected to be released in 2019.

“There will be a gradual transition period to allow organizations with current investments in PA-DSS to continue to leverage those investments,” Leach explains, stating that current PA-DSS validated applications will still be governed under that program until 2022. 

 


Ericka Chickowski specializes in coverage of information technology and business innovation. She has focused on information security for the better part of a decade and regularly writes about the security industry as a contributor to Dark Reading.

Article source: https://www.darkreading.com/risk/compliance/pci-council-releases-new-software-framework-for-devops-era/d/d-id/1333686?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

GDPR Suit Filed Against Amazon, Apple

An Austrian non-profit, led by privacy activist and attorney Max Schrems, has filed suit against eight tech giants for non-compliance with the EU General Data Protection Regulation.

An Austrian non-profit organization, noyb, has filed suit under GDPR against eight firms for non-compliance with the privacy regulation. The suit names Apple, Amazon, Netflix, Spotify, YouTube, and three others for violating the terms of the European law.

The suit was filed with the Austrian privacy authority on behalf of 10 users. According to attorney Max Schrems, who heads noyb, the firms have built structural violations of users’ rights into their systems.

Schrems has been filing privacy suits since his university days, with his first legal action against Facebook in 2011. Last year, he filed suit against Google, Facebook, Instagram, and WhatsApp, alleging that they forced users to accept onerous license terms or lose access to the services.



Article source: https://www.darkreading.com/privacy/gdpr-suit-filed-against-amazon-apple/d/d-id/1333690?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

YouTube bans dangerous and harmful pranks and challenges

Driving while blindfolded is stupid. Ingesting laundry detergent pods is stupid. Asking your girlfriend to shoot you through an encyclopedia is stupid. And, in the case of Pedro Ruiz III, it’s lethal.

These are all so-called “pranks” that have been filmed and posted on YouTube. After reports of people getting hurt or even killed, YouTube has explicitly called it quits on the genre.

On Tuesday, Google announced that it had updated its enforcement guidelines for dangerous challenges and pranks.

Specifically, Google updated its external guidelines to clarify that challenges like the Tide pod challenge (in which teens dare each other to bite into laundry detergent pods, which can and has led to poisoning) and the fire challenge (in which kids pour flammable liquid onto their skin and set it alight, resulting in multiple cases of second- and third-degree burns) “have no place on YouTube.”

A history of violence and/or stupidity

Dangerous pranks and challenges may not have a place on YouTube now, but the content has certainly made itself at home before this. Some examples:

  • In 2016, four members of the YouTube channel TrollStation – known then as the septic tank of prankster sites – were jailed for staging and filming fake robberies and kidnappings. Their aggressive and/or violent public antics have included trolls enacting brawls and smashing each other in the head with bottles made out of sugar.
  • In 2017, a couple in the US reportedly lost custody of two of their five children, whom they had filmed while screaming profanities at them, breaking their toys as a “prank” and blaming them for things they didn’t do. Some of the videos, posted to their DaddyOFive YouTube channel, showed the kids crying and being pushed, punched and slapped.
  • In February 2018, Australian YouTube prankster Luke Erwin was fined $1,200 for jumping off a 15-meter-high Brisbane bridge in the viral “silly salmon” stunt.
  • US YouTube prankster Pedro Ruiz III was killed last year by his girlfriend and the mother of his children after insisting that she shoot a .50 caliber bullet through an encyclopedia he was holding in front of his chest. She was sentenced to 180 days in jail.

How will YouTube yank the pranks?

It’s a step in the right direction to say that this type of material “has no place” on YouTube. But what exactly is YouTube going to do about it? Its previous moderation efforts haven’t exactly been stellar, after all.

In April 2018, during the earnings call for Google parent Alphabet, Google CEO Sundar Pichai pointed to the success of automatic flagging and human intervention in the removal of violent, hate-filled, extremist, fake-news and/or other violative YouTube videos. According to YouTube’s first-ever quarterly report on removed videos, between October and December 2017, it removed a total of 8,284,039 videos. Of those, 6.7 million were first flagged for review by machines rather than humans, and 76% of those machine-flagged videos were removed before they received a single view.

It was an impressive number, but if YouTube’s ongoing problems with getting bestiality off the platform are any indication, it won’t be easy. BuzzFeed News told YouTube back in April that searching for the word “girl” along with “horse” or “dog” was returning dozens of videos with thumbnails suggesting women having sex with those animals.

Well, that won’t do, YouTube said, removing the videos and emphasizing that the “abhorrent” content was in violation of its policies. But what exactly did it do to enforce that no-horse-and-pony-show policy?

Not much, apparently. After BuzzFeed published an article on Tuesday about such content still being plentiful on YouTube, the company said that it’s gone after the culprits by way of throttling ad revenue, “aggressively [enforcing] our monetization policies to eliminate the incentive for this abuse.”

YouTube also said in its statement that it’s beefing up enforcement against abusive thumbnails and trying to “get it right.”

We recognize there’s more work to do and we’re committed to getting it right.

Are those steps better than nothing? You can’t punish content producers via ad revenue if they haven’t actually monetized their videos. As a senior YouTube employee told BuzzFeed last year, the graphic thumbnails may well have been coming from a content farm that keeps videos ad-free until their views spike and they can cash in big-time.

Could artificial intelligence (AI) help YouTube moderate its vast reams of content?

BuzzFeed talked to one AI expert who thinks it could spot bestiality imagery, though training AI on human-on-animal content wouldn’t be much fun. Bart Selman, a Cornell University professor of AI:

It is definitely possible for AI to detect bestiality-related porn, but it would need to be trained on images related to that. So, it requires a special effort to do that kind of training and it’s not fun to work on. Another issue is that the content spreading mechanisms may actually push this stuff widely, going around content safety checks.

Training AI to recognize pranks that veer into the realm of dangerous seems more difficult still.

To echo what Netflix said after people were inspired by its movie “Bird Box” and started coming up with “do-such-and-such-while-blindfolded” dares, which led to at least one car crash…

…winding up in the hospital with meme-related injuries isn’t the best way to start the new year.

Let’s hope that Google figures out substantive ways to moderate this dangerous content, be it through throttling ad revenue or by training AI to be a lot smarter than the humans who are drawn like moths to the flame… or to the blindfold… or to the sugar bottles crashed over their skulls.

You can’t legislate people out of stupidity, but perhaps you can strip them of stupidity-derived internet glory.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/GLvH16C_K-E/

Ep. 015 – USB anti-hacking, bypassing 2FA and government insecurity [PODCAST]

In this episode, the Naked Security Podcast looks at whether the latest USB hardware proposals will be used to boost security or tighten up anti-piracy controls, investigates an open-source toolkit for bypassing 2FA, and explains how the US government shutdown is affecting online security.

With Anna Brading, Paul Ducklin, Mark Stockley and Matthew Boddy.


If you enjoy the podcast, please share it with other people interested in cybersecurity, and give us a vote on iTunes and other podcasting directories.

Listen and rate via iTunes...
Sophos podcasts on Soundcloud...
RSS feed of Sophos podcasts...

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/ATvGWMcMGOM/

Did you know you can see the ad boxes Facebook sorts us into?

Fitbit? Pollination? Jaguars? Snakes? Mason jars?

OK, fine, Facebook, I’m not surprised that I’ve clicked on those things. But when did I ever click on anything related to Star Trek: Voyager? Or Cattle?!

My “this feels weird” reaction makes me one of the 51% of Facebook users who report that they’re not comfortable with the ad-driven company creating a list that assigns each of us categories based on our real-life interests.

It’s called “Your ad preferences”. You can view yours here. If you drill down, you can see where Facebook gets its categorization ideas, including the things we click on or like, what our relationship status is, who employs us, and far more.

Most people don’t even know that Facebook keeps a list of our traits and interests. In a new survey from Pew Research Center that attempted to figure out how well people understand Facebook’s algorithm-driven classification systems and how they feel about Facebook’s collection of personal data, the majority of participants said they never knew about it until they took part in the survey.

Overall… 74% of Facebook users say they did not know that this list of their traits and interests existed until they were directed to their page as part of this study.

Once the participants were directed to the ad preferences page, most – 88% – found that the platform had generated material about them. More than half – 59% – said that the categories reflected their real-life interests. But 27% said that the categories were “not very” or “not at all” accurate in describing them.

And then, after they found out how the platform classifies their interests, about half of Facebook users – 51% – said they weren’t comfortable about Facebook’s list creation.

The Pew Research Center’s conclusions come out of a survey of 963 US adults over the age of 18 who have a Facebook account. It was conducted from 4 September to 1 October, 2018. You can see the full methodology here.

What inputs does Facebook’s algorithm chew over?

The “Your ad preferences” page, which is different for every user, is only one factor that Facebook uses to slice users’ lives into categories to which advertisers can market. Unless you’ve drilled down on that page’s categories and told it to forget about certain things that you’ve posted, liked, commented on or shared, all of that activity will be taken into account by the algorithm.

But as we well know, Facebook also follows us around off the site. One of the outcomes of the two days of Congressional grilling that Facebook CEO Mark Zuckerberg went through in April 2018 was that Facebook coughed up details on how it tracks both users and non-users when they’re not even on the site.

In explaining why Facebook collects non-users’ data, David Baser, Product Management Director, said that one tool, Audience Network, lets advertisers create ads on Facebook that show up elsewhere in cyberspace. In addition, advertisers can target people with a tiny but powerful snippet of code known as the Facebook Pixel: a web targeting system embedded on many third-party sites. Facebook has lauded it as a clever way to serve targeted ads to people, including non-members.

Beyond those tools, Pew Research Center said that Facebook also has a tool that enables advertisers to track users who’ve “converted”: in other words, who saw or clicked on a Facebook ad and then went on to purchase whatever it advertised. Bear in mind that you can opt out of that on your ad preferences page.

Out of the ocean of data that comes from all those sources, Facebook knows us by demographic, by our social network and personal relationships, our political leanings, what’s happening in our lives, what foods we prefer, our hobbies, what movies we watch, what musicians we shell out money to hear, and what flavor of digital device we use. That’s a lot of grassland for advertisers to graze on.

It’s no surprise, then, that 88% of Facebook users in the study said they were assigned categories, while 11% found, after being directed to their ad preferences page, that they don’t exist in slice-and-dice advertising terms: they were told that they “have no behaviors.”

Political and racial buckets

The study asked targeted questions about two touchy, highly personal subjects in Facebook’s bucket lists: political leanings and racial/ethnic “affinities.”

The study found that Facebook assigns a political leaning to about half of Facebook users – 51%. Out of that group, 73% said that Facebook got it very, or somewhat, right. But 27% reported that it describes them “not very” or “not at all” accurately. In other words, 37% of Facebook users are assigned a political label that they say describes them well, while 14% get put into a box that they feel doesn’t fit.

Pew Research Center found that people who describe themselves as moderates are more likely than others to say that Facebook didn’t classify them accurately. Some 20% of those who say they’re liberal and 25% of those who describe themselves as conservative disagree with the labels Facebook assigns to them, but 36% of people who call themselves moderates say that Facebook mis-categorized them.

When it comes to race/ethnicity, Pew Research Center reminds us that this is all about selling ads, rather than categorizing what race or ethnicity we are. Hence, it’s about “multicultural affinity” that can be used for targeted marketing.

Do you have an affinity for African American or Hispanic music, for example? Do you like and share rap videos or Latino love songs? If so, you could be a tasty target for whoever’s selling things to those markets. Then again, you could be the group they’d just as soon snub.

As Pew Research Center points out, this particular category has been controversial, as advertisers have chosen to exclude marketing to certain groups. From its article:

Following pressure from Congress and investigations by ProPublica, Facebook signed an agreement in July 2018 with the Washington State Attorney General saying it would no longer let advertisers unlawfully exclude users by race, religion, sexual orientation and other protected classes.

The Pew study found that out of the users whom Facebook assigned this type of affinity, Facebook labelled 43% of them as being interested in African American culture, while 43% were assigned an affinity with Hispanic culture, and 10% were assigned an affinity with Asian American culture. Those are the only racial/ethnic affinities Facebook tracks in the US.

How accurate are the platform’s algorithms at guessing where our racial/ethnic affinities align? Sixty percent of study participants said that Facebook got it “somewhat” or “very” right, while 37% said the company got it wrong – they don’t have a strong affinity for the group that Facebook suggested. And while 57% of those assigned a group reported that they do consider themselves to be a member of that group, another 39% said nope, that’s not me.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/SB_B4IZp1Xg/

Google cracks down on access to your Android phone and SMS data

Google is manually reviewing Android apps that request access to a smartphone’s phone or texting features. The move fulfils a promise to restrict how apps can access these functions on Android phones.

In the announcement last October, the company explained that it would restrict which apps could ask for access to SMS data and phone functions, including call logs.

Under the new rules, only apps selected as the default text or phone app will be allowed access to that data. Google will grant exceptions, but only when an app needs to ask for those permissions for specific activities that are part of its core functionality. These include backing up and restoring user data, spam protection, synchronizing between devices or transferring calls, and task automation.

For an app to request this access at all, it must first be approved by a Google employee. To get that approval, developers must fill out a declaration form. Google’s teams will consider several factors when approving an app, including the benefit to the user, and whether users will understand why the app needs full access to this data.

They will also consider whether there are alternative ways for the app to achieve its goals. On its help page, Google lists other ways for apps to access the phone and SMS functions on a phone, but they require user intervention.

The Dial Intent APIs enable an app to open the phone app and specify a number to call, but the user has to manually hit the dial button. Similarly, SMS Intent initiates an SMS message for the user to send.
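For illustration, those intent-based alternatives look roughly like this in Kotlin. It’s a sketch assuming the functions are called from app code with an Activity available; the phone number is made up:

```kotlin
import android.app.Activity
import android.content.Intent
import android.net.Uri

// Neither intent needs the CALL_PHONE or SEND_SMS permission,
// because the user still has to tap "dial" or "send" themselves.

fun openDialer(activity: Activity) {
    // ACTION_DIAL opens the phone app with the number pre-filled.
    activity.startActivity(Intent(Intent.ACTION_DIAL, Uri.parse("tel:+15551234567")))
}

fun composeSms(activity: Activity) {
    // ACTION_SENDTO with an smsto: URI opens the SMS app with a draft message.
    val sms = Intent(Intent.ACTION_SENDTO, Uri.parse("smsto:+15551234567"))
        .putExtra("sms_body", "Hello from the app")
    activity.startActivity(sms)
}
```

The trade-off is exactly the one Google describes: the app loses the ability to act silently, and the user stays in the loop.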

If an app wants to use SMS for two-factor authentication (2FA), its developer can use the SMS Retriever API. This listens for a code sent via SMS message from the app provider’s back-end server to the user’s phone. When the API sees the message arrive, it can automatically route it to the app so that the user doesn’t have to enter it manually.
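Here’s a rough Kotlin sketch of that flow, assuming the relevant Google Play Services dependency is on the classpath; registering the receiver and parsing the one-time code out of the message text are left to the app:

```kotlin
import android.content.BroadcastReceiver
import android.content.Context
import android.content.Intent
import com.google.android.gms.auth.api.phone.SmsRetriever
import com.google.android.gms.common.api.CommonStatusCodes
import com.google.android.gms.common.api.Status

// Start listening for a specially formatted SMS from the app's back-end server.
fun startSmsListener(context: Context) {
    SmsRetriever.getClient(context).startSmsRetriever()
}

// Receives the message without the app ever holding SMS permissions.
class SmsCodeReceiver : BroadcastReceiver() {
    override fun onReceive(context: Context, intent: Intent) {
        if (intent.action != SmsRetriever.SMS_RETRIEVED_ACTION) return
        val extras = intent.extras ?: return
        val status = extras.get(SmsRetriever.EXTRA_STATUS) as Status
        if (status.statusCode == CommonStatusCodes.SUCCESS) {
            val message = extras.getString(SmsRetriever.EXTRA_SMS_MESSAGE)
            // Parse the 2FA code out of `message` and hand it to the app.
        }
    }
}
```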

Apps that get access to the call or SMS functions in the Android operating system unlock a treasure trove of data. They are able to download extensive metadata about calls that users have made and SMS messages that they have sent.

Clearly wise to the privacy implications, Google has complemented the new access rules with an update to its permissions page. On that page, which it fleshed out with extra rules last year, it is being far more explicit about how default apps may use phone and SMS data:

You may never sell this data. The transfer, sharing, or licensed use of this data must only be for providing critical core features or services within the app, and its use may not be extended for any other purpose (e.g. improving other apps or services, advertising, or marketing purposes). You may not use alternative methods (including other permissions, APIs, or third-party sources) to derive data attributed to the above permissions.

Google says that over the coming weeks, it “will be removing apps from the Play Store that ask for SMS or Call Log permission and have not submitted a permission declaration form.”

I wonder whether Facebook’s Android app, which has accessed phone and text logs on Android phones with users’ permission, will make it through?

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/iG_PJCmxgZc/

Vast data-berg washes up 1.16 billion pwned records

The Have I Been Pwned? (HIBP) website has revealed another huge cache of breached email addresses and passwords discovered last week circulating among criminals.

Named “Collection #1”, its statistics are as impressive as they are worrying: 87GB of data, 12,000 files, and 1.16 billion unique combinations of email addresses and passwords.

After cleaning up the data, HIBP founder Troy Hunt reckons 773 million of the email addresses are unique, as are 21 million of the passwords – that is, counting each distinct plain-text password only once, however many times it appears in the cache.

Hunt said the data was discovered by “multiple people” on the MEGA cloud service, where it was being advertised as a collection made up of 2,000 or more individual data breaches stretching back some time.

Who has the data?

Given that it was being advertised and discussed on a criminal forum, in theory almost anyone visiting that source.

How far back in time does it go?

Probably many years, as evidenced by Hunt himself, who discovered in Collection #1 an email address and an old password he used many years ago. As he puts it:

If you’re in this breach, one or more passwords you’ve previously used are floating around for others to see.

Which part of the data should we worry about?

Principally, the new data not already in HIBP’s databases – that’s 140 million email addresses and around 11 million of the 21 million unique passwords.

Hunt has published an incomplete list of the sites mentioned (although not verified) as being sources for Collection #1.

How might it be misused?

Hunt’s guess is that the data was being marketed for automated credential stuffing, in which stolen credentials are entered on lots of other sites to see whether they’ve been re-used.

Credential stuffing is not new of course but it’s become standard issue these days – if web credentials are stolen, they’ll be tried on other services at some point. Observes Hunt:

You signed up to a forum many years ago you’ve long since forgotten about, but because it has subsequently been breached and you’ve been using that same password all over the place, you’ve got a serious problem.

What to do

To check whether your email addresses are in this cache (or any previous breach discovery), run a search using HIBP. If your email address was found in a breach where passwords were also stolen, such as the massive LinkedIn breach in 2012, then change your password for that site, if you haven’t already.

Of course, the sooner you change your password the better. If you’re changing your password now for a breach that happened in 2012, you have to expect that most of the damage has already been done (you should still change it though).

You can give yourself a chance to respond in a more timely fashion by signing up for email alerts about future compromises, or by using a browser or password manager that integrates with HIBP.

If you want to test if your go-to passwords have been involved in any breaches, HIBP has a search tool for that too – Pwned Passwords. You enter a password and the site tells you if it’s appeared in any breaches.

For example, a Pwned Passwords search reveals that the incredibly weak password ‘elvispresley’ has appeared 3,800 times in its database, which means anyone still using it should switch to something else ASAP.

What it won’t tell you is where the password was found. If a password you enter turns out to have been compromised but you don’t know which sites you used it on… then you’re left guessing.

(Incidentally, if you’re worried about the security of entering current passwords on a website to see whether they’ve been breached or used previously by someone else, read this explanation of how they are checked securely using something called k-anonymity.)
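To make that k-anonymity scheme concrete, here’s a minimal Kotlin sketch of the check. The range endpoint is Pwned Passwords’ documented public API; the rest is illustrative. The full SHA-1 hash is computed locally, only its first five hex characters go over the wire, and the suffix matching happens on your machine:

```kotlin
import java.net.URL
import java.security.MessageDigest

// Returns how many times `password` appears in Pwned Passwords.
// Only the first five hex characters of its SHA-1 hash leave this machine.
fun pwnedCount(password: String): Int {
    val sha1 = MessageDigest.getInstance("SHA-1")
        .digest(password.toByteArray(Charsets.UTF_8))
        .joinToString("") { "%02X".format(it) }
    val prefix = sha1.take(5)
    val suffix = sha1.drop(5)

    // The API returns every known suffix sharing that prefix,
    // one "SUFFIX:COUNT" pair per line; the match happens locally.
    return URL("https://api.pwnedpasswords.com/range/$prefix").readText()
        .lineSequence()
        .map { it.trim().split(':') }
        .firstOrNull { it.size == 2 && it[0] == suffix }
        ?.get(1)?.toIntOrNull() ?: 0
}

fun main() {
    // Prints a large count, per the 'elvispresley' example above.
    println(pwnedCount("elvispresley"))
}
```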

To give your passwords the best possible chance of not appearing on Pwned Passwords, use a properly secured password manager that will create and store secure passwords.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/j4p84jKmLVc/

I used to be a dull John Doe. Thanks to Huawei, I’m now James Bond!

Something for the Weekend, Sir? The name’s McLeod. Alessandro McLeod. I am a spy for the secret services.

You must be assuming I’m pretty rubbish at this secret agent thing if I’m telling you this here, rather like the way Kelvedon Hatch isn’t terribly adept at being a secret nuclear bunker.


Not to worry, I’m not spying for the British Secret Services anyway, I’m doing it for China. It turns out that it’s really easy to do. As I understand it, all you need to get into the spying game on behalf of everyone’s favourite inscrutable superpower is to own something with a Huawei logo on it.

How did I get into the spying game? Well, it was a case of accidental entrapment. More than a year ago, Huawei decided to release its Mate 9 series handset with a soft launch targeted at raising the brand’s status in Western Europe via social media rather than bothering with the hated mainstream IT press. Unfortunately for them, however, Huawei accidentally added The Reg‘s Andrew Orlowski and me to the list of carefully selected online influencers (ie, friendless nobodies) when trying to fill empty seats in the auditorium.

Yes, auditorium. Huawei’s idea of a “soft launch” is to invite several thousand blaggers to an aircraft hangar-like conference centre in Munich to experience a spectacular light show, blaring music and acrobatic dancing girls as the company’s Consumer business group CEO Yu Chengdong arrives onstage in a Porsche. Tim Cook he is not.

By coincidence, my Apple iPhone 6s was stolen exactly a month later. Huawei’s Mate 9 review test unit duly came back out of its box and became my main handset until I’d bought a new iPhone…

…which of course I never bothered to do because Huawei never got around to asking for its Mate 9 back. At the time, I got the impression the company lost interest in its Mate 9 series about 17 seconds after the Munich launch event ended, turning its focus already to its imminent replacement, the Mate 20.

Gosh, I thought, nobody has ever failed to ask me to return their review kit before. Usually, manufacturers insist I pay the return courier fee too, then threaten me with legal action because I didn’t write a shamelessly sycophantic review showering the product with unwarranted five-star praise (ie, like an influencer). I thought to myself: they must do things differently in China. I like that.


In hindsight, this was all part of their intricate plan to entrap me in a devilish web of data espionage. First, I receive a Huawei smartphone to test even though I never requested one; second, my own iPhone gets stolen in suspicious circumstances (come on, who gets their phone nicked while sitting on a beanbag in the middle of an art installation at Tate Modern?); third, I find I am still using the Huawei loan unit more than a year later. What can I say? It’s a good Android smartphone.

Since then, the US government has condemned the use of Huawei equipment by telecoms companies, hinting that it has something to do with national security but without explaining what. The Pentagon banned its staff from using Huawei handsets and ordered US military bases to stop selling them. US commerce delegations then did the rounds of European allies, trying to dissuade local telcos from installing Huawei kit in their forthcoming 5G network infrastructure.

The US has also accused Huawei of breaking internationally agreed embargoes on shipments to North Korea and Iran. Of course! Now it all makes sense! Remember when Paul Simon’s 1986 album Graceland broke the cultural boycott against South Africa? Clearly he was a communist China stooge too, having recorded an earlier hit with Art Garfunkel called Cathay’s Song.

Personally, I think it would be a shame if I suddenly found my phone no longer worked because my network provider introduced some Huawei-blocking tech. This is partly because, as I may have mentioned in this column before, Mme Dabbs likes to refer to the company as “Wha-hey”, which always raises a smile because of its Carry On movie connotations.

“Could you grab your Wha-hey and bring it over here?”

“I can’t concentrate on TV because your Wha-hey keeps buzzing.”

“Is that a Wha-hey in your pocket or are you just pleased to see me?”

The other reason I’d be unhappy to part with my Huawei is that being told it’s a spy gadget for the Chinese secret services is absolutely thrilling. When I had an iPhone, I was just a sad, overcharged Apple fanboi. With a Huawei, I’m James fucking Bond.

Except, of course, I’m not James Bond. Nor Alistair Dabbs for that matter. As far as Huawei’s Chinese data servers are concerned, I am Alessandro McLeod and I am a 107-year-old woman living in Greenland. I hope the Red Army makes good use of this valuable information, not least because it’ll have changed again by the time you read this.

Whatever Huawei is supposed to be doing secretly remains vague but I am assured it is jolly naughty. My understanding is that my data could be being slurped, my personal information sold on without my permission or knowledge, my social networks infiltrated, my on-screen ads configured to manipulate my political opinions, and possibly the device is even listening to my conversations when I’m not making a phone call.

Er… hang on, isn’t that what Google, Facebook et al have been doing for years already? Ah, but these companies are American! They would never misuse the personal information of citizens. They would steadfastly refuse to bow to the political demands and censorial whims of despotic governments and vested financial interests.

Ahem.

It’s difficult to pick a national side in the Huawei confrontation when you’re neither American nor Chinese. You’re all bally foreigners as far as we in Brexit Britain are concerned, and you are not to be trusted.

Miss Moneypenny, grab my Wha-hey would you? I feel a vibrating alert coming in from the Far East.


Alistair Dabbs
Alistair Dabbs is a freelance technology tart, juggling tech journalism, training and digital publishing. He actually couldn’t give two hoots about which handset he might use in the 5G future, as long as it’s not a Samsung. A return to iOS is equally unlikely, at least at current prices thanks to Apple greed and crappy Brexit-induced sterling exchange rates. @alidabbs

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/01/18/i_used_to_be_a_dull_john_doe_thanks_to_huawei_im_now_james_bond/