
Apple: Sure, we banned VPN iOS apps in China, but, um, er, art!

Apple has told the US government it cooperated with China’s demands to block VPN services so it could get other concessions from the Middle Kingdom on human rights.

The Cupertino watchmaker said in a letter [PDF] to Senators Patrick Leahy (D-VT) and Ted Cruz (R-Zodiac) that while it did cave to China’s demand that it axe VPN apps from its software store, it did so only to continue selling other products that help advance human rights and free speech on the mainland.

“We believe that our presence in China helps promote greater openness and facilitates the free flow of ideas and information,” explained Apple’s vice president for public policy Cynthia Hogan.

“Our products and services offer our customers opportunities to communicate in many forms, including through personal communications services, podcasts, photos, and millions of apps.”

Both Cruz and Leahy earlier grilled Apple for its readiness to ban privacy tools from its iThings, which can be used by people in China to circumvent the tight restrictions the government places on access to what it deems “inappropriate” material, such as websites critical of the state.

Apple argued that, in order to get a wider set of stuff into the hands of citizens on the Chinese mainland, it had to agree to strip encrypted VPN tools from the iOS App Store in China. Now, Apple claims it is still working with China in hopes of eventually getting that ban lifted.

“We believe in the critical importance of an open society in which information flows freely,” Hogan told the senators. “We will continue to express that view.”

Apple also considers China to be one of its key business markets. In its last quarterly report, execs said the Greater China region accounted [PDF] for $9.8bn in revenues, up 12 per cent from 2016.

Leahy, meanwhile, wasn’t quite sold on Apple’s justification, noting that it and other tech companies shouldn’t limit people’s free speech just to make a fast buck.

“Global leaders in innovation, like Apple, have both an opportunity and a moral obligation to promote free expression and other basic human rights in countries that routinely deny these rights,” he said in a response to the letter. “Apple is clearly a force for good in China, but I also believe it and other tech companies must continue to push back on Chinese suppression of free expression.” ®

PS: Microsoft’s Skype has been scrubbed from online software stores, including the official iOS store, in China, due to breaking local laws on voice-over-IP. Presumably, Chinese spies are upset at being unable to snoop on encrypted Skype comms.

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/11/22/apple_vpns_china_skype/

Crypto-jackers enlist Google Tag Manager to smuggle alt-coin miners

Crypto-jackers using Coin Hive code to secretly mine Monero via computing power supplied by the unsuspecting have found Google Tag Manager to be a convenient means of distribution.

Security researcher Troy Mursch told The Register that he recently found Coin Hive’s free-to-use JavaScript running on the Globovisión website – Globovisión being a 24-hour telly station for Venezuela and Latin America.

The code was invisibly spawned, he said, “from the embedded Google Tag Manager script gtm.js?id=GTM-KCDXG2D,” which invoked cryptonight.wasm, a WebAssembly build of Coin Hive’s JavaScript mining code.

Google Tag Manager allows marketers, or anyone else with a website, to create code – dubbed a tag – that can be placed in webpages to dynamically inject JavaScript snippets rather than hardcoding that JavaScript in the pages themselves.

Google’s service, handily enough, provides more control and flexibility than static code delivery.

Because the code gets served by Google Tag Manager, it’s not present in the source files on a web server. The JavaScript file and appended parameter gtm.js?id=GTM-KCDXG2D say nothing about the function of the code invoked. Essentially, miscreants are hacking websites and quietly adding Google-hosted tags that contain the malicious coin-mining code, thus obfuscating the source of the scripts.
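
For readers who want to check their own sites, here’s a minimal sketch of the kind of spot-check that would catch this: fetch a page, pull out any Google Tag Manager container IDs it loads, download the served tag code from googletagmanager.com, and grep it for Coin Hive indicators. The example URL and the indicator strings are illustrative rather than exhaustive, and this is a quick heuristic, not Mursch’s methodology.

```python
# Quick heuristic, not an exhaustive scanner: look for Coin Hive / CryptoNight strings
# in whatever Google Tag Manager containers a page loads.
import re
import requests

COINHIVE_INDICATORS = ("coinhive", "cryptonight", "CoinHive.Anonymous")

def find_gtm_containers(page_html):
    # GTM pages load https://www.googletagmanager.com/gtm.js?id=GTM-XXXXXXX
    return set(re.findall(r"gtm\.js\?id=(GTM-[A-Z0-9]+)", page_html))

def check_site(url):
    page = requests.get(url, timeout=10).text
    hits = {}
    for container in find_gtm_containers(page):
        tag_js = requests.get("https://www.googletagmanager.com/gtm.js",
                              params={"id": container}, timeout=10).text
        matches = [s for s in COINHIVE_INDICATORS if s.lower() in tag_js.lower()]
        if matches:
            hits[container] = matches
    return hits

if __name__ == "__main__":
    print(check_site("https://example.com/"))  # replace with the site under review
```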

Mursch said the Globovisión mining code was removed within an hour of discovery, and it’s not clear how it got there. He found the Monero-crafting JS, he said, while reviewing another crypto-jacking incident with a Brazilian singer’s website.

Google did not immediately respond to a request for comment.

A month ago, when The Register reported that Google short URLs were being co-opted for Monero, there were about 113,000 instances of cryptonight mining. Presently, there are about 180,000.

The Chocolate Factory’s Tag Manager Terms of Service prohibits misuse, and the ad distribution biz has systems in place to look for malware in tags and prevent them from firing when found.

“In most cases, affected users are unaware that there are tags serving malware from their containers,” the web giant explained on its website. “Usually through no fault of your own, a network provider becomes malware infected when they install 3rd party libraries or templates onto their websites, and subsequently transmit that malware to your site via the custom HTML tag that you published onto your website via Tag Manager.”

That being the case, it appears that Google either cannot detect Coin Hive code through Tag Manager or it doesn’t consider it to be malicious. Most ad blockers, as well as antivirus tools, kill Coin Hive’s code on sight these days.

Coin Hive’s development team did not respond to a request for comment.

Noting that crypto-jacking tops Malwarebytes’ list of security ills likely to be visited upon businesses and consumers in 2018, Mursch said: “We should expect this trend to continue.” ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/11/22/cryptojackers_google_tag_manager_coin_hive/

Once more unto the breach: El Reg has a go at crisis management

Hacks played representatives of a hacked company in an incident response exercise run by F-Secure this week.

The Live Security product interactive workshop was based on an actual customer experience adapted for a media audience. Around 20 members of the international media became the board members and managers of a company that had been attacked.

Attendees were split into four teams (CSIRT, management, IT management, press) to roleplay a breach at fictional VPN vendor COMSEC.

The groups were collectively taken through the processes the board needs to follow when such a hack hits – understanding what is under attack, where the vulnerabilities lie and how to stop the attack, what the responsibilities of staff are and how can they protect themselves from future attacks.


Competition, especially in the Chinese market, has intensified for the fictional firm. The competitors’ devices are not technically superior to COMSEC’s products, but the competitors’ sales and marketing efforts have succeeded in drawing attention to the weaknesses of COMSEC’s comparable products in some detail.

COMSEC sponsors an internship program in Italy where approximately 15 students from local universities are brought in and taught security fundamentals, participating in the configuration of multiple network devices for COMSEC customers.

The firm has strengthened its position as a technology provider for made-up telco GermanTel Communications. As an important part of the agreement with GermanTel, COMSEC is (for the first time) also providing remote maintenance and operation of their products as a service.

COMSEC recently entered into an agreement with the German government, which has obligated COMSEC to notify them of significant vulnerabilities within COMSEC products deployed to German customers. The IT kit supplier has also agreed to report any breaches of data that adversely affect GermanTel customers.

Action stations

COMSEC’s VPN flagship product’s source code appeared on a blog along with scathing commentary over allegedly negligent security practices. The blogger’s identity was unknown. COMSEC’s CSIRT reported the incident to HQ and launched an investigation. COMSEC had outsourced its cybersecurity through another fictional outfit called FSC, which handled forensic analysis and the like.

Your reporter worked on this team, whose main tasks were to identify the source of the breach and contain it. COMSEC experienced an increase in spam emails in all countries throughout the summer and autumn of 2017. One infection of a lab server in Milan exposed a serious breach that was challenging to address.

The CEO ordered IT management to review the blog and assess the exposure of information and the theft of confidential data from the labs.

Outside of the CSIRT, an IT management team in Milan and a local management team in Rome, no information about the security exposure had been shared with employees or customers. Pundits were commenting on the reported leak on Twitter, representatives of the German telco partner expressed public displeasure, and “COMSEC workers” complained on social media about being swamped with spam and (later) about problems with a file server.

A set of “Action Cards” were given to each group (except the press). All necessary actions for solving the crisis were included, but not all groups had all the cards, and not all cards were needed. In this way the exercise was akin to a game of Cluedo.

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/11/22/f_secure_hack_incident_response_exercise/

Uber suffered massive data breach, then paid hackers to keep quiet

News has surfaced today claiming that oft-controversial taxi ride-sharing company Uber suffered a massive data breach in 2016.

According to Bloomberg, the data of 57,000,000 drivers and customers was stolen, after which Uber not only kept the breach secret from the victims, but also paid the hackers $100,000 to “delete the data [and] keep quiet”.

Apparently, Uber’s security chief, Joe Sullivan, lured to Uber from Facebook in 2015, has been sacked in the fallout.

Bloomberg quotes Uber as follows:

Compromised data from the October 2016 attack included names, email addresses and phone numbers of 50 million Uber riders around the world… The personal information of about 7 million drivers was accessed as well, including some 600,000 US driver’s license numbers. No Social Security numbers, credit card information, trip location details or other data were taken.

It seems that Uber’s programmers uploaded security credentials to a GitHub repository – GitHub is a place where you are supposed to store source code, not the keys to the castle! – where the hackers stumbled across them.

From there, the crooks were able to get into Uber servers hosted on Amazon, and from there to access the personal information involved in the breach.

If this sounds terribly familiar, Uber suffered a breach with a similar cause just over three years ago, an intrusion that was discovered in May 2014 but not disclosed until February 2015.

Reliable details of what data was stolen this time round are not yet available.

As mentioned above, driving licence details were acquired by the hackers, meaning that Uber certainly ought to have declared the breach promptly, because sensitive data was involved.

Uber’s claim that customer details such as credit card data and social security numbers were not involved in the heist is a slight silver lining, but how many customers are willing to believe Uber at this point is anybody’s guess.

What to do?

There’s so much still untold in this story that the only sensible recommendation we can make to Uber customers is: “Keep your eyes open for what comes out next.”

If you’re a programmer, repeat these words to anyone who will listen: “GitHub is for code, not for security keys!”

As our friend and colleague Chester Wisniewski bluntly put it:

Uber’s breach demonstrates once again how developers need to take security seriously and never embed or deploy access tokens and keys in source code repositories. I would say it feels like I have watched this movie before, but usually organisations aren’t caught while actively involved in a cover-up as well. Putting the drama aside and the potential impacts from the upcoming GDPR enforcement in Europe, this is just another careless development team with shared credentials and poor security practices. Sadly, this is common more often than not in “agile” development environments, especially in high-growth technology startups.
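
To make the “keys out of GitHub” advice concrete, here’s a minimal sketch using boto3, AWS’s Python SDK (this is an illustration of the principle, not Uber’s actual setup): the client is built with no embedded secrets and picks up credentials at runtime from the environment, an instance role, or ~/.aws/credentials via its default credential chain. The bucket name is hypothetical.

```python
# A sketch of the advice above: build the AWS client with no embedded secrets and let
# boto3's default credential chain resolve them outside the repository.
import os
import boto3

# DON'T: hard-coded keys end up in git history, and anyone who clones the repo has them.
# s3 = boto3.client("s3", aws_access_key_id="AKIA...", aws_secret_access_key="...")

# DO: no secrets in source; credentials are resolved at runtime from outside the repo.
s3 = boto3.client("s3")

def list_objects(bucket="example-backups"):  # hypothetical bucket name
    """List object keys without a single credential appearing in source code."""
    response = s3.list_objects_v2(Bucket=bucket)
    return [obj["Key"] for obj in response.get("Contents", [])]

if __name__ == "__main__":
    if not (os.environ.get("AWS_ACCESS_KEY_ID")
            or os.path.exists(os.path.expanduser("~/.aws/credentials"))):
        print("No AWS credentials configured - set them outside the repository.")
    else:
        print(list_objects())
```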

Oh, and if you do suffer a breach, do the right thing: report it quickly, not just because the law requires you to do so, but because it’s the decent thing to do.

And, for goodness’ sake, put something on your website to inform customers what you know so far; what additional information you are trying to uncover; and when you expect to provide the next update.

At the time of writing [2017-11-22T01:00Z], Uber’s website was still leading with the strapline “Get there – your day belongs to you”, and urging you to sign up to drive for the company.

Update. Uber has published a help page about this breach. [2017-11-22T01:40Z]


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/w4YI6O1YkmU/

Uber: Hackers stole 57m passengers, drivers’ info. We also bribed the thieves $100k to STFU

Uber’s CEO Dara Khosrowshahi today revealed hackers broke into the ride-hailing app’s databases and stole personal information on 57 million passengers and drivers – information including names, email addresses, and phone numbers.

And the cyber-thieves made off with 600,000 US driver records that included their license numbers.

And the hack happened in 2016 – yet, biz executives hushed up the break-in rather than alert the public.

In a statement on Tuesday, Khosrowshahi said the intruders accessed cloud-hosted data stores:

I recently learned that in late 2016 we became aware that two individuals outside the company had inappropriately accessed user data stored on a third-party cloud-based service that we use. The incident did not breach our corporate systems or infrastructure.

At the time of the incident, we took immediate steps to secure the data and shut down further unauthorized access by the individuals. We subsequently identified the individuals and obtained assurances that the downloaded data had been destroyed. We also implemented security measures to restrict access to and strengthen controls on our cloud-based storage accounts.

You may be asking why we are just talking about this now, a year later. I had the same question, so I immediately asked for a thorough investigation of what happened and how we handled it.

“Obtained assurances” is a funny way of putting it.

No doubt this is what the chief exec discovered from that probe of his: in October 2016, two miscreants snatched from the app biz’s GitHub code repo the keys needed to access its AWS S3 data stores containing the aforementioned personal information, Bloomberg reports. The hackers then demanded $100,000 from Uber in exchange for their silence and to destroy all their swiped copies of the records.
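
The kind of leak described above is also easy to screen for before code is pushed. Below is a minimal sketch of a pre-push check that walks a checkout and flags anything matching the documented AKIA prefix of long-term AWS access key IDs; it illustrates the idea and is no substitute for a proper secret scanner.

```python
# Minimal sketch: walk a repository checkout and flag strings that look like AWS
# access key IDs (long-term keys start with the documented "AKIA" prefix).
import os
import re

AWS_KEY_ID = re.compile(r"\bAKIA[0-9A-Z]{16}\b")

def scan_repo(root="."):
    findings = []
    for dirpath, dirnames, filenames in os.walk(root):
        dirnames[:] = [d for d in dirnames if d != ".git"]  # skip git internals
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                with open(path, "r", errors="ignore") as fh:
                    for lineno, line in enumerate(fh, 1):
                        if AWS_KEY_ID.search(line):
                            findings.append((path, lineno))
            except OSError:
                continue
    return findings

if __name__ == "__main__":
    for path, lineno in scan_repo():
        print(f"possible AWS access key ID in {path}:{lineno}")
```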

Rather than warn state and federal authorities of the personal data theft, as the California upstart is required to do, Uber’s chief of information security Joe Sullivan ordered that the crooks be paid off, the stolen files erased, and the whole thing hushed up, leaving riders and drivers none the wiser. The payout was disguised as a bug bounty prize, complete with signed non-disclosure agreements.

Sullivan, previously a federal prosecutor, and one of his lieutenants were ousted from the company as a result of the new CEO’s investigation, we’re told. Khosrowshahi, who was installed at the San Francisco-based upstart over the summer, said steps have now been taken to ensure this kind of coverup is never repeated, and that security breaches will be disclosed in public in future as required:

While I can’t erase the past, I can commit on behalf of every Uber employee that we will learn from our mistakes. We are changing the way we do business, putting integrity at the core of every decision we make and working hard to earn the trust of our customers.

The top boss was adamant that “outside forensics experts have not seen any indication that trip location history, credit card numbers, bank account numbers, Social Security numbers or dates of birth were downloaded.” He added that the company was monitoring the affected accounts, and has flagged them for “additional fraud protection.” Anyone affected by the hack will be notified, he said.

It’s worth pointing out that while the company is now alerting the authorities, California’s data security breach notification law requires disclosure in “the most expedient time possible and without unreasonable delay.” Ie, not 12 months later.

As well as trouble potentially brewing in Cali over the hush up, New York Attorney General Eric Schneiderman has also launched an investigation into Uber’s cockup – by our reckoning, perhaps only the fifth worst thing the controversial bad-boy biz has done over the past year. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/11/22/uber_2016_data_breach/

Wait, did Oracle tip off world to Google’s creepy always-on location tracking in Android?

Analysis Having evidently forgotten about that Street View Wi-Fi-harvesting debacle, Google has admitted constantly collecting the whereabouts of Android devices regardless of whether or not they have location tracking enabled.

Between 2007 and 2010, during the debut of its Street View service, Google gathered all the Wi-Fi network names and router MAC addresses it could find from wireless networks encountered by its cars as they drove around snapping photos of buildings and roads. It also captured some network traffic from open Wi-Fi networks and, in the years that followed, was pilloried and fined some measly millions by privacy authorities around the world for doing so.

On Tuesday, Google said that since the beginning of 2017 it has been collecting the locations of cell towers near Android phones. Having found little use for the info, it says, the practice is supposedly on its way out.

Essentially, when an Android handheld passes a phone mast, it quietly contacts Google’s servers to report the location of the tower, even if the user has disabled location services – allowing the ad giant to potentially figure out folks’ whereabouts as they wander about town. Google claims the collection is part of an experiment to optimize the routing of messages through mobile networks.

The admission came in response to a Quartz report, one that security researcher Ashkan Soltani, via Twitter, said had been shopped around the press by Oracle…

Ashkan Soltani’s tweet, 9:27am Nov 21, 2017

Soltani, who served as the chief technologist for America’s trade watchdog, the Federal Trade Commission, from 2014 through 2015, and then did a four-month stint advising the White House, did not respond to requests for comment.

Oracle, Google, and Quartz reporter Keith Collins also did not respond to requests for further information about Soltani’s claim. Not even a “no comment,” nor a single denial.

Oracle has been antagonistic toward Google for years as a result of the success of Android. It sued Google in 2010 claiming the Chocolate Factory had infringed its Java copyrights and patents, a case it ultimately lost last year, though there’s an appeal underway.

The database giant has seen more success siccing regulators on Google’s search and Android business practices, specifically in Europe, through its participation in FairSearch, a lobbying group composed of various Google foes.

Oracle also helped fund a nonprofit advocacy group formed last year called the Campaign for Accountability. The group’s ostensible mission is to hold the powerful accountable, though its Google Transparency Project has a very specific focus.

Google, unsurprisingly, has been critical of the group’s claims.

Back to today’s discovery of the cellular base station tracking, and Google said it considered mapping cell coverage to improve message delivery. The web goliath insisted it never actually used or retained the info, though, and Android has been revised to no longer phone home mast information.


Google slurped the data regardless of whether or not location services was enabled because, according to an unnamed source cited by Quartz, the data was tied to Google’s Firebase Cloud Messaging service.

The internet billboard’s explanation is that its push notification and messaging infrastructure is distinct from Android’s location services, which provide location data to apps. A consequence of that separation is that there was no way an Android user could disable the cellular tower checks using location data privacy settings short of powering down his or her device.

Google’s privacy policy disclosure on location data discusses cell tower data collection, but does so in the context of location services. It says: “We use various technologies to determine location, including IP address, GPS, and other sensors that may, for example, provide Google with information on nearby devices, Wi-Fi access points and cell towers.”

Google may be able to recover from this latest privacy blunder with sufficient contrition and repetitions of “we take your privacy very seriously,” but it could still get in trouble with regulators for not adequately disclosing its cell tower shenanigans and for failing to provide a way to opt out. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/11/22/google_oracle_location_privacy/

Microsoft says Win 8/10’s weak randomisation is ‘working as intended’

Microsoft has rebutted analysis that suggested its Address Space Layout Randomisation (ASLR) technology could be exploited.

Redmond’s response, posted here, was that ASLR is working as intended, and that the lack of randomisation discovered by Will Dormann – with assistance from Matt Miller of Microsoft – was a feature, not a bug.

“In short, ASLR is working as intended and the configuration issue described by CERT/CC only affects applications where the EXE does not already opt-in to ASLR. The configuration issue is not a vulnerability, does not create additional risk, and does not weaken the existing security posture of applications”, the post (attributed to the Microsoft Secure Windows Initiative) stated.

Microsoft ASLR table

The post said Dormann’s discovery only applied for the case coloured yellow in the table above, adding: “the entropy of images rebased by mandatory ASLR is inherently reliant on bottom-up randomisation being enabled for the process”.

Because bottom-up randomisation isn’t automatically enabled if the EXE has not opted into ASLR, the post said, the developer has to explicitly do so “for entropy to be applied to images that are rebased by mandatory ASLR. In practice, this issue only affects scenarios where an administrator is intentionally attempting to enable mandatory ASLR for a process that would otherwise not fully benefit from ASLR.”
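
The distinction Microsoft’s table turns on – whether an EXE opted in to ASLR when it was linked – can be read straight from the PE header. Here’s a minimal sketch using the third-party pefile library; the flag values are the standard PE DllCharacteristics bits, and the sample path is purely illustrative.

```python
# Minimal sketch: read a Windows executable's DllCharacteristics flags to see
# whether it opted in to ASLR (/DYNAMICBASE) at link time.
import pefile

DYNAMIC_BASE    = 0x0040  # image opts in to ASLR
HIGH_ENTROPY_VA = 0x0020  # 64-bit high-entropy ASLR

def aslr_status(path):
    pe = pefile.PE(path, fast_load=True)
    flags = pe.OPTIONAL_HEADER.DllCharacteristics
    return {
        "opts_in_to_aslr": bool(flags & DYNAMIC_BASE),
        "high_entropy_va": bool(flags & HIGH_ENTROPY_VA),
    }

if __name__ == "__main__":
    # Path is illustrative; point it at any Windows executable of interest.
    print(aslr_status(r"C:\Windows\System32\notepad.exe"))
```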

So, no bug, no vuln, right?

Not quite, the post continued:

“CERT/CC did identify an issue with the configuration interface of Windows Defender Exploit Guard (WDEG) that currently prevents system-wide enablement of bottom-up randomisation … Similarly, EMET does not support enabling bottom-up randomisation system-wide and therefore cannot directly configure this setting.”

Vulture South checked this with Dormann, who agreed with that reading.

Microsoft agreed with the workarounds Dormann suggested in the CERT/CC announcement: either edit a registry value, or force it on a program-specific basis. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/11/22/microsoft_says_aslr_a_feature_not_a_bug/

New OWASP Top 10 List Includes Three New Web Vulns

But dropping cross-site request forgeries from the list is a mistake, some analysts say.

After months of review, the Open Web Application Security Project has finally formally updated its widely used, if somewhat disputed, ranking of top Web application security vulnerabilities.

OWASP’s Top 10 list for 2017 replaces three vulnerability categories from the previous list with new ones and shuffles a couple of others around in moves that not everybody agrees with.

As with previous years, injection vulnerabilities such as SQL and LDAP injection topped the list of OWASP’s concerns for 2017, followed by incorrectly implemented authentication and session management functions. Cross-site scripting errors, which ranked third in OWASP’s 2013 list, dropped to the seventh spot in this year’s ranking, while cross-site request forgeries (CSRF) dropped out altogether.
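
For a sense of why injection still sits at number one, here is a minimal sketch in Python of the classic SQL injection shape, contrasting a string-built query with a parameterized one; sqlite3 stands in for any DB-API driver and the table is a toy example, not anything from OWASP’s data set.

```python
# String-built SQL versus a parameterized query: the same input, very different results.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "' OR '1'='1"  # classic injection payload

# VULNERABLE: the payload becomes part of the SQL text, so the WHERE clause always matches.
unsafe = conn.execute(f"SELECT name, role FROM users WHERE name = '{user_input}'").fetchall()

# SAFE: the driver binds the value as data, so the payload matches nothing.
safe = conn.execute("SELECT name, role FROM users WHERE name = ?", (user_input,)).fetchall()

print("string-built query returned:", unsafe)   # [('alice', 'admin')]
print("parameterized query returned:", safe)    # []
```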

Making its appearance for the first time in OWASP’s top 10 list is a category dubbed XML external entities (XXE), pertaining to older and poorly configured XML processors. Data gathered from source code analysis testing tools supported inclusion of XXE as a new vulnerability in the top 10 list, according to OWASP.
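
XXE boils down to an XML parser honouring attacker-supplied entity definitions. The sketch below uses the third-party defusedxml library, which simply refuses to process entity declarations; the file path in the payload is the textbook example rather than a working exploit against any particular product.

```python
# A hardened parser rejects documents that define entities, which is what makes
# the classic "read a local file via an external entity" trick possible.
import defusedxml.ElementTree as safe_et

EVIL_XML = """<?xml version="1.0"?>
<!DOCTYPE data [ <!ENTITY secret SYSTEM "file:///etc/passwd"> ]>
<data>&secret;</data>"""

try:
    safe_et.fromstring(EVIL_XML)
except Exception as exc:  # defusedxml raises EntitiesForbidden here
    print("parse rejected:", type(exc).__name__)
```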

The two other new additions to the list are insecure deserialization errors, which enable remote code execution on affected platforms, and insufficient logging and monitoring. Both of these new vulnerability categories were added to the list based on feedback from community members who contribute to the OWASP effort.
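
Insecure deserialization is easiest to see with Python’s pickle, where loading attacker-controlled bytes is equivalent to running attacker-controlled code. The toy class below just prints a message on unpickling, but the same hook could run anything; a data-only format such as JSON avoids the problem entirely.

```python
# Deserializing untrusted pickle data executes code; JSON only yields data.
import json
import pickle

class Exploit:
    def __reduce__(self):
        # Whatever is returned here runs during pickle.loads(); a real attack
        # would call os.system or similar instead of print.
        return (print, ("code executed during unpickling!",))

payload = pickle.dumps(Exploit())
pickle.loads(payload)            # side effect fires: deserialization == code execution

# Safer: accept only data, not objects, from untrusted sources.
print(json.loads('{"user": "alice", "role": "admin"}'))
```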

Making way for these new categories were insecure direct object references and missing function-level access control errors, which, along with CSRF, dropped out of OWASP’s top 10 ranking.

The list was compiled from community feedback, from data collected from dozens of organizations that specialize in application security, and from a survey of more than 500 individuals. Data in the report was distilled from vulnerability information gathered from more than 100,000 applications and APIs used by hundreds of organizations.

Like OWASP’s previous vulnerability rankings, the new one — the first major revision to the list in four years — should end up being a vital asset for organizations looking for high-level guidance on prioritizing Web application vulnerabilities. But not everyone is convinced that the updated list necessarily includes the top Web application security concerns.

Jeremiah Grossman, chief of security strategy at SentinelOne, says one problem is that the list focuses less on legacy application concerns and more on what developers of modern applications should be paying attention to. It’s a bit surprising, for instance, that CSRF has been removed from the list, considering how common the vulnerability is in existing legacy environments. In contrast, XXE, one of the flaws on the list, is not very common but is of high severity.

“The change speaks partially to bias in the data and a split between what legacy applications and modern applications tend to be vulnerable to,” Grossman says.  While modern application frameworks tend to have native protections against CSRF, legacy applications do not.

“It’s important to remember that the OWASP top 10 is not an accounting for all the vulnerabilities that might cause an organization to get hacked, but more a list of the most common and risky issues that should be considered. In that way, the list is a great community resource.”

Previously, Grossman has also expressed some skepticism over the influence some security vendors have had in shaping the OWASP list. He has called out how a single vendor with a potentially vested interest has influenced two of the newly listed vulnerability categories in the OWASP list.

Ryan Barnett, principal security researcher at Akamai, expressed similar surprise at the removal of CSRF from the list and lowering the importance of cross-site scripting errors.

“While strides have been made within frameworks to build in protections for these issues, many are not used or are incorrectly applied. Additionally, XSS and CSRF are often linked in attack chains. If you have an XSS flaw on your site, it can circumvent CSRF protections,” he notes.

Another concern is that the data used to justify inclusion of some security vulnerabilities in the list is almost exclusively based on static and dynamic code analysis and penetration testing from vendors, Barnett says.

While the data highlights vulnerability prevalence, it does not offer much perspective on attack likelihood. “I hope in the future that we can get more data from Web application defender organizations such as Web application firewall vendors,” he says. “This data could help to justify including/excluding different items as well as help with rankings.”

Some of the new vulnerability categories in OWASP’s list, such as security misconfigurations, are also a bit too broad and may lead to confusion for organizations in detecting and remediating security issues, says Ilia Kolochenko, CEO of High-Tech Bridge.

But while the ordering and prioritizing of some of the flaws in the list are certainly subjective, the OWASP list does a decent job reflecting the overall state of affairs in Web security, he said. “It’s pretty difficult to make a one-size-fits-all rating for Web vulnerabilities,” he says.

The OWASP top 10 provides a valuable application security framework for companies and organizations. “It reminds, enumerates, and guides through the most common perils and pitfalls related to Web applications,” Kolochenko says.



Article source: https://www.darkreading.com/application-security/new-owasp-top-10-list-includes-three-new-web-vulns/d/d-id/1330479?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Ex-Facebook privacy manager dishes the dirt on your data

Up until September 2017, when Facebook handed Congress details of thousands of propaganda ads placed by a Russian troll farm to sow political strife, CEO Mark Zuckerberg dismissed the idea that fake news on Facebook influenced last year’s election. It’s “a pretty crazy idea,” he scoffed.

Yeah, he would say that, according to former Facebook privacy problem fixer and Operations Manager Sandy Parakilas: because Facebook doesn’t give a flying Farmville fig about protecting users from abuse.

What it cares about is data collection it can sell to advertisers, he said in a scathing opinion piece published by the New York Times on Monday.

What I saw from the inside was a company that prioritized data collection from its users over protecting them from abuse. As the world contemplates what to do about Facebook in the wake of its role in Russia’s election meddling, it must consider this history. Lawmakers shouldn’t allow Facebook to regulate itself. Because it won’t.

The editorial was titled “We Can’t Trust Facebook to Regulate Itself,” and that about sums up Parakilas’ take on the matter: an opinion based on having led Facebook’s efforts to fix privacy problems on its developer platform in 2011 and 2012, leading up to the company’s 2012 initial public offering.

You don’t pay for your Facebook account, at least not in cash, but of course you pay when you interact with it: that’s when you become the product that it sells to marketers. Facebook gets more than 1 billion visitors a day, Parakilas says: no wonder it has blown up into a $500 billion behemoth in the five years since its IPO.

Given that kind of data-fueled, money-making brawn, unless regulators come knocking or angry mob users come bearing torches and furious headlines, Facebook just doesn’t care what happens to our data, he says:

The more data it has on offer, the more value it creates for advertisers. That means it has no incentive to police the collection or use of that data – except when negative press or regulators are involved. Facebook is free to do almost whatever it wants with your personal information, and has no reason to put safeguards in place.

Parakilas points to the golden years of addictive social games that thrived on Facebook’s developer platform: think Farmville and Candy Crush. (Remember all the maddening, non-stop invitations from friends that made you want to throw your laptop into a bubble bath with the power cord attached? Yes, those years).

Users didn’t have to pay for the addictive games – except with their data, of course.

The problem was that there were no protections around that data, Parakilas says. The data flowed through Facebook out into the eager hands of developers, and Facebook didn’t do much to stop any abuse of it after that. Not, that is, until the IPO drew nearer and the media started piping up about how the data was being misused, Parakilas said. That’s when he got tasked with solving the problem.

He found that the issue was, shall we say, not on the front burner. In fact, nobody at Facebook was checking up on those developers at all. Parakilas says that at one point it looked like a developer was using Facebook data to generate profiles of kids, without consent. The app developer’s response: we’re not violating Facebook policies on data use! Parakilas’s discovery: it was an unverifiable claim, because nobody was checking up on developers and how they handled data. “We had no way to confirm whether that was true,” he said.

Once data passed from the platform to a developer, Facebook had no view of the data or control over it. In other cases, developers asked for permission to get user data that their apps obviously didn’t need – such as a social game asking for all of your photos and messages. People rarely read permissions request forms carefully, so they often authorize access to sensitive information without realizing it.

Has Facebook’s attitude to privacy improved since the time that Parakilas worked there and the company went public?

Maybe, but as recently as September 2017, Facebook was hit with a privacy fine because of its missteps.

Spain penalized Facebook to the tune of €1.2m for privacy violations: the Spanish regulator, AEPD, said that the social media giant hadn’t gained adequate user consent for how it collects, stores and uses data for advertising, and found two serious infringements and one very serious infringement.

The total fine of €1.2m was made up of €600,000 for the very serious breach plus two charges of €300,000 for the two lesser infringements.

The AEPD said that Facebook had kept information for more than 17 months after users had closed their accounts, and also said that “the social network uses specifically protected data for advertising, among other purposes, without obtaining users’ express consent as data protection law demands – a serious infringement”.

So, what does all this add up to? Is Facebook as bad as it was in 2011?

Regardless of its progress or lack thereof, is Parakilas right when he says that Facebook can’t be trusted to regulate itself so that Russia doesn’t use it as a springboard to spread fake news?

What do you think?


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/Z3q0IaZWOSA/

US Senate takes aim at “warrantless surveillance”

The US Congress still hasn’t passed any legislation to rein in what critics call “warrantless surveillance” of US citizens by the nation’s multiple spy agencies. But there are now five proposals on the table aimed in that direction.

The latest, introduced last week, is the Senate version of the USA (Uniting and Strengthening American) Liberty Act of 2017, which at least some privacy advocates say is a marked improvement over a House bill of the same name that was introduced in early October 2017.

Both bills would put restrictions on the most contentious provision – Section 702 – of Title VII of the Foreign Intelligence Surveillance Act (FISA), which is up for renewal, reform or expiration by the end of the year.

Section 702, according to its numerous critics, has allowed government intelligence agencies – the NSA, CIA, FBI and the National Counterterrorism Center – to collect and sift through vast troves of information on an unknown number (intelligence agencies won’t say how many) of American citizens, all because it is “incidentally” collected during surveillance of foreign targets. And they have been able to do it without probable cause, a warrant or any evidence of criminal activity.

The House bill, which has bipartisan sponsorship, would require that law-enforcement agencies get a warrant before they can get an American citizen’s emails or phone calls recorded by the NSA.

But – there is always a but – it would also grant a number of exceptions to that requirement, including whether a person, “is the subject of an order or emergency authorization” and if “the Attorney General has a reasonable belief that the life or safety of a person is threatened…”

Which, as Rainey Reitman of the Electronic Frontier Foundation (EFF) put it, improves things a little… but:

…doesn’t effectively end the practice of ‘backdoor searching,’ when government agents – including domestic law enforcement not working on issues of national security – search through the NSA-gathered communications of Americans without any form of warrant from a judge.

She had other complaints – a lack of transparency and oversight and, “most importantly, the bill won’t curtail the NSA’s practices of collecting data on innocent people.”

Elizabeth Goitein, co-director of the Liberty and National Security Program at the Brennan Center for Justice at NYU School of Law, agreed, arguing in a post in US News that the exceptions to the warrant requirement could “swallow the rule.”

She said it could make things even worse, by introducing, “a new and dangerous principle into the law: the notion that Americans have lesser rights when the government is acting with a ‘foreign intelligence’ purpose.”

And what about the Senate bill, which also has bipartisan sponsors?

EFF did not respond to calls or emails, and hadn’t posted comments about it on its website as of Monday, but Neema Singh Guliani, legislative counsel for the American Civil Liberties Union (ACLU), said in a statement that it is much better:

This bill is a dramatic improvement over the House version of this legislation. For years, the NSA, CIA, and FBI have engaged in illegal ‘backdoor’ searches, deliberately looking for and accessing Americans’ private information collected under Section 702 without a warrant…

This bill would help to rein in these illegal searches by requiring the government to get a warrant when they deliberately search for and then subsequently seek to view Americans’ private communications.

Guliani said the Senate bill doesn’t address all of what she said were “constitutional concerns” with Section 702, but called it, “an important step forward from the dismal status quo.”

What are the Senate bill’s chances? Not good, according to the Washington Post, which predicted it has, “little chance of passage.”

And if an earlier vote by the Senate Intelligence Committee is any guide, the Post is probably correct. That committee voted 12-3 in late October 2017 to renew FISA for eight years, with only a minor tweak.

Still, there are legislative moves besides the USA Liberty Act that would curb what are seen as the most egregious elements of Section 702.

A bill filed in October 2017 by a group of 14 senators, titled the USA Rights Act, is the most aggressive – it would impose virtually all the restrictions sought by privacy advocates, but intelligence officials contend they would give terrorists and criminals the cover they want.

Those moves may be because in recent years members of Congress have found their own communications made public, thanks to the Section 702 dragnet. Frank Miniter, writing in Forbes, noted that elected officials now have “skin in the game.”

Recent “unmaskings” show that even a congressman’s conversations with a foreign official might go public with their names un-redacted. Then, even if the member of Congress didn’t do anything wrong, what they said and whom they spoke with could quickly be taken out of context by the media outlets that root for the opposing team.

The deadline for a vote on any of these options is the end of the year – less than six weeks away.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/WL6OzXchZDk/