STE WILLIAMS

How to protect your Facebook data [UPDATED]

How do you protect your data following the ongoing revelations that Facebook can’t, or won’t?

Well, first you do fill-in-the-blank, and then you wait an hour, because Facebook’s sure to squirt out another data privacy overhaul in this Cambridge Analytica/CubeYou, data-spillage-induced fix-it era.

Not to complain about the data privacy overhauls, mind you. It’s just hard to keep up.

Here’s the latest box of muffins, fresh as of mid-April. Note that much of this is copy-pasted from our 20 March set of protect-your-data muffins, at least one of which was rendered stale a week later.

One of the changes was to do away with the Apps Others Use privacy option, which formerly allowed users to control how they share their data with third-party apps.

Up until Facebook’s privacy settings update late last month, Apps Others Use was located under the Apps Settings page and gave you a slew of data categories to control what types of your data people could bring with them when they used apps, games and websites. Here’s what it looked like:

Now, it’s just a grayed-out option:

Facebook ditched Apps Others Use because it was made redundant years ago, after the platform stopped people from sharing friends’ information with developers, a spokesperson told Wired:

These controls were built before we made significant changes to how developers build apps on Facebook. At the time, the Apps Others Use functionality allowed people to control what information could be shared to developers. We changed our systems years ago so that people could not share friends’ information with developers unless each friend also had explicitly granted permission to the developer.

In other words, the privacy control was like a light switch that wasn’t connected to a light: toggling it didn’t do anything at all.

As you can see in the image above, Facebook’s data oversharing – which enabled quiz developer Aleksandr Kogan to harvest data from tens of millions of Facebook users and illicitly send it on to the political targeted-marketing data firm Cambridge Analytica – was thorough. Apps your friends used could get at your birthdate, whether or not you were online, your religious and political views, your interests, posts on your Timeline, and much more.

No wonder that Cambridge Analytica could get at so much data – we’re talking about 87 million users, or what Facebook called “most people on Facebook.” That includes the data of CEO Mark Zuckerberg himself, he revealed in testimony before the House and Senate last week.

Note also that Facebook didn’t offer checkboxes to ensure that we could prevent apps from getting at our gender, friends list, or public information: to do so, you’d have to turn off all apps on the platform.

At any rate, back to the muffins. These should all be fresh, but if Facebook bakes some new privacy changes, we’ll be heading back into the kitchen to keep you updated.

Check your privacy settings

We’ve written about this quite a bit. Here’s a good guide on how to check your Facebook settings to make sure your posts aren’t searchable, for starters.

That post also includes instructions on how to check how others view you on Facebook, how to limit the audience on past Facebook posts, and how to lock down the privacy on future posts.

The security and privacy settings changes Facebook promised at the end of March fall into these three buckets:

  • A simpler, centralized settings menu. Facebook redesigned the settings menu on mobile devices “from top to bottom” to make things easier to find. No more hunting through nearly 20 different screens: now, the settings will be accessible from a single place. Facebook also got rid of outdated settings to make it clear what information can and can’t be shared with apps. The new version not only regroups the controls but also adds descriptions on what each involves.
  • A new privacy shortcuts menu. The dashboard will bring together into a central spot what Facebook considers to be the most critical controls: for example, the two-factor authentication (2FA) control; control over personal information so you can see, and delete, posts; the control for ad preferences; and the control over who’s allowed to see your posts and profile information.
  • Revised data download and edit tools. There will be a new page, Access Your Information, where you can see, and delete, what data Facebook has on you. That includes posts, reactions and comments, and whatever you’ve searched for. You’ll also be able to download specific categories of data, including photos, from a selected time range, rather than going after a single, massive file that could take hours to download.

Not all these changes have taken place yet so it’s a good idea to keep checking in on the settings periodically to see if anything has changed.

Audit your apps

You should always be careful about which Facebook apps you allow to connect with your account, as they can collect varying levels of information about you.

People still get surprised to see what they’ve opted into sharing with various apps over the years, so again, it’s smart to audit apps regularly.

To audit which apps are doing what:

1. On Facebook in your browser, drop down the arrow at the top right of your screen and click Settings. Then click on the Apps and Websites tab. This takes you to the Apps and Websites settings page, which lists the apps connected to your account.

2. Check out the permissions you granted to each app to see what information you’re sharing and remove any that you no longer use or aren’t sure what they are for.

And finally, if you’re ready to disengage entirely, there’s the cut-it-out-completely option:

Delete your profile.

This is a lot more serious than simply deactivating your profile. When you deactivate, Facebook still has all your data. To truly remove your data from Facebook’s sweaty grip, deletion is the way to go.

But stop: don’t delete until you’ve downloaded your data first! Here’s how:

1. On Facebook in your browser, drop down the arrow at the top right of your screen and click Settings.

2. At the bottom of General Account Settings, click Download a copy of your Facebook data.

3. Choose Start My Archive.

Be careful about where and how you keep that file. It does, after all, have all the personal information you’re trying to keep safe in the first place.

You ready?

Have you downloaded the data? Have you encrypted it or otherwise stored it somewhere safe? OK, take a deep breath. Here comes the doomsday button.

Go to Delete My Account.

Blow the platform a kiss, and away you go.

Now, are you truly gone forever? It’s worth asking, given all the data Facebook collects on people who never even signed up for the service – the data that forms what’s known as shadow profiles.

Facebook has also said that it keeps “backup copies” of deleted accounts for a “reasonable period of time,” which can be as long as three months.

CBS News notes that Facebook also may retain copies of “some material” from deleted accounts, but that it anonymizes the data.

Facebook also retains log data – including when users log in, click on a Facebook group or post a comment – forever. That data is also anonymized.

One type of data about you that you won’t be able to delete: anything posted by your friends and family. As long as your BFFs keep their profiles active, whatever they’ve shared about you won’t be deleted along with your own account. After all, it’s not your content. That includes, for example, the messages you’ve sent to others via the platform.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/8ZRlQH0SWxg/

UK spy agency warns Brit telcos to flee from ZTE gear

GCHQ’s cyber security advice group has formally warned of the risk of using ZTE equipment and services for the UK’s telco infrastructure.

The National Cyber Security Centre, the cyber part of the UK’s nerve centre, founded in 2016, has written to UK telecoms companies warning that using gear from the Chinese firm “would present risk to UK national security that could not be mitigated effectively or practicably”.

In a statement, the spooky agency confirmed the veracity of an FT report, but declined to elaborate on what specific vulnerability or threat had prompted the assessment:

“NCSC assess[es] that the national security risks arising from the use of ZTE equipment or services within the context of the existing UK telecommunications infrastructure cannot be mitigated,” the agency told us in a statement.

Both Huawei and ZTE have been singled out by US spooks and Congress-critters as posing a potential threat. Unlike privately owned Huawei, which has its roots in the bustling trading hub of Shenzhen, ZTE is a state-owned enterprise, and that’s something the NCSC has pointed out.

However, Huawei worked hard to address concerns, establishing the Huawei Cyber Security Evaluation Centre, nicknamed “the Cell”, in Banbury, close to GCHQ. This allowed spooks to examine Huawei’s wares, including its source code. After initial issues about oversight, officials declared the Cell a success.

“HCSEC fulfilled its obligations in respect of the provision of assurance that any risks to UK national security from Huawei’s involvement in the UK’s critical networks have been sufficiently mitigated,” the third annual report by the centre’s Oversight Board noted last year. HCSEC demands the full source code to Huawei’s products so it can rebuild the binaries and replicate their functionality. This isn’t always easy, the report noted, due to “complex and subtle technical issues”.

No backdoor has ever been found in any Huawei phone, but in 2012 one was found in a ZTE phone.

Last March we exclusively revealed that ZTE’s Tier 2 visa licence had been suspended by the Home Office. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/04/16/zte_gchq_warning/

Large Majority of Businesses Store Sensitive Data in Cloud Despite Lack of Trust

Researchers report 97% of survey respondents use some type of cloud service but continue to navigate issues around visibility and control.

RSA CONFERENCE 2018 – San Francisco – Businesses relying on public cloud storage aren’t entirely sure their data will be safe there, researchers at McAfee report. Eighty-three percent of companies surveyed store sensitive data in the public cloud, but only 69% trust the cloud will keep their information secure.

Results of the survey, which polled 1,400 IT professionals on cloud adoption and security, showed 97% of respondents are using some type of cloud service but continue to navigate issues around visibility and control. Some are moving to the cloud slowly, held back by poor visibility; others are moving ahead despite the risk of security issues.

Personal customer information is the most common form of cloud-based sensitive data, 61% of organizations report. About 40% use the cloud to store at least one of the following data types: internal documentation, payment card data, personal staff information or government identification. About 30% keep intellectual property, healthcare records, competitive intelligence, and network pass cards in the public cloud.

Survey results show once it’s in the cloud, this information is at risk. One in four organizations using infrastructure-as-a-service (IaaS) or software-as-a-service (SaaS) has had their data stolen. One in five has been hit with an advanced attack against their public cloud infrastructure.

McAfee researchers discovered an overall decline in the “cloud-first” mentality, with only 65% of respondents reporting a cloud-first strategy compared with 82% one year ago. This drop can be attributed to two factors, says Vittorio Viarengo, vice president of marketing for McAfee’s Cloud Business Unit. The first is a growing awareness of the responsibility that comes with storing data in the public cloud.

“Customers are realizing they’re still on the hook to provide security for some of the things that happen in the cloud,” he explains. They’re learning, for example, that service providers don’t ensure their logins are properly set up, and about the security risks of remote employees using cloud services. They’re learning what they’re responsible for when they use IaaS platforms versus SaaS.

The second is an acceptance that they don’t immediately need to move everything to the public cloud, an option especially appealing to institutions like the government, which is one of many industries that’s still skeptical of the cloud, says Viarengo.

“They are realizing the hybrid cloud and private cloud they’ve been building for years, are going to be around for a long time,” he says. If an organization has invested twenty years in on-prem processes, it might be easier to keep running them on-prem than move them into the cloud.

The combination of public and private cloud is the most common architecture, with 59% of respondents stating they use hybrid cloud. The larger the business, the more likely it is to go hybrid: in organizations with up to 1,000 employees, 54% relied on hybrid cloud; in enterprises with more than 5,000 employees, 65% use it.

As the cloud becomes more popular, security teams should be looking outside their organization’s perimeter and rethinking their security models. Tasks IT used to do will be replaced as cloud continues to grow and businesses lose control over the networks, devices, and applications storing their data. Cloud-focused IT teams don’t have the same visibility as they did with on-prem environments.

“User preference is in the cloud,” Viarengo points out. “And in the cloud, you don’t own anything but you’re still on the hook for security … [organizations] need to ascertain visibility and control over enterprise data when they don’t own the back end.”

Companies leading the charge in cloud adoption are most concerned about visibility, which lets them adopt cloud sooner, and about improved controls. Those who prioritize visibility are more likely to have a relaxed approach to shadow IT, researchers found. They view it not as something to shut down, but as a sign of how the workplace will operate in the future.

Viarengo emphasizes three steps for companies to take when moving data and processes to the cloud. The first of these is to classify information. “As data is uploaded or created in the cloud, you need a mechanism to know what’s inside it,” he says, noting that whether the cloud holds credit card information, corporate secrets, patent data, or healthcare data, you’ll need to know how to secure it.

Next up: define the policy, and what’s acceptable and unacceptable as far as your company is concerned. Is it ok to share data that has confidential information? If so, with whom can that information be shared? Can people access confidential data from their personal devices?

His third recommendation is to “track everything that goes on.” Know which users can access which applications, and from which locations and devices they access them. You’ll be able to establish patterns for each user and, when something happens, you can go back and conduct forensics on the information you collected. If someone normally accesses data from Palo Alto, and ten minutes later they access the same data from China, it’s a red flag.
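Viarengo’s “track everything” step can be illustrated with a toy version of the geographic red-flag check he describes. This is a minimal sketch, not any vendor’s implementation; the `AccessEvent` fields and the four-hour threshold are illustrative assumptions.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Hypothetical access-log entry; the field names are illustrative only.
@dataclass
class AccessEvent:
    user: str
    location: str       # coarse geolocation of the source IP
    timestamp: datetime

def flag_impossible_travel(events, min_gap=timedelta(hours=4)):
    """Flag consecutive accesses by the same user from different locations
    that are closer together in time than any plausible journey -- a
    simplified "impossible travel" heuristic."""
    flags = []
    last_seen = {}
    for ev in sorted(events, key=lambda e: e.timestamp):
        prev = last_seen.get(ev.user)
        if (prev and prev.location != ev.location
                and ev.timestamp - prev.timestamp < min_gap):
            flags.append((ev.user, prev.location, ev.location))
        last_seen[ev.user] = ev
    return flags
```

With this sketch, an access from Palo Alto followed ten minutes later by one from China would produce a flag worth investigating, exactly the pattern Viarengo describes.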

Related Content:

Interop ITX 2018

Join Dark Reading LIVE for two cybersecurity summits at Interop ITX. Learn from the industry’s most knowledgeable IT security experts. Check out the security track here. Register with Promo Code DR200 and save $200.

Kelly Sheridan is the Staff Editor at Dark Reading, where she focuses on cybersecurity news and analysis. She is a business technology journalist who previously reported for InformationWeek, where she covered Microsoft, and Insurance Technology, where she covered financial … View Full Bio

Article source: https://www.darkreading.com/cloud/large-majority-of-businesses-store-sensitive-data-in-cloud-despite-lack-of-trust/d/d-id/1331538?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Symantec Now Offers Threat Detection Tools Used by its Researchers

TAA now is part of Symantec’s Integrated Cyber Defense Platform.

RSA CONFERENCE 2018 – San Francisco – Symantec today rolled out new threat detection features that are based on tools its research team used to discover the recent Dragonfly 2.0 attacks on energy organizations.

The new Targeted Attack Analytics (TAA) was developed with the help of Symantec researchers as well as its machine learning experts, the company said. TAA is now built into Symantec’s existing Integrated Cyber Defense Platform.

“Up until now, we’ve had the telemetry and data necessary to uncover the warning signs of dangerous targeted attacks but the industry has lacked the technology to analyze and code the data quickly,” said Eric Chien, technical director of Symantec Security and Response and Symantec Fellow. “With TAA, we’re taking the intelligence generated from our leading research teams and uniting it with the power of advanced machine learning to help customers automatically identify these dangerous threats themselves and take action.”

Read more here

Dark Reading’s Quick Hits delivers a brief synopsis and summary of the significance of breaking news events. For more information from the original source of the news item, please follow the link provided in this article. View Full Bio

Article source: https://www.darkreading.com/threat-intelligence/symantec-now-offers-threat-detection-tools-used-by-its-researchers-/d/d-id/1331543?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

How GDPR Forces Marketers to Rethink Data & Security

The European regulation is making marketing technology companies re-examine their security, and that’s a good thing.

Multinational marketers are closing in on the May 25 date by which they must comply with the EU’s General Data Protection Regulation (GDPR). As the date looms, marketers are tying up loose ends to ensure they meet the deadline. However, most view the GDPR — a regulation that governs the way in which consumer data is collected as well as how it’s used and stored — as a major challenge and remain uncertain how much their data policies will change. Scrambling to meet the deadline, companies are in various states of preparedness.

The GDPR will offer consumers in the EU more control over their personal data and outlines requirements for data collection, storage, and use. It will also impose potentially steep fines on companies with poor data-handling practices and those that experience data breaches in which they are found at fault. While the regulations are limited to the personal data of consumers living in the EU, they apply to any company handling, transmitting, or storing that data, whether it has a physical location in the EU or not. This includes marketing technology (martech) companies that process data for and receive personal data from their customers.

What Is Personal Data?
Many martech companies don’t collect personally identifiable information, which means the data does not directly identify an individual. Generally, the consumer is assigned a cookie with some random, unique value to tie certain website events together. With the GDPR, the notion of personal data is extended to include online identifiers such as IP addresses and cookie values.

These identifiers do not identify an individual on their own, but if you combine them with additional information, you can identify a person. So it becomes critical that we understand the nature of the additional information and process it in a secure and compliant manner. Securing data may involve what the GDPR refers to as pseudonymization, where the data is processed so that it cannot be attributed to a specific person without additional information. Hashing and encryption are examples of pseudonymization.
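As a concrete illustration of pseudonymization by hashing, the sketch below replaces a direct identifier with a keyed (HMAC) hash. This is an illustrative example, not a GDPR compliance recipe; the function name and the choice of SHA-256 are assumptions, and the secret key would have to be stored separately from the pseudonymized data for the protection to mean anything.

```python
import hmac
import hashlib

def pseudonymize(identifier: str, secret_key: bytes) -> str:
    """Replace a direct identifier (e.g. an IP address or cookie value)
    with a keyed hash. The same input always maps to the same token, so
    records can still be joined, but without the key the original value
    cannot be recovered or trivially brute-forced the way a plain,
    unsalted hash of an IP address could be."""
    return hmac.new(secret_key, identifier.encode("utf-8"),
                    hashlib.sha256).hexdigest()
```

A keyed hash rather than a bare `sha256(ip)` matters here: the IPv4 address space is small enough to hash exhaustively, so an unkeyed hash of an IP is reversible in practice.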

What About Personal Data You Didn’t Ask For?
Martech companies need to think through and map out how all their data is collected and what is sent to them from their partners and customers. I recommend answering the following questions:

  1. What data are you collecting, and what data are your customers sending you?
  2. As a data processor, do you really need the data customers are sending you? If you do not really need it, do not accept the data. Period.

Furthermore, I recommend that customers perform pseudonymization on any personal data before the data processor collects it. The less personal data martech companies handle, the better.

The Right to Forget
Within the martech sphere, companies will either obtain consent or have a legitimate interest for processing personal data, and they need to comply with requirements such as data portability and the right to erasure, commonly referred to as the “right to forget.” The right to forget revolves around the concept that consumers have a right to demand the deletion of their personal data from companies that hold it, even if they previously gave permission for its collection.

Brands collect and store consumers’ first-party data as a matter of course — that is, any data consumers offer when they buy something or conduct transactions online. If you’re shopping on Amazon, banking with Wells Fargo, buying tickets with Ticketmaster, or booking rides on Uber, you have offered your data. Besides the brands, the requirements apply to their third-party vendors, including their data processors. Martech companies may also have access to all or part of this data, or process some or all of it for the brands, including pseudonymized personal data.

The right to forget can be technically challenging to solve for, especially in martech, where millions of records are processed daily.

If you apply pseudonymization to personal data, it becomes very important to store this data in one place (database normalization). Any reference to the personal data will come via a foreign key or token. When martech companies receive a request to delete personal data, it is a matter of updating the record with some value that does not mean anything (e.g., “unknown”).

The idea here is that companies do not physically delete everything associated with the customer but, rather, change the pseudonymized value and leave everything else in place because retailers have legitimate business interests in the data. The net effect is that the retailer will have its metrics available — for example, the number of sales for a given marketing channel. If the pseudonymized data is spread to multiple data stores and systems, it becomes very hard to control and satisfy the right-to-forget principle.
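The normalize-then-overwrite approach described above can be sketched with a toy database: personal data lives in one table, everything else references it by key, and a deletion request blanks the one record. The schema and the “unknown” placeholder are illustrative assumptions, not a prescription.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- One place for personal data (normalization); everything else
    -- references it via a foreign key.
    CREATE TABLE person (id INTEGER PRIMARY KEY, email TEXT);
    CREATE TABLE sale (id INTEGER PRIMARY KEY,
                       person_id INTEGER REFERENCES person(id),
                       channel TEXT, amount REAL);
""")
conn.execute("INSERT INTO person (id, email) VALUES (1, 'alice@example.com')")
conn.execute("INSERT INTO sale (person_id, channel, amount) "
             "VALUES (1, 'email', 25.0)")

def forget(conn, person_id):
    """Honor a right-to-forget request by overwriting the single
    personal-data record; dependent rows keep their foreign key, so
    aggregate metrics (e.g. sales per channel) survive intact."""
    conn.execute("UPDATE person SET email = 'unknown' WHERE id = ?",
                 (person_id,))
    conn.commit()
```

After calling `forget(conn, 1)`, a query for sales per marketing channel still returns the same totals, but nothing in the database ties them back to the individual.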

The GDPR Effect
With the GDPR in place, the martech sector must look at privacy issues as part of the requirements for building its systems — these issues cannot be afterthoughts. As software is built, the privacy piece must be part of it from the start — a “privacy by design” approach. The sector also needs to start treating IP addresses, device identifiers, and other identifiers as personal data. Just because these don’t identify a person by themselves, the combination with additional information could.

It is also important that martech companies train their teams and make it clear to customers that they will not accept any personal data they do not need in order to provide their services. On the training front, teams need to make this a requirement at the beginning of the process, when they integrate and onboard customers. If the customer does need to send personal data, that data must be pseudonymized.

Overall, the GDPR forces companies in the martech sector to rethink their systems and how they handle data. We want to be transparent and build systems that protect consumers’ data according to what they consented to. In an era when security breaches are pervasive, the GDPR is something we need. 


Roger Kjensrud is Co-Founder and Chief Technology Officer at Impact, where he’s tasked with architecting and enhancing the company’s natively integrated marketing technology platform for addressing fraud detection and prevention; marketing intelligence; and managing and … View Full Bio

Article source: https://www.darkreading.com/risk/compliance/how-gdpr-forces-marketers-to-rethink-data-and-security/a/d-id/1331475?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

INsecurity Conference Seeks Security Pros to Speak on Best Practices

Dark Reading’s second annual data defense conference will be held Oct. 23-25 in Chicago; call for speakers is issued.

INsecurity, Dark Reading’s annual conference on data defense, this week is issuing a call for security practitioners to speak on key issues and best practices for protecting enterprise data.

The call for speakers invites security professionals from any enterprise to present on leading-edge methods, processes, and technologies they use to defend critical data. The talks chosen from the call for speakers will be presented at INsecurity in Chicago, Oct. 23-25.

INsecurity, which was held for the first time in Washington, D.C., last November, is a conference that allows IT and security pros to share information about what works – and what doesn’t – in cybersecurity defense. Most of the speakers in the sessions are themselves security practitioners, and some sessions are protected by the Chatham House Rule, which allows attendees to share the information they hear but not the identities of the speakers.

The INsecurity call for speakers seeks proposals on a wide variety of data defense practices, including cloud security, application security, incident response, identity management, mobile security, intelligence analysis, malware defense, risk management, compliance practices, endpoint security, and other processes or disciplines commonly used for enterprise defense.

Proposals will be chosen on the basis of their usefulness to an audience of IT and security professionals and the ability of the speaker to show practical experience on the topic. No vendor product presentations will be accepted in this call for speakers, though there will be vendor presentations in the exhibit hall.

Additional information about the INsecurity program, as well as registration for the conference, Oct. 23-25 at the Sheraton Chicago, will be posted in the near future.


Tim Wilson is Editor in Chief and co-founder of Dark Reading.com, UBM Tech’s online community for information security professionals. He is responsible for managing the site, assigning and editing content, and writing breaking news stories. Wilson has been recognized as one … View Full Bio

Article source: https://www.darkreading.com/attacks-breaches/insecurity-conference-seeks-security-pros-to-speak-on-best-practices/d/d-id/1331545?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Facial recognition cameras on lamp posts to be tested in Singapore

Singapore last year announced that it wants to convert every single lamp post in the country – there are about 110,000 in the island state – into an interconnected network of wireless sensors.

Now, it looks like the plan is to put surveillance cameras equipped with facial recognition on top of those posts, where they can pick out faces and identify pedestrians, bicyclists, motorcyclists or motorists as they pass by, Reuters reports.

A pilot of the project is scheduled to begin next year, as part of Singapore’s Smart Nation project. GovTech, the government agency in charge of the “Lamppost-as-a-Platform” (LaaP) project, has set a deadline of May for companies to register their interest in providing the network technology.

A GovTech spokesman sent a statement to Reuters saying that the plan is to use facial recognition to analyze crowds and to investigate terrorists:

These capabilities may be used for performing crowd analytics and supporting follow-up investigation in [the] event of a terror incident.

Use of the technology concerns privacy advocates.

Adam Schwartz, senior staff attorney for the Electronic Frontier Foundation (EFF), responded to Reuters’ request for comment by urging Singapore and other governments not to adopt facial recognition surveillance technology. His concern is mirrored by rights advocates in other surveillance-happy countries: namely, that the technology will be used against political opponents, peaceful protesters, journalists or activists.

Facial recognition technology typically allows police to match people picked up by surveillance cameras against images in databases. Countries including the US and the UK have been building huge image databases.

According to a Government Accountability Office (GAO) report from August 2016, the FBI’s massive face recognition database has 30m likenesses. Last year, during a scathing US House oversight committee hearing on the FBI’s use of the technology, it emerged that 80% of the people in the FBI database don’t have any sort of arrest record.

Likewise, in the UK, the Home Office’s database of 19m mugshots contains hundreds of thousands of facial images that belong to individuals who’ve never been charged with, let alone convicted of, an offense.

Asian countries are likewise increasingly adopting facial recognition technologies: China’s police are using it to identify and publicly shame jaywalkers, to wipe out toilet paper thieves, and to pick out suspects as they travel during the Lunar New Year.

Singapore has pledged to be sensitive to privacy concerns with the LaaP rollout. Reuters quoted a GovTech spokesperson:

The need to protect personal data and preserve privacy are key considerations in the technical implementation of the project.

Prime Minister Lee Hsien Loong said that the Smart Nation project is aimed at improving people’s lives and that he doesn’t want it done in a way “which is overbearing, which is intrusive, which is unethical”.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/NQfJ_yJD3UA/

Tracking protection in Firefox for iOS now on by default – why this matters

Mozilla has announced that the tracking protection in Firefox 11 for iOS is now turned on by default.

It might sound like a mere tweak, but Firefox 11 for iOS becomes the first version of the browser to do this without the user having to consciously turn it on.

The obvious question, then, is how Firefox 11 for iOS now differs from simply launching a privacy tab or using something like Mozilla’s privacy-oriented Focus app, both of which integrate the same anti-tracking technology.

The short answer is it doesn’t – they both block in exactly the same way. However, privacy mode and Focus also keep no web history, cache, search history, passwords, or form data.

The same protection is available in the desktop versions without launching privacy mode by enabling the Never remember history setting (Tools > Privacy & Security).

The other claimed benefit of tracking protection is increased page loading speed, which is why users might want to turn it on in other versions.

Tracking protection is implemented across Firefox versions as follows:

  • Firefox iOS: turned on by default
  • Firefox Focus: turned on by default
  • Firefox Android: enabled via private tab
  • Firefox macOS: enabled via Tools > Privacy & Security (or private tab)
  • Firefox Windows: enabled via Tools > Privacy & Security (or private tab)
  • Firefox Linux: enabled via Tools > Privacy & Security (or private tab)

It’s also worth mentioning that Firefox iOS’s tracking protection offers a “disable for this site” button in case users find that blocking trackers causes problems.

The app also counts blocked trackers, in the style of ad blockers, giving users visibility into the trackers that publishers and advertisers use to build profiles of users’ browsing habits.
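Conceptually, this kind of tracking protection is a blocklist match: each outgoing request’s hostname is checked against a list of known tracker domains (Firefox uses the Disconnect list), and matching requests are blocked and counted. Here is a minimal sketch in Python, using made-up stand-in domains rather than the real list:

```python
from urllib.parse import urlparse

# Stand-in for the real blocklist (Firefox uses the Disconnect list).
TRACKER_DOMAINS = {"tracker.example", "ads.example"}

def is_tracker(url):
    """True if the URL's host is a listed tracker domain or a subdomain of one."""
    host = urlparse(url).hostname or ""
    return any(host == d or host.endswith("." + d) for d in TRACKER_DOMAINS)

def filter_requests(urls):
    """Split a page's requests into allowed URLs plus a count of blocked trackers."""
    allowed = [u for u in urls if not is_tracker(u)]
    blocked = len(urls) - len(allowed)
    return allowed, blocked
```

Feeding it a page request for `https://news.example/page` alongside `https://pixel.tracker.example/t.gif` would let the first through and count the second as blocked, which is essentially what the in-app tracker counter surfaces.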

Ultimately, the point of enabling tracking protection by default is that every user gets it regardless of whether they know or remember to turn something on, which is why all versions will enable it by default in time.

Interestingly, an external influence (and one of the reasons iOS is leading the Firefox charge) is Apple’s policy of promoting privacy even when that makes life harder for advertisers.

As Mozilla made clear when it launched tracking protection in Firefox iOS last September:

We’re always looking to bring the latest features to our users, and we’re finally able to deliver it to Firefox for iOS thanks to changes by Apple to enable the option for 3rd party browsers.

Between them, what Apple, Mozilla and a handful of others (but perhaps not Google) are proposing is a big change from the days when browsers were simple windows that opened onto whatever the web wanted to throw at users.

As the world has seen from Facebook’s travails, this model is coming under some pressure as users wise up to the way they are constantly being watched.

The browser is a long way from becoming an impervious shutter against surveillance, but tracking protection by default is certainly a good start.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/JcFXUAa93FA/

Cisco backs test to help classical crypto outlive quantum computers

Cisco and quantum security outfit Isara reckon they’ve got at least as far as alpha stage on one problem of the future: securing public key certificates against quantum computers.

“Quantum computers will break cryptography” is a popular mass-media trope, but the big brains of crypto have been aware of the risk for some time, and academics have long pondered quantum-safe crypto schemes.

Deployments are less common at this stage, which is why the Cisco-Isara PQPKI test caught Vulture South’s attention.

The PQPKI test acts as a TLS 1.2 server whose authentication certificates are signed with a post-quantum scheme, offered alongside the conventional cipher suites.


As the partners explained at the test site, America’s National Institute of Standards and Technology has a post-quantum crypto project with around 70 submissions. However, “Most of these schemes have significantly larger public key and/or signature sizes than the ones used today. There are concerns about the effect their size and processing cost would have on technologies using X.509 certificates today, like TLS and IKEv2”.

The PQPKI test has adopted a hybrid approach to the problem, allowing certificates to be tested using post-quantum schemes if machines support them, but falling back to traditional certificate checks if not.

A hybrid scheme would also save certificate authorities and users from having to run duplicate systems, Isara explained.
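The fallback logic described above can be sketched as follows; the certificate fields and verification helpers here are hypothetical stand-ins for illustration, not Cisco’s or Isara’s actual API. In a hybrid certificate, the classical signature must always verify, and the post-quantum signature is additionally checked when both sides support it:

```python
# Hypothetical stand-ins for real signature checks (illustrative only).
def classical_verify(cert):
    return cert.get("rsa_sig_ok", False)

def pq_verify(cert):
    return cert.get("lms_sig_ok", False)

def verify_hybrid(cert, peer_supports_pq):
    """Classical check always runs; the post-quantum check is added
    when the peer supports it and the cert carries a PQ signature."""
    if not classical_verify(cert):
        return False
    if peer_supports_pq and "lms_sig_ok" in cert:
        return pq_verify(cert)
    return True  # legacy peer: the classical check alone suffices
```

The design point is the graceful degradation: a quantum-aware peer gets the stronger guarantee, while a legacy peer still completes a valid (classical) handshake against the same certificate.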

Cisco’s Panos Kampanakis said: “Once the quantum-safe algorithms are standardised, we may have a very short time frame in order to migrate our systems.”

Isara added that the test server used “Leighton Micali Scheme (LMS) stateful hash-based digital signatures” (described at the International Association for Cryptologic Research in this paper, co-authored by Isara’s Edward Eaton).
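LMS builds a many-time scheme out of hash-based one-time signatures arranged in a Merkle tree; the “stateful” part means a private-key leaf must never be reused. The one-time building block can be illustrated with a Lamport signature, a deliberately simplified sketch of the idea rather than LMS itself:

```python
import hashlib
import secrets

def H(b):
    return hashlib.sha256(b).digest()

def msg_bits(msg):
    """The 256 bits of the message digest, most significant first."""
    d = H(msg)
    return [(d[i // 8] >> (7 - i % 8)) & 1 for i in range(256)]

def keygen():
    # Two random secrets per digest bit; the public key is their hashes.
    sk = [[secrets.token_bytes(32), secrets.token_bytes(32)] for _ in range(256)]
    pk = [[H(pair[0]), H(pair[1])] for pair in sk]
    return sk, pk

def sign(msg, sk):
    # Reveal one secret per digest bit -- which is why the key is strictly one-time.
    return [sk[i][b] for i, b in enumerate(msg_bits(msg))]

def verify(msg, sig, pk):
    return all(H(sig[i]) == pk[i][b] for i, b in enumerate(msg_bits(msg)))
```

Security rests only on the hash function, which is why such schemes are considered quantum-safe; the trade-off is the bulky signatures and keys the NIST quote above worries about.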

Another scheme, SPHINCS+, is planned for a second phase of the test. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/04/16/post_quantum_pki_test/

Australian Feds cuff woman who used BTC to buy drugs on dark web

Australia’s Border Force (ABF) has warned that “people shouldn’t assume the dark web is invisible to Australian agencies” after cuffing a woman who bought illicit drugs using Bitcoin and had them shipped from the UK to Australia.

The ABF chose Friday the 13th of April to announce that a woman had run out of luck after she “used a dark web portal and crypto-currency Bitcoin to order, pay and organise multiple shipments of illicit drugs from United Kingdom.”

“Through close collaboration with our law enforcement partners we are able to detect imports purchased through these sites,” said ABF regional commander for Queensland Terry Price.

The accused allegedly imported MDMA (aka ecstasy), plus the opioids oxycodone and fentanyl. The latter is between 50 and 100 times more potent than morphine, making it extremely difficult for recreational users to judge a safe dose.


After detecting the transactions, the ABF called in the Australian Federal Police (AFP), which investigated and later executed search warrants.

The AFP has previously warned that it can detect criminal activity on the dark web, boasting in 2017 that arrests followed an online investigation. The ABF did likewise in September 2016.

Detecting the use of cryptocurrency suggests the two agencies’ capabilities have evolved further.

It is unclear if the collaboration Price referred to crossed national borders, but as Australia’s law enforcement and intelligence communities maintain ties with sibling agencies, it seems likely that UK authorities played some role in this incident. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/04/16/australian_border_force_reveals_dark_web_btc_buy_survelliance_capability/