STE WILLIAMS

Snapchat takes a swipe at fake news

Here’s the problem with social media echo chambers: they contain the opinions of other people!

Snapchat wants to fix that. It’s working on a redesign that strips the social from media, separating friends and family updates in one section of the screen and putting media – as in, vetted news from publishers, stories from around the world, or stories from people you follow but don’t know personally – in another spot, in a “Discover” page.

It’s also changed the way you’ll view your friends’ updates. Snapchat says its new “sophisticated Best Friends algorithm” chooses which friends you see the most of based on the way you communicate with them.
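Snapchat hasn't published how its Best Friends algorithm actually works, but the general idea of ranking contacts by interaction frequency can be sketched in a few lines. Everything below (the function name, the event weights) is a hypothetical illustration, not Snapchat's implementation:

```python
from collections import Counter

def rank_best_friends(interactions, top_n=3):
    """Rank friends by how much you interact with them.

    `interactions` is a list of (friend, weight) events -- e.g. a snap
    might carry more weight than a chat message. This is an illustrative
    stand-in for Snapchat's undisclosed algorithm, not the real thing.
    """
    scores = Counter()
    for friend, weight in interactions:
        scores[friend] += weight
    # most_common() sorts by accumulated score, highest first
    return [friend for friend, _ in scores.most_common(top_n)]
```

With events like `[("ana", 3), ("bob", 1), ("ana", 1), ("cy", 2)]`, the ranking comes out `["ana", "cy", "bob"]` — whoever you communicate with most floats to the top of the Friends feed.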

As always, the app will open to the camera. On the left of the screen will be friends’ chats and stories, and on the right of the camera will be Stories from publishers, creators, and the community.

Both the Friends slot and the Discover page will over time learn what you like and which friends you really want to talk to.

The redesign is intended to promote more intimate sharing among friend groups while pushing professionally produced content into a separate feed.

We’ll see the redesign starting this week as it’s rolled out for a small test group. It’s expected to roll out more broadly in coming weeks.

In a blog post, Snapchat said that the way that social media has mixed friends with brands has been “an interesting internet experiment,” but it’s one that’s had some “strange side-effects,” such as fake news:

While blurring the lines between professional content creators and your friends has been an interesting internet experiment, it has also produced some strange side-effects (like fake news) and made us feel like we have to perform for our friends rather than just express ourselves.

In an opinion piece posted by Axios on Wednesday morning, Snapchat CEO Evan Spiegel said that social media has fueled fake news because…

…content designed to be shared by friends is not necessarily content designed to deliver accurate information. After all, how many times have you shared something you’ve never bothered to read?

Snapchat’s solution to the fake news dilemma is to base algorithms on a user’s interests – not on the interests of “friends” – and to make sure media companies also profit off the content they produce for Snapchat’s Discover platform.

We think this helps guard against fake news and mindless scrambles for friends or unworthy distractions.

In order to personalize the Stories created by publishers – as in, those that aren’t curated by friends – Snapchat is taking a page from what Netflix does: it uses machine learning algorithms to recommend content based on what subscribers have watched in the past.

Research shows that your own past behavior is a far better predictor of what you’re interested in than anything your friends are doing. This form of machine learning personalization gives you a set of choices that does not rely on free media or friends’ recommendations and is less susceptible to outside manipulation.
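The history-based personalization described above can be reduced to a toy sketch: score each candidate story only against topics the user themselves has watched, ignoring what friends share. Real recommenders use learned models rather than set overlap; the function names and topic labels here are made up for illustration:

```python
def score_story(story_topics, watch_history):
    """Score a story by its overlap with topics the user has watched.

    `watch_history` is a list of topic lists, one per past viewing.
    Only the user's own behaviour counts -- no friend signals.
    """
    seen = set()
    for topics in watch_history:
        seen.update(topics)
    return len(set(story_topics) & seen)

def recommend(stories, watch_history):
    """Rank candidate stories (a dict of name -> topics) for one user."""
    return sorted(stories, key=lambda name: -score_story(stories[name], watch_history))
```

The key property, per Spiegel's argument, is that nothing a "friend" does can push a story up this ranking — only the user's own viewing history can.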

Spiegel went into the same kind of soul-searching as that of other social media moguls who’ve been oops!-ifying in these days of the Congressional investigation into Russian trolls planting fake news… and of the people who created the industry stepping back to question the repercussions, such as Facebook “like” button co-creator Justin Rosenstein and former Facebook product manager Leah Pearlman, both of whom have implemented measures to curb their own social media dependence

…and of Facebook ex-president Sean Parker doing his own “what were we thinking?” last week, when he told Axios that from the start, social media engineers have been knowingly exploiting a vulnerability in human psychology to get us all addicted to social validation feedback loops and their sweet, sweet dopamine jolts… and of Loren Brichter, designer of the pull-to-refresh tool first seen in the Twitter app, also admitting that the social media platform and the tool he created are addictive.

For his part, Spiegel says that personalized newsfeeds have “revolutionized the way people share and consume content,” but the collateral damage has included “a huge cost to facts, our minds and the entire media industry.”

While combining social and media has meant big bucks, it’s “ultimately undermined our relationships with our friends and our relationships with the media,” Spiegel says.

Snapchat thinks the best path out of this fake news craziness is to disentangle social and media, provide a personalized content feed based on what subscribers want to watch and not what your echo-chamber friends post, and to build content feeds on top of human-curated content, rather than just any old globs that pop to the surface of the internet.

Spiegel:

Curating content in this way will change the social media model and also give us both reliable content and the content we want.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/4u1Vk4jJMl8/

Google sued over iPhone ‘Safari Workaround’ data snooping

Did you use an iPhone in the UK between 1 June 2011 and 15 February 2012?

If you did, you’re one of an estimated 5.4 million people who might one day be in line for a compensation payment from Google over a long-running controversy known as the “Safari Workaround”.

The legal bare bones are that a campaign group called Google You Owe Us has launched a “representative action” (similar to a class action in the US) alleging that the search giant:

Took our data by bypassing default privacy settings on the iPhone Safari browser which existed to protect our data, allowing it to collect browsing data without our consent.

Specifically, Google used a bit of JavaScript code – the workaround – to bypass Safari’s default blocking of third-party cookies (set by domains other than those being visited) in order to allow sites within its DoubleClick ad network to track users.

This was despite Google giving assurances that this would not happen to users running Safari with its default privacy settings.

The case involves Safari because it was a browser that by default imposed restrictions on the cookies set by ad networks.

By this point, some US readers might be feeling a sense of déjà vu – all over again.

The origins of the British case lie with the discoveries made by a Stanford University researcher called Jonathan Mayer in 2012, which eventually led to legal cases by the Federal Trade Commission (FTC) and 38 US states in 2012 and 2013, concluding with Google paying fines of $22.5m (then £15m) and $17m respectively.

Google’s defence has always been that the feature was designed to allow Safari users who’d signed into Google, and opted to see personalised content, to interact with features such as the company’s Google+ button or Facebook likes.

In 2012 it said:

To enable these features, we created a temporary communication link between Safari browsers and Google’s servers, so that we could ascertain whether Safari users were also signed into Google, and had opted for this type of personalisation.

Which seemed like a way of saying that internet services, and people’s interactions with them, were getting so complex that strict lines of privacy and consent were blurring.

The latest UK case will, essentially, see these arguments re-run with a few more years’ hindsight to sharpen the case on both sides.

It’s not the first UK Safari workaround case Google has had to fight: in 2015 the Court of Appeal ruled that the issue had enough merit to allow the litigants involved to sue the company (reportedly settled out of court).

As for iOS users who might qualify for any settlement, there are conditions.

Assuming you were using Safari on a lawfully-acquired iPhone, and didn’t opt out of seeing Google’s personalised ads, you must have been resident in England or Wales both during the period covered by the case, and on 31 May 2017 (Scotland has a separate legal system and isn’t covered).

How users prove this years after the event is not clear, but having used an Apple ID with an iPhone during the period mentioned will probably be enough.

The case is specifically about iPhone users and doesn’t include iPads and OS X computers. Naked Security understands this is for legal reasons (including additional devices complicates matters even though they might also have been affected).

Is this just a dose of bad publicity about mistakes long past?

The possibility of pay-outs from a company like Google will grab headlines, but in the UK in 2017 this has become about deeper issues. As Google You Owe Us states:

Together, we can show the world’s biggest companies are not above the law.

Recently, sentiment has turned against large tech companies for a variety of reasons, including attitudes to privacy, the alleged non-payment of taxes, and the popular perception that some companies have become too big for their boots.

It’s a seeming paradox that describes our age. Millions of us use Google’s software, yet for some at least this is building not love and respect, but suspicion.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/lyA_7mIn9BQ/

Lauri Love’s US extradition appeal judges reserve decision

London’s High Court has reserved judgment on the extradition of accused hacker Lauri Love after hearing this morning that his appeal should be granted because conditions in the US prisons he may be sent to are “unconscionable”.

“For this particular appellant, going to MDC [the Metropolitan Detention Centre in Brooklyn, New York], there is a serious risk it will result in a serious health deterioration, or death,” Love’s barrister, Edward Fitzgerald QC, told the Lord Chief Justice of England and Wales, Lord Burnett of Maldon, and Mr Justice Ouseley.

Love sat in the same spot in the well of Court 4 as he had done yesterday, flanked once again by his parents and girlfriend and wearing his sombre, tieless suit. He was noticeably less tense than he was at the previous day’s hearing. The light in the courtroom was almost natural as the low winter sun filtered in through the skylight.

Key to Love’s appeal is the forum bar, formally known as section 83A of the Extradition Act 2003. This was introduced after the Gary McKinnon case, where an accused British hacker was eventually not extradited from the UK to the US.

This morning Fitzgerald drew heavily on the case of Haroon Aswat, an alleged jihadist who was extradited from the UK to America.

Aswat, a paranoid schizophrenic, was extradited in spite of pleading to the High Court that sending him abroad for trial would breach his rights under Article 3 of the Human Rights Act, namely that he would be subject to inhuman or degrading treatment.


Love and girlfriend Sylvia Mann seen leaving the Royal Courts of Justice. Pic: Richard Priday for The Register

Advancing a similar argument, Fitzgerald told the court that the district judge who previously approved Love’s extradition “failed to address the point that the mere fact of extradition to the US away from home, our style of environment, would almost inevitably create a serious deterioration in his health”.

This, said Fitzgerald, continuing the previous day’s theme, would result in Love being placed at an increased risk of suicide, particularly if he was separated from the support of his family.

“The district judge misdirected herself that the risk to his health was conjecture. He is fit now but anything could happen,” thundered Fitzgerald, causing both judges to momentarily pause in their methodical note-taking.

“There’s a high risk of suicide in MCC [New York’s Metropolitan Correction Centre] and MDC,” he added. “We submit he will be exposed to the ‘unconscionable’ conditions that the women’s judges’ report refers to,” he said, citing a report on conditions in those two prisons produced by American judges who inspected the women’s half of each jail.

“There is a substantial ground for real risk that he will be subjected to Article 3 inhumanity” – Edward Fitzgerald QC, Lauri Love’s barrister

Peter Caldwell, barrister for the Crown Prosecution Service and appearing on behalf of the US government, briefly responded to some points of law made by Fitzgerald, before the judges shuffled their papers and sat up. Earlier he had told the court that District Judge Tempia’s decision to extradite Love was “not wrong”.

The Lord Chief Justice announced he was reserving both verdict and full judgment to a later date and said he would “let the parties know in the usual way” when judgment was ready to be handed down. It is expected that this will take place in early 2018.

Outside court, Love’s father, the Reverend Alexander Love, said: “To be born in this country is to win the lottery of life. We trust in the justice system, and in God.” Love’s solicitor, Kevin Kendridge, added: “We are happy with how things went, and we trust the judiciary.”

Love himself said he was “glad” that people had taken an interest in his case. ®

The view from the public gallery

The public gallery in Court 4 of the Royal Courts of Justice this morning was barely a third full. But those present were all listening keenly to the final arguments.

Some were taking notes, typing on their phones or scribbling in notepads. Others were quietly discussing proceedings. The most interesting person there, however, was a man I shall call “Sign Guy”. He arrived a few minutes late, sporting a grey suit, dishevelled hair and a notebook. It was clearly not for reporting, though, as his pen was a thick board marker rather than the standard-issue biro.

After scribbling for a moment, he propped up his notebook, which bore the message “Trial at Home”, on the edge of the gallery. When a spectator behind him leant over and informed him that this was not a wise course of action, Sign Guy wrote a new message, unseen by your reporter, which elicited a few giggles from those sat nearby.

An usher then walked over to Sign Guy and said a few words to him. His reply was quite gruff, like how one might speak to someone loudly rustling their popcorn in the cinema. The usher then fetched a member of security for backup, only to find Sign Guy had decided to have a lie down on the bench.

After some time, he sat up again and wrote Love’s name on his knuckles. The security guard had by this point moved to sit beside Sign Guy, but he appeared unfazed as he continued to work on his body art.

Sign Guy finally ran out of patience when Mr Fitzgerald finished speaking. He sidled his way past the security guard and into the corridor beyond.

Richard Priday reports

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/11/30/lauri_love_extradition_appeal_judgement_reserved/

Crypto-cash souk Coinbase forced to rat out its high rollers to probing US taxmen

Cryptocurrency exchange Coinbase will be turning over information on 14,000 of its users to the IRS – Uncle Sam’s tax collectors – thanks to an order from a US court.

Judge Jacqueline Corley of the San Francisco district court ruled on Wednesday the Bitcoin, Ethereum, and Litecoin trading website will be required to hand over records on anyone who moved more than $20,000 in transactions between 2013 and 2015.

The order [PDF] comes after a year-long battle between the IRS and Coinbase over the US tax authority’s demand that Coinbase cough up identifying personal details on users it believes moved money through the site while underreporting gains that should be taxed. Basically, the IRS wants to know exactly who may have been evading taxes by trading through the site.

Coinbase objected, claiming the demand was a violation of its customers’ privacy. Now, after narrowing the scope of the request, the IRS will get its hands on info on the 14,355 people it believes moved the most money through Coinbase without paying tax on their gains. Coinbase estimates the targeted group is about one per cent of its total customer base.


Under the terms of the order, Coinbase will disclose users’ taxpayer ID numbers, names, birth dates, and addresses along with transaction logs and account invoices.

The court order covers those with “with at least the equivalent of $20,000 in any one transaction type (buy, sell, send, or receive) in any one year during the 2013 to 2015 period.”
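The order's threshold can be expressed as a small filter. Note an assumption baked in here: the $20,000 is read as an aggregate per transaction type per year, which is one plausible reading of the order's wording; the function and field names are illustrative, not taken from any Coinbase or IRS system:

```python
from collections import defaultdict

def meets_summons_criteria(transactions, threshold=20_000):
    """Check whether an account falls under the court order.

    `transactions` is a list of (year, tx_type, usd_amount) tuples,
    where tx_type is one of "buy", "sell", "send", "receive".
    The order covers at least $20,000 in any one transaction type in
    any one year from 2013 to 2015; this sketch aggregates amounts
    per (year, type) pair.
    """
    totals = defaultdict(float)
    for year, tx_type, amount in transactions:
        if 2013 <= year <= 2015:           # only the covered period
            totals[(year, tx_type)] += amount
    return any(total >= threshold for total in totals.values())
```

So $15,000 of buys plus $6,000 of buys in 2014 would qualify, while $50,000 moved in 2016 — outside the covered period — would not.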

Coinbase heralded the decision as a “partial victory” in that it got the judge to narrow the request down from the hundreds of thousands of records that the IRS had first asked it to cough up.

“Thanks to Coinbase’s efforts, more than 480,000 customers’ records were preserved from disclosure,” the site said.

“This is a 97 per cent reduction in the number of customers impacted by this summons.”

Now, Coinbase said, it will review the order and, prior to handing over the information to the IRS, notify those users whose info is being disclosed. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/11/30/oh_god_here_come_the_libertarians/

Google Chrome vows to carpet bomb meddling Windows antivirus tools

By mid-2018 Google Chrome will no longer allow outside applications – cough, cough, antivirus packages – to run code within the browser on Windows.

This is according to a post today on the Chromium blog that laid out the July release of Chrome 68 for Windows as the target for new rules that will block all third-party apps from injecting scripts into browser sessions.

The idea, explained the Chocolate Factory, is to cut down on stability issues that arise when Chrome lets other apps execute code that can be buggy or incompatible with other software.

“Roughly two-thirds of Windows Chrome users have other applications on their machines that interact with Chrome, such as accessibility or antivirus software,” said Chrome stability team member Chris Hamilton.

“In the past, this software needed to inject code in Chrome in order to function properly; unfortunately, users with software that injects code into Windows Chrome are 15 per cent more likely to experience crashes.”


In particular, the target here seems to be poorly coded AV tools, which can not only crash the browser or cause slowdowns, but also introduce security vulnerabilities of their own for hackers to exploit.

Rather than accept injected code, Chrome will require applications to use either Native Messaging API calls or Chrome extensions to add functionality to the browser. Google believes both methods can be used to retain features without having to risk browser crashes.

For now, the policy will likely only be of concern to developers. Users won’t notice the development until April 2018, when Chrome 66 will begin showing notifications after Chrome crashes due to injected code. These alerts will finger third-party programs as the cause of the breakdown.

With Chrome 68, the browser will block third-party code in all cases except when the blocking itself would cause a crash. In that case, Chrome will reload, allow the code to run, and then give the user a warning that the third-party software will need to be removed for Chrome to run properly. The warning will be removed and nearly all code injection will be disabled in January of 2019.

“While most software that injects code into Chrome will be affected by these changes, there are some exceptions,” said Hamilton.

“Microsoft-signed code, accessibility software, and IME software will not be affected.”

Google is advising developers to get out ahead of the changes by shifting to extensions or Native Messaging and testing their software for compatibility with Chrome Beta browser builds. Essentially, get rewriting your code, programmers. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/11/30/google_chrome_antivirus_shutout/

The Critical Difference Between Vulnerabilities Equities & Threat Equities

Why the government has an obligation to share its knowledge of flaws in software and hardware to strengthen digital infrastructure in the face of growing cyberthreats.

In mid-November, Rob Joyce, the White House cybersecurity coordinator, released a set of documents about the “vulnerabilities equities process.” In a recent White House blog post announcing the release, he noted:

At the same time, governments must promote resilience in the digital systems architecture to reduce the possibility that rogue actors will succeed in future cyber attacks. This dual charge to governments requires them to sustain the means to hold even the most capable actor at risk by being able to discover, attribute, and disrupt their actions on the one hand, while contributing to the creation of a more resilient and robust digital infrastructure on the other. Obtaining and maintaining the necessary cyber capabilities to protect the nation creates a tension between the government’s need to sustain the means to pursue rogue actors in cyberspace through the use of cyber exploits, and its obligation to share its knowledge of flaws in software and hardware with responsible parties who can ensure digital infrastructure is upgraded and made stronger in the face of growing cyber threats. 

This is a valuable step in the right direction, and the people who’ve done the work have worked hard to make it happen. However, the effort doesn’t go far enough, and those of us in the security industry have an urgent need to go further to achieve the important goals that Joyce lays out: improving our defenses with knowledge garnered by government offensive and defensive operations. 

This is intended as a nuanced critique: I appreciate what’s been done. I appreciate that it was hard work, and that the further work will be even harder. And it needs to be done.

The heart of the issue is our tendency in security to want to call everything a “vulnerability.” The simple truth is that attackers use a mix of vulnerabilities, design flaws, and deliberate design choices to gain control of computers and to trick people into disclosing things like passwords. For example, in versions of PowerPoint up to and including 2013, there was a feature where you could run a program when someone “moused over” a picture. I understand that feature is gone in the latest Windows versions of PowerPoint but still present in the Mac version. I use this and other examples just to make the issues concrete, not to critique the companies. 

This is not a vulnerability or a flaw. It’s a feature that was designed in. People designed it, coded it, tested it, documented it, and shipped it. Now, can an attacker reliably “weaponize” it by shipping it with a script in a zip file, for example, by referring to a UNC path to \\example.org\veryevil.exe? I don’t know. What I do know is that the process as published and described by Joyce explicitly excludes such issues. As stated in the blog post:

The following will not be considered to be part of the vulnerability evaluation process:

    • Misconfiguration or poor configuration of a device that sacrifices security in lieu of availability, ease of use or operational resiliency.
    • Misuse of available device features that enables non-standard operation.
    • Misuse of engineering and configuration tools, techniques and scripts that increase/decrease functionality of the device for possible nefarious operations.
    • Stating/discovering that a device/system has no inherent security features by design.

These issues are different from vulnerabilities. None of them is a bug to fix. I do not envy the poor liaison who gets to argue with Microsoft were this feature to be abused, nor the poor messenger who had to try to convince Facebook that their systems were being abused during the elections. However senior that messenger, it’s a hard battle to get a company to change its software, especially when customers or revenue are tied to it. I fought that battle to get Autorun patched in shipped versions of Windows, and it was not easy.

However, the goal, as stated by Joyce, does not lead to a natural line between vulnerabilities, flaws, or features. If our goal is to build more resilient systems, then we need to start by looking at the issues that happen — all of them — and understanding all of them. We can’t exclude the ones that, a priori, are thought to be hard to fix, nor should we let a third party decide what’s hard to fix. 

The equities process should be focused on government’s obligation to share its knowledge of flaws in software and hardware with responsible parties who can ensure digital infrastructure is upgraded and made stronger in the face of growing cyberthreats. Oh, wait, that’s their words, not mine. And along the way, “flaws” gets defined down to vulnerabilities.

At the same time, our security engineering work needs to move from vulnerability scanning and pen tests to be comprehensive, systematic, and structured. We need to think about security design, the use of safer languages, better sandboxes, and better dependency management. We need to threat model the systems we’re building so that they have fewer surprises.

That security engineering work will reduce the number of flaws and exploitable design choices. But we’ll still have clever attackers, and we need the knowledge that’s gained from attack and defense to flow to software engineers in a systematic way. A future threats equities process will be a part of that, and industry needs to ask for it to be sooner rather than later.


Adam is an entrepreneur, technologist, author and game designer. He’s a member of the Black Hat Review Board, and helped found the CVE and many other things. He’s currently building his fifth startup, focused on improving security effectiveness, and mentors startups.

Article source: https://www.darkreading.com/perimeter/the-critical-difference-between-vulnerabilities-equities-and-threat-equities/a/d-id/1330521?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Epic Games sues 14-year-old cheater, mother launches rhetorical firestorm

You can blow people to smithereens, for free, in the Battle Royale of Epic Games’ co-op survival and building action game Fortnite.

For free. As in, the studio can’t lose money on this virtual shooter game. So why in the world is it suing a 14-year-old kid for publishing cheat codes?

Exactly, says The Kid’s Mom, who wrote a letter to the court in his defense. As far as she’s concerned, her son’s been made into a scapegoat. She’s also charging Epic with breaking Delaware state law by publishing a minor’s name, which has led to news agencies spreading it far and wide, and which has led me to call him The Kid and his mom Epic Mom.

Epic Mom’s letter has since been published online: you can read it here.

Last month, Epic took the unusual step of banning two Fortnite players from the game for prolific cheating. But it didn’t stop there: the studio also took them to court, charging copyright infringement.

Torrent Freak has published the complaints, one against a “Charles Vraspir”, the other against a “Brandon Broom”.

Both are accused of violating Fortnite’s terms of service and EULA by cheating. Specifically, they’re accused of modifying and changing the game’s code, committing copyright infringement in the process. From one of the complaints:

Defendant’s cheating, and his inducing and enabling of others to cheat, is ruining the game playing experience of players who do not cheat. The software that Defendant uses to cheat infringes Epic’s copyrights in the game and breaches the terms of the agreements to which Defendant agreed in order to have access to the game.

What Epic likely didn’t know is that one of those cheaters is a minor. The studio may have found his name on YouTube without knowing his real age, Torrent Freak suggests. At any rate, The Kid, whose name apparently isn’t Broom or Vraspir, got booted off the game at least 14 times since he started playing. Epic would kick him off, and he’d just cook up a new account and come back, firing away.

The Kid’s family apparently didn’t hire a lawyer. Instead, Epic Mom jumped into the fray.

Some of the points she makes in her letter to the court:

  • Epic has no proof her son modified the game and violated copyright law in the process. He got existing cheats from a public site and live-streamed them (on YouTube). If he’d modified the game in the process, the Copyright Act would apply, but he didn’t.
  • The EULA, which the game publisher strenuously points to in the complaint, isn’t legally binding. It states that minors require permission from a parent or legal guardian. The Kid’s a minor, and he didn’t have his parent’s permission to play. Nor did Epic offer a drop-down menu to specify his age or in any other way attempt to ascertain his age.
  • It’s “feasibly impossible” for Epic to claim profit loss. Epic’s attorneys would have to provide Profit Loss statements… on a free game.

Epic Mom says in the letter that Epic’s inability to curb cheat codes or to keep others from modifying the game has led to the studio “using a 14-year-old child as a scape goat to make an example of him,” instead of going after the websites that publish the cheat codes in the first place.

Furthermore, she says, Epic has released her son’s name publicly, which has led to publications spreading his name and other information far and wide. That’s illegal under Delaware state law, she said (I couldn’t locate any such House Bill No. 64, which she references, but given that I Am Not a Lawyer but Epic Mom Sure Sounds Like One, let’s just agree to keep their names out of this story).

According to the BBC, many of the commenters on The Kid’s YouTube cheating tutorial said he was in the wrong.

As Epic claims, the cheaters really just want to mess things up. According to Torrent Freak, one of the defendants, “Broom,” was banned once, has previously claimed to be working on his own cheat, and aims to create “unwanted chaos and disorder.” Both he and the other defendant are connected to the cheat provider AddictedCheats.net, either as moderators or support personnel, Torrent Freak reports.

And they aren’t exactly what you’d call polite about any of their cheating mayhem. From Torrent Freak:

They specifically target streamers and boast about their accomplishments, making comments such as ‘LOL I f*cked them’ after killing them.

It’s hard to argue with Epic Games on this point: “Nobody likes a cheater. And nobody likes playing with cheaters.”

The complaint continues:

These axioms are particularly true in this case. Defendant uses cheats in a deliberate attempt to destroy the integrity of, and otherwise wreak havoc in, the Fortnite game.

As Defendant intends, this often ruins the game for the other players, and for the many people who watch ‘streamers’.

At any rate, Epic Games, good luck arguing that with Epic Mom: she comes armed with rhetorical and legalistic ammunition akin to one of those virtual shoulder-mounted rocket launchers your virtual fighters use to such kaboomy advantage.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/5yFufuxIaTA/

Mr. Robot eps3.7_dont-delete-me.ko – the security review

You can tell we’re nearing the end of the season – this episode was a deep breath before we plunge into the finale.

Not much to talk about on the tech and security front this time, just the one thing we’ll explore below. To fully recreate the mood from this episode, fire up the Bill & Ted’s Excellent Adventure original soundtrack and we’ll head Back To The Future for more analysis.

WARNING: SPOILERS AHEAD – SCROLL DOWN TO READ ON

 

“Don’t delete me”

I was despairing a little that I wouldn’t have anything to write about for this week as the episode went on. Thankfully, right at the end of the episode, the briefest glimpse of Trenton’s last email to Elliot gives us something to examine. My sincere thanks to the many fast screencappers out there who were able to catch Trenton’s email (sent to and from Protonmail accounts, a service well-loved by Five/Nine).

Let’s take a look piece by piece:

I may have found a way to undo the hack. I’ve been investigating Romero. He installed hardware keyloggers on all the machines at the arcade some time before five/nine.

Remember Romero, the older phone-phreaker member of Five/Nine, whom we parted ways with at the beginning of season two? He had a few things up his sleeve, and by installing keyloggers on the arcade machines he would, in theory, easily be able to keep an eye on anything people typed on those machines. Software keyloggers, often paired with malware, usually “call home” somewhere with the information they gather. Romero, however, installed hardware-based keyloggers – as the name implies, these are physical devices plugged into the computer itself, designed to be part of, or look like, normal hardware or peripherals.

Hardware keyloggers sit between the target computer and its peripherals, quietly logging everything that passes through, which lets them snoop undetected by the victim machine. Given Romero’s nifty booby-trapped hardware hacks, which we saw explode back in season two, it’s not surprising that his hardware keylogger was subtle enough to fool even the Five/Nine team for a good while.

The NYPD imaged all of his data after he was murdered. I was able to get this chain of custody document from the NYPD when they prepared to transfer the evidence to the FBI.

“Imaged,” meaning they made a direct copy of all the contents of his hard drive (the disk image).

They couldn’t get into the encrypted keylogger containers.

Romero had grabbed the keylogger data from his nifty hardware keyloggers and regularly dumped that data onto his hard drive. The keylogged data itself was encrypted (I would presume his hard drive was too).

If Romero somehow got a hold of the keys, or even the seed data and source code for the encryption tools, the answer might be in those keylogger captures, but the FBI probably has those files now.

The keys Trenton’s referring to here are the keys needed to decrypt the keylogger data. The next bit, about the seed data and source code, means Trenton thinks there’s a way to reverse-engineer the key used to encrypt the data.

Ideally, encryption protocols shouldn’t allow any part of the key to be figured out from the encrypted data stream, but the email here implies that the process wasn’t cryptographically secure, so it might be possible to winkle out the decryption key, or to unscramble the data without the key, after all.

Perhaps Romero wasn’t a crypto nerd and this was a mistake, but it’s more likely this was by design so he could decrypt the data without having to remember or carry around a key. After all, an encryption key could look like this…

-----BEGIN RSA PRIVATE KEY-----
MIHtAgEAAjAAziOgSCYfbckh5tLO1ztkj/ggT80/3KOj2jQBTeJtPqX+3l8pen/V
yNGbv4+pRF0CAwEAAQIveUhuwmRjs3VWU/eOKQZRyX8Ei89IFqnED3JChX5RP4kE
8Ixl/6p+i1+NMDW4MoUCGA8nge3DNwNone+ifAqSxgeNgSg+Wug/LwIYDZpH/uwK
csRIfwb6M5X2COjcmAWSarIzAhgLbu47GU6XNsX5tyhIveXEawFoAGuLz6cCGA1g
oVVvRYdAyhtC/WUmIeT5PZi0Qh50SQIYBunB28gYf39am7WDp4GKeb696mmFgYeH
-----END RSA PRIVATE KEY-----

…whereas the seed used to generate the key could be something as easy as a few digits of his choosing, like his birth year or, I don’t know, 5/9.
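To illustrate why a short, guessable seed is so dangerous, here’s a hypothetical sketch – nothing from the show, and the key-derivation scheme, the toy XOR “cipher” and all the names are invented purely for illustration. If the key is derived deterministically from a seed of just a few digits, an attacker who knows the scheme can simply try every possible seed:

```python
import hashlib

def derive_key(seed):
    # Hypothetical scheme: the key is just the SHA-256 hash of the seed.
    return hashlib.sha256(seed.encode()).digest()

def xor_crypt(data, key):
    # Toy repeating-key XOR cipher, standing in for whatever Romero used,
    # just to keep the sketch self-contained. XOR twice = original data.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def crack(ciphertext):
    # Brute force: try every seed up to four digits and keep the first
    # guess that yields entirely printable ASCII text.
    for guess in range(10000):
        plaintext = xor_crypt(ciphertext, derive_key(str(guess)))
        if plaintext.isascii() and plaintext.decode().isprintable():
            return plaintext.decode()
    return None

# A keylogger dump "protected" with a key seeded from 5/9...
ciphertext = xor_crypt(b"elliot typed: sudo reboot", derive_key("59"))

# ...falls to the brute-force loop in well under a second.
print(crack(ciphertext))
```

The point isn’t the toy cipher – it’s that the effective key space collapses to the seed space, so 10,000 guesses beat a 256-bit key.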

The next step seems pretty clear: I wouldn’t be surprised if Elliot and Dom work together to undo the hack, with Dom providing access to the files and Elliot decrypting them.

If they’re successful in stopping Whiterose and the Dark Army’s next attack, they’d have Romero’s healthy hacker paranoia to thank. That would be some fantastic justice from the phreaker set for sure.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/5Rsba6TfyRE/

Apple’s “blank root password” fix needs a fix of its own – here it is

If you’ve ever been hiking in California’s High Sierra, you’re probably in awe not only of its spectacular nature but also of its many, tricky, rocky pathways.

It’s beautiful but it can bite you.

That’s pretty much how Apple must be feeling this week about its own High Sierra, the tradename of its latest 10.13 version of macOS.

News broke early in the week that the macOS authentication dialog could be used to trigger an all-but-unbelievable elevation-of-privilege vulnerability.

We’re calling it the “blank root password” bug.

The bug explained

Not just anyone can make critical configuration changes on your Mac, such as turning off the firewall or decrypting your hard disk.

Many changes need to be authorised by a user with Administrator privileges – and even if you’re already logged in with an Admin account yourself, you need to authenticate again every time you want to do something administrative.

That’s why many System Preferences panes, for example, feature a padlock you have to click if you want to make changes:

Turns out that if you changed the User Name field to root, the all-powerful superuser account that is never supposed to be used directly, and entered a blank password once…

…then the root password somehow actually ended up changed to a blank password, so that when you entered a blank password for the root account thereafter, it Just Worked.

Quite what sort of coding bug led to the bizarre situation that testing a password ended up modifying that password is something Apple isn’t saying.
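Apple hasn’t published the root cause, but one plausible shape for such a bug – and this sketch is purely speculative, with all the names and logic invented for illustration, not Apple’s actual code – is an authentication routine that, on finding no stored credential for an account, “repairs” the record using whatever the user just typed:

```python
import hashlib

# Hypothetical illustration of a check-that-mutates bug: an account
# with no stored password hash gets one *assigned* during verification,
# so the very first guess - even an empty string - succeeds.

accounts = {"root": None}  # root exists but has no password hash set

def hash_pw(password):
    # Stand-in for a real password hash (don't use bare SHA-256 for this)
    return hashlib.sha256(password.encode()).hexdigest()

def check_password(user, supplied):
    stored = accounts.get(user)
    if stored is None:
        # BUG: instead of failing closed, the code "fixes up" the record
        # by storing a hash of whatever password was supplied...
        accounts[user] = hash_pw(supplied)
        stored = accounts[user]
    # ...so the comparison below now trivially succeeds.
    return stored == hash_pw(supplied)

print(check_password("root", ""))  # first attempt with a blank password: True
print(check_password("root", ""))  # blank password now works every time: True
```

The fix, in a design like this, is for the verification path to fail closed on a missing credential rather than write to the account database at all.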

As security blunders go, however, it’s a bit like giving an immigration officer a false date of birth, only to find out, when he opens your passport to catch you out in the middle of your lie, that your passport has miraculously reissued itself with your “new” birthday.

Faster and faster

Four years ago, Apple notoriously took more than six months to fix an authentication vulnerability in the sudo command, the program that sysadmins rely upon to maintain the security of privileged administrative tasks performed at a command prompt.

But Apple has come a long way in responsiveness since then, and this week’s “blank root password” bug was patched within about one day by the new-look Apple.

We wrote about the patch yesterday afternoon and wholeheartedly said, “Well done to Apple for acting quickly.”

When we said those words, we were well aware that such a rapidly-issued patch might have unintended side-effects, especially when the changes involved a system component associated with password verification and administrative authentication.

We wondered to ourselves whether Apple’s patch might end up with some system features inadvertently de-authenticated…

…but we said “Well done” to Apple anyway.

We figured that, in most cases, requiring some legitimate users to re-authenticate is far better than letting any crooks wander in unauthenticated.

And we stand by those words even now we know that there has been at least one “inadvertent de-authentication” problem caused by yesterday’s patch, a side-effect that could stop file sharing working on your Mac:

If file sharing doesn’t work after you install Security Update 2017-001, follow these steps. 

[. . .]

1. Open the Terminal app, which is in the Utilities folder of your Applications folder.
2. Type sudo /usr/libexec/configureLocalKDC and press Return. 
3. Enter your administrator password and press Return.
4. Quit the Terminal app. 

The command above, /usr/libexec/configureLocalKDC, isn’t needed often – it’s used to set up what’s known as a Kerberos Key Distribution Centre (KDC). (The prefix sudo tells macOS to run the configureLocalKDC command with Administrator privileges, which is why you need your admin password.)

The good news is that you don’t have to know exactly what that means – but, greatly simplified, Kerberos is the authentication system used for Windows-style file sharing, and the KDC is the background process that is responsible for checking that you’re authorised to use the shares you try to access.

Configuring this KDC is usually handled automatically when you set up your Mac; after yesterday’s emergency “blank root password” update, the KDC needs to be configured again, and that means you need to provide an administrative password to complete the task.

It’s unfortunate that this happened, but the fix for the fix is pretty simple: we think that if you can launch a mail application and paste text into the subject line of a new email, you’ll have little or no trouble with this one.

So we’re repeating our “Well done” to Apple for getting a fix out quickly.

The “blank root password” bug was publicly disclosed, which pretty much forced Apple’s hand to respond at once, and it did.

Some of us will need to put in our admin passwords to get file sharing working again, in return for all of us being rapidly protected against a widely-publicised security hole.

As far as “taking one for the team” goes, we’re comfortable with the balance in this case.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/m2SuqlET0nY/

Uber hack: EU data protection bods launch taskforce

The European Union’s group of data protection watchdogs has launched a taskforce into the Uber data breach that affected 57 million users worldwide.

The Article 29 Working Party discussed the breach, which took place in October 2016 but was only revealed last week, at its November plenary meeting yesterday.

The taskforce will be led by the Dutch Data Protection Authority – Uber’s European HQ being based in the Netherlands.

At the moment, the group said, the taskforce is made up of representatives from the French, Italian, Spanish, Belgian and German agencies, as well as the UK’s Information Commissioner’s Office.

Full details of the taskforce’s plans for the investigation have not yet been made public, but a spokeswoman previously said the aim was to coordinate the approach taken by the European authorities.

It is likely to consider the way Uber dealt with the breach, which saw the firm attempt to cover it up by bunging $100,000 to the hackers, disguised as a bug bounty payment, in exchange for their silence.

The taxi biz’s actions have also been condemned by European justice commissioner Vĕra Jourová.

Speaking at a data protection conference in Brussels today, she said that the breach, and the fact Uber waited more than a year to ‘fess up, was an example “of the privacy challenges we face in the digital age”.

Jourová added that the incoming General Data Protection Regulation would “allow us to respond adequately to such irresponsible behaviour”.

Uber has also been called out for other regulatory failings elsewhere, with Transport for London rejecting its application for a new licence, saying it was not “fit and proper” to hold one in the capital.

Meanwhile, The Financial Times reported yesterday that Uber’s latest quarterly results showed adjusted losses had widened to $734m, up 14 per cent on the previous quarter. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/11/30/uber_hack_eu_data_protection_bods_launch_taskforce/