
Fake text generator is so good its creators don’t want to release full version

Researchers at OpenAI, the AI research outfit co-founded by Elon Musk, have created what amounts to a text version of a deepfake – and they are so worried about its potential for misuse that they won’t release the full version.

Its AI writing tool generates plausible-looking text on a wide range of subjects. The underlying model, the organization explains in a blog post, was trained to predict the next word in a sequence of text. Given a sample passage written by a human, the tool writes the rest of the article, producing dozens of sentences from a single introductory phrase.

The tool doesn’t discriminate between topics. Instead, it draws on over 40GB of text gathered from the internet to produce convincing-sounding copy on anything from Miley Cyrus to astrophysics.

The problem is that while the copy sounds convincing, the facts in it are fabricated. The tool invents names, facts and figures, synthesizing them from things the system read online. It’s like an electronic version of that old school friend who you regrettably accepted a Facebook invitation from and who now keeps writing bizarre posts with ‘alternative facts’. For example, it takes the following phrase…

A train carriage containing controlled nuclear materials was stolen in Cincinnati today. Its whereabouts are unknown.

…and builds an entire news story around a fictional event. It fabricates a quote from Tom Hicks, who it says is the US Energy Secretary. At the time of writing, that role is occupied by Rick Perry.
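
To get a feel for how this prompt-continuation trick works, here is a minimal sketch that samples text from the small model OpenAI did release. It assumes the Hugging Face transformers library and its public “gpt2” checkpoint – our choice of tooling for illustration, not the setup OpenAI used for its demos:

# A minimal sketch of text generation by repeated next-word prediction.
# Assumes the Hugging Face "transformers" package and the small, publicly
# released GPT-2 checkpoint -- not OpenAI's withheld full model.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = ("A train carriage containing controlled nuclear materials "
          "was stolen in Cincinnati today. Its whereabouts are unknown.")
inputs = tokenizer(prompt, return_tensors="pt")

# Each sampling step predicts the next token given everything so far.
outputs = model.generate(
    **inputs,
    max_length=200,   # stop after roughly a paragraph
    do_sample=True,   # sample instead of always taking the likeliest word
    top_k=40,         # restrict choices to the 40 likeliest tokens
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

The output will differ on every run – the model is sampling from a probability distribution over next words, not retrieving stored text.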

OpenAI built the training data set, consisting of eight million web pages, by scanning Reddit for links that received more than three karma points (the site’s reward for popular content). The researchers were not necessarily looking for truth here so much as interesting text that was either educational or funny.
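
The curation rule itself is simple enough to sketch. The snippet below is our own illustration of that filter – the field names and data are made up, since OpenAI’s actual scraping pipeline was not released:

# Illustrative sketch of the link-curation rule described above: keep only
# outbound URLs from Reddit submissions that earned more than 3 karma.
# Field names are assumptions; OpenAI's WebText pipeline was not released.
def select_outbound_links(submissions):
    """Yield URLs from sufficiently upvoted submissions."""
    for post in submissions:
        if post["karma"] > 3 and not post["url"].startswith("https://www.reddit.com"):
            yield post["url"]

sample = [
    {"url": "https://example.com/essay", "karma": 12},
    {"url": "https://example.org/spam", "karma": 1},
]
print(list(select_outbound_links(sample)))  # only the 12-karma link survives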

The tool is also good at reading, understanding, summarizing and answering questions about text, along with translating.

This isn’t going to replace factual reporting anytime soon (phew), but it could automate some darker things online. It’s an article spinner’s dream, and as OpenAI points out, it could easily be used to write fake Amazon reviews by the thousand.

Perhaps the most worrying use case is the production of fake news via social media and blog posts. Marry it with other forms of deepfake (such as NVIDIA’s recently launched ThisPersonDoesNotExist) for the creation of fake faces, and deepfake video and audio, and you have the makings of an automated disinformation-spewing social media machine.

OpenAI realises this. It says:

These findings, combined with earlier results on synthetic imagery, audio, and video, imply that technologies are reducing the cost of generating fake content and waging disinformation campaigns. The public at large will need to become more skeptical of text they find online, just as the “deep fakes” phenomenon calls for more skepticism about images.

No wonder the researchers decided not to release the fully trained model. Instead, they released a much smaller model together with its sampling code, withholding both the 40GB training dataset and the code used to train the full model. However, reproducing what they did is only a matter of time, they admitted:

We are aware that some researchers have the technical capacity to reproduce and open source our results. We believe our release strategy limits the initial set of organizations who may choose to do this, and gives the AI community more time to have a discussion about the implications of such systems.

That’s the problem in a world where knowledge – or the power to get it – is easily distributed. Secrets are difficult to keep. And with computing power increasingly cheap, AI’s processor-intensive training is becoming easier to reproduce.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/SIdujtmkU2g/

Facebook acts like a law-breaking ‘digital gangster’, says official report

On Sunday, following an investigation of more than a year, the UK Parliament accused Facebook of thumbing its nose at the law, having “intentionally and knowingly violated both data privacy and anti-competition laws”.

Lawmakers called for the Information Commissioner’s Office (ICO) to investigate the social media platform’s practices, including how it uses the data of both users and users’ friends, as well as its use of “reciprocity” in data sharing.

Their report, which centered on disinformation and fake news, was published by a House of Commons committee – the Digital, Culture, Media and Sport Committee – that oversees media policy. From that report:

Companies like Facebook should not be allowed to behave like ‘digital gangsters’ in the online world, considering themselves to be ahead of and beyond the law.

The investigation focused on Facebook’s business practices before and after the Cambridge Analytica scandal.

Facebook shouldn’t be allowed to wriggle out from under culpability for the content users push across its platforms, the report said, alluding to how the site was used by foreign actors to tinker with the 2016 US presidential election and the Brexit campaign:

Facebook’s handling of personal data, and its use for political campaigns, are prime and legitimate areas for inspection by regulators, and it should not be able to evade all editorial responsibility for the content shared by its users across its platforms.

Facebook: Bring it!

According to the BBC, Facebook welcomed the committee’s report and said it would be open to “meaningful regulation”.

Facebook also patted itself on the back for having participated in the investigation:

[Facebook shares] the committee’s concerns about false news and election integrity and are pleased to have made a significant contribution to their investigation over the past 18 months, answering more than 700 questions and with four of our most senior executives giving evidence.

The committee begs to differ on those claims. Committee Chair and MP Damian Collins said that Facebook did not fully cooperate with the investigation. The BBC quoted him:

We believe that in its evidence to the committee, Facebook has often deliberately sought to frustrate our work, by giving incomplete, disingenuous and at times misleading answers to our questions.

That’s nothing new: the committee has found Facebook less than cooperative for some time. Over the course of the UK investigation, Facebook steadfastly refused to appear before MPs to explain the company’s moves with regard to fake news.

That’s actually what led the committee to try to get the information it sought from another source: namely, from a lawsuit brought against Facebook by the tiny, your-Facebook-friends-in-bikinis-centered developer Six4Three. Six4Three has been wrangling with Facebook in US court since 2015 over allegations that the platform turned off the Friends data API spigot as a way of forcing developers to buy advertising, transfer intellectual property or even sell themselves to Facebook at bargain basement prices.

In December, Parliament’s fake news inquiry got its fingers into the legal Six4Three pie and came out with a fistful of Facebook staff’s private emails, which it then published.

Six4Three has alleged that the correspondence shows that Facebook was not only aware of the implications of its privacy policy, but actively exploited them. Collins and his committee were particularly interested in the app company’s assertions that Facebook intentionally created and effectively flagged up the loophole that Cambridge Analytica used to collect user data.

According to the report released on Sunday, the Six4Three court documents indicate that Facebook was…

…willing to override its users’ privacy settings in order to transfer data to some app developers, to charge high prices in advertising to some developers, for the exchange of that data, and to starve some developers – such as Six4Three – of that data, thereby causing them to lose their business.

At the very least, that would mean that Facebook violated its 2011 settlement with the Federal Trade Commission (FTC), the report said. The FTC had found that Facebook failed to protect users’ data, letting app developers gain as much access to it as they liked, without restraint – an outcome, the FTC concluded, of Facebook having built the company in a way that made data abuses easy.

The report noted that the ICO had told the committee that Facebook needs to “significantly change its business model and its practices to maintain trust.” Yet the Six4Three documents show that Facebook “intentionally and knowingly violated both data privacy and anti-competition laws,” the report said, and thus the committee is calling on the ICO to carry out a detailed investigation into Facebook’s practices around the use of users’ data and users’ friends’ data.

“Democracy is at risk”

Cambridge Analytica is a case study in how politics have intersected with access to Facebook’s fat data APIs. It was a web analytics company started by a group of researchers with connections to Cambridge University in the UK.

The firm collected user data without permission in order to build a system that could profile individual US voters so as to target them with personalized political ads. Its researchers got at the data via a Facebook personality test called thisisyourdigitallife that billed itself as “a research app used by psychologists.”

Besides the effects of fake news, at the heart of the UK inquiry was the question of how much of that data had been used for political campaigning. The report’s conclusion:

Democracy is at risk from the malicious and relentless targeting of citizens with disinformation and personalised ‘dark adverts’ from unidentifiable sources, delivered through the major social media platforms we use every day.

The big tech companies are failing in the duty of care they owe to their users to act against harmful content, and to respect their data privacy rights.

The report is calling for:

  • A compulsory code of ethics for tech companies, overseen by an independent regulator, to set out what constitutes harmful content.
  • The regulator to be given powers to launch legal action if companies breach the code, including, potentially, large fines.
  • The government to reform current electoral laws and rules on overseas involvement in UK elections.
  • Social media companies to be forced to take down known sources of harmful content, including proven sources of disinformation.
  • Tech companies operating in the UK to be taxed to help fund the work of the ICO and any new regulator set up to oversee them.

Facebook denies having broken any laws in the country. Karim Palant, public policy manager for Facebook in the United Kingdom:

While we still have more to do, we are not the same company we were a year ago.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/0Rh-4s4e2lI/

If you think your deleted Twitter DMs are sliding into the trash, you’re wrong

You can’t erase your Twitter footsteps, it turns out: what goes into Twitter stays lodged in its guts for years.

That’s because of a glitch that a bug hunter is calling a “functional bug.” The bug, discovered by security researcher Karan Saini, keeps direct messages (DMs) from being completely deleted, regardless of whether you or others have deleted the messages, and even if the accounts that sent or received the DMs have been deactivated or suspended.

Saini told TechCrunch that he found years-old messages in a file when he downloaded an archive of his data from Twitter accounts that he’d previously deleted.

You can download data from your own account(s) here to get an idea of everything that Twitter collects, and retains, on you.

The researcher says that he reported a similar bug, found a year earlier but not disclosed until now, that allowed him to use a since-deprecated API to retrieve DMs even after a message was deleted from both the sender and the recipient. That earlier bug couldn’t get at DMs from suspended accounts, however.

According to Twitter’s privacy policy, when you delete your account, everything is supposed to go up in smoke after a grace period of 30 days:

When deactivated, your Twitter account, including your display name, username, and public profile, will no longer be viewable on Twitter.com, Twitter for iOS, and Twitter for Android. For up to 30 days after deactivation it is still possible to restore your Twitter account if it was accidentally or wrongfully deactivated.

…with the exception of log data, which it keeps for up to 18 months. Log data includes information such as IP address, browser type, operating system, referring web pages, pages visited, location, mobile carrier, and device information.

Back in 2013, Twitter users could “unsend” DMs, meaning that they could rub them out of someone else’s inbox by simply deleting the messages from their own. Years ago, Twitter changed that: users can now only delete messages from their own accounts. From Twitter’s help page:

When you delete a Direct Message or conversation (sent or received), it is deleted from your account only. Others in the conversation will still be able to see Direct Messages or conversations that you have deleted.

According to Fortune, Saini reported the bug through HackerOne, a bug bounty platform that works with Twitter.

A Twitter spokesperson told TechCrunch that as of Friday, the company was looking into the matter “to ensure we have considered the entire scope of the issue.” Twitter also told Fortune that the issue is “still open,” so as of Saturday, they couldn’t publicly comment on specifics.

Like Saini, Twitter is also calling this a “functional bug,” as opposed to a “security bug.” Its spokespeople declined to comment when TechCrunch asked if Twitter considers account deletion to be akin to withdrawing consent to retain direct messages.

I asked Twitter for comment and will update this article if I hear back.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/f6bEJAHhhCw/

Thousands of Android apps bypass Advertising ID to track users

Six years after it was introduced, it looks as if Android’s Advertising ID (AAID) might no longer be the privacy forcefield Google claimed it would be.

New research by AppCensus has found that 18,000 Play Store apps, many with hundreds of millions of installs, appear to be sidestepping the Advertising ID system by quietly collecting additional identifiers from users’ smartphones in ways that can’t be blocked or reset.

Among the best-known offenders were news app Flipboard, Talking Tom, Clean Master AV Cleaner Booster, Battery Doctor, Cooking Fever, and Cut the Rope Full Free, which were found to be sending data to advertising aggregators.

But what is the Advertising ID and why does it matter?

Few Android users pay much attention to it, but in 2013 the Advertising ID seemed like a great idea.

At that time, apps were allowed to collect a lot of data unique to the user’s device, such as its Android ID, IMEI number, hardware MAC address, and SIM card serial number – any one or combination of which could be used to track and profile users.

Under the Advertising ID system (also introduced by Apple as the Advertising Identifier) app makers would no longer be allowed to collect “persistent” identifiers and would instead capture an anonymous string that could be periodically reset by the user.

Android users can find and reset the Advertising ID through Settings > Google > Ads (listed under Services & Preferences).

In theory, performing a reset sends ad profilers back to square one because the ID being tracked before and after the reset will be different.

However, AppCensus’s research shows that a large number of app makers are collecting not only the Advertising ID but also persistent identifiers, particularly the Android (device) ID and the IMEI number.
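
To illustrate the kind of check such a study relies on, here is a hedged sketch: given a captured request body and the identifiers known to belong to a test handset, flag any payload that pairs the resettable Advertising ID with a persistent identifier. This is our simplification of the idea, not AppCensus’s actual tooling:

# Sketch of the detection idea: flag any captured payload that carries both
# the resettable Advertising ID and a persistent identifier (here an IMEI).
# This is an illustrative simplification, not AppCensus's pipeline.
def pairs_ad_id_with_persistent_id(payload, ad_id, persistent_ids):
    """Return True if the payload contains the ad ID plus any persistent ID."""
    if ad_id not in payload:
        return False
    return any(pid in payload for pid in persistent_ids)

# Hypothetical captured POST body from an app talking to an ad aggregator.
captured = "aaid=38400000-8cf0-11bd-b23e-10b96e40000d&imei=356938035643809"
print(pairs_ad_id_with_persistent_id(
    captured,
    ad_id="38400000-8cf0-11bd-b23e-10b96e40000d",
    persistent_ids=["356938035643809"],
))  # True -- resetting this ad ID buys the user nothing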

Against the rules

The device ID and IMEI, of course, are specific to each device and can’t be changed, which makes them powerful tracking identifiers. AppCensus argues that by collecting these identifiers in addition to the Advertising ID, app makers are breaching Google’s Play Store policy. This states:

The advertising identifier must not be connected to personally-identifiable information or associated with any persistent device identifier (for example: SSAID, MAC address, IMEI, etc.) without explicit consent of the user.

The question is what, if anything, the Android Advertising ID is for if apps and their advertising clients are able to subvert its intended purpose without appearing on Google’s radar.

It’s the same device fingerprinting controversy that in 2017 brought Apple and Uber into conflict with one another.

Google’s response is that it has taken action against an unspecified number of the apps on the AppCensus list and that collecting such identifiers is only permitted for limited purposes, such as fraud detection. It told CNET:

We take these issues very seriously. Combining Ad ID with device identifiers for the purpose of ads personalization is strictly forbidden. We’re constantly reviewing apps – including those listed in the researcher’s report – and will take action when they do not comply with our policies.

Anyone who wants more background on the data being collected by a specific app can find it via the AppCensus database tool.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/-XjOpB4kZiQ/

Millions of “private” medical helpline calls exposed on internet

Thanks to Sophos security expert Petter Nordwall for his help with this article.

You know when you call a helpline and a cheery voice advises you that your call may be recorded for a variety of reasons, all of which are supposed to be for your benefit?

Have you ever wondered what happens to all those recordings?

Could something you said confidentially on the phone back in 2014 – personal and private information disclosed during a call to an official medical advice line, for example – suddenly show up in public in 2019?

As millions of people in Sweden are suddenly realising, the answer is a definite “Yes”.

One of the subcontractors involved in running the Swedish medical assistance line 1177 (a bit like 111 in the UK – the number you use for urgent but not emergency medical help) apparently left six years’ worth of call records – 2,700,000 sound files in WAV and MP3 format – on a server that was openly accessible on the internet.

All you’d have needed was a web browser to scroll through and download years of confidential calls.

Ironically, according to Computer Sweden, which published a short video showing a browsing session wandering through the server’s contents, the offending files were available unencrypted over port 443 from a server in Sweden. (The server is now offline.)

To explain.

Web connections need an IP number and a port number to denote the specific service they want from a specific server.

Port numbers are a bit like phone extensions: the main phone number connects you to the front desk, and the extension denotes the specific person or department you want to get through to.

There are thousands of commonly used port numbers – by convention, for example, mail servers listen on port 25, unencrypted web connections (HTTP) on port 80 and encrypted web connections (HTTPS) on 443.

In fact, HTTP and HTTPS are so commonly associated with 80 and 443 that when you write a URL such as http://example.com/, it’s taken as shorthand for the more specific web link http://example.com:80/, where the port number is included explicitly in the URL.

Likewise, https://example.com/ is shorthand for https://example.com:443/.

This shorthand almost always works because almost every server that supports HTTPS does so by listening for incoming network connections on port 443.

In this case, however, Computer Sweden reported that by making a regular, unencrypted HTTP connection to the server mentioned above, but using port 443 instead of the usual port 80, the entire contents of a directory tree called /medicall could be viewed.
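
In other words, fetching the files required nothing more exotic than an ordinary unencrypted request aimed at an unusual port. Here is a hedged sketch of what such a request looks like in Python – the hostname is a placeholder of ours, not the real (now offline) server:

# Sketch of a plain, unencrypted HTTP GET aimed at port 443, where you would
# normally expect a TLS handshake. The hostname below is a placeholder.
import requests

# Note the http:// scheme combined with an explicit :443 -- no TLS involved.
resp = requests.get("http://storage.example.se:443/medicall/", timeout=10)
print(resp.status_code)
print(resp.text[:500])  # on the misconfigured server: a directory listing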

As far as we can see, the calls were conveniently split out into browsable subdirectories like this…

. . .
/medicall/2016/01/01
/medicall/2016/01/02
. . .
/medicall/2017/06/01
/medicall/2017/06/02
. . .
/medicall/2019/02/01
/medicall/2019/02/02
/medicall/2019/02/03
. . .

…and so on.

From the video, the most recent call that was exposed seems to have a datestamp of 2019-02-18T08:59, which is just over 24 hours ago at the time of writing.

The earliest datestamp visible in the video goes back to 2014-02-25T10:24, although that file is rather confusingly in a directory named /medicall/2013/04/09.

According to a follow-up report from Computer Sweden, the unsecured server also contained information about calls relating to medical transfers – essentially, non-emergency ambulance trips.

What next?

Swedish politicians are, understandably, unimpressed, and the Swedish Data Protection Agency is investigating.

This is a huge breach of public trust, and is probably the biggest test so far of the recent GDPR legislation (General Data Protection Regulation) in the European Union.

GDPR was put in place to force companies to think about security proactively in the hope of avoiding breaches, and is geared toward prevention rather than punishment.

Nevertheless, in most EU countries, GDPR permits significantly harsher punishments than any previous legislation, with fines that can go as high as €20,000,000 or 4% of company turnover, whichever is greater.

In this saga, it looks as though there are several levels of contract and subcontract – as far as we can tell:

  • The Swedish public service contracted company X to handle calls to the 1177 number.
  • X subcontracted M1 to handle three of the most populous regions in Sweden.
  • M1 subcontracted M2 – a Swedish-owned company in Thailand – for overflow and after-hours cover.
  • M2 used call centre software supplied by V, whose cloud storage was hosted back in Sweden.
  • V’s servers hosted the open-to-anyone voice files.

Where the buck stops in this case, and who will bear the ultimate responsibility, remains to be seen.

What to do?

If you called 1177 in the past few years in Sweden, you may be at risk, but it may be impossible for the IT companies involved ever to find out how many records, if any, were stolen and abused by crooks.

So far, it looks as though only calls made in the Stockholm, Södermanland and Värmland regions were affected – in those regions, a Swedish-owned company in Thailand was subcontracted to handle overflow and after-hours calls, and it looks as though only calls answered in Thailand are part of the breach.

Sadly, therefore, there isn’t much you can do except to wait and see what emerges next from the investigations that are currently under way.

More generally, our advice is as follows:

  • If you’re in Sweden, check the official 1177 website (1177.se) for news about your region. Not all regions of the country were affected, and not all calls in the affected regions were included in the breach.
  • Consider sticking up for your right not to have your calls recorded. Unfortunately, you may end up waiting longer to be served, given that you often have to wait until a human comes on the line before you can formally opt out. (If sufficiently many of us demand not to be recorded every time we call any sort of helpline, we may eventually make the point that call recording should really be opt-in, not opt-out.)
  • Consider how you archive recorded data, including audio and video. With no financial incentive to re-use existing recording tapes, as we used to do in the analog era, it’s easy to let old data pile up indefinitely, just in case. But do you really need years’ worth of private data available online, in real time, in bulk and unencrypted?
  • Consider using penetration testing services to look for leaks. Don’t wait until a hacker or journalist comes knocking and finds your badly configured web server listening on a port you forgot about. If you do make a cybersecurity blunder, aim to be the first to find it so you can close it before any harm is done.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/8zbH8B3ipis/

Facebook flaw could have allowed an attacker to hijack accounts

If you’re a security researcher in search of a fat bug bounty, Facebook must look like a good place to start your next hunt.

The site has suffered a lot of niggling security flaws in recent times, to which can now be added a new Cross Site Request Forgery (CSRF) protection bypass flaw that could have allowed an attacker to hijack a user’s account in several ways.

Discovered by researcher ‘Samm0uda’ in January, the problem centres around what is technically known as a vulnerable URL “endpoint”, in this case facebook.com/comet/dialog_DONOTUSE/?url=XXXX.  Explains the researcher:

This endpoint is located under the main domain www.facebook.com which makes it easier for the attacker to trick his victims to visit the URL.

CSRF attacks happen when a cybercriminal tricks the user into clicking on a malicious link that makes the user’s browser submit instructions to the vulnerable site, so that they appear to be actions the user intended.

All that is required for this to work is that the user must be authenticated (i.e. logged in) when this happens, although the victim remains unaware that anything untoward is happening.

The technique has been popular for years, which is why websites use anti-CSRF tokens that are reset every time there is a state-changing request.

In this case, the researcher was able to bypass this by adding the Facebook fb_dtsg CSRF token to the POST request body as part of the compromise.
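
For readers unfamiliar with the defence being bypassed, here is a minimal sketch of the anti-CSRF token pattern, written with Flask purely for illustration. It shows the general technique, not Facebook’s implementation:

# Minimal sketch of the anti-CSRF token pattern: tie a random token to the
# user's session and require it on every state-changing request. Flask is
# used for illustration only; this is not Facebook's implementation.
import secrets
from flask import Flask, abort, request, session

app = Flask(__name__)
app.secret_key = "replace-with-a-real-secret"

@app.get("/form")
def show_form():
    # Issue a fresh token and bind it to this user's session.
    session["csrf_token"] = secrets.token_urlsafe(32)
    return (f'<form method="post" action="/change-email">'
            f'<input type="hidden" name="csrf_token" '
            f'value="{session["csrf_token"]}"></form>')

@app.post("/change-email")
def change_email():
    # A forged cross-site request cannot read the session-bound token,
    # so it cannot supply the right value here.
    if request.form.get("csrf_token") != session.get("csrf_token"):
        abort(403)
    return "email changed"

Per the researcher’s description, the vulnerable Facebook endpoint effectively attached the victim’s valid fb_dtsg token to attacker-chosen requests, defeating exactly this kind of check.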

“In the blink of an eye”

A successful attack would allow an attacker to post to the hijacked user’s timeline, change their profile picture, and even trick them into deleting their account.

Admittedly, account takeover executed by changing the user’s recovery email address or phone number would be trickier as it requires the user to be lured to two URLs, one to make a change and another to confirm the action.

So to bypass this, I needed to find endpoints where the ‘next’ parameter is present so the account takeover could be made with a single URL.

That extra step is what eventually reduced Samm0uda’s bug bounty from $40,000 (for takeovers not requiring additional user interaction) to $25,000 (for ones that do).

On 31 January, five days after being reported to Facebook, the issue was fixed, the researcher said.

Facebook vulnerabilities have become a bit of a running theme, including a major breach affecting nearly 50 million account holders last September after attackers exploited a flaw to steal access tokens.

Not long after, a researcher reported how a Facebook user could make themselves admin on any Facebook Business Account.

Separately, a flaw-cum-leak was discovered last week by Nightwatch Cybersecurity in which an Android app with Facebook API access was allegedly “copying user data into storage outside of Facebook and storing it insecurely in two separate locations.”

As with the latest issue, fixing those would have earned their finders a nice fee. In fact, Facebook said in December it had paid out a mere $1.1 million in bounties during 2018, and $7.5 million since 2011.

For a company of Facebook’s size and profitability, that is peanuts.  Long live researchers!

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/UdAM_6q8f-Q/

Revealed: Numbers show extent of security fears about security biz Kaspersky Lab

Eugene Kaspersky’s security biz saw turnover crash by a quarter in North America following the US government’s decision to remove the antivirus software from federal systems.


That ruling came in late 2017, the rationale being that Kaspersky Lab was possibly too close to the Russian administration, or that it was subject to orders from the Kremlin. Appeals failed.

Wider concerns about Kaspersky wares were also evident in the commercial sector stateside, with Best Buy yanking the boxed products off its shelves, and in the UK, where the red flag was raised (forgive the pun).

Kaspersky then moved its tech pipes and plumbing from Russia to Switzerland as part of a Global Transparency Initiative but that didn’t prevent the Netherlands government ruling its software was still too risky to use.


With all this in mind, the privately owned AV slinger today revealed the impact of those challenges: unaudited global sales went up 4 per cent in calendar 2018 to $726m.

In 2017, Kaspersky’s global sales grew 8 per cent year on year to $698m, from the $644m filed in 2016.

The results came straight from the company, so we have to bear in mind these are preliminary numbers, not filed with a repository such as Companies House.

Kaspersky said digital (presumably SaaS) was up 4 per cent year-on-year to an unspecified number, enterprise was up 16 per cent and end-point grew 55 per cent.

Unsurprisingly, the Middle East, Turkey and Africa grew the fastest, up 27 per cent; sales in Russia, Central Asia and the Commonwealth of Independent States were up 6 per cent, as were Europe and Asia Pacific. Latin America was down 11 per cent, “caused mainly by currency devaluation in the region”.

As for North America, sales dropped 25 per cent but there was a beacon of light in the shape of an 8 per cent hike in new digital license sales.

“2018 was a crucial year for us,” said Kaspersky the man. “After all the challenges and unsubstantiated allegations we faced in 2017, we had a responsibility to show that the company and our people deserve the trust of our partners and customers, and in turn, to continue to clearly demonstrate and prove our leadership.”

“Our continued positive financial results are proof of this, demonstrating that users prefer the best products and services on the market and support our principle of protecting against cyberthreats regardless of their origin,” he added.

In addition to shifting its tech infrastructure to Switzerland in 2018, Kaspersky underwent an audit by one of the “Big Four” professional services firms of its engineering practices around the creation and distribution of its threat detection rules databases.

Kaspersky the man may be thanking Huawei for taking the Eye of Trump and co off of his software business and very much onto Huawei’s networking equipment biz. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/02/19/extent_of_security_fears_about_security_biz_kaspersky_lab_revealed/

Accused hacker Lauri Love loses legal bid to reclaim seized IT gear

“Mr Love, you’re not the victim in this. You brought this on yourself; you’re the victim of your own decisions,” District Judge Margot Coleman told accused hacker Lauri Love in court today as she refused to return computers seized from him by the National Crime Agency.

Love, 34, had asked for the return of computers and peripherals taken from him by the National Crime Agency (NCA) when they raided his home in 2012. The Briton, who also has Finnish citizenship, has been accused in the US of hacking a number of government agencies including NASA and the US Department of Energy.

He has not been charged in the UK. The US government tried and failed to extradite Love from the UK last year, with the Lord Chief Justice ruling: “Mr Love’s extradition would be oppressive by reason of his physical and mental condition”. Love has been diagnosed with an autism spectrum disorder.

District Judge Coleman said at Westminster Magistrates’ Court today: “The property as we identified it at the hearing is in two parts. One is the computer equipment itself and the other is the data you can take on it. I think you conceded – certainly I’ve made the finding – the information contained on that hardware is not yours. And you’re therefore not entitled to have it returned to you; it doesn’t belong to you.”

She continued in her written judgment on Love’s application, which was made under the Police (Property) Act 1897: “I found Mr Love to be evasive. He repeatedly tried to avoid answering questions by posing another… his refusal to answer questions about the content of the computers has made it impossible for him to discharge the burden of establishing that the data on his computers belongs to him and ought to be returned to him.”

Love, she said, “asked me to accept an undertaking from him that if the computer hardware is returned to him he would not decrypt or even attempt to decrypt any of the data. That is not a course I am willing to take.”

The former electrical engineering student from Suffolk did not have a lawyer for his application and represented himself against barrister Andrew Bird, who appeared for the NCA.

The NCA did get inside one of Love’s hard drives

Love wanted the return of two laptops and a PC tower. On a Fujitsu Siemens laptop the judge ruled that he had “private and confidential data exfiltrated (‘hacked’) from the ‘Police Oracle’ website”, which is a news site for police workers. His Compaq computer tower contained, among other things, “pirated versions of copyrighted films”, while a WD external HDD had a file on it named in the judgment as “truecrypt2”.

A hard drive within one of the laptops contained a file identified as “truecrypt1”. However, the judge ruled that this drive also contained:

  • Hacked data from the United States Department of Energy and Senate
  • Details of complainants and respondents to discrimination and harassment claims within the US military
  • Copies of passports both UK and foreign with no apparent legitimate connection to Mr Love
  • Email addresses and associated passwords (the passwords in “hashed” format but which can in certain cases be unhashed – see the sketch after this list)
  • Details of names and home addresses and contact details of 258 court staff and judges in California
  • Folders beginning “lolcc” with details of over 232,000 individuals with their names, billing address, email address, telephone number, credit card number, expiry date and CVV number together with details of transactions, many of which appear to be donations to charities having no apparent connection with Mr Love
  • Private data, including photographs of vulnerable children, from an autism charity and Treehouse School
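
As promised above, here is a brief sketch of why “hashed” passwords can sometimes be unhashed: if the hashes are unsalted, an attacker can precompute the hashes of common passwords and simply look a leaked hash up. (MD5 and the wordlist below are our illustrative assumptions; the judgment does not say which hash function was involved.)

# Sketch of a dictionary attack on unsalted hashes: precompute the hash of
# each common password, then reverse a leaked hash by table lookup.
# MD5 and the wordlist are illustrative assumptions only.
import hashlib

common_passwords = ["password", "123456", "letmein", "qwerty"]
lookup = {hashlib.md5(p.encode()).hexdigest(): p for p in common_passwords}

leaked_hash = hashlib.md5(b"letmein").hexdigest()  # stand-in for leaked data
print(lookup.get(leaked_hash, "<not in dictionary>"))  # -> letmein

Salting each stored password with a random value defeats this precomputation, which is why the hedge – “in certain cases” – matters.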

The NCA copied the drive’s contents through the use of a so-called “harvest drive”, which the judge said “allows data to be seen and preserved”. The NCA slurped 124GB of data; however, “before that process could be finished an encryption process cut in to the devices themselves”. DJ Coleman ruled: “I find as a fact that the information on the HARVEST drive is data taken from Mr Love’s computer equipment. This is not a situation where the computers have been used as a repository by others, as Mr Love suggested may have been the case.”

Love, who wore his usual tieless black suit with a neatly pressed white smart-casual shirt, argued that his computers contained data of “inestimable sentimental value” to him, saying that the NCA and the Crown Prosecution Service ought to “do one’s business or get off the pot” and charge him with a crime instead of leaving him in legal limbo and making accusations against him that, he said, he could not fully respond to.

DJ Coleman ruled that Love was “unwilling to answer questions about the contents of the computers”. Precise details of his cross-examination by Bird are withheld from the public by a reporting restriction order because, the judge ruled, it could “compromise a criminal prosecution” later on.

‘Have you tried to get a job?’

Barrister Andrew Bird for the NCA asked the judge to award legal costs for the application against Love. He said: “The reason why the court should award costs is this isn’t the first time this application was made.”

Love had originally made an application to Bury St Edmunds magistrates in 2015 before withdrawing it and resubmitting it.

“The public has funded this defence to this claim,” Bird told the court, “and it may well be thought wrong the public should have to fund extensive costs in a claim of this nature.”

On being fiercely questioned by DJ Coleman about why he withdrew and resubmitted his application, Love said he shouldn’t have to pay the NCA’s legal costs just because he asked for his computers back, arguing that even if the odds “are only 50 per cent favourable, some people have to try. The arguments can be heard, occasionally the arguments persuade the court.”

Scoffing, DJ Coleman replied: “Mr Love, your arguments didn’t even get off the starting block because you didn’t have an argument… there are consequences to the steps you take.”

Gasps of “oh my god” and “what the fuck?” came from the public gallery. Clearly stressed, Love walked in a small circle as he gathered his thoughts. A stony-faced District Judge Coleman tapped the bench with her hand, fingers pointed outwards, and said: “Can you just focus on the question please, I don’t have the time for this! You made, withdrew the application, three months later you renewed it. I expect a simple answer. Why did you decide you would launch this again?”

Levelly, Love replied: “There was a chance I would be taken from the country and never have an opportunity to seek restitution. I thought I should do it immediately… it’s very hard to use the civil courts when you’re in a cell in New Jersey.”

“Mr Love, you’re not the victim in this,” replied the judge. “You brought this on yourself, you’re the victim of your own decisions. We’re only talking about you. Have you any income?”

“My income is £120 a week,” said Love, confirming he receives Employment Support Allowance benefits.

“Have you tried to get a job?” asked District Judge Coleman, prompting whispers of “Oh my god are you serious, is she for real?” from the public gallery, where Love’s friends and supporters, including his girlfriend Sylvia Mann, were sitting.

Flushed, Love replied: “I’m meeting someone on Wednesday to try and get a job as a security consultant. I am studying informally.”

Grimacing, the judge eventually said she was not going to “make an order, which technically could be made, against a litigant in person who I’m satisfied won’t have known the difference. It is irritating to say the least that the taxpayer has had to bear the cost of the proceedings. But I’m not going to make an order.”

‘Terrible state of affairs’

Speaking outside the court, Love later said: “What has gone badly wrong here is that criminal accusations were levied in a civil court while I was unrepresented, without me having access to the evidence that was proffered; this is highly irregular, not rule of law. My ultimate aim is to be prosecuted. A weird thing to say because I don’t think I committed a crime, but until I am prosecuted, successfully or unsuccessfully, I can’t leave the country because my friends in America might try to kidnap me again.”

He added: “If I don’t appeal this case, I can just see that the law has taken a very regressive stance on cryptography and people no longer have property rights if they do not cooperate in decryption. This is a terrible state of affairs.” ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/02/19/lauri_love_computer_appeal_rejected/

Security Leaders Are Fallible, Too

Security leaders set the tone for their organizations, and there are many places where the process can go wrong. Second in a six-part series.

We’re only human; we all make mistakes sometimes. Every aspect of securing, defending, and attacking has a human element, an element that profoundly affects all the other components and guarantees that there can be no silver bullet in cybersecurity. We need to factor in human error as part of the cybersecurity process.

This is the premise of the article series we kicked off recently, addressing cybersecurity and the human element from six perspectives of fallibility: end users, security leaders, security analysts, IT security administrators, programmers, and attackers. Last time, we addressed the truth about end users. This time, we cover security leaders.

Security Leaders
Security leaders set the tone and the strategy for cybersecurity within their organizations. Depending on the structure and nomenclature of the organization, a security leader’s title may be chief information security officer, chief security officer, chief information officer, chief risk officer, vice president of cybersecurity, director of cybersecurity, or any one of a number of similar titles. These leaders own the responsibility of protecting the organization’s digital assets and ensuring the confidentiality, integrity, and availability of their organization’s data.

Common Mistakes
One of the biggest challenges for security leaders is how to communicate an accurate description of the organization’s risk profile and security posture to senior officers and the board of directors. We have seen that some leaders have a tendency to paint a rosier picture than the reality of the situation, implying that there is little or no risk of a successful cyberattack. Others don’t take the time, or are not given the opportunity, to provide vital information on threats and threat actors.

When the flow of cybersecurity knowledge does not move upward, security leaders run the risk of having those above them, who often have less understanding of cybersecurity, dictate the direction or minimize the role of the security team. Tasks are prioritized not based on their criticality within the organization but on the amount of attention the topic receives in the news. Purchases are not based on organizational needs but on how much publicity the vendor has received. Investment in proper training for security team members to enhance their knowledge, skills, and abilities is overlooked, and investments are focused primarily on procuring technology.

Incident response (IR) drills don’t receive interdepartmental support. And the scope of the security leader’s responsibilities doesn’t include all of the areas that should be within his or her purview.

Repercussions
If the captain is not steering in the right direction, the ship is bound to go off course. In this case, that means the organization will likely end up suffering from a significant incident at some point. The incident may result from a lack of expertise or due to an insufficient budget (because if everything is under control there is little need to increase spending), an unpatched vulnerability (because the patch was put on the back burner while a vulnerability that was making headlines was addressed), a lack of pertinent technology (because funds were spent on “shiny objects”), an access control misconfiguration (because the security leader had no oversight of the activities), or a similar cause that is the consequence of misguided leadership.

In addition, if the risk was downplayed or proper IR plans weren’t in place before the incident, then the rest of the organization will be unprepared when the situation arises. Organizational transparency suffers, proper response gets delayed, and the incident — be it a data breach, data destruction, or a business disruption — may have more effects and be costlier.

Minimize Mistakes
As many organizations have recognized over the past few years, cybersecurity must be a board-level issue. When cybersecurity is appropriately prioritized, it’s given the resources it needs to operate effectively. We see reasonable budgets for personnel and technology, support from other departments, and a role where the security leader has oversight of, or is heavily involved in, key areas that affect security posture, such as vulnerability management, access and identity management, and asset management.

The security leader must also provide a realistic depiction of how the organization’s cybersecurity operations are running. That means being up-front about the state of the organization’s security posture, identifying shortcomings, and devising concrete plans to address these deficiencies. In addition, the reporting of metrics and key performance indicators should not be viewed as an opportunity to sugar-coat or pat oneself on the back but, rather, as a way to convey all the work the security team is doing, how that work is reducing the organization’s security risk, and how weak points are being shored up.

Change the Paradigm
We must recognize that many of the security leader positions today are not set up for success. The security leaders face constraints from multiple angles — budget, network infrastructure, corporate policy, organizational structure — and often bear the full burden of the responsibility when there is an incident. This dynamic must change.

On the flip side, security leaders, who have an average tenure of about 24 to 48 months, need to be more committed to their roles. A strong cybersecurity posture isn’t built in a day. When a security leader leaves after only a couple of years, he or she can set back the security program by months, quarters, or even years.

Obviously, if the position is better designed for success with board-level access, a culture that values cybersecurity, and sufficient budget, churn will decrease. But security leaders with true vision will recognize that they can create the environment they need for success by effectively communicating the critical role of cybersecurity in the organization’s growth and prosperity. It is that type of security leader who can develop a security program that can effectively contend with today’s threats.

Join us next time to discuss the third perspective in our series: security analysts. 

Roselle Safran is President of Rosint Labs, a cybersecurity consultancy to security teams, leaders, and startups. She is also the Entrepreneur in Residence at Lytical Ventures, a venture capital firm that invests in cybersecurity startups. Previously, Roselle was CEO and …

Article source: https://www.darkreading.com/careers-and-people/-security-leaders-are-fallible-too/a/d-id/1333791?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Palo Alto Networks to Buy Demisto for $560M

This marks Palo Alto Networks’ latest acquisition and its first of 2019.

Palo Alto Networks will acquire Demisto, a security orchestration, automation, and response (SOAR) firm, for a total purchase price of $560 million, the two companies announced today.

Demisto, founded in 2015, provides a platform designed to automate and standardize incident response processes. More than 150 customers across healthcare, financial services, high tech, and other industry verticals use its automated playbooks to date.

The company has raised a total of $69M over three rounds of outside funding. Its latest was a Series C round, which raised $43M, on Oct. 10, 2018.

Palo Alto Networks said it plans to strengthen Demisto’s existing integration with its Application Framework. At the same time, Demisto will continue to execute on its growth plans and leverage Palo Alto Networks’ distribution network, the firm said in a statement.

“We have dedicated ourselves to the challenge of automation because we believe that relying on people alone to combat threats will fail against the scale of today’s attacks,” said Slavik Markovich, Demisto CEO, in a statement. Markovich, along with fellow founders Rishi Bhargava, Dan Sarel, and Guy Rinat, will join Palo Alto Networks as part of the acquisition.

This is the latest acquisition by Palo Alto Networks and its first of 2019, following a string of security-focused deals throughout last year. In March 2018 it bought Evident.io for $300M in an effort to extend its API-based security capabilities and help users secure cloud deployments. The next month it turned its focus to endpoint security with its purchase of Secdo. Oct. 2018 brought a $173M acquisition of RedLock, another buy for cloud security and threat detection.

Its most recent acquisition is slated to close during Palo Alto Networks’ fiscal third quarter, following regulatory approval and other closing conditions. Read more details here.

 

 


Dark Reading’s Quick Hits delivers a brief synopsis and summary of the significance of breaking news events. For more information from the original source of the news item, please follow the link provided in this article.

Article source: https://www.darkreading.com/cloud/palo-alto-networks-to-buy-demisto-for-$560m/d/d-id/1333902?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple