Brit spooks slammed over ‘gentlemen’s agreement’ with telcos to get mass comms data

Privacy International has slammed the UK’s spy agencies for failing to keep a proper paper trail over what data telcos were asked to provide under snooping laws, following its first ever cross-examination of a GCHQ witness.

The campaign group was granted the right to grill GCHQ’s star witness after he made a series of errors in previous statements submitted to the Investigatory Powers Tribunal (IPT). The evidence was part of a long-running challenge over the spy agency’s collection of bulk communications and personal data.

Although the witness’s most recent errors related to submissions made at an October 2017 hearing about how much access IT contractors employed by GCHQ have to data, much of the cross-examination aimed to unpick GCHQ’s role in choosing what information telcos hand over.

Under section 94 of the Telecommunications Act, communications service providers and public electronic communications networks can be asked to provide the UK’s spy agencies with bulk communication data on users’ phone and internet records.

The use of s94 directions only became public knowledge in 2015, when the government introduced its so-called Snooper’s Charter and admitted that such collection had been going on since 1998. Privacy International launched a legal challenge against the government and, in 2016, the IPT ruled the activity illegal for the time it was carried out under wraps.

Since then, the tribunal has been ploughing through related issues arising from the case, as Privacy International pushes to uncover more detail about how the s94 directions work, with GCHQ providing much of its evidence through one key witness.

Witness X, as he is known – he has been granted anonymity and speaks from behind a screen – was the deputy director for mission policy at the Cheltenham-based agency for about three years up until last month.

Throughout the case, he has given evidence and submitted multiple statements on behalf of GCHQ to the tribunal. However, Witness X has had to amend his statements a number of times, and following the most recent correction Privacy International was granted permission to cross-examine him.

During the two-hour hearing this week, Thomas de la Mare, acting for Privacy International, unpicked Witness X’s statements in granular detail, pressing for precise explanations on how GCHQ worked with service providers, how providers were issued with demands for information and how detailed those requests were.

Central to the debate are the so-called trigger letters – the term used in court for the notices sent to CSPs after the Secretary of State signs off on the s94 direction, detailing the information they are asked to provide.

Much of the questioning was aimed at ascertaining how much power GCHQ has in setting these specifics. The government and spy agencies have repeatedly pointed to the fact the Secretary of State holds the power to OK s94 directions – indeed, this was listed as the first step in the process in a letter sent to the then commissioner of interception of communications, Sir Swinton Thomas, in 2004.

However, Privacy International believes these may be broad sign-offs, with GCHQ left to fill in the specifics.

During the hearing, Witness X accepted that his agency had been able to narrow the focus of the request for data, but said this would be “a technical narrowing, rather than substantive” and related to non-communications data.

But Camilla Graham-Wood, solicitor at Privacy International, said that Witness X’s evidence “has further muddied the waters in relation to what exactly went on in relation to the secretive section 94 regime”.

She said: “When the agencies first approached the Commissioner in 2004 for approval of this regime, the involvement of the Secretary of State was relied upon to justify these secretive practices, particularly given the absence of parliamentary scrutiny.

“However, what has transpired, is that the section 94 directions signed by the Secretary of State were overly broad, and in light of GCHQ’s evidence, the decision to choose what data was to be provided by the telecommunications operators was effectively exercised by GCHQ, without the involvement of the Secretary of State.”

Cosy gents club?

During the hearing, de la Mare also made much of the relationship between GCHQ and the CSPs, emphasising the closeness between the providers and GCHQ’s “sensitive relationship team” – which acts as the sole point of contact for the telcos.

De la Mare argued that the essence of the underlying agreement was “consensual” and that the s94 directions were simply a “cover” to “justify” what the companies had already been willingly volunteering for many years.

Witness X replied that the relationship was “cooperative”, but took issue with the idea that using s94 was a cover or a justification – saying it was simply used to provide a legal basis for the transfer of information. He stressed that there wasn’t a negotiation over the specifics of what the CSPs would provide.

“The willingness is high level – is it [the CSP] willing to provide data or not,” he said – rather than the provider being able to say it would provide this or that type of data.

He added that different PECNs might provide different data sets not because of what they had offered but because of the “nature of their business”.

Elsewhere in the hearing, de la Mare produced a table – compiled using one of Witness X’s previous statements – that set out the dates s94 directions were approved, the dates the corresponding trigger letters were sent and the level of detail these notices included.

Counsel used this to demonstrate that, in six of the 12 sets of s94 directions identified between 1998 and 2016, the information requested listed general, rather than specific, communications data.

And, despite Witness X’s earlier evidence that trigger letters are always sent out immediately after the foreign secretary signs off on the s94 direction, in at least two cases there was a delay.

In addition, there were a number of cases where there was no record of a trigger letter having been sent. The witness put this down to the fact that the information was sometimes given by other means, especially to providers that don’t have the capability to deal with classified materials.

However, this effectively means there is no written record of the specific data requested from the companies, which would arguably make it impossible for GCHQ to cross-check and confirm it had received the correct information.

Graham-Wood said that such evidence raised questions over the relationship between GCHQ and the telcos.

“Where companies were so eager to hand over data about their customers, section 94 was required as a legal cover,” she told The Register.

“The lack of a paper trail documenting what companies were asked to provide and reliance upon oral agreements strikes more of an old-school gentleman’s agreement, despite it concerning highly sensitive personal data. This is clearly unacceptable.” ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/02/28/privacy_international_questions_oversight_of_uks_mass_comms_data_collection/

How to Secure ‘Permissioned’ Blockchains

At the heart of every blockchain is a protocol that agrees on the order and security of transactions in the next block. Here’s how to maintain the integrity of the chain.

Permissioned blockchains are growing in popularity as businesses attempt to cash in on the blockchain trend while keeping a firm hand on the tiller. Unlike their non-permissioned cousins (such as bitcoin or Ethereum), permissioned blockchains are controlled by an authority that grants permission to every node that participates.

If you’ve read news stories recently about innovative uses of blockchain, the chances are they were based on permissioned blockchains. In these cases, the authority figure was probably a consortium of companies or a single organization.

This article considers the security challenges of deploying a small blockchain with authorized participants and provides advice for secure deployment. But before jumping into the finer details, let’s discuss the characteristics of a permissioned blockchain and its terminology.

Permissioned versus Non-permissioned Blockchains
At the heart of every blockchain is a protocol that agrees on the order of new transactions in the next block. It is called “consensus” because it is a binding agreement between all the validating nodes. It’s critical that this process is kept secure, as it maintains the integrity of the chain.

In non-permissioned blockchains, consensus is typically a race to solve a hard mathematical problem in exchange for a small financial reward. Validating nodes collect all the transactions they know about, choose an order, and begin solving the block’s challenge. Sheer luck determines who wins the race, although those with more computational power are likelier to succeed. These consensus protocols can withstand significant attacks on the chain (it stays honest so long as attackers control less than half of the network’s computational power), but at the cost of transaction speed and finality. For instance, bitcoin achieves single-digit transactions per second and finality can take over an hour.
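As an illustration of that race, here is a minimal proof-of-work sketch in Python; the block format and difficulty are invented for the example, and real protocols are far more involved:

    import hashlib

    def mine_block(transactions: str, prev_hash: str, difficulty_bits: int = 16):
        """Brute-force a nonce until the block header hashes below a target.
        Sheer luck decides who finds a valid nonce first, though more hash
        power means more lottery tickets."""
        target = 2 ** (256 - difficulty_bits)  # more bits = harder puzzle
        nonce = 0
        while True:
            header = f"{prev_hash}|{transactions}|{nonce}".encode()
            digest = hashlib.sha256(header).hexdigest()
            if int(digest, 16) < target:
                return nonce, digest  # this node wins the race for the block
            nonce += 1

    nonce, block_hash = mine_block("alice->bob:5", "00" * 32)
    print(f"mined with nonce {nonce}: {block_hash}")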

In permissioned blockchains, consensus is more orderly and validators take turns to propose a block for the others to approve. It’s a much faster process, meaning permissioned chains achieve high transaction throughput and often instant finality. This type of consensus generally requires over two-thirds of nodes to be trustworthy.
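A toy sketch of that turn-taking, with invented names rather than any real consensus engine’s API:

    def run_round(validators, height, propose, collect_vote):
        """One permissioned-consensus round: the proposer for this height
        suggests a block, which commits only with more than 2/3 approval."""
        proposer = validators[height % len(validators)]  # validators take turns
        block = propose(proposer, height)
        approvals = sum(1 for v in validators if collect_vote(v, block))
        quorum = (2 * len(validators)) // 3 + 1  # strictly more than two-thirds
        return block if approvals >= quorum else None  # commit = instant finality

    # With five validators the quorum is four, so one faulty node is tolerated.
    banks = ["A", "B", "C", "D", "E"]
    committed = run_round(banks, height=42,
                          propose=lambda who, h: f"block {h} from {who}",
                          collect_vote=lambda voter, blk: voter != "E")  # E misbehaves
    print(committed)  # four of five approvals meet the quorum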

Security Challenges Faced by Permissioned Blockchains
The term “blockchain” triggers a certain image — one of decentralized control and security through scale. The properties of large non-permissioned chains, such as bitcoin and Ethereum, are well known to the public, and they occasionally shape the expectations of those deploying permissioned chains for enterprise purposes. It’s important to realize that when you shrink the number of participants and rely on trusted validators, the security challenges closely resemble those of traditional IT systems rather than those of large public blockchains.

Let’s imagine a permissioned blockchain established between five major banks. In a five-node chain, this means that at least four nodes (strictly more than two-thirds) must be behaving correctly for consensus to succeed.

Nodes misbehave for two main reasons: their legitimate owners have ill intent or they have been compromised by an attacker. The former challenge is about preventing collusion (a topic for another day), while the latter is about securing the private keys of the nodes and ensuring they are used only for signing messages in accordance with the consensus protocol.

Of equal importance is the protection of the private keys that control the blockchain accounts. Keys sign transactions and permit the flow of funds out of a particular account. Unauthorized access to these keys could result in an unwarranted transfer of value between the banks, which would be costly to resolve and could doom the entire project.

Finally, the linchpin of the entire system is the set of keys that authorized the participants in the first place. Access to these keys could allow an attacker to commission new validating nodes, which would make it easier to control enough voting power to corrupt the chain.

Responding to the Threat
While public blockchains rely on the sheer number of nodes for security, permissioned chains have to turn to traditional methods of hardening, including protected environments for private keys, and processes and procedures for securely operating the chain.

The private keys used by validating nodes should be physically protected, using technology such as hardware security modules (HSMs). The use of HSMs ensures that the private keys cannot be read from server memory if a validating node is compromised. It’s even possible to protect the consensus logic using an HSM, ensuring the data complies with the consensus protocol before signing it with the key. A classic example is preventing double-signing (i.e., forking) by refusing to sign two blocks at the same height. My research team has released an open source example of this technique, using a popular permissioned consensus engine.
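Here is a minimal sketch of that double-signing check, in plain Python rather than HSM firmware; the signing callback and the height-tracking policy are simplifying assumptions for illustration:

    class SigningGuard:
        """Enforce the consensus policy before releasing a signature: never
        sign a second block at the same (or an earlier) height, since that
        is exactly what creates a fork."""

        def __init__(self, hsm_sign):
            self._hsm_sign = hsm_sign      # callback to the protected key
            self._last_signed_height = -1  # highest height signed so far

        def sign_block(self, height: int, block_bytes: bytes) -> bytes:
            if height <= self._last_signed_height:
                raise PermissionError(f"refusing to double-sign height {height}")
            self._last_signed_height = height
            return self._hsm_sign(block_bytes)

    guard = SigningGuard(hsm_sign=lambda data: b"sig:" + data[:8])  # stand-in signer
    guard.sign_block(10, b"block ten")
    guard.sign_block(11, b"block eleven")
    # guard.sign_block(11, b"a rival block")  # would raise PermissionError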

Protecting the private keys used by the blockchain accounts should follow a risk-based approach. For wallets holding small amounts of value, the protection could be as simple as a USB-stick HSM with a button to authorize transfers. In enterprise scenarios, the value involved will likely demand the use of commercial HSMs, and perhaps the sharing of signing duties between multiple signatories.

Protecting the heart of the blockchain — the private keys that grant access to play — is rather similar to protecting any other public key infrastructure (PKI). After all, it is a PKI. Contrary to the common perception of blockchains, permissioned chains rely on a PKI that issues credentials to its members, and every transaction can eventually be validated with reference to a root of trust. Consequently, the keys involved must be protected in a similar manner to any other root of trust — with HSMs, separation of duty, auditing, and so on.

The Future of Permissioned Chains
The hype surrounding blockchain shows no sign of abating, so I expect to see permissioned blockchain projects championed in the news for some time. Designing a permissioned blockchain should include security from day one, a step many projects will miss as they rush into this new market. I anticipate a steady increase in news stories relating to flaws in consortium blockchain implementations.

As the hype dies away, fewer projects will be commissioned, but by that time we should have some good advice available to blockchain owners through bodies such as the Accredited Standards Committee X9 and ISO/TC 307.


Duncan Jones leads the research and innovation team at Thales e-Security, focusing on emerging security threats and tomorrow’s cyber technologies. He studied computer science at the University of Cambridge, before spending ten years working in security and payments.

Article source: https://www.darkreading.com/endpoint/how-to-secure-permissioned-blockchains-/a/d-id/1331129?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Why Cryptocurrencies Are Dangerous for Enterprises

When employees mine coins with work computers, much can go wrong. But there are some ways to stay safe.

Whatever the latest hot new cryptocurrency is — be it bitcoin or one of its quickly sprouting rivals — coin mining and trading by employees and by hackers are a considerable security problem in the enterprise.

Cryptocurrencies and the industries sprouting around them are infecting enterprise desktops and servers with malware, making systems vulnerable to cyberthieves, and draining electricity. The thieves could be after customer lists, passwords, or databases, or be looking to turn your computers and devices into bots, ready to spread more malware.

The threats might start with employees who try to make a couple of extra dollars by mining or trading cryptocurrencies. Today, insiders are the biggest problem, as they are more than likely using enterprise-owned computers or company-owned Wi-Fi to pursue their cryptocurrency interests. Cryptocurrency is the new day trading, both disruptive and dangerous, largely because of the nature of the software those activities require.

There are two types of software. One mines cryptocurrency coins; the other manages digital wallets.

Coin-mining software uses CPU cycles and memory on the end user’s computer to solve complex math problems. The more problems that are solved, the more coins are mined (created) and a portion is added to the user’s account. Coin mining requires computing horsepower in order to make just a few pennies’ worth of cryptocurrency. The more powerful the computer, the faster the employee makes money. If the employee can manage to harness multiple desktop/notebook computers — or more powerful computers, such as corporate servers or cloud resources — the employee makes even more money, but the enterprise suffers.

There are two dangers. First, running mining software consumes considerable electricity. Second, if coin-mining software is installed on servers, it reduces the processing capacity available for legitimate work. Today, mining bitcoins requires too much processing power to be efficient, so employees are mining newer or lesser-known currencies, such as Monero and Ethereum. Don’t underestimate the electricity consumed by mining. Compare it to playing computer games: a regular gaming computer running eight hours a day consumes roughly 2,000 kWh of electricity per year; with mining, it’s more like 5,000 kWh. That’s thousands of dollars wasted.
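To put rough numbers behind those figures (the wattage and the electricity price below are illustrative assumptions, not data from the article):

    # Illustrative assumptions: a machine mining 24/7 at ~570 W,
    # and electricity billed at $0.12 per kWh.
    watts = 570
    kwh_per_year = watts * 24 * 365 / 1000     # ~4,993 kWh, near the 5,000 kWh cited
    dollars_per_machine = kwh_per_year * 0.12  # ~$600 per machine per year
    print(f"{kwh_per_year:,.0f} kWh/year, about ${dollars_per_machine:,.0f} per machine")
    # A fleet of hijacked desktops or servers multiplies that into thousands of dollars.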

The second threat is digital wallets, the software used to manage digital currency accounts. Wallets are targeted by cyberthieves, who break in to steal the cryptocurrency coins. If those wallets are stored on company-owned computers, the hackers are breaking into your own resources, including your computers, servers, or network.

Digital wallets and mining applications are not carefully written programs from name-brand vendors. More likely, they are written by anonymous authors and distributed through questionable channels on the Dark Web. To obtain cryptocurrency software, one has to venture into dubious corners of the web, visiting sites frequently targeted by hackers, and the software itself may be a Trojan carrying malware. For example, EtherDelta, a coin exchange marketplace, was taken over by hackers in 2017 when they subverted the website’s DNS records, allowing them to steal cryptocurrency coins.

Hackers may also try to subvert employees’ coin-mining and trading activities via malware bundled with coin applications. Another recent danger is the use of malicious JavaScript or malicious ads to perform some of the calculations needed to mine coins — but this time for the hacker’s account. Scripts embedded in web pages use the end user’s computer to perform calculations around the clock, delivered via JavaScript in browsers such as Firefox, Chrome, Safari, or Edge. Most JavaScript is benign, but it can be turned malicious.

Stay Safer
So, what can you do? A few things:

  • Make sure your antivirus software is up to date on all corporate assets, and that your AV solution blocks coin software; contact your vendor to confirm.
  • Don’t allow non-corporate devices to access the enterprise network, and that includes personal devices, such as the employee’s personal computer brought into the office.
  • Set strong policies against the use of mining or coin-management software on enterprise devices or in the workplace — treat it as you would pornography or other disruptive and dangerous software.
  • Configure firewall policies to block access to known websites involved in cryptocurrencies or which are hubs for the distribution of coin software. This is an ever-changing list, so you must be vigilant.
  • Sites to consider blocking include coinbase.com, cex.io, binance.com, kraken.com, etherdelta.com, coindesk.com, and blockchain.info.
  • Monitor corporate computers for excessive CPU or memory utilization, which could be the result of coin-mining software; a starting-point monitoring sketch follows this list.
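For that last item, a hedged sketch using the third-party psutil library; the 80 per cent threshold is an arbitrary assumption to tune for your own environment:

    import psutil  # third-party: pip install psutil

    CPU_THRESHOLD = 80.0  # percent; an arbitrary cut-off

    def flag_suspicious_processes():
        """Return processes sustaining high CPU, one possible sign of coin mining."""
        suspicious = []
        for proc in psutil.process_iter(["pid", "name"]):
            try:
                # Sample each process over a short window.
                if proc.cpu_percent(interval=0.2) > CPU_THRESHOLD:
                    suspicious.append((proc.info["pid"], proc.info["name"]))
            except (psutil.NoSuchProcess, psutil.AccessDenied):
                continue  # process exited or is off-limits; skip it
        return suspicious

    for pid, name in flag_suspicious_processes():
        print(f"high CPU: pid={pid} name={name}")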

In conclusion, be aware of the myriad cryptocurrency issues described above, and foster a culture of security in your enterprise before the problem becomes an epidemic.


David Shefter serves as Chief Technology Officer for Ziften Technologies, where he brings an expansive background in security, IT, and emerging technologies for finance. Previously, he served as Senior VP of Innovation and Emerging Technology at Citigroup.

Article source: https://www.darkreading.com/endpoint/why-cryptocurrencies-are-dangerous-for-enterprises/a/d-id/1331121?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Hacking on TV: 8 Binge-Worthy and Cringe-Worthy Examples

From the psycho-drama Mr. Robot to Grey’s Anatomy’s portrayal of the outright dangers of ransomware taking down a hospital, hacking themes now run deep in today’s TV shows.

Hackers and inside stories about hacking have made their way into popular culture in a big way.

There’s still nothing that quite tops USA Network’s Mr. Robot for its sheer ability to drill down into the details of how hackers think and operate. The series’ protagonist and anti-hero, Elliot Alderson, played by Rami Malek, is a self-described hacker and computer tech, complete with a drug problem, social anxiety, and chronic depression. While Malek’s character is clearly a bit of a stereotype, some viewers are forgiving.

“By far, Mr. Robot is the most accurate in the way it portrays the methods of hackers,” says Jason Haddix, vice president of trust and security at Bugcrowd. “In most shows, the longest time to exploit you’ll see is about 30 seconds. It’s never that simple. Our work can take hours and days to do enough reconnaissance to find a flaw.”  

Stu Sjouwerman, founder and CEO of KnowBe4, says shows such as NCIS and its spin-offs NCIS Los Angeles and NCIS New Orleans tend to be “quick and dirty” and don’t really give viewers a sense of how hacking is quite tedious.

“Hacking is a methodical and tedious exercise, but once you get there it’s like, ‘whoa,'” Sjouwerman says.

The sense of excitement and danger that surrounds hacking makes for good television. Hackers took over Grey Sloan Memorial Hospital across multiple episodes of ABC’s Grey’s Anatomy earlier this year, and despite all the subplots, which included domestic abuse themes and a transgender hacker who saves the day, the storyline gave a reasonable depiction of how disruptive and dangerous it would be for a hospital to be held up for a $20 million ransom.

Other TV shows this year, from Bull on CBS to Showtime’s Homeland, offered up hacker plots and subplots. And HBO plans to launch Hackerville this fall, a new series based in Europe.

In putting together this slideshow, we drew from today’s popular shows as well as many popular techie shows of the past, like Person of Interest, CSI: Cyber and Numb3rs. If you haven’t seen some of these shows, click on the links to trailers and episode clips to catch up and do some binge watching of your own.

 

Steve Zurier has more than 30 years of journalism and publishing experience, most of the last 24 of which were spent covering networking and security technology. Steve is based in Columbia, Md.

Article source: https://www.darkreading.com/careers-and-people/hacking-on-tv-8-binge-worthy-and-cringe-worthy-examples/d/d-id/1331115?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

FTC Settles with Venmo on Security Allegations

Proposed settlement addresses complaints that Venmo misrepresented its security and privacy features.

The Federal Trade Commission has reached a settlement with Venmo, a PayPal company, regarding allegations that the company misrepresented the way it handled and made available funds as well as the level of security of its financial platform.

The charges, originally filed in 2015, alleged that some Venmo customers suffered “real harm” when the company either didn’t make funds available in the advertised time or withdrew funds after their initial deposit.

Venmo advertised “bank-grade security” and transaction privacy for its customers; the FTC found that the company had delivered neither. In the proposed settlement, Venmo admits to no wrongdoing, but does admit to the facts of the allegations.

Under the agreement, approved by a 2-0 vote of the commission, Venmo is required to stop misrepresenting the level of security available for transactions and to be more transparent with customers about both the security and privacy of their transactions. In addition, because of the Gramm-Leach-Bliley (GLB) Act component of the complaint and settlement, Venmo will have to submit to twice-annual compliance audits for 10 years.

The proposed agreement will be published to the Federal Register and become subject to public comment for 30 days. After that time, the commission will vote on whether or not the settlement will become final.


Dark Reading’s Quick Hits delivers a brief synopsis and summary of the significance of breaking news events. For more information from the original source of the news item, please follow the link provided in this article.

Article source: https://www.darkreading.com/risk/compliance/ftc-settles-with-venmo-on-security-allegations/d/d-id/1331156?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Apple co-founder Steve Wozniak scammed by Bitcoin fraudster

Apple co-founder and tech icon Steve Wozniak has reportedly admitted falling victim to Bitcoin fraud.

News of the incident emerged at a conference in India, where ‘The Woz’ described losing seven Bitcoins (currently worth $70,000) to a fraudster who paid for them using a credit card but then issued a chargeback.

Of all the types of Bitcoin fraud – fake exchanges, fake wallets, phishing attacks on wallets – this must count as one of the most basic.

A victim agrees to sell Bitcoins to a buyer, who pays for them using a credit card. Once the currency has been transferred, the buyer issues a card chargeback, which means the seller is likely to end up with nothing.

This exploits a characteristic of Bitcoins that people selling the currency on an irregular basis might not realise: once Bitcoins have been transferred, they can’t be recalled.

And because Bitcoin wallets are pseudonymous (i.e. the person receiving Bitcoins is identified by an address, not a name), identifying scammers is not easy unless they accidentally reveal themselves elsewhere.

Wozniak said:

It was that easy! And it was from a stolen credit card number so you can never get it back.

It’s also possible the scammer simply claimed the card had been stolen when it hadn’t been. Chargebacks can be issued weeks or months after the transaction, which increases the risk for anyone receiving payments this way.

Wozniak doesn’t say when the fraud happened but if it was some time ago his losses would have been a lot lower than if it happened today, given how the cost of Bitcoin has risen.

In response to the rising incidence of card fraud, most Bitcoin exchanges have long since imposed daily transaction limits or added extra verification.

In a separate development, some issuers recently stopped people from purchasing Bitcoins using credit cards, full stop.

Wozniak has spoken before about his Bitcoin investment as an intellectual “experiment” designed to test and understand a system that “is mathematical, it is pure, it can’t be altered.”

I had them so that I could someday travel and not use credit cards, wallets or cash. I could do it all on Bitcoin.

Seven Bitcoins lighter, presumably Wozniak has learned the lesson that Bitcoins don’t only go up and down in value with investment fashion – they can also occasionally disappear completely.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/x55ljALMyJY/

Making private browsing more private

Browser privacy modes aren’t really guaranteed to be private.

Unavoidably, browsers must temporarily store data from main memory in secondary processor caches, in swap files squirrelled away in corners of the hard drive, and in OS-managed DNS caches.

That’s a lot for a humble browser to keep track of, let alone delete with certainty at the end of the session, which means that forensic tools will often find traces if they know where to look.

To close this weakness, researchers at MIT and Harvard University have proposed that a completely new type of server – called Veil – takes over the privacy job instead.

One of the Veil team, Frank Wang, explained the current issue:

The fundamental problem is that [the browser] collects this information, and then the browser does its best effort to fix it.

But at the end of the day, no matter what the browser’s best effort is, it still collects it. We might as well not collect that information in the first place.

It’s a tall order but what they came up with is as inventive as it is unfamiliar.

The basic idea is that the browser accesses a web page through a special “blinding” server that re-compiles its content into an encrypted form that is decrypted using a symmetric AES key known only to the user.

From the user’s point of view, everything looks as it would on any other website even as behind the scenes the URLs, HTML, CSS, and JavaScript have been turned into abstract references cryptographically unlinkable to the pages from which they come.

No two versions of any page passed through Veil’s blinding will ever look the same, aided by content mutation (dynamically altering HTML, CSS and JavaScript), and heap walking (marking sensitive page content so that it is never swapped out to disk). Cached content that remains at the browser end becomes unreadable.
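As a toy illustration of the blinding idea only (not Veil’s actual design or code), the sketch below encrypts page content under a user-held AES key and files it under an opaque, per-version reference; the naming scheme is an assumption:

    import os
    import hashlib
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM  # pip install cryptography

    def blind_page(html: bytes, user_key: bytes):
        """Encrypt page content under the user's symmetric key and file it
        under an opaque reference that can't be linked back to the page."""
        nonce = os.urandom(12)
        blob = nonce + AESGCM(user_key).encrypt(nonce, html, None)
        ref = hashlib.sha256(blob).hexdigest()  # fresh nonce, so a fresh reference each time
        return ref, blob

    def unblind_page(blob: bytes, user_key: bytes) -> bytes:
        nonce, ciphertext = blob[:12], blob[12:]
        return AESGCM(user_key).decrypt(nonce, ciphertext, None)

    key = AESGCM.generate_key(bit_length=128)  # known only to the user
    ref, blob = blind_page(b"<html>sensitive page</html>", key)
    assert unblind_page(blob, key) == b"<html>sensitive page</html>"
    print(f"served at opaque reference {ref[:16]}...")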

Where even higher privacy is desired, Veil offers a mode which turns the browser into a sort of “dumb terminal” through which all content is transmitted as simple bitmaps that are constantly overwritten.

Apart from superior privacy, the advantage of this setup is that the user doesn’t need a special browser or plug-in to make it work – the only change is that they access sites through a blinding server address instead of a regular web URL.

The downside is that developers must adapt their websites to work with Veil, so it can’t be used to browse arbitrary sites, nor is it ever likely to be as fast or responsive as conventional web access.

Currently, Veil exists only in a prototype form, albeit one that its makers appear to have tested thoroughly to ensure the concept holds water.

Who would use something like Veil?

Its developers suggest whistleblowing websites, which have a strong need to preserve their visitors’ privacy. It can also be combined with anonymity networks like Tor.

If the idea of blinding servers catches on with publishers, the user base could in theory be as big as the user base for browser privacy modes –  in other words, everyone at some point.

In 2009, Google’s then CEO Eric Schmidt was infamously quoted as saying:

If you have something that you don’t want anyone to know, maybe you shouldn’t be doing it in the first place.

But in today’s internet, where privacy can often appear to be crumbling under pressure from different forms of surveillance, this sentiment might find fewer supporters.

This gives Veil a chance of getting off the MIT drawing board and into real life.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/p_1QSPct42Y/

ISIS recruiter caught by Facebook screenshot

Mohammed Kamal Hussain, a 28-year-old recruiter for Daesh (also known as ISIS, ISIL or Islamic State), used Facebook, WhatsApp and Telegram to send thousands of messages to strangers in an effort to radicalize them. He was given seven years in jail after one of his targets took screenshots of the messages and turned them over to police.

Hussain, a Bangladeshi national who had overstayed his visa and was living in East London, was found guilty on Monday at Kingston Crown Court of two counts of encouraging terrorism and one count of supporting a proscribed organization.

According to London’s Met Police, Hussain came to their attention only when a man who lives outside the UK emailed the Home Office in March 2017, saying he’d received Facebook messages from a stranger inviting him to join Daesh. Instead of ignoring the unprompted pitches to join the terrorist group, he grabbed screenshots of the messages and sent them to police.

Commander Dean Haydon, head of the Met Police Counter Terrorism Command, praised him as a “conscientious individual” who trusted his instincts to report the suspicious messages:

It is in great part thanks to him that police were able to bring Hussain to justice.

Met Police said that investigators trawled thousands of messages sent by Hussain. Among them were Facebook posts encouraging and glorifying terrorism, including a speech from the so-called “leader” of Daesh, Abu Bakr al Baghdadi. Hussain was arrested on 30 June 2017.

According to Commander Dean Haydon, when police searched Hussain’s devices, they found “barbaric” videos of Daesh violence and “warped reasoning” for killing people, including children and Muslims.

Haydon encouraged anyone who “sees something online that they have even the slightest feeling could be terrorist- or extremist-related” to follow the example of the screenshot-grabbing man who helped police track down Hussain. He suggested reporting such content to police via the Home Office’s online reporting form, which is part of its ACT (action counters terrorism) campaign.

Reporting can be done anonymously. Haydon said the site has a team of specially trained officers who look at all reports and decide if action is required.

Earlier this month, the Home Office announced the launch of an artificial intelligence (AI) tool that it said will be able to automatically identify extremist videos like the kind Hussain was disseminating – and even block them before they can be uploaded.

The Home Office cited tests that show the tool can automatically detect 94% of Daesh propaganda with 99.995% accuracy. That accuracy rate translates into only 50 out of one million randomly selected videos that would require human review. The tool can run on any platform and can integrate into the video upload process to stop most extremist content before it ever reaches the internet.

That £600,000 AI tool was developed by the Home Office and ASI Data Science. It’s primarily designed for smaller platforms such as Vimeo, Telegra.ph and pCloud – platforms that don’t have the resources to build their own AI counterterrorist tools but still need them, given that they’re increasingly targeted by Daesh and its supporters.

As for the bigger platforms, they’re already working on their own machine-learning projects to fight terrorist content online.

The most recent such project comes from Facebook Messenger. The BBC reported on Monday that Facebook has been running, and funding, a pilot project to de-radicalize extremists.

Led by the counter-extremism organization Institute for Strategic Dialogue (ISD), the project aimed to mimic extremists’ own recruitment methods, specifically in the realm of direct messaging. ISD staffers scanned several far-right and Islamist pages on Facebook for targets, then manually searched profiles to find instances of violent, dehumanizing and hateful language.

Eleven “intervention providers” – they were either former extremists, survivors of terrorism or trained counsellors – reached out to 569 people. Seventy-six of those people responded and took part in conversations of five or more messages, and researchers claimed that eight showed signs of rethinking their views.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/In5i4GlGXXQ/

“Misguided” hacking bill threatens to ice security researchers, say critics

The US state of Georgia is considering anti-hacking legislation that critics fear could criminalize security researchers. The bill, SB 315, was drawn up by state senator Bruce Thompson in January, has been approved by the state’s senate, and is now being considered by its house of representatives.

The bill would expand the state’s current computer law to create what it calls the “new” crime of unauthorized computer access. It would include penalties for accessing a system without permission even if no information was taken or damaged.

One of the bill’s backers, state Attorney General Chris Carr, said the bill is necessary to close a loophole: namely, that the state currently can’t prosecute somebody who harmlessly accesses computers without authorization.

From a statement his office put out when the bill was first introduced:

As it stands, we are one of only three states in the nation where it is not illegal to access a computer so long as nothing is disrupted or stolen.

This doesn’t make any sense. Unlawfully accessing any computer in Georgia should be a crime, and we must fix this loophole.

But critics of the legislation believe it:

  • will ice Georgia’s cybersecurity industry, penalizing security researchers who report bugs;
  • would criminalize innocent internet users engaged in innocuous and commonplace behavior, given that the law’s definition of “without authority” could be broadly extended to cover behavior that exceeds the rights or permissions granted by the owner of a computer or site (in other words, terms and conditions); and
  • is unnecessary, given that current law already criminalizes computer theft; computer trespass (including using a computer in order to cause damage, delete data, or interfere with a computer, data or privacy); privacy invasion; altering or deleting data in order to commit forgery; and disclosure of passwords without authorization.

That’s all coming from a letter sent by the Electronic Frontier Foundation (EFF) to Congress in opposition to the current draft of SB 315.

The EFF calls the legislation “misguided.”

The EFF, along with other groups, is worried that beyond criminalizing innocent online behavior, the bill would criminalize security researchers for the sort of non-malicious poking around that they do.

According to Scott M. Jones from Electronic Frontiers Georgia – a group that participates in the Electronic Frontier Alliance – overly broad use of the Computer Fraud and Abuse Act (CFAA) has already chilled security research.

He brought up an incident from last year that he believes embarrassed the attorney general’s office into cooking up SB 315. It involved a data breach at Kennesaw State University, whose Election Center was handling some functions for elections in the state. The breach was big news, and it was messy: it spawned a lawsuit over destruction of election data, for one.

The thing about that breach was that it had been responsibly disclosed by a security researcher who wasn’t even targeting the university’s elections systems; rather, Jones said, he simply stumbled upon personal information via a Google search, then tried to get authorities to remove it. In other words, he poked around.

The FBI wound up investigating that researcher, but they couldn’t come up with anything, so off they went without a case to prosecute him. Jones:

To use the language that the attorney general’s office used, they want to build [SB 315] to criminalize so-called “poking around.” Basically, if you’re looking for vulnerabilities in a non-destructive way, even if you’re ethically reporting them—especially if you’re ethically reporting them—suddenly you’re a criminal if this bill passes into law.

Equifax is another case in point: as the EFF suggested in its letter about the bill, fear of prosecution under a bill like SB 315 could have dissuaded an independent researcher from disclosing vulnerabilities in the credit broker’s system – vulnerabilities that Equifax ignored when the researcher responsibly disclosed them to the company. Those vulnerabilities led to the leak of sensitive data belonging to some 145 million Americans and 15 million Brits.

This illustrates why it is vital for independent researchers to hold companies accountable to their customers.

The EFF has asked the state to amend the bill so as to better protect security researchers.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/yLAYAHWWSQA/

Got that itchy GandCrab feeling? Ransomware decryptor offers relief

White hats have released a free decryption tool for GandCrab ransomware, preventing the nasty spreaders of the DIY malware from asking their victims for money.

GandCrab has been spreading since January 2018 via malicious advertisements that lead to RIG exploit kit landing pages, or via crafted email messages impersonating other senders, infecting an estimated 53,000 computers in the process.

In exchange for the decryptor, the crooks behind GandCrab ask for a ransom of anywhere between hundreds and hundreds of thousands of dollars in DASH, a crypto-currency that has just made its debut in cybercrime. The developers of GandCrab use a ransomware-as-a-service business model that allows people with little technical skill to get a piece of the action.

Ransom demands tied to GandCrab infections have reached an exorbitant $600,000-plus, orders of magnitude higher than is common in ransomware scams, which more typically demand between $300 and $500.

The newly developed (free) antidote works for all known versions of the ransomware, which encrypts personal data on victims’ machines.

Security firm Bitdefender developed the GandCrab ransomware decryption tool in collaboration with Europol and Romanian Police. The effort is the latest under the No More Ransom project.

No More Ransom was launched in July 2016, introducing a new level of cooperation between law enforcement and the private sector to fight ransomware. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/02/28/gandcrab_decryptor/