
SOC Maturity By The Numbers

Most large organizations today have security operations centers in play, but only 15% rate theirs as mature.

Image Source: Adobe Stock

For many enterprises, the security operations center (SOC) is the spear tip of their cybersecurity programs.

Organizations depend on the ability of SOC analysts and incident responders to quickly spot indicators of attacks, investigate, uncover root causes, and mitigate problems in a timely fashion.

Dark Reading looks at some recent statistics to examine how well-prepared the average SOC is in meeting today’s security challenges.

 

Ericka Chickowski specializes in coverage of information technology and business innovation. She has focused on information security for the better part of a decade and regularly writes about the security industry as a contributor to Dark Reading. View Full Bio

Article source: http://www.darkreading.com/cloud/soc-maturity-by-the-numbers/d/d-id/1327969?_mc=RSS_DR_EDT

Security Training 101: Stop Blaming The User

To err is human, so it makes sense to quit pointing fingers and start protecting the organization from users — and vice versa.

We’ve all seen them: the “don’t take the bait” anti-phishing posters plastered throughout most enterprises. As companies struggle with the various forms of malicious email, from spearphishing to whaling, I’ve noticed security leaders have begun to emphasize monitoring and deterring human, email-related mistakes. In that vein, companies are inserting language into employee agreements regarding cybersecurity hygiene and creating policies that punish lax security habits.

I believe it’s time that we, as cybersecurity professionals, take a step back and reconsider our approach to user error. After all, to err is human, and our energies would be better served not on assessing blame but on protecting the organization from users — and users from themselves. 

A user-education program is of preeminent importance. Every modern control framework, from ISO to the NIST Cybersecurity Framework, requires user education. The problem I see in today’s standard corporate information security program is that user education is the first and only line of defense against many threats. For example, many companies don’t allow personally identifiable information to be transferred unencrypted, but have no data-loss prevention technology to prevent it. Frankly, this is irresponsible. When a Social Security number is accidentally put in an email, the user gets blamed — not the information security group. This training-only strategy also creates an environment in which every user has to do the right thing, every time, without failure.

Users Make Mistakes: Be Prepared for It
Several months ago, I witnessed a Fortune 250 CISO dress down his director of governance, risk, and compliance because a recent audit found sensitive information on a shared resource not designed for that purpose. Immediately, the CISO and the director discussed implementing new information-classification training requirements for users and a scanning program to find any mistakes by other users, who would also need more training. This line of thinking appears to be common in less-sophisticated enterprises. No discussion of preventive techniques, only detection and blame.

A common methodology for user experience testing is to pretend the user is drunk. The thought is, if a drunk user can navigate your application, a sober one can easily do the same. The same methodology should apply to cybersecurity. In the security professional community, we have failed by counting on users to constantly do the right thing. Our focus must be not on eliminating human error, which is impossible, but on preventing the inevitable mistakes from becoming security incidents.

As a consultant, I have reviewed hundreds of presentations for boards of directors throughout my career. Many CISOs struggle with establishing board-appropriate metrics: they wonder about the right level of reporting detail to include and how much board members will understand. But I can always count on the phishing test PowerPoint slide to appear during a presentation. “How many clicks this quarter versus last quarter? How many repeat offenders, even after training? After we introduced training, the click rate dropped from 44% to 32%.” It’s amazing how similar these slides are across different companies, regardless of the industry.

Typically, I see companies with pre-training click rates in the 20% to 30% range improve significantly after several quarters of effort. The absolute best training programs I’ve seen, at security-conscious companies, produce results in the 2% to 3% range. Although remarkable, even this level is too high when it takes just one administrator to fall for a scam. After all, 2% of 50,000 users is still 1,000 users.

In my opinion, the phishing test click rate is a terrible metric for reporting. It assumes the user is responsible for phishing-related issues and takes the focus off developing reliable technical controls.

I would much rather see companies move the focus to detection and instead track their phishing report rate: how many users reported a test phishing email to the security group. Improving the number of users who report phishing emails creates a large “human sensor” network to support the information security operations center. Recently, I worked with a company that has seen great results, with fewer incidents, using this model. The approach also has the added benefit of enabling the information security team to work with carrots — rewarding users who report — versus “using a stick” to punish those who click through.
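
To make the distinction concrete, here is a minimal sketch in Python of the two metrics contrasted above. The campaign figures are hypothetical and simply echo the 50,000-user example earlier in the piece; nothing here comes from any particular vendor's reporting tool.

# Minimal sketch of click rate vs. report rate for a phishing simulation.
# The campaign numbers below are hypothetical, for illustration only.

def phishing_metrics(recipients: int, clicked: int, reported: int):
    """Return (click_rate, report_rate) as percentages of recipients."""
    click_rate = 100.0 * clicked / recipients
    report_rate = 100.0 * reported / recipients
    return click_rate, report_rate

# Hypothetical quarter: 50,000 users, 1,000 click the test email, 15,000 report it.
click, report = phishing_metrics(50_000, clicked=1_000, reported=15_000)
print(f"Click rate:  {click:.1f}%")   # 2.0% -- still 1,000 users who clicked
print(f"Report rate: {report:.1f}%")  # 30.0% -- the "human sensor" signal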

Ultimately, user errors should be classified into two categories: (1) mistakes anyone can make, and (2) mistakes no one should make. Training programs and detection techniques should be focused on the second category. As a community, we should focus on preventing the first, and to accomplish that, we must move beyond blaming users and accept accountability as security’s gatekeepers. 

Andrew Howard is Chief Technology Officer for Kudelski Security, trusted cybersecurity innovator for the world’s most security-conscious organizations. Prior to joining Kudelski Security, he led the applied cybersecurity research and development portfolio at the Georgia Tech … View Full Bio

Article source: http://www.darkreading.com/endpoint/security-training-101-stop-blaming-the-user/a/d-id/1327959?_mc=RSS_DR_EDT

Looking for love? Don’t get caught out by the growing number of scammers

Online dating fraudsters conned almost 4,000 British people out of an average of £10,000 ($12,500) each during 2016, figures from the national Action Fraud crime reporting service have revealed.

Between January and November, the service received 3,594 reports of dating fraud with the total sum lost by victims hitting £36.6m.

Adding in the projected numbers for December (around £3.5m), the yearly losses are noticeably higher than for 2014 and 2015, which recorded £32m and £26m in fraud, respectively.

Action Fraud hasn’t released its analysis of underlying trends, but it appears that after a drop in 2015, the number of victims and average losses for last year more or less returned to the level of 2014.

There is no way to get a handle on the scale of under-reporting although, as with a lot of cybercrime, we can be pretty sure these numbers are far from the whole picture.

What is clear, however, is that the potential losses for anyone caught up in online dating fraud are among the highest for any category of cybercrime.

Case studies from 2016 underline how the size of these losses can be life-changing, with one victim losing a reported £300,000 pursuing a fictitious Italian suitor she met on a dating website.

In this case as in so many others, the social engineering technique deployed was to ask the victim for escalating sums of money to solve the imaginary person’s “problems”, be those health, travel or simple misfortune. The man’s identity was borrowed from a real person trawled from elsewhere on the internet.

The victim told the BBC:

I wasn’t comfortable, and then I got so far in I couldn’t get myself out, and I didn’t want to walk away having lost £50,000 or what-have-you, so you keep going in the hope that you’re wrong and this person is genuine.

The BBC report goes on to mention a second female victim who lost £140,000 in a similar dating con, a reminder that punctures the lingering myth that dating fraud victims are predominantly older men incompetently searching for younger women.

It’s almost as if fraudsters have hit on the insight that with changing relationship patterns, longer life expectancy and, in some cases, greater affluence, human loneliness is there to be exploited as a growth sector.

Victims are typically stalked on legitimate dating websites, which on this evidence seem sadly uninterested in the problem.

As Action Fraud commented:

With more people using online dating sites we expect a rise in this type of crime to continue.

In a world in which trust is low and yet the yearning for it remains a basic human need, there is no simple defence against online dating scams beyond sticking to basic principles for interaction.

When it comes to the entanglement of money and dating, people who put their trust in the tameness of wolves will always look like the pack’s next kill.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/xvSyyMNXFD0/

Potential Phantom Menace found on Twitter: a Star Wars botnet

Most Twitter users are familiar with them: followers with odd names and avatars, following far more than they are being followed. Automated fake accounts known as bots.

People often dismiss them as harmless clutter. But one UK researcher thinks there may be more here than what we see on the surface – a Phantom Menace, if you will. (Cue the John Williams Star Wars film score…)

The bots are with you

Juan Echeverria, a computer scientist at UCL, has published a paper on a network of 350,000 Twitter bots he calls the Star Wars botnet. Some of the accounts are used to fluff up follower numbers, send spam and boost interest in trending topics. From the paper:

A large number of Twitter users are bots. They can send spam, manipulate public opinion, and contaminate the Twitter API stream that underline so many research works. One of the major challenges of research on Twitter bots is the lack of ground truth data. Here we report our discovery of the Star Wars botnet with more than 350k bots. We show these bots were generated and centrally controlled by a botmaster. These bots exhibit a number of unique features, which reveal profound limitations of existing bot detection methods.

He said the work has significant implications for cybersecurity, not only because the botnet is larger than those studied before, but also because it’s been well hidden since its creation in 2013. He said more research is needed to fully grasp the potential threat such a large, hidden botnet poses to Twitter.

His research began by sifting through a sample of 1% of Twitter users to better understand how people use the medium. But along the way, the research seemed to reveal many linked accounts, which means an individual or group is running the botnet. These accounts didn’t behave like the more garden-variety bots out there.

Scum and villainy?

In the report, he describes what his team saw as the work unfolded:

Although the tweet distribution is largely coincident with the population distribution, there are two rectangle areas around North America and Europe that are fully filled with non-zero tweet distributions, including large uninhabited areas such as seas, deserts and frozen lands. These rectangles have sharp corners and straight borders that are parallel to the latitude and longitude lines. We conjectured that it shows two overlapping distributions. One is the distribution of tweets by real users, which is coincident with population distribution. The other is the distribution of tweets with faked locations by Twitter bots, where the fake locations are randomly chosen in the two rectangles – perhaps as an effort to pretend that the tweets are created in the two continents where Twitter is most popular. The blue-color dots in the two rectangles were attributed to 23,820 tweets. We manually checked the text of these tweets and discovered that the majority of these tweets were random quotations from Star Wars novels. Many quotes started or ended with an incomplete word; and some quotes have a hashtag inserted at a random place.

For example:

Luke’s answer was to put on an extra burst of speed. There were only ten meters #separating them now. If he could cover t

That passage is from the book Star Wars: Choices of One. Echeverria and his colleagues found quotations from at least 11 Star Wars novels.

Here’s a wider look at the Force-infused activities:

  • They only tweet random quotations from the Star Wars novels.
  • Each tweet contains only one quotation, often with incomplete sentences or broken words at the beginning or at the end.
  • The only extra text that can be inserted in a tweet is (1) special hashtags that are associated with earning followers, such as #teamfollowback, and (2) the hash symbol # inserted in front of a randomly chosen word (including stop words, like “the” and “in”) in order to form a hashtag.
  • The bots never retweet or mention any other Twitter user.
  • Each bot has created ≤ 11 tweets in its lifetime.
  • Each bot has ≤ 10 followers and ≤ 31 friends.
  • The bots only choose ‘Twitter for Windows Phone’ as the source of their tweets.
  • The user IDs of the bots are confined to a narrow range between 1.5 × 10⁹ and 1.6 × 10⁹ (see Figure 7 of the paper).
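
Taken together, those traits amount to a crude signature. The sketch below (Python, written for this article rather than taken from the paper) applies the listed thresholds to records shaped like Twitter's classic v1.1 API objects; field names such as statuses_count and followers_count are the standard API fields, while the function itself is purely illustrative.

# Illustrative filter based on the traits above -- not the paper's detection code.

def looks_like_star_wars_bot(user, tweets):
    """user: dict shaped like a Twitter v1.1 user object.
    tweets: list of that user's tweet dicts (each carrying a 'source' field)."""
    in_id_range = 1.5e9 <= user["id"] <= 1.6e9
    few_tweets = user["statuses_count"] <= 11
    few_followers = user["followers_count"] <= 10
    few_friends = user["friends_count"] <= 31
    never_interacts = all(
        "retweeted_status" not in t
        and not t.get("entities", {}).get("user_mentions")
        for t in tweets
    )
    windows_phone_only = all(
        "Twitter for Windows Phone" in t.get("source", "") for t in tweets
    )
    return all([in_id_range, few_tweets, few_followers,
                few_friends, never_interacts, windows_phone_only])

Checking the tweet text itself against the Star Wars novels would be the obvious next signal, but that needs the books' text and is left out of this sketch.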

Echeverria and his fellow researchers have started a website and Twitter account called “That is a bot!” where people can report samples and help to raise awareness of how prevalent they are.

May The Force Be With You.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/M3RB3SYwpo4/

Boffins break Samsung Galaxies with one SMS carrying WAP crap

A single TXT message is enough to cause Samsung S5 and S4 handsets to return to factory settings, likely wiping users’ data along the way. And because the attack exploits Android’s innards, other vendors’ handsets are at risk.

The vulnerabilities, thankfully patched by Samsung, mean attackers can send WAP configuration messages that affected devices blindly apply on receipt, with no need for the victim to click a link.

A benign configuration SMS can also reverse an attack that has sent a device into a boot loop and return it to a stable state, opening avenues for ransomware attacks, say Context Information Security researchers Tom Court (@tomcourt_uk) and Neil Biggs.

Newer Samsung Galaxy S6 and S7 models will not blindly accept the messages sent over the 17-year-old protocol.

The pair of researchers have penned a three-part series explaining the attack surface of Android SMS and the WAP suite.

Court and Biggs combined two bugs to produce the denial of service attack that forces unpatched and non-rooted phones to factory reset.

Users of rooted Samsung devices can enter the adb settings to delete the malicious configuration file default_ap.conf.

“The complexity of exploiting an Android device in recent years has escalated to the point that more often than not a chain of bugs is required to achieve the desired effect,” Court and Biggs say.

“This case is no different and we have shown here that it took two bugs to produce a viable attack vector, combined with some in-depth knowledge of the bespoke message format.”

The pair explain the attack in detail here, finding that no authentication is used to protect OMA CP text messages.

They also found a remote code execution vulnerability on Samsung devices (S5 and below), detailed in the following CVEs:

  • CVE-2016-7988 – No Permissions on SET_WIFI Broadcast receiver
  • CVE-2016-7989 – Unhandled ArrayIndexOutOfBounds exception in Android Runtime
  • CVE-2016-7990 – Integer overflow in libomacp.so
  • CVE-2016-7991 – omacp app ignores security fields in OMA CP message

“Given the reversible nature of this attack, it does not require much imagination to construct a potential ransomware scenario for these bugs,” the pair say.

“Samsung have now released a security update that addresses these among other vulnerabilities and, as is our usual advice, it is recommended that users prioritise the installation of these updates.”

They left discovery of how the bugs apply to other phones as an exercise for other hackers.

Vulnerabilities were reported to Samsung in June, fixed in August, and patched on 7 November with disclosure made overnight. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/01/25/boffins_break_samsung_galaxys_with_one_sms/

Human bot hybrid finds LinkedIn email, phone number-filching holes

LinkedIn has shuttered five dangerous privacy holes that could have allowed users’ phone numbers, email addresses and resumes to be downloaded, plus the deletion of all connection requests.

The flaws, since patched, were found by the first human-bot hacking hybrid, the brainchild of Bangalore security boffin Rahul Sasi.

Sasi (@fb1h2s) revealed his ambitious project dubbed Cloud-AI at the Nullcon hacking con in Goa, India, covered by The Register. At the time he explained his intention to build a flaw-finder that can blend gut intuition with automated mechanical efficiency.

“Cloud-AI is currently a large dataset of how humans have interacted with the web,” Sasi told Vulture South. “Our team is currently training Cloud-AI to be capable of doing more complex interactions [and] will soon come up with APIs that will let individuals automate their tasks using Cloud-AI.”

Sasi and his team at CloudSek trained his machine against popular cloud applications including LinkedIn and Facebook, finding 10 dangerous insecure direct object reference vulnerabilities in the former, a bug class normally identified through manual human analysis and missed by automated scanners.

Rahul Sasi at Nullcon. Image: Darren Pauli, The Register.

Cloud-AI also found that LinkedIn’s recruiter profiles would leak the email addresses of profiles shared in messages with other users. The personal data appeared in the server’s response when the member request identification number was swapped for the victim’s identification number.

Sasi’s machine also uncovered a flaw that would leak phone numbers, along with email addresses, for users who had applied for jobs through the site.

Another flaw allowed all connection requests on LinkedIn to be deleted through mere manipulation of a single request identification number.

Other bugs allowed Lynda video transcripts and exercise files to be downloaded without authentication or the necessary premium membership.
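
All of these boil down to the insecure direct object reference pattern mentioned above: the server acts on whatever identifier the client supplies without checking that the requester is entitled to it. As a generic illustration only, the Python sketch below repeats an authenticated request with someone else's identifier; the endpoint and parameter name are hypothetical placeholders, not LinkedIn's actual API.

# Generic IDOR probe -- the endpoint and parameter are made up for illustration.
import requests

def probe_idor(session: requests.Session, other_id: int) -> bool:
    """Fetch a resource using another user's identifier. A 200 response that
    carries their data means the object reference isn't authorised server-side;
    a patched endpoint should answer 403/404 instead."""
    url = "https://app.example.invalid/api/shared_profiles"  # hypothetical
    resp = session.get(url, params={"memberRequestId": other_id})
    return resp.status_code == 200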

Sasi disclosed the bugs to the LinkedIn team, which fixed the critical vulnerabilities within a day of his report.

Cloud-AI, explained in this 2016 paper [PDF], is built on machine learning and natural language processing, and uses vector space models to convert word strings to numbers, naive Bayes classifiers, and cosine similarity to improve training.

Those techniques result in a machine that can navigate naturally around the web and identify the parts of a site that a hacker would target for the quickest returns. In practice this requires the tool be able to follow dynamic user instructions so it understands that phrases like ‘sign me up’, ‘let’s go’ and so forth all signify account registration.
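
As a rough feel for the string-similarity half of that pipeline (and emphatically not Cloud-AI's own code), the sketch below uses scikit-learn's TF-IDF vectors and cosine similarity to score unseen button text against a handful of known sign-up phrases; all of the phrases are invented for the example.

# Toy illustration of TF-IDF + cosine similarity for intent matching.
# Requires scikit-learn; the phrase lists are invented for this example.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

known_signup = ["sign up", "create account", "register now", "join free"]
candidates = ["sign me up", "let's go", "read our blog", "get started today"]

vec = TfidfVectorizer().fit(known_signup + candidates)
scores = cosine_similarity(vec.transform(candidates),
                           vec.transform(known_signup)).max(axis=1)

for phrase, score in zip(candidates, scores):
    label = "looks like registration" if score > 0.3 else "probably not"
    print(f"{phrase!r}: {label} (cosine {score:.2f})")

Word overlap alone obviously misses phrases like "let's go", which is presumably why Cloud-AI leans on its large dataset of real human interactions rather than on string similarity by itself.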

Some components of the project will be made open source, Sasi told Vulture South. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/01/25/human_bot_hybrid_finds_linkedin_email_phone_number_stealing_holes/

Western Digital fixes remote execution bug in My Cloud Mirror

Western Digital has issued a fix for its My Cloud Mirror backup disks, after ESET “detection engineer” Kacper Szurek found an authentication bypass with remote code execution in the system.

My Cloud Mirror is a backup hard drive product sold with personal cloud storage, which means the hardware might be left Internet-visible.

Szurek writes that the login form wasn’t protected against command injection.

“The exec() function is used without using escapeshellarg() or escapeshellcmd().

“So we can create string which looks like this: wto -n "a" || other_command || "" -g which means that wto and other_command will be executed.”
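
For readers unfamiliar with the bug class, the short Python sketch below reconstructs the same pattern; the actual My Cloud Mirror firmware is PHP, so treat this purely as an illustration of what escapeshellarg()-style quoting buys you (shlex.quote() is Python's rough equivalent).

# Illustration of the command-injection pattern -- not the firmware's actual code.
import shlex

def build_login_cmd_unsafe(username: str) -> str:
    # Raw user input dropped straight into a shell string.
    return f'wto -n "{username}" -g'

def build_login_cmd_safer(username: str) -> str:
    # shlex.quote() plays the role of PHP's escapeshellarg(): the whole
    # value becomes a single, inert shell argument.
    return f"wto -n {shlex.quote(username)} -g"

payload = 'a" || other_command || "'
print(build_login_cmd_unsafe(payload))  # wto -n "a" || other_command || "" -g
print(build_login_cmd_safer(payload))   # wto -n 'a" || other_command || "' -g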

There’s a bunch of other bugs in the My Cloud Mirror 2.11.153 firmware, Szurek writes, mostly relating to parameters that aren’t escaped.

The affected files in the firmware include index.php, chk_vv_sharename.php, modUserName.php, upload.php, and a gem in login_checker.php.

“Inside lib/login_checker.php there is login_check() function which is used to check if user is logged, but it’s possible to bypass this function because it simply checks if $_COOKIE['username'] and $_COOKIE['isAdmin'] exist.”
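
Paraphrased in Python (the real firmware code is PHP), the flaw is that the mere presence of client-supplied cookies is treated as proof of login, and cookies are trivially forged:

# Paraphrase of the flawed check -- illustration only, not the firmware's PHP.
def login_check_flawed(cookies: dict) -> bool:
    # Spiritually: isset($_COOKIE['username']) && isset($_COOKIE['isAdmin'])
    return "username" in cookies and "isAdmin" in cookies

print(login_check_flawed({"username": "admin", "isAdmin": "1"}))  # True: "logged in"

The fix is to validate a server-side session token rather than trusting flags the client sends about itself.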

Western Digital fixed the issues in release 2.11.157 in late December – so make sure your box has updated itself. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/01/25/western_digital_fixes_remote_execution_bug_in_my_cloud_mirror/

Firefox bares teeth, attacks sites that collect personal data

Shoddy sites will have fewer places to hide with Firefox joining Chrome in badging cleartext sites that collect personal information as insecure.

Mozilla’s labels won’t be as prominent as Google’s, introduced this year, which places the red letter label in the address bar. Firefox will instead tuck its warning in the same spot behind a crossed-out lock that reads “not secure” when clicked.

Firefox product veep Nick Nguyen says the move follows the company’s many musings on the benefits of HTTPS.

“Starting today in the latest Firefox, web pages that collect passwords, like an email service or bank, but have not been secured with HTTPS will be more clearly highlighted as potential threats,” Nguyen says.

“Up until now, Firefox has used a green lock icon in the URL bar to indicate when a website is secure (using HTTPS) and a neutral indicator (no lock icon), otherwise.

“In order to more clearly highlight possible security risks, these pages will now be denoted by a grey lock icon with a red strike-through in the URL bar.”

The insecurity stickers will expand in future releases with a floating box, triggered when users click password entry fields on cleartext sites, that reads “logins entered here could be compromised”.

A further development will expand the struck-out lock icon and slap it on all cleartext sites regardless of whether they collect passwords or credit cards.

“To continue to promote the use of HTTPS and properly convey the risks to users, Firefox will eventually display the struck-through lock icon for all pages that don’t use HTTPS, to make clear that they are not secure,” Firefox staffers Tanvi Vyas and Peter Dolanjski wrote.

“As our plans evolve, we will continue to post updates but our hope is that all developers are encouraged by these changes to take the necessary steps to protect users of the Web through HTTPS.”

Firefox on insecure sites.

Browser barons are increasingly exercising their power to highlight weak security on web sites. The push to end cleartext on sensitive sites was greased by the widely-supported Let’s Encrypt initiative, which offered sites free SSL certificates and the means to implement them easily.

In October, Google announced it would be forcing sites to enforce proper certificate security within a year.

The Alphabet subsidiary said it would flag sites with unauthorised certificates and label those that do not subscribe to the initiative as untrusted in a move that will help combat phishing.

Firefox’s latest update also brought in audio playback for lossless FLAC fanatics, more efficient video performance, a zoom button, and security fixes for ASLR and DEP bypasses. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/01/25/firefox_bears_teeth_attacks_cleartext_sites_that_fail_https_check/

US govt can’t stop Microsoft taking its Irish email seizure fight to the Supreme Court

The US government has lost a legal appeal to have a critical case against Microsoft reheard, paving the way for a Supreme Court challenge.

With its judges split 4-4, the Second Circuit Court of Appeals, based in New York, denied [PDF] the request for a full rehearing of the case in which Microsoft has refused to hand over to American investigators the emails of a non-US citizen held on a server in Ireland.

The case is seen as a critical test of how far legal jurisdiction in the internet era can stretch.

Back in 2014, the FBI used a law dating from 1986 (the Stored Communications Act) to ask for the emails; Microsoft refused to hand them over, arguing that search warrants did not reach beyond US borders.

In July 2016, the appeals court sided with Microsoft when it concluded that “the Stored Communications Act does not authorize courts to issue and enforce against US‐based service providers warrants for the seizure of customer email content that is stored exclusively on foreign servers.”

That decision came following an unusual legal approach that Microsoft adopted in 2014, where it actively asked a New York judge to find it in contempt of court for refusing to honor the warrant.

The Justice Department was not happy with the Appeals Court’s decision, and so in October it questioned the “unprecedented” decision and asked for a full-bench hearing. Its logic was that the search warrant should not depend on where the data was stored but instead on who controls access to it. Microsoft has never denied that it is able to bring up the emails on a system located in the United States.

Bypass

Government lawyers pointed out that users have no choice about where their data resides, and raised concerns that if the judgment was allowed to stand, US companies could bypass domestic laws by simply storing their data on servers in other jurisdictions. They name-checked Google and Yahoo! as examples of how their entire databases could remain out of the reach of the authorities “even when the account owner resides in the United States and the crime under investigation is entirely domestic.”

The case is clearly a difficult one, as the 4-4 judge split demonstrates. A further three judges recused themselves from the decision. The even split, however, means that the case will not be reheard.

The four dissenting judges each wrote their own opinion explaining why they felt the case should be reheard. One, Dennis Jacobs, was highly critical of the earlier ruling in Microsoft’s favor, calling it “unmanageable, and increasingly antiquated.”

To Jacobs’ eyes, “If I can access my emails from my phone, then in an important sense my emails are in my pocket.” He also queried the extent to which such a ruling would impact other areas, such as bitcoins or digital recordings.

The lead judge in the original appeal, Susan Carney, unsurprisingly disagreed with the dissenters and highlighted the Supreme Court’s “strong presumption against extraterritoriality.” Claiming that simply because something was accessible in the US made it subject to US law “runs roughshod” over that basic principle, she argued.

Precedent

There did, however, remain the question about the dangerous precedent that could be set, particularly given the national security and public safety implications – which several judges noted should prove sufficient for a rehearing. Conversely, some have pointed out that the law could set a precedent for other countries to insist on access to US citizens’ emails.

With a new administration in place, it is possible that the Justice Department will be asked to drop the case, but it is more likely that it will take the case to the Supreme Court, which will effectively be asked to decide where data lives and how far US laws stretch in the digital era.

As with several other legal cases working their way through the legal system – from Fifth Amendment protections on location data and mobile phone passcodes, to First Amendment encryption questions, not to mention mass surveillance – the biggest issue is that the current laws in place are wholly unsuited to the modern world and are being stretched and pulled in different directions.

But with Congress in a decade-long impasse, the inability to develop new laws for the internet era is causing a slow build-up in problems within the legal system, of which this case is just one. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/01/24/us_government_microsoft_email_seizure_appeal/

Trump’s FBI boss, Attorney General picks reckon your encryption’s getting backdoored

US President Donald Trump’s picks for Attorney General and head of the FBI will have security specialists nervous, since both believe breaking encryption is a good idea.

Senator Jefferson Beauregard “Jeff” Sessions III (R-AL) is Trump’s pick for the top legal job in the US. In congressional testimony, he outed himself as a committed backdoor man when it comes to encryption. In written testimony [PDF] to Senator Patrick Leahy (D-VT), he laid out his position.

“Encryption serves many valuable and important purposes,” Sessions wrote. “It is also critical, however, that national security and criminal investigators be able to overcome encryption, under lawful authority, when necessary to the furtherance of national security and criminal investigations.”

That’s going to be bad news for people who favor strong encryption. The finest minds in cryptography have repeatedly pointed out the impossibility of building a backdoor for law enforcement into secure encryption, since there’s no way to stop others from finding and exploiting the Feds-only access. If backdoors are mandated, then it could open up all our data to attackers. Encryption is either strong or backdoored.

Sessions’ appointment is also going to cause Apple CEO Tim Cook and other tech execs to wear long faces. During the San Bernardino iPhone case, Sessions was one of the main voices in Congress calling for Apple to create hacking tools for its own operating system and hand them over to the FBI.

“Coming from a law enforcement background, I believe this is a more serious issue than Tim Cook understands,” Sessions said at the time. “In a criminal case, or could be a life-and-death terrorist case, accessing a phone means the case is over. Time and time again, that kind of information results in an immediate guilty plea, case over.”

Meanwhile, Trump has reportedly decided to keep James Comey as director of the FBI. FBI bosses are appointed on 10-year terms to shield them from American politics and similar influences, although presidents can fire them.

Republican-leaning Comey too thinks backdoors (or front doors as he likes to call them) are going to be essential for law enforcement to stop the communications channels of crooks and terrorists “going dark.”

Comey has said that he wants an adult conversation about encryption this year, and by adult he presumably means that anyone who opposes him is being childish. With the new AG getting his back, Comey might have more success than before in weakening encryption. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/01/25/with_trump_encryptions_getting_backdoored/