
A Hidden Insider Threat: Visual Hackers

Ponemon experiment shows how low-tech white-hat hackers, posing as temps, captured information from exposed documents and computer screens in nearly nine out of ten attempts.

When we think of hackers breaching systems and stealing information from where we work, we don’t usually suspect the people we work with as the guilty parties.

But insider threats are in fact a very real and growing challenge. SANS Institute surveyed nearly 800 IT and security professionals across multiple industries and found that 74 percent of respondents were concerned about negligent or malicious employees who might be insider threats, while 34 percent said they have experienced an insider incident or attack.

One potential method of attack is visual hacking, which is defined as obtaining or capturing sensitive information for unauthorized use. Examples of visual hacking include taking photos of documents left on a printer or information displayed on a screen, or simply writing down employee log-in information that is taped to a computer monitor. The visual hackers themselves could be anyone within an organization’s walls, including employees, contractors or service vendors, such as cleaning and maintenance crews, and even visitors.

In the Visual Hacking Experiment, a study conducted by Ponemon Institute and jointly sponsored by 3M Company and the Visual Privacy Advisory Council, white-hat hackers posing as temporary or part-time workers were sent into the offices of eight U.S.-based, participating companies.

The hackers were able to visually hack sensitive and confidential information from exposed documents and computer screens. They were able to visually hack information such as employee access and login credentials, accounting information and customer information in 88 percent of attempts and were not stopped in 70 percent of incidents.

Assess and Adapt

The best place to begin clamping down on visual privacy threats, no matter what industry you work in, is to perform a visual privacy audit. This will help you assess your key-risk areas and evaluate existing security measures that are in place.

Some questions to consider when conducting a visual privacy audit include:

  • Does your organization have a visual privacy policy?
  • Are shredders located near copiers, printers and desks where confidential documents are regularly handled?
  • Are computer screens angled away from high-traffic areas and windows, and fitted with privacy filters?
  • Do employees keep log-in and password information posted at their workstations or elsewhere?
  • Are employees leaving computer screens on or documents out in the open when not at their desks?
  • Do employees know to be mindful of who is on the premises and what they are accessing, photographing or viewing?
  • Are there reporting mechanisms for suspicious activities?

In addition to identifying areas where visual privacy security falls short, a privacy audit can help managers make any needed changes or additions to the organization’s policies and training.

Policies should outline the do’s and don’ts of information viewing and use for employees and contractors both in the workplace and when working remotely. Additionally, visual privacy, visual hacking and insider threat awareness should be made an integral part of security training, and reinforced through refresher training and employee communications.

Standard best practices

The specific measures you take to defend against visual hacking from insider threats will be unique to your organization or industry. For example, health care organizations are mandated under HIPAA to use administrative, physical, and technical safeguards to ensure the privacy and security of PHI in all forms, including paper and electronic form. But all organizations have the duty to protect customer and employee information, the organization’s intellectual property, confidences, and privacy interests. Standard best practices that apply to nearly every organization include:

  • A “clean desk” policy requiring employees to turn off device screens and remove all papers from their desks before leaving each night.
  • Requirements for masking high-risk data applications from onlookers, with masking strategies ranked from most to least secure.
  • Shredders as standard issue at all on-site units, especially near copiers, printers and fax machines, and as a prerequisite for anyone approved to telework or to use secure remote network access to corporate information assets.
  • Privacy filters installed on all computers and electronic devices, both in the office and when working remotely, where sensitive data is especially vulnerable. Privacy filters black out the angled view of onlookers while leaving the user’s own view undisturbed, and can be fitted to the screens of desktop monitors, laptops and mobile devices.

The growing problem of insider threats shouldn’t instill fear and suspicion in workers about the people they see and talk to every day while on the job. However, workers should understand that the threat is real and that they play an important role in helping protect their company’s sensitive data – and that of their customers – against this increasingly prevalent problem.


Mari Frank, an attorney and certified privacy expert, is the author of the “Identity Theft Survival Kit,” “Safe Guard Your Identity,” “From Victim to Victor,” and “The Guide to Recovering from Identity Theft.” Since 2005 she’s been the radio host of “Privacy Piracy,” a weekly …

Article source: http://www.darkreading.com/vulnerabilities---threats/a-hidden-insider-threat-visual-hackers-/a/d-id/1323602?_mc=RSS_DR_EDT

Survey: When Leaving Company, Most Insiders Take Data They Created

Most employees believe they own their work, and take strategy documents or intellectual property with them as they head out the door.

Employees feel a sense of ownership over the data and documents they create on the job — so much so that 87 percent of them take data they created with them when they leave the company, according to a new survey.

Secure communications company Biscom surveyed individuals who previously left a full-time job — voluntarily or otherwise. While only 28 percent of respondents stated they took data they had not created when they departed, the vast majority walked off with copies of their own work.

“I think the biggest driver was that sense of ownership,” says Biscom CEO Bill Ho. Of those who took data they created, 59 percent said they did so because they felt the data was theirs. Seventy-seven percent said they thought the information would be useful in their new job.

The good news is that none of the respondents said they did it to harm the organization. (Although 14 percent admitted that they’d be more likely to nab data on the way out if they were leaving under “negative circumstances.”)

“There may be a concept in their mind that it’s not malicious because they’re not trying to harm anyone,” says Ho, “but I think deep down they know it’s wrong.”

The other good news is that none of the respondents stated they took data protected by privacy regulations. Yet 88 percent of respondents took company strategy documents and/or presentations, 31 percent took customer contact lists, and 25 percent took intellectual property (IP).

“IP is a really, really big problem,” says Ho. “It’s [a company’s] differentiator. It’s what gives them their competitive edge.”

The vast majority of respondents, a whopping 94 percent, said they weren’t aware of any protections their organization had in place to prevent employees from removing data and documents. Only 3 percent admitted that they knew of these protections and ignored them. Another 3 percent said they knew of them but couldn’t get around them.

Biscom researchers say that it’s doubtful 94 percent of the organizations had no policies or procedures in place to prevent insider data leaks/theft. The trouble, therefore, was that companies were doing a poor job of educating employees on the existence of these policies, procedures, and security technologies.

“If there were tools and technologies in place,” says Ho, “it wasn’t stopping them.”

The most common method respondents used to take data was moving it to a Flash or external drive (84%). Other tactics were emailing it to their personal accounts (47%), printing hard copies (37%), loading it onto a shared drive (21%), or saving it to a sync and share service like Dropbox (11%).

Although some respondents said they were more likely to abscond with data if they left the company under negative circumstances — like being fired or laid off — security teams are better equipped to handle those situations. They know the bad news before the employee does, and can protect the company by quickly revoking access privileges and having people escorted out of the building. However, the employees who quit are a step ahead of the security team, and can begin the process of exfiltrating data long before they give their two weeks notice.

So how to dissuade that type of behavior? One, which may seem counterintuitive, is to provide users with better, easier, more secure file-sharing tools that the organization can monitor, Ho says. 

“If the employees don’t have the tools to share, they’re going to use what they can,” he says. Technologies like DropBox and Google Drive are free and easy to obtain. “Companies should probably serve their own employee” better.

Ho also recommends technologies to monitor user behavior — like behavioral analytics and data exfiltration monitoring — and regular security awareness programs that inform users about the company’s policies about data removal and the tools they use to enforce them.  

“The people who are really determined, they’ll probably find a way,” he says. “It’s the people who are on the edge … who you can potentially change their behavior.”

Sara Peters is Senior Editor at Dark Reading and formerly the editor-in-chief of Enterprise Efficiency. Prior to that she was senior editor for the Computer Security Institute, writing and speaking about virtualization, identity management, cybersecurity law, and a myriad …

Article source: http://www.darkreading.com/vulnerabilities---threats/survey-when-leaving-company-most-insiders-take-data-they-created/d/d-id/1323677?_mc=RSS_DR_EDT

This family lets you control their Christmas lights over the internet

Lots of people spread some Christmas cheer and brighten the long winter nights by hanging colorful lights on their homes and trees.

But Ken Woods has taken things much further than that, giving anyone with an internet connection the ability to control his Christmas lights through a website, christmasinfairbanks.com.

By clicking the ON and OFF links you can control a dozen different light arrangements on trees, windows, doors and wreaths – if no one else is trying to do the same thing at the same time – and see the effects through a web camera that’s nailed to a tree in front of the house.

Because they live out “in the middle of nowhere” in Fairbanks, Alaska, with a really slow internet connection, the website runs on a virtual server in the Amazon cloud, which will cost Woods about $450.

Woods gladly accepts donations to defray the costs and so far has taken in about $225 – a few dollars at a time.

Woods, a UNIX admin for the Alaska division of geological surveys in Fairbanks, told me that he built the hardware that controls the light switches and wrote the software that controls the hardware:

I’ve written some code that runs on an Amazon cloud EC2. There’s some software that actually comes back to our house and fetches the image directly from the camera and uses that image to distribute it out from the EC2 on the video side.

The hardware side is essentially a power strip that has a web server built into it and relays built into it that are state-aware and able to connect via Ethernet.
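The architecture Woods describes — a cloud front-end relaying website clicks to a web-enabled, relay-equipped power strip — can be sketched in a few lines of Python. Everything here is hypothetical (Woods hasn’t published his code or the strip’s API): the channel numbering, the URL scheme and the `strip.local` address are invented for illustration.

```python
# Minimal sketch of a cloud-side controller for a networked power strip,
# assuming a hypothetical relay API of the form
#   http://strip.local/relay?ch=3&state=on

class RelayBank:
    """Tracks the on/off state of a bank of relay channels."""

    def __init__(self, channels=12):
        self.state = {ch: False for ch in range(1, channels + 1)}

    def set(self, channel, on):
        if channel not in self.state:
            raise ValueError(f"no such channel: {channel}")
        self.state[channel] = on
        # In the real setup, this is where an HTTP request would be
        # forwarded to the relay hardware, e.g.:
        #   urllib.request.urlopen(
        #       f"http://strip.local/relay?ch={channel}"
        #       f"&state={'on' if on else 'off'}")
        return self.state[channel]

def parse_command(path):
    """Parse a website click like '/on/3' or '/off/7' into (on?, channel)."""
    _, action, channel = path.split("/")
    if action not in ("on", "off"):
        raise ValueError(f"unknown action: {action}")
    return action == "on", int(channel)
```

The EC2 host would run a small web server that feeds visitor clicks through `parse_command` and into a `RelayBank`, serializing simultaneous visitors so only one person controls a channel at a time.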

There have been other Clark Griswolds who have put their Christmas lights online, and even some fakes where controlling the “lights” merely meant switching between images of the lights turning on and off.

But Woods’s setup is legit, and he can prove it to you.

We’re wary of hoaxes at Naked Security, so when I reached out to Woods he sent me a private, expiring URL for controlling the lights on a Christmas tree.

As I watched the live feed, I switched the tree on and off about a half-dozen times, at different intervals – and it worked each time.

“I’ve gone to great lengths to prove it’s real,” Woods told me.

This is the sixth year that Woods and his wife Rebecca-Ellen have given control of their Christmas lights to the world, but this year it’s attracted a lot more attention from media around the world, and millions of visitors to the website.

The response has been overwhelmingly positive and Woods said he receives hundreds of emails from well wishers.

Woods asks on his website for people not to come by the house (“It’s actually much more entertaining over the internet … Please don’t be that guy”), and to email him if they’re watching the feed and see anything suspicious.



Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/J_c1E-Po_ro/

Welcome to HTTP error code 451: Unavailable for legal reasons

There’s a new addition to the 400 family of error codes: give a hearty welcome to newcomer 451, the HTTP code that lets you know that you’re not seeing what you want to see because it’s been blocked for legal reasons.

Of course, it’s named after Ray Bradbury’s dystopian 1953 novel about censorship and book burning, Fahrenheit 451.

On Friday, the Internet Engineering Steering Group (IESG) approved publication of 451, the formal name for which is “An HTTP Status Code to Report Legal Obstacles”.

Tim Bray brought the draft to the HTTP Working Group a while ago, inspired in 2012 by a Slashdot thread about British ISPs returning 403 for Pirate Bay requests because of a court order.

The intent of the Error 451 message is to make it crystal clear when such a website has been legally blocked.
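For server operators, returning the new code is simple. Here is a minimal sketch using Python’s standard http.server; the blocked path, the court-order URL and the response body are invented for illustration, while the Link header with rel="blocked-by" — naming the party responsible for the block — comes from the draft itself.

```python
# Sketch: serving HTTP 451 for a legally blocked resource.
from http.server import BaseHTTPRequestHandler, HTTPServer

BLOCKED_PATHS = {"/forbidden-page"}  # hypothetical list of blocked URLs

def legal_block_response():
    """Build the status, headers and body for a 451 response."""
    body = b"Unavailable For Legal Reasons: blocked by court order."
    headers = {
        "Content-Type": "text/plain",
        # The draft's "blocked-by" link relation identifies the blocker.
        "Link": '<https://authority.example/order-42>; rel="blocked-by"',
    }
    return 451, headers, body

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path in BLOCKED_PATHS:
            status, headers, body = legal_block_response()
            self.send_response(status)
            for name, value in headers.items():
                self.send_header(name, value)
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"ok")

# To serve: HTTPServer(("localhost", 8451), Handler).serve_forever()
```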

Mark Nottingham, chair of the IETF HTTP Working Group, writes that the draft has been in the works for a while, steadily pushed forward by those who argue that the 403 status code – which simply says “Forbidden” – doesn’t highlight online censorship.

There was pushback.

For one, there’s only a finite number of status codes available in the 400 to 499 range, which is reserved for client error codes.

But with the rise of online censorship, some sites began to adopt the code experimentally, Nottingham said, and more began to call for the ability to let people know the content was being blocked due to legal reasons:

As censorship became more visible and prevalent on the Web, we started to hear from sites that they’d like to be able to make this distinction.

That includes Lumen, previously called Chilling Effects, a database that collects and analyzes legal complaints and requests for removal of online materials; and Article19, which works on behalf of freedom of expression.

Both expressed interest in being able to spider the Web to look for the 451 status code in their efforts to catalog censorship, Nottingham said.
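A cataloging effort along those lines could start with something as small as the sketch below; the example URLs are placeholders, and a real spider would add crawling, politeness and retry logic on top.

```python
# Sketch: finding pages that answer HTTP 451.
from urllib.request import urlopen
from urllib.error import HTTPError

def fetch_status(url):
    """Return the HTTP status code for url, including error codes."""
    try:
        with urlopen(url, timeout=10) as resp:
            return resp.status
    except HTTPError as err:
        return err.code

def find_censored(urls, statuses=None):
    """Return the subset of urls answering 451.

    `statuses` lets a pre-fetched {url: code} mapping (e.g. from a
    crawl log) be supplied instead of hitting the network.
    """
    lookup = statuses.get if statuses is not None else fetch_status
    return [u for u in urls if lookup(u) == 451]
```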

The code might seem like a victory for free speech: similar to the warrant canary, a puff of smoke rising from the mine in which the canary’s been muzzled.

But as we noted when we wrote about the code a few years ago, those who are dead-set on censorship won’t have much problem stifling the “we’ve been censored” code.

A regulatory body could still bypass it by issuing a court order specifying not only that a page should be blocked, but also precisely which HTTP return code should be used.

Nottingham:

In some jurisdictions, I suspect that censorious governments will disallow the use of 451, to hide what they’re doing. We can’t stop that (of course), but if your government does that, it sends a strong message to you as a citizen about what their intent is. That’s worth knowing about, I think.

There’s a lot to be said for adding a bit of transparency into legally mandated internet opacity.

But we should bear in mind that a legal block doesn’t always neatly translate into the presence of censorship.

“Legal reasons” for blocking sites could have to do with ongoing criminal investigations, for example, or those involving minors.

Nottingham writes that error code 451 can be used both by network-based intermediaries (e.g., in a firewall) as well as on the origin Web server, but he suspects it’s going to be used far more in the latter case, as sites like Github, Twitter, Facebook and Google are forced to censor content against their will in certain jurisdictions, due to right to be forgotten orders, government suppression and other forms of censorship.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/Iz-BEBDP68Y/

1 in 4 people will be hit by a data breach by 2020 – what are you doing to secure yourself?

In a world where it seems like a new data breach is announced every other day, there are still plenty of people who don’t think it’ll happen to them. They read about the 15 million or so T-Mobile customers who were hit by the Experian data breach and are thankful they weren’t among them. If they evaded the Anthem data breach, they count themselves lucky and believe they’ve beaten the odds.

If you belong to the club of ‘fortunate’ people who’ve still not faced the brunt of a data breach, here’s some news that should make you think. IDC predicts that around one quarter of the world’s population will be affected by a data breach by 2020.

Each data breach is a huge learning experience in itself, and many organizations are investing money to improve their own IT security posture. They are plugging gaps, improving security awareness amongst employees and deploying end-to-end IT security solutions that secure users, devices, data and servers alike.

Data breaches are an expensive proposition and with more stringent data protection laws coming into play, no organization would want to lose sensitive customer data. Under the new EU data protection laws, for example, companies could face massive fines of up to 4% of their global annual turnover.

So, why are we staring at more data breaches in the future?

We are living in an increasingly connected world. Mobile, broadband and wireless use is still rising and it is estimated there will be nearly 3.1 billion connections to the Internet of Things (IoT) by 2019. Attack surfaces are growing and so is the sophistication of cyberattacks.

The question is: how do you, as an individual, protect your data? Or are you staring at a lost cause?

The answer lies in ‘choice’. Choose to share sensitive personal information only with organizations that have a stringent, legally compliant privacy policy containing explicit information on how they plan to protect your data. If you are concerned about the data safeguards in place, reach out to the company directly and ask for clarification.

Let’s be very honest here. How many of us actually go through the privacy policy page? There are plenty of people who don’t know what a privacy policy is. The time has come to take ownership of your data’s security. Do not take it lightly.

Make sure you are doing your bit to keep your data secure. Using strong, unique passwords for each online account and making sure you back up your files (a defense against ransomware attacks) is a good start.
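As a concrete example of the “strong, unique passwords” advice, Python’s secrets module (designed for cryptographic use, unlike random) can generate one per account; the length and character set below are just reasonable defaults, not a standard.

```python
# Sketch: generating a strong random password with the secrets module.
import secrets
import string

def random_password(length=16):
    """A random password drawn from letters, digits and punctuation."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))
```

Pair this with a password manager so each account gets its own generated password rather than a reused one.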

You also need to guard against social engineering attacks, such as phishing. These types of attacks aim to fool you into handing over confidential or sensitive data. So, the next time you receive an email or other communication that asks you to share sensitive information, or click on a link, think ‘security’ before you do so.

Yes, data breach incidents may go up in the future, but so should your resolve to protect your data. Good luck!


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/IIvU22iWe8U/

Cops crush claimed karaoke copyright crooks’ conspiracy

‘Twas the night before Christmas, and all through the station, London police couldn’t give a figgy pudding about anybody’s plans for a homemade karaoke sing-along.

The City of London Police last week announced that they’d arrested three UK men who they said uploaded thousands of karaoke tracks online.

Police are calling them a “gang,” suspected as they are of uploading and distributing tens of thousands of tracks from artists including Beyonce, Lady Gaga, Kylie Minogue and Kanye West.

Head over to TorrentFreak and you’ll hear a bit of a different, less dramatic story: less of a criminal gang and more of a group of devoted karaoke fans, aged between 50 and 60, who made up their own DIY karaoke songs because the songs weren’t available from professional karaoke manufacturers.

Acting on a complaint filed by the British Phonographic Industry (BPI), City of London Police’s Intellectual Property Crime Unit (PIPCU) in June had initiated an investigation against individuals allegedly uploading karaoke tracks to the internet without permission.

Last Tuesday (15 December), they arrested one man in Barnstaple, Devon and two men in Bury, Lancashire.

Together the trio formed KaraokeRG (KRG), a release group specializing in karaoke tracks.

A company that makes and distributes karaoke music and products had noticed that KRG was uploading lots of tracks to the KickAss torrent website within days of the “legitimate” company having made them available on its own online platforms.

From the description at the top of KRG’s master list of songs on one of its sites:

The following is a list of all KaraokeRG homemade CD+G karaoke songs. They were created primarily because they are not available from any professional karaoke manufacturers. However, in some cases, some songs were made available by professional karaoke companies AFTER they were homemade.

In other words, KRG claimed to be filling a market gap with DIY karaoke titles otherwise not available.

But, as TorrentFreak says, it’s likely that the backing tracks are still subject to copyright restrictions, so giving them away, even with “homemade” subtitles, is still likely to get KRG into trouble.

The men’s homes were searched. Police seized computers, laptops and documents.

PIPCU estimates that the men uploaded “hundreds” of copyrighted albums, which has in turn led to “thousands and thousands of tracks being accessed illegally” and deprived legitimate music companies of a “significant” amount of money.

PIPCU’s Detective Constable Ceri Hunt:

The illegal downloading of copyrighted music may seem like a harmless thing to do, but the reality is that these individual offenses are collectively damaging one of our key creative industries, costing people who work in the music industry millions of pounds and threatening thousands of jobs.

PIPCU will continue to target the individuals and the organized crime gangs facilitating these crimes, working with key partners like the BPI to ensure that those most responsible are brought to justice.

PIPCU confirmed that the “karaoke gang” has been released on bail.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/EhI0WdA6Yek/

Xen Project blunder blows own embargo with premature bug report

The Xen Project has reported a new bug, XSA-169, that means “A malicious guest could cause repeated logging to the hypervisor console, leading to a Denial of Service attack.”

The workaround is simple – run only paravirtualised guests – but the bug is a big blunder for another reason.

Xen is very widely used by big cloud operators, principally Amazon Web Services. Xen bugs are therefore very, very valuable to criminals because if they can learn of a vulnerability they have millions of targets to attack. The Xen Project therefore cooked up new rules designed specifically to ensure that big operators get a couple of weeks in which to sort things out before world+dog is told about the bug.

Those processes weren’t followed for XSA-169, as the notice of the bug sheepishly admits “The fix for this bug was publicly posted on xen-devel, before it was appreciated that there was a security problem.”

That’s far from a complete breakdown in the Project’s processes, but also not a good look for an effort that provides critical code to a great many people around the world.

The good news is that a patch is already available. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2015/12/23/xen_blunder_blows_own_embargo_with_premature_bug_report/

Cisco cops to enterprise IOS XE vulnerability

Cisco’s latest operating system update ships with a vulnerability that could let hackers seize control of network devices.

The networking giant has admitted to the hole in its IOS XE release 16.1.1 that, if exploited, would let an attacker force a device to reload.

IOS XE is Cisco’s operating system for routers, switches and appliances but 16.1.1 was only for the enterprise-class 3650/3850 stackable switches.

The update shipped in early December.

You can see the main features here, but the top-line items included the ability to upgrade the WCM independently, along with GUI improvements.

Cisco has issued a software update, warning that there is no workaround you could implement.

“The vulnerability is due to incorrect processing of packets that have a source MAC address of 0000:0000:0000. An attacker could exploit this vulnerability by sending a frame that has a source MAC address of all zeros to an affected device. A successful exploit could allow the attacker to cause the device to reload,” Cisco said in an advisory here. ®
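The trigger condition is easy to check for in raw traffic: in an Ethernet frame, the destination MAC occupies bytes 0–5 and the source MAC bytes 6–11. A minimal detection sketch (an illustration, not Cisco’s code):

```python
# Sketch: flagging Ethernet frames with an all-zeros source MAC,
# the condition Cisco's advisory describes as the trigger.

def has_zero_source_mac(frame: bytes) -> bool:
    """True if the Ethernet frame's source MAC is 0000.0000.0000."""
    if len(frame) < 12:  # need at least both 6-byte MAC addresses
        return False
    return frame[6:12] == b"\x00" * 6
```

A monitoring box could apply this to mirrored traffic to spot exploitation attempts until the software update is rolled out.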


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2015/12/23/cisco_ios_xe_vuln/

IT bloke: Crooks stole my bikes after cycling app blabbed my address

An IT manager in Manchester, England, says thieves stole his bikes after a smartphone cycling app pinpointed the location of his garage.

Mark Leigh, 54, of Failsworth, said his two bicycles – worth £500 ($750) and £1,000 ($1,500) – were nicked shortly after he made his address and details of his bikes public on the popular biking app Strava, the Manchester Evening News reports.

The app includes an optional privacy setting that conceals the exact location of your home, but Leigh was not aware of this switch when he shared details of his bike rides via the software. Strava encourages people to publish their routes and journey times to make the application more engaging among enthusiasts.

Unfortunately, doing so tips off crooks as to where bikes are kept and when they are not in use.

“I’d come back from a ride around the Saddleworth hills, which I tracked on Strava,” Leigh told the newspaper. “I locked my bike in the garage next to another one. The following morning my garage had been cleverly broken into and they were gone.”

Leigh notes that his garage is not very visible and is at the end of a narrow cul-de-sac. The fact that only the bikes were stolen, even though there were lots of other valuable items in the garage, and that there were no other break-ins nearby, leads him to believe the thieves must have been using Strava as a way to find easy targets.

His fears were confirmed by an organizer of a local cycling club, who told the paper that he had received numerous reports in recent months of bicycles being stolen where the owners suspected it was due to their use of cycling apps advertising their location.

All of which is a timely reminder to people over why they should be careful about what apps they use, what information they share, and why it’s worthwhile spending a bit of time digging into the privacy settings that many apps now offer. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2015/12/22/it_manager_loses_bikes_after_cycling_app_pinpoints_home/

Juniper’s VPN security hole is proof that govt backdoors are bonkers

Juniper’s security nightmare gets worse and worse as experts comb the ScreenOS firmware in its old NetScreen firewalls.

Just before the weekend, the networking biz admitted there had been “unauthorized” changes to its software, allowing hackers to commandeer equipment and decrypt VPN traffic.

In response, Rapid7 reverse engineered the code, and found a hardwired password that allows anyone to log into the boxes as an administrator via SSH or Telnet.
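Hunting for that kind of hardwired credential typically starts with a `strings`-style sweep of the firmware image for printable runs of bytes near the authentication code. The sketch below is a generic illustration of the idea, not Rapid7’s actual tooling:

```python
# Sketch: a minimal `strings`-style pass over a firmware blob,
# the usual first step when hunting for hardcoded credentials.
import re

def printable_strings(blob: bytes, min_len=6):
    """Return runs of printable ASCII at least min_len bytes long."""
    pattern = rb"[\x20-\x7e]{%d,}" % min_len
    return [m.group().decode() for m in re.finditer(pattern, blob)]
```

In practice an analyst would then diff the candidate strings against a clean build, or look for ones referenced by the login routines.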

Now an analysis of NetScreen’s encryption algorithms by Matthew Green, Ralf-Philipp Weinmann, and others, has found another major problem.

“For the past several years, it appears that Juniper NetScreen devices have incorporated a potentially backdoored random number generator, based on the NSA’s Dual EC DRBG algorithm,” wrote Green, a cryptographer at Johns Hopkins University.

“At some point in 2012, the NetScreen code was further subverted by some unknown party, so that the very same backdoor could be used to eavesdrop on NetScreen connections. While this alteration was not authorized by Juniper, it’s important to note that the attacker made no major code changes to the encryption mechanism – they only changed parameters.”

The Dual EC DRBG random number generator was championed by the NSA, although researchers who studied the spec found that data encrypted using the generator could be decoded by clever eavesdroppers.

ScreenOS uses the Dual EC DRBG in its VPN technology, but as a secondary mechanism: it’s used to prime a fast 3DES-based number generator called ANSI X9.17, which is secure enough to kill off any cryptographic weaknesses introduced by Dual EC. Phew, right? Bullet dodged, huh?

No. In Juniper’s case there’s a problem. The encrypted communications can still be decoded using just 30 or so bytes of raw Dual EC output. And, lo, conveniently, there’s a bug in ScreenOS that will cause the firmware to leak that very sequence of numbers, undermining the security of the system.

Also, worryingly, ScreenOS does not use Dual EC with the special constant Q defined by the US government – it uses its own value.

Armed with those 30 bytes of seed data, and knowledge of Juniper’s weird Dual EC parameters, eavesdroppers can decrypt intercepted VPN traffic.
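Dual EC’s trapdoor structure can be demonstrated with a toy analogue. Real Dual EC works over an elliptic curve and outputs truncated x-coordinates; the sketch below substitutes the multiplicative group modulo a small prime, which has the same property that whoever chose P relative to Q can recover the generator’s internal state from its output. All the constants here are made up for the demo.

```python
# Toy Dual EC-style generator (NOT the real algorithm, same trapdoor shape).
# Honest parameters would pick P and Q independently; the backdoor is
# choosing P = Q^d so that whoever knows d can recover the state.
p = 2**31 - 1           # small prime group (toy only!)
Q = 7
d = 123456789           # the secret trapdoor exponent
P = pow(Q, d, p)        # backdoored relation: P = Q^d (mod p)

def step(state):
    """One toy round: returns (next_state, output)."""
    return pow(P, state, p), pow(Q, state, p)

def attacker_recover_next_state(output):
    """With the trapdoor: output^d = Q^(s*d) = (Q^d)^s = P^s = next state."""
    return pow(output, d, p)

# Demo: from one observed output, the attacker who knows d learns the
# generator's next state and can predict every output from then on.
s = 42                                  # victim's secret internal state
s1, out1 = step(s)
predicted_s1 = attacker_recover_next_state(out1)
```

This is why changing only the constant Q was enough for the attackers: the leak bug hands out the raw output, and the trapdoor relation does the rest.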

Now it gets really spy-tastic

Said eavesdroppers were probably involved in introducing one of the vulnerabilities in the first place. Whoever tampered with some builds of ScreenOS changed just the value of Q. No other code was slipped in; just a new constant. Knowing that value, and how to exploit it with the data leak bug, was all a snoop needed.

In other words, someone saw the data leak bug, and knew that if they controlled Q, they could crack encrypted VPN channels.

“To sum up, some hacker or group of hackers noticed an existing backdoor in the Juniper software, which may have been intentional or unintentional – you be the judge,” wrote Green.

“They then piggybacked on top of it to build a backdoor of their own, something they were able to do because all of the hard work had already been done for them,” the assistant professor added.

“The end result was a period in which someone – maybe a foreign government – was able to decrypt Juniper traffic in the US and around the world.”

Green points out that this is a classic example of why backdoors are a bad idea all round. It’s something politicians and law enforcement officials may want to ponder the next time they call for mandatory government access to encrypted communications.

If they are going to build backdoors into encryption, such as by fiddling with the mathematics or sliding in convenient bugs, someone else is going to find the way in. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2015/12/23/juniper_analysis/