
Suspect Arrested In Connection With Mirai Botnet

One million Deutsche Telekom customers were knocked offline in a November 2016 cyberattack.

A 29-year-old man was arrested by British police at a London airport on Wednesday in connection with the November 2016 hack of about one million Deutsche Telekom customers, reports DataBreachToday. The arrest was made on behalf of Germany’s Federal Criminal Police Office, and unconfirmed reports say it may be related to the Mirai botnet attacks.

Mirai malware, security experts explain, targets default account names and passwords, and may be behind the attack on the routers of the German telecom company’s customers. Mirai was originally controlled by a group called Poodlecorp, but its source code has since been published; it is now believed to be in the hands of several hackers and behind some of the recent major distributed denial-of-service attacks.

“One person writes the Mirai botnet and then publishes its code, and then within a week it’s in dozens of botnets,” explained security expert Bruce Schneier at last week’s RSA Conference in San Francisco.

Other experts say exposure to such malware stems from insecure IoT devices, which get exploited to launch DDoS attacks. The solution, they add, is to discard such devices or keep them off the public Internet.
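
To make the risk concrete, here is a minimal defensive sketch, in Python, of the weakness Mirai abuses: a device that still answers telnet with a factory-default username and password. The device address and most of the credential pairs below are illustrative assumptions (root/xc3511 does appear in Mirai’s published list); run something like this only against hardware you own. Note that telnetlib ships with Python up to 3.12 and was removed in 3.13.

import telnetlib

DEFAULT_CREDS = [
    ("root", "xc3511"),   # from Mirai's published credential list
    ("admin", "admin"),   # illustrative guesses
    ("root", "12345"),
]

def accepts_default_login(host, user, password, timeout=5):
    try:
        tn = telnetlib.Telnet(host, 23, timeout)
        tn.read_until(b"login:", timeout)
        tn.write(user.encode() + b"\n")
        tn.read_until(b"Password:", timeout)
        tn.write(password.encode() + b"\n")
        banner = tn.read_some()        # a shell prompt coming back is bad news
        tn.close()
        return b"incorrect" not in banner.lower()
    except OSError:
        return False                   # connection refused: telnet is off, good

for user, password in DEFAULT_CREDS:
    if accepts_default_login("192.168.1.1", user, password):  # your own router
        print(f"Default credentials accepted: {user}/{password} - change them now")

If a device fails this test and its password cannot be changed, the advice above applies: take it off the public Internet.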

Read more details here.

Dark Reading’s Quick Hits delivers a brief synopsis and summary of the significance of breaking news events. For more information from the original source of the news item, please follow the link provided in this article.

Article source: http://www.darkreading.com/attacks-breaches/suspect-arrested-in-connection-with-mirai-botnet/d/d-id/1328259?_mc=RSS_DR_EDT

IaaS: The Next Chapter In Cloud Security

Organizations adopting IaaS must update their approach to security by using the shared responsibility model.

Companies in industries from manufacturing to financial services to the public sector trust cloud providers with their critical data. The rapid growth of software-as-a-service (SaaS) applications such as Office 365 and Salesforce depended on that trust. But the floodgates of SaaS adoption didn’t open until IT security professionals became convinced that cloud providers could deliver security equivalent to, or better than, traditional on-premises software. The remaining challenges lie on the enterprise side: Gartner predicts that 95% of cloud security incidents will be the customer’s fault.

Now a second wave of cloud adoption is swelling, as the network edge around enterprises dissolves into infrastructure-as-a-service (IaaS) offerings. For IaaS, organizations need to update their approach to security by using the shared responsibility model.


Updating the Shared Responsibility Model for IaaS
Cloud-first companies use SaaS tools for different functions: Office 365 for collaboration, Workday for human resources, and Salesforce for customer relationship management. Every organization also possesses anywhere from a handful to thousands of internally developed applications for employees, customers, and partners. Organizations are eliminating their data centers and moving these proprietary applications to IaaS cloud offerings en masse, leading to an IaaS growth rate double that of SaaS.

Even companies that have taken a proactive approach to SaaS security must reevaluate their capabilities when it comes to applications hosted on IaaS platforms. SaaS and IaaS platforms operate under different shared-responsibility models, that is, different allocations of security duties between cloud provider and customer. Many security vulnerabilities that SaaS providers address fall on the shoulders of the enterprise customer when applications are hosted on IaaS services.

Furthermore, business pressure to move quickly means security teams may have little to no oversight on IaaS security; developer teams don’t have extra resources to dedicate to updating security for existing on-premises applications slated to migrate to the cloud. Proprietary applications don’t have dedicated security solutions like SaaS apps do, nor do they have APIs that integrate out of the box with security products. While in the past we have thought of startups and cloud service providers as the companies dealing with AWS, Azure, or Google Cloud Platform security, today the Fortune 2000 are contending with the challenges of securing apps in the cloud.

IaaS security threats come from both inside and outside the organization. Hackers target corporate IaaS accounts to steal data or computing resources, exploiting this vector by stealing credentials, finding misplaced access keys, or leveraging misconfigured service settings. One researcher discovered over 10,000 AWS credentials on GitHub. Hacked accounts can be used to mine Bitcoin or hold companies to ransom, as in the worst-case scenario of hosting company Code Spaces.
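
That finding suggests a basic hygiene step: scan your own source trees for key material before anything is pushed to a public repository. Below is a rough Python sketch of such a scan. The “AKIA” prefix is AWS’s documented format for IAM access key IDs; treating every match as a finding is a deliberate, noisy simplification.

import os
import re

ACCESS_KEY_RE = re.compile(r"\bAKIA[0-9A-Z]{16}\b")   # IAM access key ID format

def scan_tree(root):
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                with open(path, encoding="utf-8", errors="ignore") as handle:
                    for lineno, line in enumerate(handle, 1):
                        if ACCESS_KEY_RE.search(line):
                            print(f"{path}:{lineno}: possible AWS access key ID")
            except OSError:
                pass   # unreadable file; skip it

scan_tree(".")   # run from the top of a checkout before publishing it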

Internally, a malicious employee with access to IaaS accounts can cause immense damage by stealing, altering, or deleting data on the platform. Human error and negligence can also expose corporate data and resources to attackers. Healthcare company CareSet made a configuration error that let hackers exploit its Google Cloud Platform account to launch intrusion attacks against other targets. After a few days without remediation, Google temporarily shut down the company’s account. Organizations can’t assume that IaaS environments are secure out of the box; in every case above, the cloud provider was powerless to address the customer’s vulnerability.

IaaS Security Action Plan
Keeping data safe in proprietary applications on IaaS platforms requires an extra step beyond SaaS security: protecting the computing environments themselves. Securing environments on AWS, Google Cloud Platform, Microsoft Azure, or other IaaS platforms begins with a configuration audit. Here are four categories of configurations critical to securing IaaS usage:

1. Authentication: Multifactor authentication is a necessary control for any application with sensitive corporate information, especially cloud applications exposed to the Internet. Companies should enable multifactor authentication for root accounts and Identity and Access Management users to reduce the risk of account compromises. Heightened authentication can require a user to enter an additional login step before they commit an action such as deleting an S3 bucket.
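
As a sketch of what automating this check might look like, the following uses boto3, the AWS SDK for Python, and assumes credentials with read access to IAM; it flags the root account and any IAM user without an MFA device.

import boto3

iam = boto3.client("iam")

# Root-account MFA status is reported in the account summary.
summary = iam.get_account_summary()["SummaryMap"]
if not summary.get("AccountMFAEnabled"):
    print("WARNING: root account has no MFA device")

# Flag every IAM user with no MFA device enrolled.
for page in iam.get_paginator("list_users").paginate():
    for user in page["Users"]:
        name = user["UserName"]
        if not iam.list_mfa_devices(UserName=name)["MFADevices"]:
            print(f"WARNING: IAM user {name} has no MFA device")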

2. Unrestricted Access: Unnecessarily exposing AWS environments increases the threat of various attacks, including denial-of-service, man-in-the-middle, SQL injection, and data loss. Checking for unrestricted access to Amazon Machine Images, Relational Database Service instances, and Elastic Compute Cloud can protect intellectual property and sensitive data, as well as prevent service outages.
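
A boto3 sketch of the same idea, under the same assumptions as above: list security group rules that accept traffic from the whole Internet, and RDS instances flagged as publicly accessible.

import boto3

ec2 = boto3.client("ec2")
rds = boto3.client("rds")

# Security group rules open to anywhere (0.0.0.0/0).
for sg in ec2.describe_security_groups()["SecurityGroups"]:
    for rule in sg["IpPermissions"]:
        for ip_range in rule.get("IpRanges", []):
            if ip_range.get("CidrIp") == "0.0.0.0/0":
                port = rule.get("FromPort", "all")
                print(f"{sg['GroupId']} ({sg['GroupName']}): port {port} open to the world")

# Database instances reachable from the public Internet.
for db in rds.describe_db_instances()["DBInstances"]:
    if db.get("PubliclyAccessible"):
        print(f"RDS instance {db['DBInstanceIdentifier']} is publicly accessible")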

3. Inactive Accounts: Inactive and unused accounts pose unnecessary risk to IaaS environments. Auditing and eliminating inactive accounts can prevent account compromise and misuse at little cost to productivity.
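
One way to approach such an audit, sketched with boto3 under the same assumptions; the 90-day threshold is an arbitrary choice for illustration.

import boto3
from datetime import datetime, timedelta, timezone

iam = boto3.client("iam")
cutoff = datetime.now(timezone.utc) - timedelta(days=90)

for page in iam.get_paginator("list_users").paginate():
    for user in page["Users"]:
        name = user["UserName"]
        activity = []
        if "PasswordLastUsed" in user:
            activity.append(user["PasswordLastUsed"])
        # Fold in the last-used dates of the user's access keys.
        for key in iam.list_access_keys(UserName=name)["AccessKeyMetadata"]:
            last = iam.get_access_key_last_used(AccessKeyId=key["AccessKeyId"])
            date = last["AccessKeyLastUsed"].get("LastUsedDate")
            if date:
                activity.append(date)
        if not activity or max(activity) < cutoff:
            print(f"Candidate for deactivation: {name}")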

4. Security Monitoring: One of the top fears about moving computing to the cloud is the loss of visibility and forensics. Turning on an audit trail such as AWS’s CloudTrail logging creates a record that supports both monitoring for active threats and forensic investigations. This is also a basic compliance requirement for any large company and can be a deal breaker for moving an application to IaaS.
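
Verifying that the trail exists and is actually recording can itself be automated; a boto3 sketch, same assumptions as the earlier examples:

import boto3

cloudtrail = boto3.client("cloudtrail")

trails = cloudtrail.describe_trails()["trailList"]
if not trails:
    print("WARNING: no CloudTrail trail configured in this region")
for trail in trails:
    status = cloudtrail.get_trail_status(Name=trail["TrailARN"])
    state = "logging" if status["IsLogging"] else "NOT logging"
    print(f"{trail['Name']}: {state}, multi-region={trail.get('IsMultiRegionTrail')}")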

Of these four categories, security monitoring is the most complex and the most powerful. Machine learning tools can be tuned to detect a range of behavior indicative of a threat, and APIs can enable monitoring based on session locations, excessive activity, or brute-force logins. At first glance, moving applications to the cloud can appear to mean forfeiting control. With a proactive, cloud-based security strategy, however, applications on IaaS can be just as secure as their on-premises counterparts, or even more so.
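
For instance, a brute-force check can be built directly on CloudTrail’s event history. The sketch below counts failed console logins per source IP over the last day; the failure-detection heuristic and the alert threshold are assumptions for illustration.

import json
from collections import Counter
from datetime import datetime, timedelta, timezone

import boto3

cloudtrail = boto3.client("cloudtrail")
failures = Counter()

# Pull the last day of console-login events and tally failures per source IP.
for page in cloudtrail.get_paginator("lookup_events").paginate(
    LookupAttributes=[{"AttributeKey": "EventName", "AttributeValue": "ConsoleLogin"}],
    StartTime=datetime.now(timezone.utc) - timedelta(days=1),
):
    for event in page["Events"]:
        detail = json.loads(event["CloudTrailEvent"])
        if (detail.get("responseElements") or {}).get("ConsoleLogin") == "Failure":
            failures[detail.get("sourceIPAddress")] += 1

for ip, count in failures.most_common():
    if count >= 5:   # arbitrary threshold for illustration
        print(f"Possible brute force: {count} failed logins from {ip}")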


Kaushik Narayan is a Co-Founder and CTO at Skyhigh Networks, a cloud security company, where he is responsible for Skyhigh’s technology vision and software architecture. He brings over 18 years of experience driving technology and architecture strategy for enterprise-class …

Article source: http://www.darkreading.com/cloud/iaas-the-next-chapter-in-cloud-security/a/d-id/1328202?_mc=RSS_DR_EDT

20 Cybersecurity Startups To Watch In 2017

VC money flowed plentifully into the security market last year, fueling a new crop of innovative companies.

Image Source: Adobe Stock

In spite of a slowdown in overall venture capital funding activity in 2016, the cybersecurity market continued to raise money at full steam. Last year broke records in funding deals, with Q3 the most active quarter for cybersecurity deals in the last five years, according to CBInsights.

That influx of money is driving innovation in a number of areas. Particularly notable market segments targeted by these firms include security for data centers and public cloud infrastructure, security orchestration and incident response tools, and third-party risk assessment tools.

The following 20 firms are primarily early- to middle-stage startups, along with a few more mature startups that have courted growth equity to change course or expand into a particularly hot new market segment. We believe these firms are worth watching for several reasons. On the funding front, they either snagged $25 million or more in funding in 2016 or garnered a notable funding round within the last three months. Many were founded in the past three years, a number are first-movers in a particularly hot security niche, and several are led by security veterans and visionaries.

In the interest of bringing some new blood to our annual spotlight on startups, we’ve included only companies that were not already featured in our lists in the last two years.

 

Ericka Chickowski specializes in coverage of information technology and business innovation. She has focused on information security for the better part of a decade and regularly writes about the security industry as a contributor to Dark Reading.

Article source: http://www.darkreading.com/careers-and-people/20-cybersecurity-startups-to-watch-in-2017/d/d-id/1328251?_mc=RSS_DR_EDT

Facebook or Google: whose messenger AI bot has the edge?

The AI bots Facebook designed for its Messenger IM service haven’t lived up to expectations, some say. According to The Register, tests have shown that the bots have, in fact, a “70% failure rate”.

So, is this the end of the road for Facebook’s AI bot?

Facebook was full of enthusiasm when it introduced its “bots for the Messenger Platform” last April. Bots, it told us, would be able to provide anything from generic weather updates to personal receipts, shipping notifications and live automated messages by interacting directly with the people who want to get them.

And they can and do.

A few months later, in June, Facebook launched its DeepText “text understanding engine”. DeepText would help the Messenger bots advance by allowing them to understand a user’s intent. Discussing how DeepText would impact a Messenger conversation, the social media giant explained that

[DeepText is] used for intent detection, which helps realize that a person is not looking for a taxi when he or she says something like, “I just came out of the taxi”, as opposed to “I need a ride.”
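
DeepText itself is a learned, deep-neural-network text classifier. To see why the distinction it draws is hard to get right, here is a deliberately crude Python sketch using keyword patterns; everything about it is illustrative and it has nothing to do with Facebook’s actual code.

import re

NEEDS_RIDE = re.compile(r"\b(need|get me|book me)\b.*\b(ride|taxi|cab)\b", re.I)
JUST_RODE = re.compile(r"\b(came out of|got out of|just left)\b.*\b(taxi|cab)\b", re.I)

def ride_intent(message):
    if JUST_RODE.search(message):
        return None                   # mentions a taxi but isn't asking for one
    if NEEDS_RIDE.search(message):
        return "offer_ride_booking"   # a bot could suggest booking a car here
    return None

print(ride_intent("I need a ride"))                # offer_ride_booking
print(ride_intent("I just came out of the taxi"))  # None

Hand-written patterns like these break on almost any rephrasing, which is exactly why DeepText uses learned models instead.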

By September, the AI bot was making great strides forward, Facebook announced:

Developers and businesses have built over 30,000 bots for Messenger.

With the bots such a great success, Messenger Platform v1.2 was launched, with new features primarily centred around Messenger bots.

But, it seems, Facebook’s AI didn’t live up to everyone’s expectations. The Register reports that tests found that the technology “could fulfil only about 30 per cent of requests without human agents”. Testers felt that

… the technology to understand human requests wasn’t developed enough.

It seems that Facebook’s technology may not yet be advanced enough for these use cases. It may not be able to understand intent well enough to insert relevant external links into Messenger conversations as it parses them.

I quite like The Information’s interpretation – that Facebook is dealing with the “Clippy The Paperclip problem” where

The user views the contribution by the agent, or bot, as intrusive.

So, the technology needs more work. Isn’t this always the case when technologies are first launched within mainstream applications?

We’re told by The Register’s contact that the social media giant is going to

… narrow down the set of cases so users aren’t disappointed by the limitations of automation.

I translate this to simply mean that Facebook’s engineers are turning their attention to building better algorithms. That basically means they’re going to give Messenger some extra training.

When it comes to AI, analyst Richard Windsor, writing in RFM, agrees. He’s convinced that Facebook’s technology isn’t there yet.

Facebook and Alphabet are worlds apart.

He sees Google as “a leader”, “pushing the boundaries of artificial intelligence forward” and Facebook as “a laggard”, “miles behind”, and simply “making excuses for its inability to control hate speech”.

Windsor believes Facebook doesn’t have enough experience around AI to give it a “solid foundation of intelligent algorithms”. Google, on the other hand, has been “working on this [AI] for over 20 years”.

Maybe the training will help?

Facebook may have to keep working behind the scenes to improve its algorithms, and that’s OK.

And while Windsor may be unimpressed by how well Facebook’s bots are working on its Messenger IM, others have more confidence in them. After all, Facebook is still dominating the messaging app market, as the BBC reveals:

Google’s AI-powered messaging app Allo, since being launched to much fanfare last year, has failed to make even a minor dent in a messaging app market dominated by Whatsapp and Facebook Messenger.

And so, the race is on.

While Google may have the better AI technology, Facebook has the messaging app market. But Google’s AI assistant can also be found outside of its messaging app – in Pixel phones, Home and more.

And we’ve not even touched on Apple, Amazon and Microsoft.

AI bots are certainly going to be a very interesting space to watch over the coming months and years!

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/nEw2JE8SQeY/

Twitter users, do you know who’s spying on your web-surfing habits?

If you weren’t already worried about the privacy dangers of online ad tracking, now would be a good time to start. Researchers have found a way to de-anonymise web surfing records, putting a recent US privacy ruling in jeopardy.

Online ad networks track your browsing history across multiple sites so that they can serve you more effective advertising. Search for something on an ecommerce site that participates in one of these networks, and you’ll be shown related ads on another site that also participates.

This often creeps people out, but the counterargument has always been that the data is anonymous. Instead of linking your real name to your web surfing records, these trackers use a unique customer ID.

In this way, they may know that the same person who searched for Christmas getaways on one website is now reading an article about marmosets on another website, but because they don’t know who that person is, they can responsibly litter the marmoset article with holiday advertisements, without anyone on the internet knowing that you’re leaving your house unattended on December 25.

That’s all well and good, but what if you could deduce a person’s identity by matching their anonymous web surfing with their social media timeline? What if, instead of a customer ID, you could replace it with their Twitter handle?

Academics from Stanford and Princeton have done just that. Their research relies on the idea that people are disproportionately likely to follow links that show up in their own social media feeds, in particular links posted by the accounts they follow on Twitter. They reasoned that because the set of links in a Twitter feed is often unique, it can be matched against the links in an anonymous browsing history.

The group collected anonymous web browsing histories from almost 400 volunteers, and mined them for links that came from Twitter (marked with the domain name t.co, which Twitter uses to shorten URLs) visited in the last 30 days. It attempted to de-anonymize histories with at least five such links by comparing them against 300m Twitter feeds.

The researchers found that they could identify more than 70% of volunteers on average. The more links in someone’s history that originated from Twitter, the more accurate the identification. The team correctly identified 86% of participants in the experiment with between 50 and 75 URLs. So if you follow a lot of links from Twitter, you’re more likely to be identified.

This isn’t just a theoretical exercise. The team built a system to de-anonymise web browsing histories in under a minute using the concept, proving that it’s workable in practice.
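
The published system scores candidate feeds with a maximum-likelihood model over 300m Twitter accounts. The toy Python sketch below substitutes a simple Jaccard set overlap for that scoring, just to show why a unique set of links behaves like a fingerprint; all of the data in it is made up.

def jaccard(a, b):
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def best_match(history_links, candidate_feeds):
    # history_links: t.co URLs extracted from an anonymous browsing history
    # candidate_feeds: {handle: set of t.co URLs appearing in that user's feed}
    scored = [(jaccard(history_links, links), handle)
              for handle, links in candidate_feeds.items()]
    return max(scored)   # the highest-overlap feed is the identity guess

feeds = {
    "@alice": {"t.co/a1", "t.co/a2", "t.co/a3"},
    "@bob":   {"t.co/b1", "t.co/a2"},
}
print(best_match({"t.co/a1", "t.co/a3"}, feeds))   # (0.666..., '@alice')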

The team at Princeton has a record of exposing flaws in anonymous datasets. Arvind Narayanan, one of the researchers, runs a blog called 33 Bits of Entropy, named for the fact that there are about 6.6bn people in the world, so roughly 33 bits of information (2^33 is about 8.6bn) are enough to single out any one of them. He has moved on a bit from his de-anonymising research, but in the past he embarrassed Netflix by using its published research dataset to work out who was watching what movies.

Here’s another tidbit from the research: it points out that the same principles apply to any set of items selected anonymously by someone with an identifiable historical record of selections. For example, anonymous papers might cite other work and could be compared with a broader spectrum of academic papers to see if similarities show up.

We wonder if it’s possible to run it against the eight references in the original bitcoin paper, created by the mysterious Satoshi Nakamoto, to help track him down, assuming that he had published academic work before. Not necessarily, says Jessica Su, one of the researchers.

Who is likely to use social media history in combination with ad tracking? The trackers themselves could. The team looked at four such trackers – Google, Facebook, ComScore and AppNexus – and found that they all had enough information to de-anonymize their users.

Some of these trackers are already de-anonymizing users by default. Google, which already tracks users across almost 80% of websites, changed its privacy policy around anonymisation late last year to match links with Google accounts. Facebook, meanwhile, owns the social network from which its users follow links.

Who else might use this information? The NSA, for one. It already tracks Google ads to find Tor users. The research points out that well-resourced adversaries could eavesdrop on network traffic to work out which domains a particular device is visiting (although thankfully HTTPS makes that more difficult).

Other potential users could include potential employers, anyone granting credit, or insurance companies who might love to know about your recent search for cancer symptoms or risky pursuits. Anyone who could benefit from knowing what you’re searching for would find this attack useful.

The good news is that commercial parties like these could only match your anonymous browsing history against your public social media profile if they had that data. The bad news is that it has long been for sale.

It’s particularly galling for privacy advocates, because the selling of customer data was about to get a lot harder. The FCC in the US issued an order restricting ISPs from collecting customers’ sensitive data unless they specifically opted in. Under the order, service providers must get customer permission to sell sensitive personal data, defined as “reasonably linkable” to an individual.

Anonymised data may be seen as not reasonably linkable, meaning that it can be collected and used. But clearly, with a bit of automated detective work, it’s pretty easy to make that link.

How can you stop this from happening? Tracker-blockers such as Ghostery, uBlock Origin or Privacy Badger can help, the researchers say, while not revealing your real-world identity on social media profiles is a useful albeit cumbersome form of protection. Given the recent actions of US border guards, the latter might be a good idea anyway.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/OxCBNtPCiEs/

I was authorized to trash my employer’s network, sysadmin tells court

Back in December 2011, Michael Thomas did what many sysadmins secretly dream of doing: he trashed his employer’s network and left a note saying he quit.

As well as deleting ClickMotive’s backups and notification systems for network problems, he cut off people’s VPN access and “tinkered” with the Texas company’s email servers. He deleted internal wiki pages, and removed contact details for the organization’s outside tech support, leaving the automotive software developer scrambling.

The real-life BOFH then left his keys, laptop, and entry badge behind with a letter of resignation and an offer to stay on as a consultant.

What Thomas didn’t consider while leaving his elaborate “screw you” was that he might be breaking the law. Just under two years later, he was charged with a single felony count of “intentionally causing damage without authorization, to a protected computer.”

He was found guilty by a jury in June last year, and in August was sentenced to time served plus three years of supervised release. He was also ordered to pay $130,000.

Now, however, Thomas is appealing [PDF] that conviction in the Fifth Circuit Court of Appeals in New Orleans using a legal defense that may have enormous implications for sysadmins across the entire United States.

In essence, Thomas is arguing that, yes, while he did intentionally cause damage it wasn’t “without authorization.” In fact, he was expressly authorized to access all the systems he accessed, and he was expressly authorized to carry out the deletions he did – every sysadmin in the world deletes backups, edits notification systems and adjusts email systems. In fact, it’s fair to say that is a big part of the job they are paid to carry out.

His legal filing to the Fifth Circuit also points out that none of his actions were forbidden by the company’s own policies.

Thomas is telling the court: sure, I trashed their systems but I did nothing illegal. And he has a point. It’s just that every company in America is terrified that he might win the argument.

Run-up

Of course, there is a back story.

Thomas was hired by a friend of his, Andrew Cain. Cain was the company’s first employee and its only IT staffer. As the company – which sets up and runs car dealership websites – grew, it needed another full-time IT person to handle demand.

Things went well for two years until, out of the blue, the company’s founders fired Cain. Cain suspected the reason for his firing was that the founders were looking to sell the company – something they had done repeatedly in the past as serial entrepreneurs – and didn’t want to have to give Cain his cut as the first employee. At the same time they fired Cain – on a Thursday – Thomas was offered a bonus to stay on and take over his friend’s job.

It’s fair to say that Cain was just a tad irritated. And he called Thomas to tell him the news and that he would be suing for wrongful dismissal. And that’s when ClickMotive started having trouble with its IT systems.

Thomas’ appeal filing admits many of the things that came out during the investigation and trial: he obtained emails from ClickMotive’s system and forwarded them to Cain’s wife to help Cain’s lawsuit.

The day after Cain was fired, a Friday, the entire ClickMotive network went down from a power outage. Thomas got it back up and was still working remotely on Saturday, mopping up problems. Then, on the Sunday, the network was hit with a denial-of-service attack, taking it down again.

And so Thomas drove to the office on Sunday evening and started working on getting it back up. While there, however, he also carried out a whole range of other activities, before departing a few hours later and leaving behind his keys, laptop, badge and a resignation letter – all discovered the next morning.

That Sunday, Thomas deleted remotely stored backups and turned off the automated backup system. He made some changes to VPN authentication that basically locked everybody out, and turned off the automatic restart. He deleted internal IT wiki pages, removed users from a mailing list, deactivated the company’s pager notification system, and a number of other things that basically created a huge mess that the company spent the whole of Monday sorting out (it turned out there were local copies of the deleted backups).

Authorized

While the company’s actions don’t exactly cover it in glory, using your admin privileges to delete backups and mess up your employer’s system is not a great idea (no matter how appealing it might be). The question is: is it illegal?

“Michael Thomas had unlimited authorization to access, manage, and use ClickMotive’s computer systems,” argue his lawyers at Tor Ekeland, “and was given broad discretion in his exercise of that authority.”

Unsurprisingly, as one of only two IT people in the company, Thomas basically had full rein over its computer systems. He could manage users and their privileges without requiring specific authorization. Part of his job was to delete unnecessary data.

As the filing argues: “The central issue in this case is whether Thomas acted ‘without authorization’ if he performed these same actions in a manner that was contrary to the company’s interests.”

And it argues that he didn’t. He had the right to make changes to all the systems he touched; the term “without authorization” is ambiguous and was interpreted too broadly in his case; and the court didn’t identify exactly what he did that was prohibited.

Since the appeal has decided to focus in on the specific legal language used to convict Thomas, it could have far-reaching implications either way.

If he is found to have acted without authorization, the question then becomes: does that make other sysadmins criminally liable for mistakes they might make unless they get explicit permission beforehand? That would create a hell of a problem.

If Thomas is found to have acted with authorization, every company will wonder if that gives their sysadmins carte blanche to ruin their systems with no legal comeback. That’s not going to sit very well in boardrooms.

Of course, one solution would be to have explicit, commonsense company policies about what sysadmins are allowed to do and what they are not allowed to do without additional permission.

Or perhaps the better solution is to follow an age-old piece of advice that company bosses never seem to grasp: don’t treat your employees like shit. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/02/23/michael_thomas_appeals_conviction/

Cloudbleed: Big web brands leaked crypto keys, personal secrets thanks to Cloudflare bug

Big-name websites leaked people’s private session keys and personal information into strangers’ browsers, due to a Cloudflare bug uncovered by Google security researchers.

Cloudflare helps companies spread their websites and online services across the internet. Due to a programming blunder, for several months Cloudflare’s systems slipped random chunks of server memory into webpages, under certain circumstances. That means if you visited a website powered by Cloudflare, you may have ended up getting chunks of someone else’s web traffic hidden in your browser page.

For example, Cloudflare hosts Uber, OK Cupid, and Fitbit. It was discovered that visiting any site hosted by Cloudflare would sometimes cough up sensitive information from strangers’ Uber, OK Cupid, and Fitbit sessions. Think of it as sitting down at a restaurant, supposedly at a clean table, and in addition to being handed a menu, you’re also handed the contents of the previous diner’s wallet or purse.

This leak was triggered when webpages had a particular combination of unbalanced HTML tags, which confused Cloudflare’s proxy servers and caused them to spit out data belonging to other people – even if that data was protected by HTTPS.

Leaked … some unlucky punter’s Fitbit session slips into a random visitor’s web browser (Source: Google Project Zero)

Normally, this injected information would have gone unnoticed, hidden away in the webpage source, but the leak was noticed by security researchers – and the escaped data made its way into the Google cache and the hands of other bots trawling the web.

Timeline

The blunder was first spotted by Tavis Ormandy, the British bug hunter at Google’s Project Zero security team, when he was working on a side project last week. He found large chunks of data including session and API keys, cookies and passwords in cached pages crawled by the Google search engine.

“The examples we’re finding are so bad, I cancelled some weekend plans to go into the office on Sunday to help build some tools to clean up,” he said today in an advisory explaining the issue.

“I’ve informed Cloudflare what I’m working on. I’m finding private messages from major dating sites, full messages from a well-known chat service, online password manager data, frames from adult video sites, hotel bookings. We’re talking full https requests, client IP addresses, full responses, cookies, passwords, keys, data, everything.”

Ormandy said that the Google team worked quickly to clear any private information and that Cloudflare assembled a team to deal with it. He provisionally identified the source of the leaks as Cloudflare’s ScrapeShield application, which is designed to stop bots copying information from websites wholesale, but it turns out the problems ran deeper than that.

On Thursday afternoon, Cloudflare published a highly detailed incident report into the issue: it happens that Cloudflare’s Email Obfuscation, Server-Side Excludes and Automatic HTTPS Rewrites functions were the culprits.

The problem occurred when the company decided to develop a new HTML parser for its edge servers. The parser was written using Ragel and turned into machine-generated C code. This code suffered from a buffer overrun vulnerability triggered by unbalanced HTML tags on pages. This is the broken pointer-checking source that is supposed to stop the parser from reading past the end of its buffer:

/* generated code. p = pointer, pe = end of buffer */
if ( ++p == pe )
    goto _test_eof;

What happens is that elsewhere p can become greater than pe, so the equality check never fires and the parser keeps reading past the end of the buffer into adjacent memory. That overrun is what produced the web session leaks described above.

“The root cause of the bug was that reaching the end of a buffer was checked using the equality operator and a pointer was able to step past the end of the buffer,” said Cloudflare’s head of engineering John Graham-Cumming in the biz’s incident report.

“Had the check been done using >= instead of == jumping over the buffer end would have been caught.”
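
As a small illustration of the flaw, in Python rather than C: an equality test on a cursor that can advance more than one step at a time never fires once the cursor overshoots the end. The buffer and step sizes here are invented for the example.

buf = b"<script "            # an unbalanced tag, 8 bytes long
p, pe = 0, len(buf)
steps = [3, 4, 3]            # token sizes a parser might consume in turn

for step in steps:
    p += step                # p goes 3 -> 7 -> 10, stepping over pe == 8
    if p == pe:              # the buggy check: it never fires here
        print("end of buffer reached cleanly")
        break
else:
    # A C parser would now keep reading at offset 10, past the end of the
    # buffer; testing p >= pe would have caught the overshoot at p == 10.
    print(f"cursor at {p}, buffer ends at {pe}: overran without noticing")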

According to Graham-Cumming, for data to leak, the final buffer had to finish with a malformed script or img tag, had to be less than 4KB in length (otherwise Nginx would crash), and the page had to pass through one of the three affected features.

The new cf-html parser was added to Cloudflare’s Automatic HTTPS Rewrites function on September 22, 2016, to its Server-Side Excludes app on January 30 this year, and partially added to the biz’s Email Obfuscation feature on February 13. It was only in the Email Obfuscation case that significant memory leakage appears to have happened, which tipped off Ormandy.

Cloudflare does have a kill switch for the more recent of its functions and shut down Email Obfuscation within 47 minutes of hearing from Ormandy. It did the same for Automatic HTTPS Rewrites a little over three hours later. Server-Side Excludes couldn’t be killed, but the company says it developed a patch within three hours.

Cloudflare says SAFE_CHAR logging shows that the period of greatest leakage occurred between February 13 and 18, and that even then only around 1 in every 3,300,000 HTTP requests through Cloudflare leaked data, across 3,438 unique domains. It says that it held off disclosing the issue until it was sure that search engines had cleared their caches.

In posts on Hacker News, Ormandy pointed out that the 3,438 figure only covers direct queries, and that any information that passed through Cloudflare’s hands was vulnerable to leakage.

He has also noted that the top award for Cloudflare’s bug bounty program is a t-shirt. Maybe the web giant will reconsider that in the future. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/02/24/cloudbleed_buffer_overflow_bug_spaffs_personal_data/

South Korea targeted by cyberspies (again). Kim, got something to say?

The South Korean public sector is once again in the firing line of a sophisticated – and likely government-backed – cyberattack.

The campaign was active between November 2016 and January 2017 and relied on exploiting vulnerabilities in a Korean language word processing program and a spoofed document from the Korean Ministry of Unification.

Security researchers at Cisco Talos discovered that the adversaries used a compromised Korean government website – kgls.or.kr (Korean Government Legal Service) – to download secondary payloads onto compromised machines.

“This attack is notable because it uses the proprietary format of the Hangul Word Processor, a regional word processor and popular alternative to Microsoft Office for South Korean users,” Cisco Talos reports.

“Due to these elements it’s likely that this campaign has been designed by a well-funded group in an attempt to gain a foothold into South Korean assets, which can be deemed extremely valuable.”

Many of these techniques fit the profile of campaigns previously associated with attacks by certain government groups. South Korean systems are routinely attacked by their neighbors in the North. The US National Security Agency also has a history of gaining access to networks in South Korea, primarily to spy on the Norks.

The spying occurred in the run-up to a controversial ballistic missile test by the North Koreans earlier this month and, perhaps of greater relevance, shortly before joint US–South Korean military exercises.

North Korea has repeatedly been blamed for hacks and malware-based attacks on its southern neighbors, most notoriously the so-called Dark Seoul attacks against banks and broadcasters of 2013. The NORKS were also blamed by US intel agencies for the infamous Sony Pictures hack of 2014. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/02/24/south_korea_cyberspied_upon/

Netflix Debuts ‘Stethoscope’ Open-Source Security Tool

Entertainment giant offers open-source app for security.

Entertainment giant Netflix has released a new Web application called Stethoscope designed to tackle security issues with mobile and desktop devices. 

Netflix, which developed the tool for its own users, is also offering the code on GitHub. Stethoscope, based on Python, gathers device details and provides the user with recommendations on how to secure their systems. Stethoscope supports LANDESK (for Windows), JAMF (for Macs) and Google MDM (for mobile devices).

It’s all about educating users and empowering them to secure their devices, according to Netflix. “By providing personalized, actionable information–and not relying on automatic enforcement–Stethoscope respects people’s time, attention, and autonomy, while improving our company’s security outcomes,” Netflix said in a blog post announcing the tool. “If you have similar values in your organization, we encourage you to give Stethoscope a try.”
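
Stethoscope’s real checks live in the GitHub repository. Purely as an illustration of the pattern the post describes (gather device facts, then recommend rather than enforce), here is a toy Python sketch for a Mac; fdesetup is a real macOS command, while the structure around it is an assumption for illustration.

import platform
import subprocess

def filevault_on():
    # "fdesetup status" prints "FileVault is On." when disk encryption is enabled.
    out = subprocess.run(["fdesetup", "status"], capture_output=True, text=True)
    return "FileVault is On" in out.stdout

def recommendations():
    recs = []
    if platform.system() == "Darwin" and not filevault_on():
        recs.append("Turn on FileVault full-disk encryption.")
    return recs

for rec in recommendations():
    print("Recommendation:", rec)   # advice for the user, not enforcement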

Click here for more details.


Article source: http://www.darkreading.com/endpoint/netflix-debuts-stethoscope-open-source-security-tool/d/d-id/1328249?_mc=RSS_DR_EDT

Google Researchers ‘Shatter’ SHA-1 Hash

‘Collision’ attack by researchers at CWI Institute and Google underscores need to retire SHA-1.

The aging cryptographic hash function SHA-1 (Secure Hash Algorithm 1) has suffered what some experts consider its final blow today as researchers from Google and the CWI Institute revealed that they had found a practical way to break SHA-1.

SHA-1 has long been considered obsolete, and most major browser vendors plan to stop accepting SHA-1-based certificates this year because its cryptographic strength is weaker than that of the newer SHA-2 and SHA-3 standards.

Google and CWI engineered a collision attack against SHA-1, demonstrating two PDF files with the same SHA-1 hash and different content as a proof-of-concept of their findings.
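
You can verify the collision yourself with Python’s standard hashlib module, assuming you have downloaded the two proof-of-concept PDFs (published as shattered-1.pdf and shattered-2.pdf) into the working directory:

import hashlib

def digests(path):
    with open(path, "rb") as f:
        data = f.read()
    return hashlib.sha1(data).hexdigest(), hashlib.sha256(data).hexdigest()

sha1_a, sha256_a = digests("shattered-1.pdf")
sha1_b, sha256_b = digests("shattered-2.pdf")

print("SHA-1 collides:   ", sha1_a == sha1_b)       # True for the PoC pair
print("SHA-256 separates:", sha256_a != sha256_b)   # True; migrate to SHA-256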

“For the tech community, our findings emphasize the necessity of sunsetting SHA-1 usage. Google has advocated the deprecation of SHA-1 for many years, particularly when it comes to signing TLS certificates. As early as 2014, the Chrome team announced that they would gradually phase out using SHA-1. We hope our practical attack on SHA-1 will cement that the protocol should no longer be considered secure,” Google said in a blog post today. “We hope that our practical attack against SHA-1 will finally convince the industry that it is urgent to move to safer alternatives such as SHA-256.”

See Google’s post here for more details on the PoC.

 


Article source: http://www.darkreading.com/attacks-breaches/google-researchers-shatter-sha-1-hash-/d/d-id/1328253?_mc=RSS_DR_EDT