STE WILLIAMS

When is a VPN not private? When you’re not paying for it

One of the world’s most popular virtual private networks (VPNs) is not so private, according to a complaint filed earlier this week with the US Federal Trade Commission (FTC).

The Center for Democracy &amp; Technology (CDT), a nonprofit that advocates for free speech and privacy, contends that the free Hotspot Shield VPN, a product of AnchorFree, Inc., is engaging in “unfair and deceptive trade practices” by promising “secure, private and anonymous access to the Internet” when it is actually tracking, collecting and sharing user data with third parties.

At one level, this probably sounds like a “no-such-thing-as-a-free-lunch” situation.

If a service is “free”, in the sense that you’re not paying money for it, then you’re paying in some other way. In the case of Hotspot Shield, that means being required to look at ads or having at least some of your personal data – location, browsing habits, purchasing history etc – collected and sold to third parties for marketing.

But that is at the heart of what has rapidly become a very public squabble between the CDT and AnchorFree, which says Hotspot Shield has more than 500m users. CDT contends that if users are “paying” for a service with their data, that ought to be made clearer to them.

After all, the whole idea of a VPN is there in the middle word of the name: “private”. They are promoted as a way to keep your identity and browsing history secret – from everybody.

And that is indeed what AnchorFree promotes. Among the screenshots included in the CDT complaint is a Hotspot Shield VPN description on the iTunes/iOS App Store, which says, “Stay private and anonymous online. Prevent anyone from tracking your IP address, identity and location from websites and online trackers. Enjoy complete anonymity.”

The company also declares there are “no logs kept. Hotspot Shield doesn’t track or keep any logs of its users and their activities. Your security and privacy are guaranteed.”

Except that’s not what’s in the fine print. AnchorFree’s actual privacy policy, among other things, says that:

  • IP address or unique device identifiers are not considered personal information – something that would probably be news to most of their customers.
  • The service gathers location data, in part for the optimization of ads.
  • The service uses cookies and automatically collected information to “provide customized advertisements, content and information.”
  • The service may “enter into agreements with unaffiliated entities which possess technology that allows us to customize the advertising and marketing messages users receive while using the service.”
  • The service will disclose personal information to law enforcement, not just in response to a warrant or subpoena, but to “otherwise cooperate” with law enforcement or government agencies.
  • The service doesn’t guarantee that it will create a VPN or use a proxy IP address on all websites.

Beyond that, the CDT complaint claims that Hotspot Shield has been found to be “actively injecting JavaScript codes using iframes for advertising and tracking purposes”.

AnchorFree CEO David Gorodyansky, who has called CDT’s claims “unfounded”, told Naked Security in an email exchange:

Privacy and user trust is the key to our business. We have never given up or sold any user data, and our perspective on user data protection is to not store any data related to user IP addresses or personally identifiable information.

Asked to clarify how that statement squared with language in the company’s privacy policy, Gorodyansky said that “we are in the process of updating our privacy policy to reflect the reality around how our systems work, and the reality is that many of the items [in the above list] are not actually accurate”.

How common such disconnects between promotion and written policy are is hard to estimate. The FTC, while it acknowledged receipt of the complaint from CDT about Hotspot Shield, declined to comment on whether there are any other investigations into allegedly false VPN claims. Joanna Gruenwald Henderson of the FTC said:

FTC Investigations are non-public, and we do not comment on investigations or even the existence of an investigation.

Part of the problem, of course, is that hardly anybody reads Terms of Service (ToS) agreements or privacy policies because they generally are pages upon pages of dense legalese. The Finnish security company F-Secure famously did an experiment in 2014, creating a spoof ToS that said those who wanted to use a “free” Wi-Fi service would have to give the company their first-born child. Plenty of people agreed – obviously not having read the fine print.

But another reality is that, as is the case with just about anything online, nothing is entirely bulletproof. Naked Security’s Paul Ducklin, who provided a tutorial with pros and cons of VPNs a couple of years ago, also reminded users in an earlier post that both security and privacy of VPNs come with some qualifications.

VPNs can be “excellent tools” to improve privacy, anonymity and secrecy, he wrote, but also noted that “the ‘private’ in ‘virtual private network’ means nothing more than that the VPN provides a connection that can be made to behave as though you had a direct hookup to your destination network.

“In other words, a VPN is implicitly private more in the sense that your family car is classed as a private/light goods vehicle than in the sense of private-as-in-privacy,” he wrote.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/TVX48B_uL3U/

Scanners to be patched after government warns of vulnerabilities

The US Department of Homeland Security (DHS) issued a security advisory on Thursday, warning that vulnerabilities found in medical scanners made by Siemens are trivial to exploit remotely.

Exploits are publicly available.

Siemens said on Monday that the company expects to update some medical scanners’ software by month’s end, according to Reuters.

The good news is that so far, Siemens hasn’t detected any sign of an attack – but it’s not twiddling its thumbs over this one. The company assigned a security severity rating of 9.8 out of 10, using the open industry standard CVSS (Common Vulnerability Scoring System) risk assessment system, according to the DHS security advisory.

Patients are apparently not at risk. From Siemens’ statement:

Based on the existing controls of the devices and use conditions, we believe the vulnerabilities do not result in any elevated patient risk. To date, there have been no reports of exploitation of the identified vulnerabilities on any system installation worldwide.

The exploits target known weaknesses in older Windows software. Those weaknesses were found in the Windows 7 versions of software running on Siemens’ PET (positron emission tomography), CT (computed tomography), and SPECT (single-photon emission computed tomography) scanners.

Successful exploitation of the flaws would enable “an attacker with a low skill” to remotely execute arbitrary code, according to ICS-CERT researchers.

PET scans show images at the cellular level. They rely on special dyes with radioactive tracers that enable doctors to check for disease in the body. The most common use of PET scans is to seek out cancer and the metabolization of cancerous cells, though they’re also used to image heart problems, brain disorders and problems in the central nervous system.

The scanners aren’t typically connected to the internet, and ICS-CERT says that anybody running vulnerable devices should keep them that way: off both the network and the internet.

ICS-CERT is also advising healthcare organizations to locate all medical and remote devices behind firewalls and to isolate the tools from the network. If remote access is required, researchers are advising that it be done securely, such as via a Virtual Private Network (VPN).
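ICS-CERT’s isolation advice amounts to a default-deny network policy around the device subnet. As a purely illustrative sketch – the subnet, the management-host address and the use of Linux iptables are all assumptions, not anything from the advisory – such a policy might look like:

```shell
# Hypothetical gateway firewall policy for an isolated scanner subnet.
# 10.20.30.0/24 (devices) and 10.0.0.5 (VPN/management host) are invented.

# Default-deny everything routed to or from the device subnet
iptables -P FORWARD DROP

# Permit the management host to initiate connections to the devices
iptables -A FORWARD -s 10.0.0.5 -d 10.20.30.0/24 \
         -m state --state NEW,ESTABLISHED -j ACCEPT

# Permit only return traffic from the devices back to that host
iptables -A FORWARD -s 10.20.30.0/24 -d 10.0.0.5 \
         -m state --state ESTABLISHED -j ACCEPT

# No other rule exists, so any path from the devices to the wider
# network or the internet is dropped by the default policy.
```

The point of the default-deny shape is that forgetting a rule fails closed: a device that needs a new connection path stays unreachable until someone deliberately opens it.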

It’s important to keep in mind that VPNs aren’t free from their own vulnerabilities, as ICS-CERT notes and as Naked Security’s Paul Ducklin has explained. Keep your VPN updated to the most current version available, the researchers note, and bear in mind that a VPN is “only as secure as the connected devices”.

Unfortunately, most healthcare organizations just don’t get security right, whether it’s aimed at stopping data breaches, stemming the onslaught of ransomware attacks or securing devices such as these vulnerable scanners. Case in point: in its 2016 Cyber Security Intelligence Index, IBM called 2015 “the year of the healthcare breach”.

Last year, Sophos conducted a survey of IT decision-makers across multiple industries in six countries, finding an alarming laxity in many organizations’ approach to data security.

The survey found that the healthcare sector had one of the lowest rates of data encryption, with only 31% of healthcare organizations reporting extensive use of encryption, while 20% said they don’t use encryption at all.

There doesn’t even have to be a malicious actor involved to bring about security lapses in healthcare. Beyond data breaches perpetrated by hackers, health data is frequently exposed through accidental loss, device theft and employee negligence.

These are the flaws that Siemens is now working on patching:

  • Code injection. An unauthenticated remote attacker could execute arbitrary code by sending specially crafted HTTP requests to the Microsoft web server (Port 80/TCP and Port 443/TCP) of affected devices.
  • Code injection. An unauthenticated remote attacker could execute arbitrary code by sending a specially crafted request to the HP Client Automation service on Port 3465/TCP of affected devices.
  • Memory buffer flaw. An unauthenticated remote attacker could execute arbitrary code by sending a specially crafted request to the HP Client Automation service of affected devices.
  • Privilege escalation. An unauthenticated remote attacker could execute arbitrary code by sending a specially crafted request to the HP Client Automation service of affected devices.
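From the network side, the service ports named above can at least be checked for exposure with a plain TCP probe. This is a reachability check only – it says nothing about whether a device is actually vulnerable or patched – and the scanner address in the usage comment is hypothetical:

```python
import socket

# Ports named in the advisory: Microsoft web server (80, 443) and
# the HP Client Automation service (3465).
ADVISORY_PORTS = (80, 443, 3465)

def is_port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds.

    A successful connect only shows the service is reachable from
    this network position, nothing more.
    """
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.settimeout(timeout)
        return sock.connect_ex((host, port)) == 0

# Example (hypothetical scanner address):
# exposed = [p for p in ADVISORY_PORTS if is_port_open("10.20.30.40", p)]
```

Any port that answers from outside the isolation boundary is a sign the firewalling above hasn’t been applied.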


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/faOZUrPoZmk/

Carmakers warned to focus on security of connected vehicles

As anyone who visits cybersecurity shows knows, vehicle hacking has become the presentation topic guaranteed to pack them in.

True to form, this year’s DEF CON Car Hacking Village stream featured Tencent Keen researchers successfully hacking Tesla’s Model X for the second year running.

Following up 2016’s demonstration of an attack in which the team disabled the car’s brakes via Wi-Fi, this year they remotely turned on the lights while opening and closing the doors, producing a slick video showing off their handiwork.

Reminding the world that this isn’t a harmless light show, they made the vehicle’s brakes engage while driving for good measure.

Tesla reportedly fixed the flaws behind the attack on the vehicle’s Controller Area Network (CAN bus) and Electronic Control Unit (ECU) within two weeks using an over-the-air (OTA) update, swift by security standards. It also pointed out that the attack was not easy to pull off.

But the mere fact that today’s cars now have this sort of vulnerability is a reminder of the accelerating digital complexity of a form of transport people still see as seats with an engine attached.

Aware that cybersecurity show techniques have a history of turning up in the real world, and with autonomous vehicles that depend on complex software control not that far off, governments and industry are starting to react.

This week the UK Department for Transport published a guidance document designed to set out first principles to carmakers, going beyond the industry’s tentative cyberthreat initiative, Auto-ISAC, which convened in 2015.

Inspired by a detailed policy document from the US National Highway Traffic Safety Administration (NHTSA) a year ago, as well as the proposed Autonomous and Electric Vehicle Bill set out in June, this sets out a set of first principles that range from the sensible to the stringent.

Foremost is a warning that carmakers must set up aftercare to cope with flaws and cybersecurity incidents as well as plan for the entire lifetime of a vehicle, not simply the profitable bit after it has been sold.

Manufacturers must further impose standards on their supply chains and make sure personal data gathered is managed in accordance with the law, including forthcoming revisions to data protection legislation.

All very worthy, but the biggest warning is simply whose job it is to make sure all this happens:

Personal accountability is held at the board level for product and system security (physical, personnel and cyber) and delegated appropriately and clearly throughout the organisation.

Message: blaming engineers further down the hierarchy won’t be good enough in future should the hackers come calling.

On a visit to Bristol to promote the guidelines, transport minister Lord Callanan likened autonomous vehicles to PCs, urging owners to “treat them as you would your computer”.

Owners should be careful about downloading apps to in-car systems, and make sure software is updated, he said. In fact, the flawed PC model, in which consumers make their own decisions about what to trust and not trust, thankfully looks extremely unlikely to carry over to cars.

As vehicles become more autonomous they will also, paradoxically, become heavily dependent on remote management and monitoring. We can only hope carmakers are doing their deep thinking about this now.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/U5TS243ohu8/

FBI’s spyware-laden video claims another scalp: Alleged sextortionist charged

The FBI’s preferred tool for unmasking Tor users has brought about another arrest: a suspected sextortionist who allegedly tricked young girls into sharing nude pics of themselves and then blackmailed his victims.

As we learned from previous investigations, the Feds have a network investigative technique (NIT) up their sleeve that can potentially identify folks using the anonymizing system Tor.

The NIT involves a specially crafted video file that, when downloaded and opened, causes the media player to ping an FBI-controlled server somewhere on the internet. If that surreptitious connection does not go through the Tor network, it reveals the user’s public IP address to the Feds. This information can be used to identify the person’s ISP and, with a subpoena, the subscriber’s identity, leading to their arrest.
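The unmasking logic itself is simple. The sketch below is a conceptual illustration of it, not the FBI’s actual code; the function name is invented and all the addresses come from documentation ranges:

```python
# Conceptual sketch of why the NIT works. If the media player's callback
# bypasses Tor, the server logs the user's real public IP; if it went
# through Tor, the server only ever sees a Tor exit node.
KNOWN_TOR_EXITS = {"198.51.100.7", "203.0.113.99"}  # invented sample set

def callback_reveals_user(observed_ip: str, tor_exits: set) -> bool:
    """True when the address logged by the callback server is not a
    known Tor exit node, i.e. the connection leaked the real IP."""
    return observed_ip not in tor_exits
```

This is also why the technique depends on the media player, not the Tor Browser, making the request: applications that aren’t configured to route through Tor connect directly.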

In this case, the tool was used against Buster Hernandez, 26, who was charged [PDF] on Friday with multiple counts of sexual exploitation of a child, threats to use an explosive device, and threats to injure. Hernandez, of Bakersfield, California, was allegedly running a five-year reign of terror by using Facebook to extort children to send him pictures of themselves naked.

“Terrorizing young victims through the use of social media and hiding behind the anonymity of the Internet will not be tolerated by this office,” said US Attorney Josh Minkler. “Those who think they can outwit law enforcement and are above being caught should think again. Mr Hernandez’s reign of terror is over.”

Using the name “Brian Kil,” Hernandez is accused of sending young Facebook users messages claiming he had compromising pictures of them and threatened to post them online unless the youngsters sent more nude snaps. He allegedly warned them that if they went to the police he would come after them – at one point threatening to blow up one victim’s school, prosecutors say.

In December 2015, the FBI were brought in after a year-long investigation by cops in Brownsburg, Indiana, where two of the victims lived. The police couldn’t work out who Kil really was because he was using Tor to cover his tracks online, thus successfully remaining anonymous. One victim had been terrorized by Kil for 16 months, it is claimed. Every time Facebook shut down his account, Kil would reappear with a new profile, we’re told.

When one of the girls finally refused to send any more pictures, Kil made threats against her school again via Facebook, saying: “I am coming for you. I will slaughter your entire class and save you for last.” He further made threats to law enforcement, declaring on the social network: “I will add a dozen dead police to my tally … Try me pigs, I will finish you off as well.”

The threats caused two schools to be closed for the day. Kil told a second victim to go to public meetings about the threats, and relay to him any leads that were reported regarding Kil’s identity. He also bragged that investigators were inept.

“Everyone please pray for the FBI. They are never solving this case lmao,” he wrote. “Can’t believe the FBI is still wasting there (sic) time on this. I’m above the law and always will be.”


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/08/09/fbis_spywareladen_videos_claim_another_scalp_as_suspect_sextortionist_charged/

How do you feel about getting on a plane with no pilot?

Those of us who follow tech news are aware that completely AI-driven cars are in our future. We should also be ready for completely AI-piloted aircraft. Are you?

An interesting survey by UBS just came out, and its findings are all over the media. Some 8,000 air passengers were surveyed; here are the results that really got my attention:

  • 54% of respondents said they were unlikely to take a flight that didn’t have a human pilot

  • 17% of respondents were confident about taking a flight without a human pilot

  • Completely AI-piloted aircraft could save the airline industry $35bn per year, $31bn of which would come from reducing the cost of highly skilled employees

But autopilot is nothing new – and we’re happy enough with that. It’s a feature that has existed in aircraft in some form or another since June 1914. That’s only a little over a decade after the Wright Brothers flew their first experimental aircraft in December 1903. In the century or so since, autopilot in aircraft has become increasingly functional and sophisticated.

Are people correct in their reluctance to trust completely computer-operated aircraft?

Research and development on driverless cars has gone on for many years now. America’s National Highway Traffic Safety Administration estimates that human drivers log an average of 100m miles (160m kilometres) of driving per fatality. By May 2016, Tesla’s Autopilot feature, which is only semi-autonomous, had been driven over that same distance.

Meanwhile, Google’s completely autonomous cars had logged 1.6m miles (2.5m kilometres) as of April 2016 – so just imagine what the collective odometer on all of Tesla and Google’s vehicle testing looks like by now. There had been a couple of collisions involving Google’s driverless cars by April 2016, but they didn’t even cause human injury, let alone death.
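The comparison behind those figures is a simple rate calculation. A minimal sketch, assuming the article’s ~100m-miles-per-fatality estimate for human drivers:

```python
# Back-of-envelope fatality-rate arithmetic using the article's figures.
NHTSA_MILES_PER_FATALITY = 100_000_000  # ~100m miles of human driving per death

def fatalities_expected(miles_driven: float,
                        miles_per_fatality: float = NHTSA_MILES_PER_FATALITY) -> float:
    """Expected fatalities over a given distance at the human-driver rate."""
    return miles_driven / miles_per_fatality

# At the human rate, Google's 1.6m autonomous miles (as of April 2016)
# would have predicted well under one fatality:
expected = fatalities_expected(1_600_000)  # 0.016
```

The catch, of course, is that with expected values this far below one, neither fleet had yet driven enough miles for a zero-fatality record to prove much either way.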

Now let’s get back to aircraft. Commercial aircraft can already take off, cruise, and land with a computer doing all of the thinking. That makes it much safer for passenger jets in situations such as landing in foggy conditions.

Air France flight 447 became a disaster when the Airbus A330 aircraft’s autopilot failed. But it wasn’t the autopilot failure itself that crashed the plane and killed all 216 passengers and 12 crew: it was the inability of the three human pilots to control the plane after the autopilot disengaged. All of those deaths were ultimately caused by human error.

As for the UBS survey on how people feel about completely autonomous aircraft: human beings are notoriously bad at estimating real risk and danger. An article in Wired considers research done by psychologists in that area.

A lot of the current research into the psychology of risk are examples of these newer parts of the brain getting things wrong.

And it’s not just risks. People are not computers. We don’t evaluate security trade-offs mathematically, by examining the relative probabilities of different events. Instead, we have shortcuts, rules of thumb, stereotypes and biases – generally known as ‘heuristics’. These heuristics affect how we think about risks, how we evaluate the probability of future events, how we consider costs, and how we make trade-offs.

That’s borne out by the figures for traffic injuries in the US after the terrorist attacks of September 2001, as researchers Wolfgang Gaissmaier and Gerd Gigerenzer from the Harding Center for Risk Literacy at the Max Planck Institute for Human Development in Berlin reported 11 years later. In their conclusions, they stated:

The fear of terror attacks may have compelled Americans to drive instead of fly. They were thus exposed to the heightened risk of injury and death posed by driving.

So we’re not very good at assessing risk – and apparently we’re even prepared to do riskier things in response to a perceived danger.

But completely AI-piloted aircraft could well be part of our future, regardless of how we feel about it. Airbus has already successfully completed trials of its experimental, completely autonomous SAGITTA aircraft.

The UBS survey of how human beings feel about completely computer piloted aircraft does illustrate that most are wary. But there’s a silver lining. Younger and more highly educated respondents were more likely to want to fly with “pilotless” planes, so it’s possible that the wider public will come to accept that, too, as time goes on.

We’ll probably have some time to psychologically adjust, however. According to UBS, the implementation of “pilotless” aircraft will probably be gradual.

In commercial flights, if the move from two to zero pilots may be too abrupt over the next 10 to 20 years, we could see first a move to having just one pilot in the cockpit and one remotely located on the ground, particularly on flights below six to seven hours. Indeed, today’s drones are controlled by remotely based operators.

We’re already accepting autonomous cars – one day the potentially unnerving but safer pilotless aircraft could well be taking us on our next trip.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/rv0E7t9mKpY/

News in brief: Google fires memo writer; drones could be shot down; EU plans giant Sahara solar plant

Your daily round-up of some of the other stories in the news

Google fires writer of ‘sexist’ memo

Google has fired a staffer who circulated a controversial 10-page memo criticising the tech giant’s diversity practices and claiming that biological differences between men and women are the reason for the gender imbalance at Google.

CEO Sundar Pichai said in a statement: “To suggest a group of our colleagues have traits that make them less biologically suited to that work is offensive and not OK. It is contrary to our basic values and our Code of Conduct, which expects ‘each Googler to do their utmost to create a workplace culture that is free of harassment, intimidation, bias and unlawful discrimination’.”

The memo sparked a furious backlash on social media, with one former senior Googler, Yonatan Zunger, pointing out that in sharing the memo, its author had “just created a textbook hostile workplace environment”.

Scientists pointed out that the memo’s author had also built his arguments on discredited theories, with Angela Saini writing in the Guardian that “psychological studies show that there are only the tiniest gaps if any, between the sexes, including areas such as mathematical ability and verbal fluency”, and adding: “The science cited in the Google engineer’s memo is flawed.”

Pentagon warns it could shoot down drones

US pilots of consumer drones, be warned: the Department of Defense has said that US military bases have been given permission to shoot down your drone if it flies too close or overhead.

It’s already illegal to fly a consumer drone within 400ft of a US military facility, and now you not only risk a fine and a possible criminal charge, but also seeing your drone shot out of the sky.

Jeff Davis at the Pentagon told reporters on Monday that the military “retain the right of self-defence. And when it comes to … drones operating over military installations, this new guidance does afford us the ability to take action to stop those threats.”

“Action” includes tracking, disabling and destroying drones, he added. Davis pointed out that guidance on proper use of drones is available on the FAA’s website.

EU plans giant solar plant in the Sahara

Writing this column in grey, chilly London, we find the thought of solar energy powering Europe hard to credit – but the EU is looking to build a giant solar farm in the Tunisian Sahara desert that could one day provide power for Europe, according to Popular Mechanics.

The planned 4.5GW plant, which TuNur Ltd has applied to build in south-west Tunisia, could help Europe reduce its dependence on fossil fuels by 2020.

The plans are for the plant to collect solar energy via mirrors that reflect sunlight on to a central collector, from where it would be transmitted to European countries via undersea cables connecting it to the grid at points in Italy, France and Malta.

Daniel Rich of TuNur told Digital Trends: “This will help Europe meet its Paris Climate Agreement emissions reduction commitments quickly and cost-effectively.”

Catch up with all of today’s stories on Naked Security


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/n93DM9MaRg8/

Marcus Hutchins free for now as infosec world rallies around suspected banking malware dev

British security researcher Marcus Hutchins was released on Monday from a Nevada jail after posting bail. He is now on his way to Milwaukee to face charges of selling malware online.

Hutchins, 23, who shot to fame after finding a way to kill off the WannaCry ransomware outbreak that crippled parts of Britain’s National Health Service, was arrested last week just before he boarded a plane home to the UK from the US. He had been visiting Las Vegas for the BSides, Black Hat and DEF CON hacking conference season.

A Sin City court granted Hutchins bail of $30,000 on Friday. However, the decision came at 3.30pm local time, and his attorney wasn’t able to make it to the bail office to pay the money before it closed at 4pm. As a result, Hutchins spent the weekend in jail – but has now posted bail.

The FBI has accused Hutchins of writing, updating and selling the Kronos banking trojan between 2014 and 2015. He and an unnamed associate allegedly made a few thousand bucks selling the malware-as-a-service on dark web markets.

Hutchins was nabbed by the Feds on Wednesday, and was held for more than 24 hours at an FBI field office without access to a lawyer or any contact with his family before the Department of Justice announced he’d been arrested. In court, the FBI claimed that, during interrogation without an attorney present, Hutchins confessed to writing some malware code. Indeed, as a computer security expert, Hutchins, aka MalwareTechBlog on Twitter, has published harmless proof-of-concept malware source code on his website for research purposes.

The Brit is now making his way to Milwaukee, where the indictment that led to his arrest was filed on July 12. He is scheduled for his next court appearance on Monday, August 14, and is under onerous bail conditions – no internet access and being forced to wear a GPS tag and surrender his passport.

Hutchins denies any wrongdoing. He faces a possible 40 years in prison if found guilty.

“Cybercrime remains a top priority for the FBI,” said special agent in charge Justin Tolomeo. “Cybercriminals cost our economy billions in losses each year. The FBI will continue to work with our partners, both domestic and international, to bring offenders to justice.”

Cops have screwed the infosec pooch

The technology community has rallied around Hutchins – a fundraising webpage has already gathered more than $12,000 in contributions to help foot his legal fees. Hutchins is a widely respected member of the UK security community and his arrest has sparked shock and a lot of anger.

“I am withdrawing from dealing with the NCSC [UK National Cyber Security Centre] and sharing all threat intelligence data and new techniques until this situation is resolved,” said fellow UK researcher Kevin Beaumont.

“This includes through Cyber Security Information Sharing Partnership. Many of us in the cybersecurity community openly and privately share information about new methods of attacks to ensure the security for all, and I do not wish to place myself in danger.”

Beaumont is not alone in this. Several researchers The Register has spoken to are also putting a hold on cooperating with law enforcement for the time being, while they see how this case develops.

The FBI’s heavy-handed approach, and the continuing impasse over the Wassenaar Arrangement, have made researchers extremely leery about having anything to do with law enforcement, wrecking a concerted campaign by the authorities to woo more hackers into helping them keep the internet safer for all. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/08/08/marcus_hutchins_free_for_now/

It’s 2017 and Hyper-V can be pwned by a guest app, Windows by a search query, Office by…

Patch Tuesday Microsoft has released the August edition of its Patch Tuesday update to address security holes in multiple products. Folks are urged to install the fixes as soon as possible before they are exploited.

Among the flaws are remote code execution holes in Windows, Internet Explorer/Edge and Flash Player, plus a guest escape in Hyper-V. Of the 48 patches issued by Redmond, 25 are rated as critical security risks.

Those 25 critical issues include a remote code execution vulnerability for all supported versions of Windows (CVE-2017-8620) for which an exploit is already public, we’re told. That flaw allows an attacker to take over a target machine on the network via a malicious Windows Search or SMB query.

Here’s Redmond’s description of the flaw:

A remote code execution vulnerability exists when Windows Search handles objects in memory. An attacker who successfully exploited this vulnerability could take control of the affected system. An attacker could then install programs; view, change, or delete data; or create new accounts with full user rights.

To exploit the vulnerability, the attacker could send specially crafted messages to the Windows Search service. An attacker with access to a target computer could exploit this vulnerability to elevate privileges and take control of the computer.

Additionally, in an enterprise scenario, a remote unauthenticated attacker could remotely trigger the vulnerability through an SMB connection and then take control of a target computer.

The security update addresses the vulnerability by correcting how Windows Search handles objects in memory.

Essentially, you can poke machines, from desktops to servers, running SMB for file sharing or Windows Search Service, and hijack them to install spyware and other nasties. Get patching.

“As with the others, this vulnerability can be exploited remotely via SMB to take complete control of a system, and can impact both servers and workstations,” said Jimmy Graham, product management director at security firm Qualys.

“While an exploit against this vulnerability can leverage SMB as an attack vector, this is not a vulnerability in SMB itself, and is not related to the recent SMB vulnerabilities leveraged by EternalBlue, WannaCry, and Petya.”

The vulnerability is separate from the seemingly still-unpatched SMBLoris flaw showcased at DEF CON last month.

Also patched are a hefty nine critical memory corruption errors in the Microsoft scripting engine that handles JavaScript. Those flaws allow a specially crafted webpage or Office document to remotely execute arbitrary code by way of malformed JavaScript embedded in the file or booby-trapped advertisements fetched by the page.

The remaining issues include 21 bugs rated as “important” by Microsoft – a designation Redmond often uses to downplay troubling bugs – as well as cross-site scripting and information disclosure flaws. Microsoft’s argument is that those bugs are less serious because they can’t be exploited without a victim clicking on a link or file or similar – though in the wild the distinction is of little importance, seeing as how clicking on things is how we operate computers.

As is often the case, this month’s “important” patches are actually rather serious. Among them is a guest escape flaw in Hyper-V that allows applications in virtual machines to escape the hypervisor’s walled sandbox to the underlying host (CVE-2017-8664), a game-over scenario for virtualized servers. It means someone logged into a VM on Hyper-V can run arbitrary evil code on the host server. Here’s Redmond’s description of the flaw:

A remote code execution vulnerability exists when Windows Hyper-V on a host server fails to properly validate input from an authenticated user on a guest operating system. To exploit the vulnerability, an attacker could run a specially crafted application on a guest operating system that could cause the Hyper-V host operating system to execute arbitrary code. An attacker who successfully exploited the vulnerability could execute arbitrary code on the host operating system.

The security update addresses the vulnerability by correcting how Hyper-V validates guest operating system user input.

Also addressed are two programming blunders in the Windows Subsystem for Linux (CVE-2017-8627) and the Windows Error Reporting system (CVE-2017-8633) that already have exploits published, and could allow for denial of service and information disclosure, respectively.

Other bugs include a cross-site scripting flaw in SharePoint (CVE-2017-8654), an information disclosure flaw in SQL Server (CVE-2017-8516) and a pair of information disclosure vulnerabilities (CVE-2017-8652 and CVE-2017-8659) in the Edge browser.

As researcher Dustin Childs of Zero Day Initiative notes, the scripting engine patches should be a priority for testing and deployment due to their accessibility via browsers, as should the flaws that are already being targeted or have published exploits.

“Obviously, the patches impacting Edge, IE, and SharePoint should top deployment lists due to the ubiquitous nature of the programs,” Childs said. “Similar to the previous month, there are many Edge and IE cases quite simply titled ‘Scripting Engine Memory Corruption Vulnerability’.”

Also addressed are a number of stability issues in Windows 10, including a crash error in AppLocker, bugs in mobile device management, and a bug in NetBIOS.

Meanwhile, Adobe has issued the usual set of patches for Flash Player on Windows, OS X, and Linux. Edge and Chrome users will get the update automatically, as will those running newer versions (IE 11 and later) of Internet Explorer.

This month, the internet’s screen door has been outfitted with an update for a critical type confusion bug (CVE-2017-3106) that allows remote code execution, and an information disclosure flaw (CVE-2017-3085) allowing an attacker to bypass security controls.

Adobe is also pushing out a patch to address a hefty 67 CVE-listed flaws in the hackers’ other favorite target: PDF readers. The Acrobat and Reader update covers flaws in both the Windows and OS X versions of Adobe’s software. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/08/08/august_patch_tuesday/

Parents sue Disney over breaching privacy rules in kids’ apps

Welcome to Disney Princess Palace Pets: according to its Google Play store listing, the mobile game is an “enchanted world” where you can meet Pumpkin, Teacup, Petit and other “adorable pets” that you can e-love and e-groom.

According to a federal lawsuit filed on Thursday in California, it’s also the place where the Walt Disney Co. secretly collects the personal data of children.

In fact, it’s one of 43 Disney apps that embed tracking software that can then “exfiltrate that information off the smart device for advertising and other commercial purposes,” according to the class action suit (PDF).

Named plaintiff Amanda Rushing is suing on behalf of herself and a class of all parents whose kids have played the Disney-branded mobile games, which the lawsuit claims run afoul of the Children’s Online Privacy Protection Act (COPPA).

The lawsuit is against Disney and three makers of software tools embedded in the games that collect and then share the kids’ personally identifying information (PII) in order to “facilitate subsequent behavioral advertising”.

The suit claims that the Disney apps for both iOS and Android fail to ask for parental permission before the apps, using third-party tools, assign unique identifiers to users, and then use those identifiers to track users’ location, as well as what they do in the game and across multiple apps, platforms and devices.

They don’t need the kids’ names or email addresses to do that: they just need to follow them around online to build a “robust online profile”, the suit says:

The ability to serve behavioral advertisements to a specific user no longer turns upon obtaining the kinds of data with which most consumers are familiar (email addresses, etc), but instead on the surreptitious collection of persistent identifiers, which are used in conjunction with other data points to build robust online profiles.

…which is exactly what COPPA was designed to prevent. Congress enacted the legislation in 1998 with the express goal of protecting children’s privacy while they’re online. COPPA prohibits developers of child-focused apps, or any third parties working with such app developers, from obtaining the personal information of children 12 and younger without first obtaining verifiable parental consent.

App developers don’t build their own ad-tracking code; rather, they typically add a third party’s toolkit or library to their code to create, collect and track persistent identifiers that will then be sold to an advertising network or data aggregator.
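The mechanics are simple enough to sketch. The snippet below is a minimal, hypothetical illustration – the file name and function are invented for this example, and real SDKs use sturdier identifiers such as advertising IDs or device fingerprints – of how a persistent identifier works: generated once, stored on the device, and returned unchanged on every later call, so activity can be linked across sessions without ever knowing a name or email address.

```python
import json
import os
import uuid

ID_FILE = "device_id.json"  # hypothetical on-device storage location


def get_persistent_id() -> str:
    """Return a stable identifier: created once, then reused forever.

    An ad SDK embedding logic like this can tie every session, location
    ping and in-app event to the same profile without any traditional PII.
    """
    if os.path.exists(ID_FILE):
        with open(ID_FILE) as f:
            return json.load(f)["id"]
    new_id = str(uuid.uuid4())
    with open(ID_FILE, "w") as f:
        json.dump({"id": new_id}, f)
    return new_id
```

The value survives app restarts, which is the whole point: one stable key that every subsequent data point can be joined against.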

Other developers will sell additional data on the same child to an advertising network, which will then have that much more data on the child and be able to craft targeted ads ever more precisely. Data on the child can be bought and sold as multiple ad networks swap databases, creating what the suit describes as …

… an increasingly sophisticated and merchantable profile of how, when, and why a child uses her mobile device, along with all of the demographic and psychographic inferences that can be drawn therefrom.

This is far from the first time that Disney’s been sued over alleged COPPA violations. In 2011, the Federal Trade Commission (FTC) fined a Disney subsidiary, Playdom, $3m after finding that it registered about 1.2m users, most of them children, for online games. The FTC’s lawsuit said Disney collected children’s email addresses and ages, and allowed them to volunteer information such as their full names, instant messenger handles and physical locations as part of their online profiles.

In 2014, the Center for Digital Democracy (CDD), a privacy watchdog, asked the FTC to look into Disney’s MarvelKids.com website, which contained a privacy policy in which Disney acknowledged that it collected personal information from children, including persistent identifiers, for reasons that were allegedly impermissible under COPPA.

It also appeared that Disney was permitting third-party advertising SDKs – including two SDK developers named in the current suit – to collect and use children’s persistent identifiers. The CDD concluded that MarvelKids.com was violating COPPA and the same was likely true “on Disney’s other child-directed websites”.

This is the full list of games named in the complaint filed last week:

  • AvengersNet
  • Beauty and the Beast
  • Perfect Match
  • Cars Lightning League
  • Club Penguin Island
  • Color by Disney
  • Disney Color and Play
  • Disney Crossy Road
  • Disney Dream Treats
  • Disney Emoji Blitz
  • Disney Gif
  • Disney Jigsaw Puzzle!
  • Disney LOL
  • Disney Princess: Story Theater
  • Disney Store Become
  • Disney Story Central
  • Disney’s Magic Timer by Oral-B
  • Disney Princess: Charmed Adventures
  • Dodo Pop
  • Disney Build It Frozen
  • DuckTales: Remastered
  • Frozen Free Fall
  • Frozen Free Fall: Icy Shot
  • Good Dinosaur Storybook Deluxe
  • Inside Out Thought Bubbles
  • Maleficent Free Fall
  • Miles from Tomorrowland: Missions
  • Moana Island Life
  • Olaf’s Adventures
  • Palace Pets in Whisker Haven
  • Sofia the First Color and Play
  • Sofia the First Secret Library
  • Star Wars: Puzzle Droids™
  • Star Wars™: Commander
  • Temple Run: Oz
  • Temple Run: Brave
  • The Lion Guard
  • Toy Story: Story Theater
  • Where’s My Water?
  • Where’s My Mickey?
  • Where’s My Water? 2
  • Where’s My Water? Lite/Where’s My Water? Free
  • Zootopia Crime Files: Hidden Object


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/wHz7USsSmy4/

High hopes for ‘more secure’ forked version of Bitcoin

On 1 August 2017, the Bitcoin blockchain was officially hard-forked, creating a new version of the Bitcoin (BTC) currency, now called Bitcoin Cash (BCC, or BCH – there’s no consensus on that yet).

So what exactly has happened, and what are the security implications?

The split

BTC and BCC are now separate cryptocurrencies: although they share a history, they are not compatible with one another going forward, and they are listed separately on exchanges.

The new forked BCC chain contains all the transaction history of Bitcoin, immutably recorded, but as new transactions in BCC are added it will diverge from the original blockchain. However, because the two blockchains are identical up to the hard fork, all holders of BTC now also hold the same value in Bitcoin Cash.

In a statement the BCC founders said:

All Bitcoin holders as of block 478558 are now owners of Bitcoin Cash. Bitcoin Cash brings sound money to the world. Merchants and users are empowered with low fees and reliable confirmations. The future shines brightly with unrestricted growth, global adoption, permissionless innovation, and decentralized development.

The reason behind the fork is a fundamental disagreement in the Bitcoin community over how to solve a major problem – a gradual slowing of Bitcoin’s transaction confirmations, caused by network congestion as its popularity has grown.

This in turn has made transaction fees rocket, dissuading consumers.

After some years of debate in the Bitcoin developer community, recent agreement had been reached to implement Segwit2x, which would double the size of Bitcoin blocks from the current one megabyte to two megabytes.

However, at the “Future of Bitcoin” conference, developer Amaury Séchet revealed the “Bitcoin ABC” (Adjustable Blocksize Cap) project, under which blocks have a maximum capacity of eight megabytes, and announced the hard fork date, 1 August.

Bitcoin security

The first security threat to consumers comes from the technical part of the user-activated hard fork, or UAHF – all BTC holders who had control of their private keys at the time of the split gained an equal amount of BCC.

Many investors using third-party exchanges or unsupported software wallets did not have control of their private keys, so the third parties holding those keys received the new currency instead.

Some plan to credit their clients, some won’t.

Many investors scrambled to transfer their holdings to supported wallets in the hours before the hard fork. Hardware wallet users will need to wait for their provider to announce support, or use their recovery seed in a software wallet that does support BCC, in order to split out their coins.

The opportunities for error, phishing attacks or other malicious interventions here are numerous, especially given that the majority of successful malicious attacks on Bitcoin have focused on gaining control of private keys, or hacking exchanges, rather than attacking the currency directly.

BCC is technically very similar to BTC, with the addition of larger blocksizes and transaction replay protection, the latter in the shape of a new way of signing transactions.

Bitcoin Cash transactions use a new flag, SIGHASH_FORKID, which is non-standard on the BTC blockchain. This prevents Bitcoin Cash transactions from being replayed on the Bitcoin blockchain and vice versa – a crucial element of the ecosystem in the future.

The new sighash also brings additional benefits such as input value signing for improved hardware wallet security, and elimination of the quadratic hashing problem.
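The effect of the new flag can be sketched in a few lines. This is a toy illustration, not the real Bitcoin Cash sighash algorithm (which follows a BIP143-style scheme over many more transaction fields): mixing a fork-specific value into the double-SHA256 preimage means the two chains produce different digests for the same transaction, so a signature valid on one chain fails validation on the other.

```python
import hashlib

SIGHASH_FORKID = 0x40  # the flag value used by Bitcoin Cash


def toy_sighash(tx_bytes: bytes, fork_flag: int) -> bytes:
    """Double-SHA256 over the transaction plus a fork-specific flag.

    Purely illustrative; the real preimage includes version, inputs,
    outputs, amounts and more.
    """
    preimage = tx_bytes + fork_flag.to_bytes(4, "little")
    return hashlib.sha256(hashlib.sha256(preimage).digest()).digest()


tx = b"example serialized transaction"
btc_digest = toy_sighash(tx, 0x00)            # legacy signing, no fork id
bch_digest = toy_sighash(tx, SIGHASH_FORKID)  # fork id mixed in

# Different digests mean a signature over one is invalid on the other chain
assert btc_digest != bch_digest
```

Because each chain's nodes demand their own digest, a transaction broadcast on one network simply fails signature checks on the other – which is the replay protection in a nutshell.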

In short, it’s arguably more secure in theory than BTC, and it aims to be cheaper and faster in use to boot.

BCC does face one major threat, however: the majority attack, where a single entity gains more than 51% of the network’s processing power. The issue has been a live one for the BTC network for years, but with comparative hashrates of around 6.3 exahashes per second for BTC and 68 petahashes per second for BCC, the BCC network represents a far easier target.
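The gap between those two figures is worth making concrete. A quick back-of-the-envelope calculation using the hashrates quoted above shows how much less hashing power a majority attacker needs against the smaller chain:

```python
# Hashrates as quoted in the article (hashes per second)
BTC_HASHRATE = 6.3e18  # ~6.3 exahashes/sec
BCC_HASHRATE = 68e15   # ~68 petahashes/sec

# Hashrate needed to control a majority (>51%) of each network
btc_majority = BTC_HASHRATE * 0.51
bcc_majority = BCC_HASHRATE * 0.51

ratio = BTC_HASHRATE / BCC_HASHRATE
print(f"The BTC network is roughly {ratio:.0f}x larger than BCC,")
print(f"so a majority attacker needs ~{bcc_majority:.2e} H/s against BCC "
      f"versus ~{btc_majority:.2e} H/s against BTC.")
```

On these numbers the BTC network is roughly 93 times larger, so hardware that would be a rounding error on Bitcoin could dominate Bitcoin Cash.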

One common enemy both blockchains share is the disruption attack, where a majority attacker leverages network disruption to split the network, lowering the barriers to success.

Network partitioning and network delay attacks are both threats and, according to a recent research paper, easier than one might assume for a supposedly decentralised network – with 20% of Bitcoin nodes hosted in fewer than 100 IP prefixes.

Larger blocks may be easier to delay or throttle, although even the largest 8MB block is small potatoes in current internet traffic terms, where streaming gigabytes of Netflix video is second nature in millions of homes and businesses.

So will this smaller, less powerful network processing larger blocks of transactions prove to be more or less secure from attack than its older sibling? Only time will tell, in one of the largest and most visible cryptocurrency trials ever.

It is certain that the politics of Bitcoin will continue, however, as a pointed tweet from George Kikvadze of BitFury Group demonstrated.

Early indications are that the technical side of the split went as planned, and BCC/BCH performed well last week, hitting more than $767, though it has since fallen back to around $325. However, history is no guarantee of future performance – or indeed of security.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/6Fh0UPsud_c/