And you thought the cops were bad… Civil rights group warns of facial recog ‘epidemic’ across UK private sites

Facial recognition is being extensively deployed on privately owned sites across the UK, according to an investigation by civil liberties group Big Brother Watch.

It found an “epidemic” of the controversial technology across major property developers, shopping centres, museums, conference centres and casinos in the UK.

The investigation uncovered live facial recognition in Sheffield’s major shopping centre Meadowhall.

Site owner British Land said: “We do not operate facial recognition at any of our assets. However, over a year ago we conducted a short trial at Meadowhall, in conjunction with the police, and all data was deleted immediately after the trial.”

The investigation also revealed that Liverpool’s World Museum scanned visitors with facial recognition surveillance during its exhibition, “China’s First Emperor and the Terracotta Warriors” in 2018.

The museum’s operator, National Museums Liverpool, said this had been done because there had been a “heightened security risk” at the time. It said it had sought “advice from Merseyside Police and local counter-terrorism advisors” and that use of the technology “was clearly communicated in signage around the venue”.

A spokesperson added: “World Museum did not receive any complaints and it is no longer in use. Any use of similar technology in the future would be in accordance with National Museums Liverpool’s standard operating procedures and with good practice guidance issued by the Information Commissioner’s Office.”

Big Brother Watch said it also found the Millennium Point conference centre in Birmingham was using facial-recognition surveillance “at the request of law enforcement”. The privacy policy on Millennium Point’s website confirms the venue does “sometimes use facial recognition software at the request of law enforcement authorities”. Millennium Point has not responded to a request for further comment.

Earlier this week it emerged the privately owned King’s Cross estate in London was using facial recognition, and Canary Wharf is considering following suit.

Information Commissioner Elizabeth Denham has since launched an investigation, saying she remains “deeply concerned about the growing use of facial recognition technology in public spaces, not only by law enforcement agencies but also increasingly by the private sector”.

The Metropolitan Police’s use of the tech was recently slammed as highly inaccurate and “unlawful”, according to an independent report by researchers from the University of Essex.

Silkie Carlo, director of Big Brother Watch, said: “There is an epidemic of facial recognition in the UK.

“The collusion between police and private companies in building these surveillance nets around popular spaces is deeply disturbing. Facial recognition is the perfect tool of oppression and the widespread use we’ve found indicates we’re facing a privacy emergency.

“We now know that many millions of innocent people will have had their faces scanned with this surveillance without knowing about it, whether by police or by private companies.

“The idea of a British museum secretly scanning the faces of children visiting an exhibition on the first emperor of China is chilling. There is a dark irony that this authoritarian surveillance tool is rarely seen outside of China.”

Carlo urged Parliament to follow in the footsteps of legislators in the US and “ban this authoritarian surveillance from public spaces”. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/08/16/facial_recognition_epidemic_across_private_sites/

Top tip: Don’t upload your confidential biz files to free malware-scanning websites – everything is public

Companies are inadvertently leaving confidential files on the internet for anyone to download – after uploading the documents to malware-scanning websites that make everything public.

These file-probing websites open submitted documents in secure sandboxes to detect any malicious behavior. Businesses forward email attachments and other data to these sites to check whether they are booby-trapped with exploits and malware, not knowing that the sandbox sites publish a feed of submitted documents.

White-hats at infosec outfit Cyjax today raised the alarm that IT staff, security researchers, and other folk who submit attachments to free malware-scanning services are often unaware the files become viewable to everyone.

“These services allow anyone to upload a file and then generate a report about what happens when the file is opened; they then give an indication as to whether the file is malicious or benign,” Cyjax’s Cylab team explained.

“The services chosen all have public feeds and do not require payment in order to download or view the public submissions.”

By passively observing three such services over the course of three days earlier this month, Cylab hackers were able to collect more than 200 documents, mostly things like purchase orders and invoices. In some cases, they were also able to spot more sensitive information – think legal paperwork, insurance forms, and government documents that contained personal information.

“The volume of sensitive documents collected in only three days was staggering,” the team noted. “In a month, a threat actor would have enough data to target multiple industries and steal the identities of multiple victims.”

Even mundane files, like purchase orders, could reveal enough of a company’s inner workings to give an identity thief or hacker the reconnaissance needed to carry out a targeted attack.

“By examining the invoices, we were able to determine who was using the software, as well as the contact details of those responsible for purchasing in each organisation,” the Cylab report explained.

“This is extremely useful information for a threat actor conducting a spear phishing or BEC [business email compromise] fraud campaign.”

The Cylab team noted that in every case where the uploader of the file could be reached, the organization had no idea their documents were open to any and all. Some panicked at the news, and others contacted the sandbox site to get the files pulled.

The conclusion of the report is pretty straightforward: users and their employers seem to have no idea that these “sandbox” sites are exposing their data.
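
By way of illustration only, a lower-risk first step is to query a scanning service by file hash rather than uploading the document itself. The sketch below assumes the VirusTotal v3 files endpoint and an API key in a VT_API_KEY environment variable; a hash lookup only matches files the service has already seen, but it never pushes the document’s contents into a public feed.

```python
import hashlib
import os
import sys

import requests

# Hypothetical defensive check: look a file up by hash instead of uploading it.
# Assumes the VirusTotal v3 "files" endpoint and an API key in VT_API_KEY.
VT_URL = "https://www.virustotal.com/api/v3/files/{}"

def sha256_of(path: str) -> str:
    digest = hashlib.sha256()
    with open(path, "rb") as handle:
        for chunk in iter(lambda: handle.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def lookup(path: str) -> None:
    response = requests.get(
        VT_URL.format(sha256_of(path)),
        headers={"x-apikey": os.environ["VT_API_KEY"]},
        timeout=30,
    )
    if response.status_code == 404:
        print(f"{path}: not previously seen; do NOT upload it to a public sandbox")
    else:
        response.raise_for_status()
        stats = response.json()["data"]["attributes"]["last_analysis_stats"]
        print(f"{path}: {stats.get('malicious', 0)} engines flag this hash as malicious")

if __name__ == "__main__":
    lookup(sys.argv[1])
```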

As for what can be done, administrators need to step up and let users know not to use these sites, while the companies themselves should consider either providing and mandating their own scanning tool, or at least springing for a private account that hides scanned files. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/08/16/sandbox_sites_exposing_data/

NSA asks Congress to permanently reauthorize spying program that was so shambolic, the snoops had shut it down

Analysis In the clearest possible sign that the US intelligence services live within their own political bubble, the director of national intelligence has asked Congress to reauthorize a spying program that the NSA itself decided to shut down after it repeatedly – and illegally – gathered the call records of millions of innocent Americans.

Not only that, but in a letter from Dan Coats to the heads of two key Senate committees, the director argues that the powers should be permanently reauthorized, rather than written into a law bill that requires periodic renewal: the approach that has long been standard when it comes to awarding extraordinary powers to Uncle Sam’s snoops.

Coats’ letter [PDF] was sent yesterday, his last day in office, and ahead of a December cut-off for the spying powers that are contained within Section 215 of the Patriot Act. It was first obtained by the New York Times.

The powers he refers to have been hugely controversial ever since they were revealed by Edward Snowden in 2013. In fact, the program, which relies on two different, ridiculous interpretations of the law, has repeatedly been found to be unconstitutional.

Even after the law was changed, the NSA has been unable to make the system work and has twice been forced to admit that it gathered millions of call records it shouldn’t have. Back in June 2018, it deleted 534 million call records that it had gathered the previous year but gave virtually no details over how and why that had happened, prompting inquiries from senators – who were roundly ignored.

Then the exact same thing happened again just a few months later – in October 2018. That massive slurp of personal information was again kept quiet and only emerged in June 2019 when a report of the NSA’s inspector general was declassified following a lawsuit by the American Civil Liberties Union (ACLU).

We’ll probably just ditch it

The intelligence services were well aware that the second failure of the system was due to become public, and so in early 2019 the NSA started letting congressional aides know that it was thinking about axing the program.

In a sign of just how little oversight there is over malfunctioning spy programs, the fact that the NSA was considering ditching the program only came out when the national security adviser to House minority leader Kevin McCarthy (R-CA) mentioned it during a podcast interview.

Luke Murry said that the NSA hasn’t been using the system for blanket collection of US citizens’ telephone metadata for the past six months “because of problems with the way in which that information was collected, and possibly collecting on US citizens.” He added: “I’m not actually certain that the administration will want to start that back up given where they’ve been in the last six months.”

The next month, an NSA source briefed a Wall Street Journal journalist that it was in fact planning to end the program because it was of limited value and the spy agency couldn’t figure out how to make it work without illegally gathering the details of innocent Americans.

Since then, the NSA has repeatedly refused to discuss the program or even confirm that it has been stopped. However, in a sign that it has been talking behind the scenes to key senators, a law bill intended to reauthorize the spying powers before December notably did not include this specific program (it does include three other spying measures).

Many had assumed that was the end of it. But Coats in his letter this week not only suggests reauthorizing the program but says it should be done on a permanent basis – meaning that there will be even less accountability, since Congress will not be in a position to ask questions and threaten to let the powers expire if they are not answered.

It’s a shambles but we like it

And if all that wasn’t sufficiently mind-boggling, Coats explicitly acknowledges that the program is a mess but says the NSA should have the powers anyway in case they prove useful in future.

“The National Security Agency has suspended the call detail records program that uses this authority and deleted the call detail records acquired under this authority,” the unclassified letter reads.

“This decision was made after balancing the program’s relative intelligence value, associated costs, and compliance and data integrity concerns caused by the unique complexities of using these company-generated business records for intelligence purposes.

“However, as technology changes, our adversaries’ tradecraft and communications habits will continue to evolve and adapt. In light of this dynamic environment, the Administration supports reauthorization of this provision as well.”

It is no secret that the intelligence services are able to bypass normal democratic processes by claiming national security considerations, but it is still extraordinary that the director of national intelligence feels able to ask for a permanent reauthorization of a highly controversial spying power that the intelligence agencies have twice been forced to admit they cannot run without breaking the constitution.

Trump card

What can be read into the fact that Coats sent the letter on his last day on the job? Quite a lot. President Trump pushed Coats out of the job because, as DNI, Coats publicly contradicted Trump’s insistence that Russia did not meddle in the elections that put him in power.

Coats refused to disregard the security agencies’ conclusions about the role that Russia and its president Vladimir Putin had played in America’s elections. He also publicly expressed dismay when Trump said he had invited Putin to the White House and said he would have advised against a controversial private meeting between Trump and Putin in Helsinki in July 2018.

The security services are extremely uncomfortable about the president’s closeness to the leaders of several of the United States’ long-standing enemies, and the president suspects, probably quite rightly, that they are trying to find out exactly what he is discussing with those leaders in private meetings.

As such, Trump is extremely skeptical of surveillance powers and the security services and, given a clear choice, would likely prevent their reauthorization. In that respect, Coats’ letter could be seen as a last-ditch effort to lock spying powers in place before he loses his influence.

No matter which way you look at it, two things are clear: one, ordinary Americans are being screwed over; and two, there is insufficient accountability at the highest levels of government. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/08/16/spying_reauthorization_coats/

Chrome add-on warns netizens when they use a leaked password. Sometimes, they even bother to change it

Between February and March this year, after Google released a Chrome extension called Password Checkup to check whether people’s username and password combinations had been stolen and leaked from website databases, computer scientists at the biz and Stanford University gathered anonymous telemetry from 670,000 people who installed the add-on.

On Friday, the boffins – Kurt Thomas, Jennifer Pullman, Kevin Yeo, Ananth Raghunathan, Patrick Gage Kelley, Luca Invernizzi, Borbala Benko, Tadek Pietraszek, Sarvar Patel, and Elie Bursztein from Google, with Dan Boneh from Stanford – presented a paper describing the results of their data gathering at the USENIX Security conference.

The paper [PDF], titled “Protecting accounts from credential stuffing with password breach alerting,” reveals that about 1.5 per cent of logins on the web involve credentials that have been exposed online.

“During this measurement window, we detected that 1.5 per cent of over 21 million logins were vulnerable due to relying on a breached credential – or one warning for every two users,” the paper says, noting that the figure is significantly lower than the 6.9 per cent rate found in a 2017 study.

Over the 28-day measurement period, 316,531 logins involved leaked credentials. Warnings sent to users were ignored about a quarter of the time (26 per cent); the notifications also resulted in password resets about 26 per cent of the time.

The researchers suggest three potential explanations: users may not believe the risk is worth the effort of adopting a new password; users may not be in full control of the account (e.g. a shared household account); or there is insufficient guidance about how to reset a password.
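
Google describes Password Checkup as using a private set intersection protocol, so that neither the browser nor the server learns the other side’s full data. As a simpler illustration of the breach-alerting idea, here is a minimal sketch of the k-anonymity range query offered by the separate Have I Been Pwned “Pwned Passwords” service, in which only the first five characters of the password’s SHA-1 hash ever leave the machine.

```python
import hashlib

import requests

# Minimal sketch of a k-anonymity breach check (Have I Been Pwned's Pwned
# Passwords range API). This illustrates breach alerting in general, not
# Google's own Password Checkup protocol.

def times_pwned(password: str) -> int:
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = digest[:5], digest[5:]
    # Only the 5-character hash prefix is sent over the network.
    response = requests.get(f"https://api.pwnedpasswords.com/range/{prefix}", timeout=10)
    response.raise_for_status()
    for line in response.text.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0

if __name__ == "__main__":
    print(times_pwned("correct horse battery staple"))
```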

Despite the fact their security advice may be ignored, they conclude, “Our results highlight how surfacing actionable security information can help mitigate the risk of account hijacking.”

The risk, to which the title of the paper alludes, is credential stuffing, which involves gathering easily obtained sets of exposed credentials – usernames and passwords harvested from specific websites – and crafting code that attempts to use those credentials on a massive number of other websites, in the hope of finding login details that have been reused.
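
On the defending side, one common heuristic (sketched below purely as an illustration with assumed thresholds, not anything described in the paper) is to flag a source address that attempts logins against many distinct usernames in a short window, since stuffing tools typically spray one credential pair per account.

```python
import time
from collections import defaultdict, deque

# Hypothetical server-side heuristic: flag an IP that tries logins for many
# distinct usernames within a sliding window, a pattern typical of credential
# stuffing. The thresholds are illustrative assumptions, not tuned values.

WINDOW_SECONDS = 300
MAX_DISTINCT_USERNAMES = 20

_attempts: dict = defaultdict(deque)  # ip -> deque of (timestamp, username)

def record_login_attempt(ip: str, username: str, now: float | None = None) -> bool:
    """Record an attempt and return True if the IP now looks like a stuffing source."""
    now = time.time() if now is None else now
    window = _attempts[ip]
    window.append((now, username))
    # Drop attempts that have aged out of the sliding window.
    while window and now - window[0][0] > WINDOW_SECONDS:
        window.popleft()
    distinct = {name for _, name in window}
    return len(distinct) > MAX_DISTINCT_USERNAMES

if __name__ == "__main__":
    # Simulate a spray of one attempt per username from a single address.
    for i in range(25):
        flagged = record_login_attempt("203.0.113.7", f"user{i}", now=1000.0 + i)
    print("flagged:", flagged)
```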

Credential stuffing attacks have become popular because there are so many compromised accounts available in online databases – 25 billion username and password pairs, according to internet plumbing giant Akamai.

Earlier this year, the biz said in its report on the subject that there were hundreds of millions of credential stuffing attacks carried out every day in 2018, with a three-day peak of 250 million brute force login attempts.

The eggheads from Google and Stanford found that users of the Password Checkup extension reused hacked credentials across more than 746,000 domains. “The risk of hijacking was highest for video streaming and adult sites, where 3.6–6.3 per cent of logins relied on breached credentials,” their paper says.

Google appears to be convinced that having Chrome check for leaked passwords would benefit everyone using the browser. A Chromium bug report suggests the capability will be built into a future update. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/08/16/google_stanford_chrome_passwords/

Project Zero Turns 5: How Google’s Zero-Day Hunt Has Grown

At Black Hat USA, Project Zero’s team lead shared details of projects it has accomplished and its influence on the security community.

In July 2014, Google announced Project Zero, a research group built to reduce the number of zero-day vulnerabilities used in targeted attacks. Five years later, team lead Ben Hawkes took the Black Hat stage for an update: Has Project Zero achieved its goal of “making zero-day hard”?

“It was an important problem for companies like Google, but also societies as a whole,” said Hawkes of the need to find unknown, dangerous security flaws. “We had seen the shift and increased demand of zero-days not for the purpose of defense but for the purpose of offense.” While bug hunting was not new, it was significant to see Google entering the space in a big way.

Project Zero’s mission of “make zero-day hard” stemmed from its overall goal of finding attacks that are difficult to detect and highly reliable, Hawkes said. The research team aims to make it harder for cybercriminals to find and exploit these vulnerabilities by getting to them first.

“Good defense requires a detailed knowledge of offense,” he said, listing one of Project Zero’s founding principles. “We wanted to create a pipeline of work that mimics what real attackers would do when creating zero-day exploits.” However, Hawkes said, this is a challenge. Researchers want to model attacker behavior, but that behavior is constantly evolving.

Another commonly held belief is “openness benefits defenders more than it benefits attackers.” For this reason, they want to share findings with “as many people as possible” after vendors patch. Project Zero has generated 1,500+ vuln reports and 100+ technical blog posts.

Pivoting into operations, Hawkes shared more about what the team does outside vulnerability research and exploit development. Other projects include methodology building, technical writing, working with vendors and OSS projects, software engineering, and peer collaboration. He called the nature of their work “more of a sketch rather than a blueprint,” describing how researchers have a high degree of flexibility in pivoting to new projects and areas that seem promising.

How Vulnerabilities Are Found
More than half (54.2%) of Project Zero’s discoveries are manual — for example, found during source-code review. Fuzzing helps them find 37.2% of vulnerabilities. As for new methodologies, Hawkes said they should help “find bugs faster than we currently are” or “find bugs that we can’t currently surface.”
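
For a sense of what the fuzzing side of that pipeline involves, here is a minimal harness sketch using atheris, Google’s coverage-guided fuzzer for Python code; the parse_record target is a hypothetical stand-in for whatever software is under test, not anything Project Zero has published.

```python
import sys

import atheris

def parse_record(data: bytes) -> None:
    # Hypothetical target under test: a toy parser with one reachable crash
    # that the fuzzer should discover by mutating its inputs.
    if len(data) >= 4 and data[:4] == b"FUZZ":
        raise ValueError("parser reached an unexpected state")

def test_one_input(data: bytes) -> None:
    # Entry point called once per generated input.
    parse_record(data)

if __name__ == "__main__":
    atheris.instrument_all()            # add coverage instrumentation
    atheris.Setup(sys.argv, test_one_input)
    atheris.Fuzz()
```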

Exploit development, another priority, ensures the security impact of a bug is understood and creates an equivalence class for similarly exploitable vulnerabilities. This step lets researchers determine areas of “fragility” in the exploit and how urgent they should consider it to be.

“Project Zero tends to be in a position to advocate for change rather than actually implement it,” said Hawkes of its broader role in security tech. While the team can actively participate in patching for Chrome and Linux kernel bugs, much of its job involves urging for structural change: attack surface reduction, better sandboxing, process improvements, exploit mitigation, better documentation, and fixing bug classes, Hawkes listed. While the team’s vulnerability research and exploit development are highly visible, he noted its structural work is more low-key.

Project Zero has identified many, many vulnerabilities in five years. Hawkes discussed some of the standouts and their effects on the security community. Researcher Jann Horn was one of three parties to independently discover and report Meltdown ahead of its disclosure in 2018; he was also among several researchers who independently found and reported Spectre Variant 4. In total, Project Zero found four of the variants, Hawkes said, alongside work by outside researchers.

Spectre and Meltdown led to an “industrywide shift” in the security community’s understanding of CPU security and drove architectural changes in kernels, hypervisors, and browsers. “We saw a marked redoubling in the hardware community to improve security capabilities and processes,” Hawkes noted.

Another example, he continued, was in exploring Adobe Flash; Project Zero’s Natalie Silvanovich and Mateusz Jurczyk were primarily responsible for finding and reporting more than 200 Flash use-after-free (UAF), out-of-bounds read/write, and type confusion bugs through manual review and fuzzing. They worked with Adobe and Microsoft to implement exploitation mitigations and with Chrome to use these results to accelerate Flash click-to-play.

Has Project Zero achieved its goal of making zero-day hard? Not yet, Hawkes said. While progress has been made, and it’s harder to find zero-days, there is still work to be done.

“Fixing individual vulnerabilities matters,” he said toward the end of his talk. In the next five years, one goal is to build a coalition of open attack research teams, encompassing researchers from industry, academia, nonprofits, and government, all of which have roles in bug research.

Article source: https://www.darkreading.com/risk/project-zero-turns-5-how-googles-zero-day-hunt-has-grown/d/d-id/1335549?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Google Analyzes Pilfered Password Reuse

Password Checkup data shows some users still reuse their exposed passwords.

Newly published research from Google on its Password Checkup extension for Chrome found that 1.5% of 21 million scanned usernames and passwords, roughly 310,000, had been stolen or exposed.

Google took a sampling of some 670,000 users and their logins from the early adopters of the extension, which alerts Chrome users when their credentials have been found exposed or stolen. The company also found that users who were warned their passwords were stolen created new passwords just 26% of the time.

“Based on anonymous telemetry reported by the Password Checkup extension, we found that users reused breached, unsafe credentials for some of their most sensitive financial, government, and email accounts. This risk was even more prevalent on shopping sites (where users may save credit card details), news, and entertainment sites,” Google wrote in a blog post this week.

“In fact, outside the most popular web sites, users are 2.5X more likely to reuse vulnerable passwords, putting their account at risk of hijacking,” the post said.

Article source: https://www.darkreading.com/risk/google-analyzes-pilfered-password-reuse/d/d-id/1335550?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Microsoft won’t shift on AI recordings policy

Like a number of big technology companies, Microsoft recently admitted that humans sometimes hear your sensitive voice conversations, but that doesn’t mean it’s going to stop. Rather than abandoning the use of human contractors to improve its AI accuracy, the company has simply decided to be more transparent about it.

Earlier this month, Microsoft was found to be sharing with contractors recordings of conversations made through its Skype Translator product, an AI-powered system that translates in near real time between 10 languages. It also let contractors listen to audio from user conversations with its Cortana voice assistant, making it the latest in a series of companies embarrassed by similar revelations.

Whereas other companies have made a cursory effort to suspend the sharing of voice recordings from AI technology, Microsoft has instead just updated its privacy policy. It added a new section to the policy this week:

Our processing of personal data for these purposes includes both automated and manual (human) methods of processing. Our automated methods often are related to and supported by our manual methods. For example, our automated methods include artificial intelligence (AI), which we think of as a set of technologies that enable computers to perceive, learn, reason, and assist in decision-making to solve problems in ways that are similar to what people do.

To build, train, and improve the accuracy of our automated methods of processing (including AI), we manually review some of the predictions and inferences produced by the automated methods against the underlying data from which the predictions and inferences were made. For example, we manually review short snippets of a small sampling of voice data we have taken steps to de-identify to improve our speech services, such as recognition and translation.

The company also updated its Skype Translator Privacy FAQ, adding the following text clarifying an existing paragraph explaining how it analyzed transcripts:

This may include transcription of audio recordings by Microsoft employees and vendors, subject to procedures designed to protect users’ privacy, including taking steps to de-identify data, requiring non-disclosure agreements with vendors and their employees, and requiring that vendors meet the high privacy standards set out in European law and elsewhere.

In April 2019, Amazon admitted that it had shared Alexa recordings with thousands of contractors so that they could improve the AI’s accuracy.

Google was next on the list in July 2019, when a whistleblower revealed that it, too, was sharing recordings with contractors. That same month, an Apple contractor revealed that third party workers were listening to Siri’s accidental recordings of drug deals and people having sex.

After Microsoft was caught doing the same thing, it emerged this week that Facebook contractors had been listening to and transcribing users’ Messenger voice clips.

Varying reactions

So, everyone’s been at it. The interesting thing is how the different companies reacted. Earlier this month, both Google and Apple said that they would suspend contractor access to voice recordings, but both of these announcements had their caveats. Apple’s suspension wasn’t permanent, and it didn’t say when it might resume the practice. Google only suspended the sharing of voice recordings for three months, and only in the EU.

Facebook said this week that it had already discontinued the practice, but there was no indication of whether – or when – it might resume. For any more information, users may be forced to monitor updates to the companies’ privacy policies.

These companies generally claim that the data is anonymous, but at least one Siri contractor questioned that, arguing:

These recordings are accompanied by user data showing location, contact details, and app data.

In any case, researchers have also made considerable progress re-identifying anonymous data sets.

These companies are busy trying to find a balance between the need for more data to enhance their AI and what the law – and customers – will tolerate. How comfortable are you with their sharing of your recordings?

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/dwC94BWTtBI/

Police site DDoSer/bomb hoaxer caught after jeering on social media

A UK man who DDoS-ed police websites was caught and imprisoned after he jeered at police about the attacks on social media.

Liam Reece Watts, 20, targeted the Greater Manchester Police (GMP) website in August 2018 and then the Cheshire Police site in March 2019, according to ITV News. Each of the public-facing websites was disabled for about a day, The Register reported.

According to news outlets and Watts’s Twitter posts, the distributed denial-of-service (DDoS) attacks were done in retaliation for Watts having been convicted of calling in bomb hoaxes just days after the 2017 Manchester Arena suicide attack left 22 people dead and 500 injured.

Watts, who was 19 at the time of the DDoS attacks, was caught after he taunted police through Twitter. He used the handle Synic: a possible reference to SYN flood, which is a type of DoS attack in which servers are swamped with SYN – i.e., synchronize – messages.
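
For context, a server marks a TCP connection as half-open (SYN_RECV) after receiving the initial SYN and before the handshake completes; a flood of spoofed SYNs aims to exhaust that backlog. A minimal monitoring sketch, assuming a Linux host where /proc/net/tcp exposes socket states and using an illustrative alert threshold, simply counts sockets stuck in that state:

```python
# Minimal monitoring sketch, assuming a Linux host: count TCP sockets stuck in
# SYN_RECV (state 0x03 in /proc/net/tcp), the half-open state a SYN flood
# tries to exhaust. The alert threshold below is an illustrative assumption.

SYN_RECV = "03"
THRESHOLD = 256

def half_open_count(proc_file: str = "/proc/net/tcp") -> int:
    count = 0
    with open(proc_file) as handle:
        next(handle)  # skip the header line
        for line in handle:
            fields = line.split()
            if len(fields) > 3 and fields[3] == SYN_RECV:
                count += 1
    return count

if __name__ == "__main__":
    n = half_open_count()
    status = "possible SYN flood" if n > THRESHOLD else "normal"
    print(f"{n} half-open connections ({status})")
```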

Watts reportedly wrote this in one of his tweets:

@Cheshirepolice want to send me to prison for a bomb hoax I never did, here you f****** go, here is what I’m guilty of.

Watts reportedly posted that tweet while police were still investigating the first DDoS attack on the GMP site in 2018, and before he unleashed the March 2019 attack on the Cheshire Police site.

He reportedly admitted to carrying out the attack after police searched his home.

Watts said in court that botnets used to carry out DDoS attacks can be rented online for less than USD $100 (£82). DDoS-for-hire sites sell high-bandwidth internet attack services under the guise of “stress testing”. One example is Lizard Squad, which, until its operators were busted in 2016, rented out its LizardStresser attack service… an attack service that was, suitably enough, given a dose of its own medicine when it was hacked in 2015.

The internet is riddled with these services. When the FBI cracked down on DDoS-for-hire sites in December 2018, it led to an 85% slash in attack sizes. That’s good, but it wasn’t cause to let down our guard: NexusGuard – provider of cloud-based DDoS defense – estimated that the 15 services kicked offline by the FBI represented only 11% of all attacks worldwide.

Within a month of Watts’s home being searched and his arrest, both on 26 March 2019, he pleaded guilty to two charges under the Computer Misuse Act.

On Monday, he was sentenced to 16 months in a young offenders’ institution, was given a five-year restraining order to stop him from deleting his browsing history, and had to hand over his computers for destruction. (One assumes the restraining order pertains to whatever computer(s) he buys to replace the demolished ones.) Watts was also handed a victim surcharge of £140 (USD $169).

This wasn’t his first conviction for DDoS: Watts was reportedly convicted of a Computer Misuse Act offense in 2016 after DDoSing his college.

On his rap sheet, Watts also has a criminal conviction for attempted robbery and for the bomb hoax.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/nq0GnuKZhNw/

Google removes option to disable Nest cams’ status light

No more stashing your Nest security cameras in the bushes to catch burglars unaware: Google informed users on Wednesday that it’s removing the option to turn off the status light that indicates when your Nest camera is recording.

You can still dim the light that shows when Google’s Nest, Dropcam, and Nest Hello cameras are on and sending video and audio to Nest, Google said, but you can’t make it go away on new cameras. If the camera is on, it’s going to tell people that it’s on – with the green status light on Nest and Nest Hello cameras and the blue status light on Dropcam – in furtherance of Google’s newest commitment to privacy.

Google introduced its new privacy commitment at its I/O 2019 developers conference in May, in order to explain how its connected home devices and services work.

The setting that enabled users to turn off the status light is being removed on all new cameras. When the cameras’ live video is streamed from the Nest app, the status light will blink. The update will be done over-the-air for all Nest cams: Google’s update notice said that the company was rolling out the changes as of Wednesday, 14 August 2019.

An “absurd” update

The change is a plus for the privacy-aware: say, people who are wary of their Airbnb hosts secretly filming them in the shower or bedroom.

On the other end of the spectrum, it’s an outrage to some users who say they’ve spent big bucks on cameras that can stay hidden. One comment on Google’s update notice called it “an absurd update and an invasion of my rights as a consumer” – more of a “post-purchase middle finger” to customers than a privacy plus. More from that incensed user:

Privacy laws do not exist on private property such as my home, where I get to dictate which light remains on. We have spent thousands on 8+ cameras, Nest Guard and Nest Sense products, and spend $40/month on Nest Aware. For me – as a consumer – to have my rights violated without an option to keep the status lights off is a major move backwards.

The whole point of exterior cameras is to remain hidden and out of sight of potential burglars. But yes, let’s forcibly keep the light on so that everyone can see and avoid being recorded. God forbid that some criminal’s privacy rights are violated.

A simple, sticky workaround

That comment above was upvoted by 129 others. One of the replies suggested that the user would be turning to another technology that we often reference when talking about webcams that can be hijacked; one that Facebook kingpin Mark Zuckerberg himself has seen fit to deploy, though in this case, the technology will be applied to the status light and not the lens.

To wit:

Now I have to cover it with a piece of electrical tape.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/SRDFtsfKA6M/

iPhone holes and Android malware – how to keep your phone safe

Recent news stories about mobile phone security – or, more precisely, about mobile phone insecurity – have been more dramatic than usual.

That’s because we’re in what you might call “the month after the week before” – last week being when the annual Black Hat USA conference took place in Las Vegas.

A lot of detailed cybersecurity research gets presented for the first time at that event, so the security stories that emerge after the conference papers have been delivered often dig a lot deeper than usual.

In particular, we heard from two mobile security researchers in Google’s Project Zero team: one looked at the Google Android ecosystem; the other at Apple’s iOS operating system.

Natalie Silvanovich documented a number of zero-day security holes in iOS that crooks could, in theory, trigger remotely just by sending you a message, even if you never got around to opening it.

Maddie Stone described the lamentable state of affairs at some Android phone manufacturers who just weren’t taking security seriously.

Stone described one Android malware sample that infected 21,000,000 devices altogether…

…of which a whopping 7,000,000 were phones delivered with the malware preinstalled, inadvertently bundled in along with the many free apps that some vendors seem to think they can convince us we can’t live without.

But it’s not all doom and gloom, so don’t panic!

Watch now

We recorded this Naked Security Live video to give you and your family some non-technical tips to improve your online safety, whichever type of phone you prefer:

(Watch directly on YouTube if the video won’t play here.)

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/wlXfZPzm5No/