STE WILLIAMS

News in brief: Beijing bans Pooh; Ashley Madison offers settlement; patient data shared on Facebook

Your daily round-up of some of the other stories in the news

Beijing bans mentions of Winnie the Pooh

We’ve written quite a bit on China’s ongoing moves to clamp down on internet use, but the latest move from Beijing has really raised eyebrows: now Winnie the Pooh has been censored.

The FT reported at the weekend that mentions of the portly bear had been censored on Sina Weibo, China’s answer to Twitter, while WeChat, the ubiquitous messaging app, had removed Pooh animated gifs from the platform. That’s because China’s president, Xi Jinping, has been unflatteringly compared to the short, tubby bear in the past, said observers.

One suggested that “talking about the president” has been deemed too sensitive in the run-up to the Communist party’s congress in the autumn. Qiao Mu at Beijing University told the FT: “Historically, two things have been not allowed: political organising and political action. But this year, a third has been added to the list: talking about the president.”

Sina Weibo users who tried to post the bear’s name received the message “content is illegal”.

Ashley Madison victims offered settlement

Users of the Ashley Madison website whose personal details were stolen in the 2015 breach have been offered a share in a settlement of $11.2m by the owner of the site, Ruby Life.

Hackers who attacked the site and stole 33m people’s details – including names, addresses, dates of birth and sexual preferences – dumped the cache of stolen data online, exposing millions of users.

Ruby Life said it will “contribute $11.2m to a settlement fund, which will provide, among other things, payments to settlement class members who submit valid claims for alleged losses resulting from the data breach and alleged misrepresentations as described further in the proposed settlement agreement”.

Researcher posts patient data in Facebook update

Data breaches aren’t always the result of malicious attacks by hackers, as a researcher at a British hospital reminded us when he accidentally leaked the personal data of women who had given birth via a careless post on Facebook.

Luigi Carbone, who was studying pre-eclampsia detection in pregnant women, was taking advantage of a heatwave to work in the sunshine, posting a picture of his laptop on Facebook to show that he was outside, reported the Daily Mirror.

Clearly visible on the laptop screen were the personal details of 31 women who had given birth at the North Middlesex University Hospital, and, because his post wasn’t restricted to his friends, it was visible to everyone on Facebook – for more than a week. It was taken down after the social media team at the hospital spotted the post.

The moral of the story is: be careful about what you share on social media.

Catch up with all of today’s stories on Naked Security


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/h97nLztayzI/

Forgotten your Myspace password? Just a name, username, DoB will get you in – and into anyone else’s, too

Myspace’s account recovery process is hopelessly flawed, according to a security researcher.

Positive Technologies’ Leigh-Anne Galloway stumbled on the issue while attempting to gain access to, and delete, her account back in April.

“I discovered a business process so flawed it deserves its own place in history,” she explained in a blog post, published on Monday.

Myspace only requires a valid name, username and date of birth associated with an account to regain access to that account – and that’s it. No email confirmation. Other details are requested in the recovery form, but filling them in isn’t necessary in order to change the password and gain control of an account, Galloway discovered.
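To see why that matters, here is a hypothetical sketch (none of this is Myspace’s actual code; the function and field names are illustrative) contrasting a recovery check built only on publicly discoverable facts with one that demands proof of control of a registered email address:

```python
def myspace_style_recovery(submitted, account):
    """Grant a password reset using only facts an outsider can find:
    name and username are on the profile page, DOB is guessable."""
    return (submitted.get("name") == account["name"]
            and submitted.get("username") == account["username"]
            and submitted.get("dob") == account["dob"])

def safer_recovery(submitted, account, emailed_token):
    """Require proof of control of a registered channel: a one-time
    token sent to the account's email, not just identity facts."""
    return (submitted.get("username") == account["username"]
            and submitted.get("token") == emailed_token)

account = {"name": "Jane Doe", "username": "jdoe", "dob": "1990-01-01"}

# An attacker who scraped the profile page passes the first check...
print(myspace_style_recovery(
    {"name": "Jane Doe", "username": "jdoe", "dob": "1990-01-01"}, account))

# ...but fails the second without access to the victim's inbox.
print(safer_recovery({"username": "jdoe", "token": "guess"}, account, "k7f2q9"))
```

The difference is the one Galloway describes: the first check can be satisfied entirely from public information, while the second cannot.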

Despite Galloway flagging up the issue to Myspace weeks ago, all she has received since is an automated response. Myspace hasn’t resolved the problem, another security researcher, Scott Helme, verified late last week.

He told El Reg: “Account recovery on Myspace takes scarily little information – even worse part is that they don’t verify the email fields. You can reset with full name and username, which you can get from the profile page, and date of birth, which can be easily found or guessed.”

The vulnerability allows anyone access to any Myspace account, with only these three pieces of information. El Reg approached Myspace owner Time Inc for comment. We’re yet to hear back.

Is it really relevant?

Myspace is no longer the social networking mega-monster it once was, although that’s no excuse for poor security. Last year, it emerged that the site had leaked the details of 360 million Myspace accounts.

In response to the online sale of users’ stolen credentials, Myspace said it had “invalidated all user passwords for the affected accounts created prior to June 11, 2013 on the old Myspace platform.” It went on to say that it was “utilizing advanced protocols including double salted hashes” in order to protect users’ accounts.

Such efforts are rendered moot when it’s possible to gain control of an account with some basic info and no knowledge of the password.

“Myspace is an example of the kind of sloppy security many sites suffer from – poor implementation of controls, lack of user input validation, and zero accountability,” Galloway concluded. “Whilst Myspace is no longer the number one social media site, they have a duty of care to users past and present.”

Galloway told El Reg that Myspace was “like a cemetery of personal data.” Those who still have an account on Myspace should delete it immediately, she advised.

Myspace was a Web 2.0 goliath, with a strong emphasis on music: it was a screaming, ugly internet playground for fans and unsigned bands. Then it was completely crushed by Facebook. It’s gone through a series of different owners since, including News Corp and, currently, Time Inc.

It has declined in popularity to the point where it is currently rated outside the top 1,000 US websites by traffic, and only 3,374th globally, according to the latest figures from web stats agency Alexa. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/07/17/myspace_account_recovery/

Cisco plugs command-injection hole in WebEx Chrome, Firefox plugins

Cisco has patched its Chrome and Firefox WebEx plugins to kill a bug that allows evil webpages to execute commands on computers.

A malicious page, when visited by a vulnerable Windows machine, can exploit the security flaw (CVE-2017-6753) to run arbitrary commands and code with the same privileges as the browser. In other words, the page can abuse the installed plugins to hijack the PC.

The hole is present in the Chrome and Firefox plugins for Cisco WebEx Meetings Server and Cisco WebEx Centers, and affects products including WebEx Meeting Center, Event Center, Training Center and Support Center. Internet Explorer and Edge are not considered vulnerable, and both OS X and Linux versions of Chrome and Firefox are also safe.

The bug was discovered by Google Project Zero researcher Tavis Ormandy and Divergent Security’s Cris Neckar.

“A vulnerability in Cisco WebEx browser extensions for Google Chrome and Mozilla Firefox could allow an unauthenticated, remote attacker to execute arbitrary code with the privileges of the affected browser on an affected system,” Cisco said on Monday.

“This vulnerability affects the browser extensions for Cisco WebEx Meetings Server, Cisco WebEx Centers (Meeting Center, Event Center, Training Center, and Support Center), and Cisco WebEx Meetings when they are running on Microsoft Windows.

“The vulnerability is due to a design defect in the extension. An attacker who can convince an affected user to visit an attacker-controlled web page or follow an attacker-supplied link with an affected browser could exploit the vulnerability. If successful, the attacker could execute arbitrary code with the privileges of the affected browser.”

Those running Chrome and Firefox plugins for WebEx should already have the patches running on their machines. Cisco kicked out the automatic update for Chrome on July 12 and Firefox on July 13. Users can see if their versions are the fixed release (1.0.12) by going to the extensions menu in the browser and, if an older version is run, selecting the “update extensions now” (Chrome) or “check for updates” (Firefox) option.

Cisco says that while only the Chrome and Firefox plugins on Windows boxes are vulnerable to the flaw described, shared code between those browsers and the Internet Explorer/Edge plugins means that an update for Microsoft browsers has been released as well. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/07/17/cisco_webex_holes/

AWS S3 Breaches: What to Do & Why

Although basic operations in Amazon’s Simple Storage Service are, as the name implies, simple, things can get complicated with access control and permissions.

Since its launch in 2006, Amazon’s Simple Storage Service — S3 — has spread like wildfire. Used worldwide for storing everything from personal photo libraries to government personnel data, S3 underlies numerous other cloud services offered by AWS, and integrates with many third-party hardware and software products. If S3 comes up in security news more often than other cloud service provider offerings, that’s merely a testament to its success.

At the root of S3’s appeal is its simplicity. By abstracting data access to reading and writing “objects” (an object is a simplified version of what are normally called files), S3 makes it easy to drop software and services data into named “buckets” in the cloud. S3 has spread so far and so fast that enterprises may not even realize their data is stored in the cloud, when in fact their backup system or file-sharing service uses S3 as a building block. 

Basic operations remain simple, but things get complicated with S3 access control and permissions. Every S3 object defaults to private, yet once developers start configuring the baroque access controls of AWS, mistakes are easy to make.

Within the last month, several high-profile data exposures became public. An open S3 bucket belonging to a defense contractor exposed 60,000 files, including sensitive US government data. Following that incident, the records for 198 million US voters were exposed when a bucket belonging to a data analytics firm was left unprotected. Most recently, 6 million Verizon customer account PINs were leaked, along with names and phone numbers.

These incidents are not unique. In fact, S3 breaches are common enough that an industry has sprung up around discovering and exploiting them. Attackers regularly scan S3 for accessible objects using publicly available tools, harvesting AWS credentials along with other sensitive data.

Black Hat USA returns to the fabulous Mandalay Bay in Las Vegas, Nevada, July 22-27, 2017. Click for information on the conference schedule and to register.

 

With S3 data loss so commonplace, what can you do to protect your data?

First, do not assume that S3 bucket names are invisible or un-guessable. Every S3 bucket requires a globally unique name, and S3 users create their own bucket names. However, unscrupulous S3 hackers will try to guess these names to retrieve the objects within buckets. As an example, the bucket “music-rob-enns-us-west-2” is globally unique, but even a moderately creative hacker would be able to guess it. Numerous breaches have been tied to guessed bucket names, and it’s important to remember that security through obscurity has never been effective. The security of S3 data must not be tied to secret bucket names; instead, use access control and encryption to protect data.
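To see why obscurity fails, note that bucket existence can be probed anonymously over plain HTTPS: S3 answers 404 for a name nobody owns, 403 for a bucket that exists but is private, and 200 for one that is publicly listable. A hypothetical sketch (the `classify` and `probe_bucket` helpers are illustrative, not an AWS API):

```python
import urllib.error
import urllib.request

def classify(status_code):
    """Map an S3 HTTP status onto what it reveals about the bucket."""
    return {200: "public (listable)",
            403: "exists but access denied",
            404: "does not exist"}.get(status_code, "unknown")

def probe_bucket(name):
    """Anonymously probe a bucket by its virtual-hosted-style URL."""
    url = f"https://{name}.s3.amazonaws.com/"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return classify(resp.status)
    except urllib.error.HTTPError as e:
        return classify(e.code)
```

Because the namespace is global and the probe needs no credentials, an attacker can grind through dictionaries of company names, project names, and personal names; only access control and encryption, not the name, stand between them and the data.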

Second, make sure enterprise data management policies extend to S3 and other cloud services, and review the access control implementation for each cloud provider. Cloud access control systems are powerful and complex, and a common source of data breaches includes unintentionally weakened or disabled access controls during application development, deployment, and upgrade. If safeguarding precious data depends on developers following best practices, you are at risk. Consider introducing a separation of duties between application developers and data security.

Third, always encrypt data. S3 provides numerous encryption methods. Configured and used properly, encryption protects data effectively — once data is encrypted, it cannot be read without the corresponding key. At enterprise scale, effective encryption also raises requirements for key management that need to be addressed.
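As a concrete example, one of the methods S3 provides is default bucket encryption, which can be switched on once per bucket so that every new object is encrypted at rest without developers remembering per-object settings. A minimal boto3 sketch (the bucket name and helper are hypothetical, and the call requires credentials with the `s3:PutEncryptionConfiguration` permission):

```python
# Default server-side encryption configuration: every object written to the
# bucket is encrypted with a KMS-managed key unless a request overrides it.
SSE_CONFIG = {
    "Rules": [{
        "ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "aws:kms"}
    }]
}

def enable_default_encryption(bucket_name):
    import boto3  # needs AWS credentials configured in the environment
    s3 = boto3.client("s3")
    s3.put_bucket_encryption(
        Bucket=bucket_name,
        ServerSideEncryptionConfiguration=SSE_CONFIG)
```

Using `aws:kms` rather than `AES256` ties the data to a key you manage and audit, which is where the key-management requirements mentioned above come in.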

Finally, maintain visibility into changes in the encryption and access control configuration on cloud deployments. Log all configuration changes using AWS CloudTrail, and configure your SIEM to send an alert anytime an S3 bucket is made public or if encryption settings are changed across accounts.
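A hypothetical building block for such an alert: given the `Grants` list returned by the S3 `get_bucket_acl` call, flag any grant to the global AllUsers or AuthenticatedUsers groups, which is what “made public” means at the ACL level (a sketch, not a complete monitoring pipeline):

```python
# The two predefined S3 grantee groups that make a bucket world-accessible.
PUBLIC_GRANTEES = {
    "http://acs.amazonaws.com/groups/global/AllUsers",
    "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",
}

def acl_is_public(grants):
    """True if any grant in an s3.get_bucket_acl() 'Grants' list targets a
    public group grantee, regardless of the permission granted."""
    return any(g.get("Grantee", {}).get("URI") in PUBLIC_GRANTEES
               for g in grants)
```

Running a check like this on every CloudTrail `PutBucketAcl` event, and alerting when it returns true, catches the misconfiguration at the moment it happens rather than when a researcher finds the bucket.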

AWS S3 is a powerful and ubiquitous cloud service. Developers ranging from Silicon Valley startups to Fortune 500 companies use S3 as a key building block. As S3 usage scales across applications and teams, access control will become increasingly complex. Following sound policies early in S3 deployment will reduce the chance of a catastrophic data breach down the road.


Rob Enns joined Bracket Computing from VMware, where he was vice president of engineering in their networking and security business unit. Before joining VMware, Rob was vice president of engineering at Nicira, which was acquired by VMware in 2012. Previously he spent 11 years …

Article source: https://www.darkreading.com/cloud/aws-s3-breaches-what-to-do-and-why/a/d-id/1329354?_mc=RSS_DR_EDT

50% of Ex-Employees Still Have Access to Corporate Applications

Former employees increase the security risk for organizations failing to de-provision their corporate application accounts.

Nearly half of businesses say former employees are still able to access corporate accounts, a new study found.

Ex-employees pose a big security risk: Twenty percent of businesses have experienced data breaches by former staff, according to OneLogin’s new “Curse of the Ex-Employees” report. Of those, nearly half claim that more than 10% of all data breaches are the direct result of former workers.

Researchers conducted 500 interviews among IT employees who are at least partially responsible for security and make decisions about hardware, software, and cloud-based services. Half say ex-employees’ accounts remain active for longer than a day after they leave the company; 20% take a month or more to deprovision employees after they leave.

The more ingrained someone is in an organization, the harder they are to deprovision: two-thirds of respondents report that on-site employees are the toughest. Half of respondents don’t use automated deprovisioning and must manually remove access to corporate applications, a lengthy process that increases the chance former employees can still access their accounts.

Read more details here.

Dark Reading’s Quick Hits delivers a brief synopsis and summary of the significance of breaking news events. For more information from the original source of the news item, please follow the link provided in this article.

Article source: https://www.darkreading.com/vulnerabilities---threats/50--of-ex-employees-still-have-access-to-corporate-applications/d/d-id/1329370?_mc=RSS_DR_EDT

Researchers Create Framework to Evaluate Endpoint Security Products

Black Hat USA researchers tested more than 30,000 types of malware to learn the effectiveness of endpoint security tools – and they’ll demonstrate how they did it.

If your business has recently shopped for an endpoint security tool, you’ve heard the hype. Terms like “machine learning” and “artificial intelligence” overwhelm the market and allude to more advanced and effective tools – but which products actually deliver?

Security researcher Lidia Giuliano and independent CTO Mike Spaulding wanted to find out. The two spent five months building a business case for an endpoint protection system. Their goal was to better understand problems and create test scenarios for products, building a system to evaluate market solutions for security leaders who struggle with the process.

Their research was initially driven by a spike in ransomware attacks, explains Giuliano. The process of choosing a security tool was complicated by an onslaught of marketing terms.

“Talking to managers, to friends, you would hear ‘machine learning, artificial intelligence, indicators of this, indicators of that’ … ‘we catch everything, we’ve the best AV tool,'” she says. “But what is this based on? What is fact and what’s fiction?”

It was frustrating, says Spaulding: vendor claims were “so grandiose” and recommendations “so over the top”. Some vendors even advised potential customers to test in a production environment. “We would never feel that confident,” he notes.

They wanted someone to demystify the buzzwords and provide real-world examples. Unfortunately, they soon realized, there wasn’t much available information on the process of testing and selecting endpoint security tools outside vendor-provided materials.

“One of the key findings is there’s nothing to go by,” says Giuliano. “Vendors will give testing guides, but if you don’t know what you’re looking for, it’s not very helpful to begin with.”

Giuliano and Spaulding set out to separate fact from fiction, and develop a framework to gauge the effectiveness of different products. They plan to share this methodology, and key considerations used in the test process, during their talk at Black Hat USA next week, “Lies, and Damn Lies: Getting Past the Hype of Endpoint Security Solutions.”

Their process involved testing 30,000 pieces of malware against five tools currently on the market. The two initially contacted more companies to have their products tested; some denied requests after learning about the rigorous process.

Using a variety of malware was key, says Giuliano. They tested against old and new forms of malware, and used rehashing mechanisms to make them as close to zero-days as possible. Products may not recognize different variations of malware.

“The results were really interesting,” says Giuliano, who says some products did not perform as promised. “When we mutated a lot of [the malware], a lot of it still slipped through.”

The biggest takeaway for companies browsing tools, says Giuliano, is the importance of understanding business requirements and being empowered to ask the right questions. All this internal research needs to be done before a conversation takes place, she notes.

“Know the problem you’re trying to solve and the value you bring to the business,” she explains. “It’s not about the product you start looking at, but understanding security gaps in your environment.”

When it comes time to test, both experts advise against testing in a vendor environment.

“You can’t do that because you can’t compare apples with apples,” says Giuliano. “You have to create your own environment.” Adversaries look for common free virtualization tools, she explains, and the researchers saw different outcomes testing with virtualization platforms.

Every organization has different security needs. The idea behind this talk is not to tell attendees what to buy, but to arm them with questions and information for browsing products.

“One of the biggest values for us is prevention,” says Spaulding as an example of a business goal. Most organizations share the common goal of eliminating security issues like ransomware and viruses, but each business model changes based on what they value.

Prevention is high-value compared with EDR (endpoint detection and response), he notes. “At the end of the day, people don’t want the headache in the first place.”



Kelly Sheridan is Associate Editor at Dark Reading. She started her career in business tech journalism at Insurance Technology and most recently reported for InformationWeek, where she covered Microsoft and business IT. Sheridan earned her BA at Villanova University.

Article source: https://www.darkreading.com/endpoint/researchers-create-framework-to-evaluate-endpoint-security-products/d/d-id/1329371?_mc=RSS_DR_EDT

Alexa is listening to what you say – and might share that with developers

Amazon is considering handing transcripts of everything Alexa hears over to third-party developers, according to sources close to the matter cited in a report from The Information.

One developer who used to be a product head on Amazon’s Alexa team, Ahmed Bouzid, says that current access only gives developers “70% of what they need to know” to get better at doing customers’ bidding. The Information reports that some teams already have access to full recording data, though it’s not clear which developers get added to that list and why.

This would be the first time that full transcripts from Amazon’s voice assistant were shared with third-party developers.

Amazon told CBS This Morning that it wouldn’t do this kind of thing without opt-in:

We do not share customer-identifiable information to third-party skills [apps] without the customer’s consent.

That’s not particularly reassuring. It’s common for data-gorging companies to point to a lack of identity details and equate that lack to a privacy shield. But in these days of Big Data, the claim has been proved to be flawed. After all, as we’ve noted in the past, data points that are individually innocuous can be enormously powerful and revealing when aggregated. That is, in fact, the essence of Big Data.

Take, for example, the research done by MIT graduate students a few years back to see how easy it might be to re-identify people from three months of credit card data, sourced from an anonymized transaction log.

The upshot: with 10 known transactions – easy enough to rack up if you grab coffee from the same shop every morning, park at the same lot every day and pick up your newspaper from the same newsstand – the researchers found they had a better than 80% chance of identifying you.

At any rate, at this point, Amazon’s smart assistant only records what’s said to it after it’s triggered by someone saying “Alexa” (or one of the other trigger words you can choose for the device).

But it’s certainly possible that the voice assistant can be triggered by mistake. In January, San Diego’s XETV-TDT aired a story about a six-year-old girl who bought a $170 dollhouse and a small mountain of cookies by asking her family’s Alexa-enabled Amazon Echo, “Can you play dollhouse with me and get me a dollhouse?”

Viewers throughout San Diego complained that after the news story aired, their Alexa devices responded by trying to order dollhouses.

The Google Home voice assistant has its own history of miscues: a Google Home ad, which aired during the Super Bowl in February, featured people saying “OK Google,” causing devices across the land to light up. As well, in April, fast-food restaurant Burger King aired a TV ad that triggered Google Home devices.

Arkansas police, for their part, are hoping that an Amazon Echo found at a murder scene in Bentonville might have been accidentally triggered on the night of the crime. If by any chance it was set to record, the recordings could help with an investigation into the death of a man strangled in a hot tub.

Amazon’s fight to keep Echo recordings out of court was rendered moot when the murder suspect voluntarily handed them over. But the case raised a bigger question: with Echo/Alexa, Siri, Cortana and Google’s Home assistant in many homes these days, and knowing that some of the technology is listening and recording, who might be able to exploit that?

In the Arkansas case, we know that it’s law enforcement who were after device recordings. But in the future, it could be hackers. In Amazon’s case, it’s looking like third-party developers are going to be in on the pony show. With all these ears eager to listen in on us, it’s smart to know the risks and take the appropriate defensive measures.

We’ve passed on these tips for locking down voice assistants in the past, and they’re worth repeating:

  • Not currently using your Echo? Mute it. The mute/unmute button is right on top of the device. The “always listening” microphone will shut off until you’re ready to turn it back on.
  • Don’t connect sensitive accounts to Echo. On more than a few occasions, daisy-chaining multiple accounts together has ended in tears for the user.
  • Erase old recordings. If you use an Echo, then surely you have an Amazon account. If you go on Amazon’s website and look under “Manage my device”, there’s a handy dashboard where you can delete individual queries or clear the entire search history.
  • Tighten those Google settings. If you use Google Home, you’re already aware of the search giant’s appetite for data collection. But Google does offer tools to tighten things up. Like the Echo, Home has a mute button and a settings page online, where you can grant or take away various permissions.

You can also delete your existing Alexa recordings. Here’s how:

To delete specific recordings:

  • Open the Alexa app on your phone
  • Select Settings
  • Select History
  • Choose the recordings you’d like to delete

To delete entire history:

  • Open Amazon.com
  • Select Manage My Content
  • Click on Alexa
  • Delete entire history


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/GoupGKQsfvE/

The iPhone lockscreen hole that we can’t reproduce

Last week, Computerworld reported a security hole in the iPhone lockscreen.

The hole wasn’t catastrophic, but when you consider that “locked” is supposed to mean locked, you shouldn’t be able to change any configuration settings on someone else’s phone without unlocking it first.

The Computerworld “hack” involves popping up Siri at the lockscreen by holding down the Home button for a second or so, and then saying the words, “Cellular data”. (In the UK, at least, you can also say “Mobile data”.)

Siri then asks if you’d like to turn data off, thus effectively cutting the phone off from the network.

This doesn’t sound like the end of the world from a security point of view, and perhaps it isn’t, but you can see how the feature could be abused.

By siriptiously (sorry, surreptitiously) turning off someone’s phone connection while they’re not looking, but leaving their phone apparently untouched, you could help an accomplice who is about to try some sort of social engineering attack against the victim that would otherwise attract their attention with an unwanted verification call or a warning SMS.

Sure, you could steal or hide their phone, or even just turn off the ringer, with a similar result, but a missing phone might be noticed, so to speak, and even silenced phones usually vibrate when they want attention.

According to Computerworld, the bug exists even on the latest iOS 10.3.2 release – that’s what we’re running, so we put it to the test.

Does it work?

The good news is that we couldn’t replicate Computerworld’s hack.

We were able to activate Siri, to issue the peremptory words, “Mobile data”, and to get directly at a screen offering to turn it off.

But when we told Siri to turn it off, he immediately said (our Siri is a bloke, don’t know why), “You’ll need to unlock your iPhone first,” and popped up the passcode screen to unlock the phone, as you would expect:

What to do?

The bad news is that you can never be quite sure which voice commands will, and which won’t, work when your device is locked – unless you can figure out and try all of them.

So, whether this is a bug or not, we strongly recommend that you turn Siri off at the lockscreen – after all, it’s not called the lock screen for nothing.

To stop Siri listening in at the lockscreen, go to Settings | Siri and turn off Access When Locked.

Better yet, unless you really don’t like touching your phone, consider turning Siri off altogether, which has the handy side-effect of telling Apple to discard all the pattern-matching voice data it’s collected from you so far:

While you’re about it, review the other iOS features you’ve enabled on the lockscreen, in case you’re allowing more access than you thought.

It’s bad enough that Apple no longer allows you to block access to the camera app when your phone is locked; we recommend that you add as few additional lockscreen options as you can.

Go to Settings | Touch ID & Passcode and look at the Allow access when locked section:

(We’ve got Siri turned off altogether; if he/she is enabled, you’ll see him/her in this list, too.)

Remember, when it comes to your lockscreen, less is more.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/tjz4TqHneYk/

Facebook users pwnd by phone with account recovery vulnerability

Facebook account recovery using pre-registered mobile numbers is poorly implemented and open to abuse, according to critic James Martindale.

Martindale wrote an article on Medium, titled I kinda hacked a few Facebook accounts using a vulnerability they won’t fix, highlighting his concerns in a bid to push the social network into tightening up its system.

Old phone numbers that are no longer owned by a user but are still tied to their account can be assigned by a carrier to another person. If the number is still linked to a Facebook account, the new owner can subsequently log into the account without the password and either change it or leave it be (so the victim doesn’t know a breach has occurred). The loophole can’t target specific accounts, but it might be used to hijack an account before, for example, running scams against the account holder’s friends and contacts.

Quizzed by The Register, Facebook said its practices mirrored those of other online services, adding that it already pushes alerts in cases where it detects suspicious password recovery attempts.

Several online services allow people to use phone numbers to recover their accounts. We encourage people to only list current phone numbers, and if we detect the password recovery attempt as “suspicious” we may prompt the person for more information.

Martindale responded that Facebook was missing the point, adding that “several online services” also having account recovery via phone numbers isn’t a very good defence.

Facebook is different from these other services because it allows users to have multiple mobile phone numbers. Martindale stumbled on the issue when his carrier assigned him a number previously linked to another Facebook user. He received a reminder text from Facebook and discovered that the associated account had five other phone numbers linked to it. “Many of my less tech-savvy friends never remove phone numbers, they just keep adding their new number when they switch carriers or move,” Martindale noted.

Mobile account recovery on Facebook [source: James Martindale]

“I probably never would’ve stumbled across this exploit if it weren’t for Facebook sending re-engagement SMS messages to the phone number I inherited,” he added. “I understand sending a few texts to remind an inactive user of what they’re missing out on, but after a while shouldn’t Facebook decide they’re just not interested? These text alerts make it incredibly easy to discover when a phone number is attached to a Facebook account (other than searching Facebook for the phone number).”

“When I started this experiment, I decided I would get to the point where Facebook forces a password reset, and then stop,” Martindale explained. “Facebook surprised me by letting me log in without changing anything. I don’t know of a single website other than Facebook that lets me recover an account with a phone number, and then not change the password.”

Martindale told El Reg he was glad to hear that Facebook has some sort of system to detect suspicious logins, while arguing it needed to be improved.

“Once I discovered this exploit, I developed a habit whenever I get a new number to log into the associated Facebook account (if it exists) to see if the exploit still exists and to remove the phone number from the account,” he said. “Never once have I been ‘prompted for more information’. Facebook’s suspicious login detection needs work.”

As well as improving detection of suspicious recovery attempts, Facebook should apply changes so that a user can’t retrieve an account using the same email address or phone number they used to log in. “Google, Microsoft, and a ton of other good online services make users use an alternate email address or phone number, and sometimes require the rest of the obfuscated number/email address in order to continue recovery,” Martindale argued. “This alone would stop this exploit in its tracks.”

In addition, Facebook should apply a mandatory password reset every time users go through the account recovery process. “Don’t let people recover an account without forcing a password reset and sending a notification to every email address and phone number tied to the account,” Martindale said. “The account owner must know when their password is changed so that they can know if somebody is getting into their account without them knowing.”

“When a user adds a new phone number to an account, Facebook should immediately ask them if they want to remove their old phone number,” he added. “If Facebook encourages users to only list current phone numbers this would be the best way to do just that,” Martindale concluded. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/07/17/facebook_login_security/

Burglary in mind? Easy, just pwn the home alarm

It’s Monday, and infosec-watchers are showing their age by calling internet of things security disclosures “a broken record”. This time, it’s a home security system that’s remotely p0wnable.

iSmartAlarm ships a variety of app-linked security products, including door sensors, motion sensors, cameras, locks, and a controller unit (called the Cube), with iOS and Android apps, Alexa capabilities … pretty much the full suite of ShinyHappySmartLife™ must-haves.

Now, it’s time to get out your bingo cards, because the list of vulnerabilities includes issues with SSL certificate validation, authentication errors, an access control blunder, and a denial of service.

The vulnerabilities were turned up by Ilia Shnaidman of BullGuard Security (developer of a competing system called Dojo), though one CVE request was rejected as invalid.

So let’s stick with the vulnerabilities that got Common Vulnerabilities and Exposures (CVE) listings, whose discovery Shnaidman has detailed in a blog post.

The SSL certificate validation bug is in the CubeOne that handles communications between the iSmartAlarm-protected home and the smartphone app.

During the SSL handshake with its server, the CubeOne doesn’t check the server certificate’s validity, so Shnaidman only needed to forge a self-signed cert to get control over CubeOne-to-server traffic.
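For contrast, this is what proper TLS client validation looks like in Python’s standard library; the insecure context at the bottom is, in effect, what a client that skips certificate checks is doing (a sketch for illustration, not the CubeOne’s actual code):

```python
import ssl

# A correctly configured client context: the platform trust store is
# loaded, the server's certificate chain is verified, and the hostname
# in the certificate is checked against the host we connected to.
secure = ssl.create_default_context()
assert secure.check_hostname
assert secure.verify_mode == ssl.CERT_REQUIRED

# What a non-validating client amounts to: any certificate, including a
# self-signed forgery, is accepted, so an attacker on the path can
# terminate the TLS session with their own cert and read the traffic.
insecure = ssl.create_default_context()
insecure.check_hostname = False          # must be disabled before CERT_NONE
insecure.verify_mode = ssl.CERT_NONE
```

The forged self-signed certificate attack described above works precisely because the device behaves like the second context rather than the first.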

An error in how the system handles its XXTEA (corrected block Tiny Encryption Algorithm) keys allowed the researcher to create and use a valid encryption key, leading to the access control and authentication bypass bugs.
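For the curious, XXTEA itself is tiny; below is a straightforward Python transcription of the published reference algorithm, included purely to show what the Cube is encrypting with. A correct cipher is no help when key handling lets an attacker construct a valid key, which is the actual bug here.

```python
DELTA = 0x9E3779B9       # the golden-ratio constant shared by the TEA family
MASK = 0xFFFFFFFF        # keep arithmetic in 32-bit words, as in the C original

def _mx(s, y, z, p, e, key):
    # The MX mixing function from the XXTEA reference code.
    return ((((z >> 5) ^ (y << 2)) + ((y >> 3) ^ (z << 4)))
            ^ ((s ^ y) + (key[(p & 3) ^ e] ^ z))) & MASK

def xxtea_encrypt(block, key):
    """block: list of >= 2 uint32 words; key: list of 4 uint32 words."""
    v, n = list(block), len(block)
    rounds, s, z = 6 + 52 // n, 0, block[-1]
    for _ in range(rounds):
        s = (s + DELTA) & MASK
        e = (s >> 2) & 3
        for p in range(n - 1):
            y = v[p + 1]
            v[p] = (v[p] + _mx(s, y, z, p, e, key)) & MASK
            z = v[p]
        y = v[0]
        v[-1] = (v[-1] + _mx(s, y, z, n - 1, e, key)) & MASK
        z = v[-1]
    return v

def xxtea_decrypt(block, key):
    """Exact inverse of xxtea_encrypt: undo each round in reverse order."""
    v, n = list(block), len(block)
    rounds = 6 + 52 // n
    s, y = (rounds * DELTA) & MASK, block[0]
    for _ in range(rounds):
        e = (s >> 2) & 3
        for p in range(n - 1, 0, -1):
            z = v[p - 1]
            v[p] = (v[p] - _mx(s, y, z, p, e, key)) & MASK
            y = v[p]
        z = v[-1]
        v[0] = (v[0] - _mx(s, y, z, 0, e, key)) & MASK
        y = v[0]
        s = (s - DELTA) & MASK
    return v
```

Whoever holds (or can derive) the four key words can decrypt everything, which is why the key-handling flaw collapses the whole scheme.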

Shnaidman says he went public after the vendor didn’t reply to his disclosure (we have contacted the company for confirmation).

At the time of writing, The Register couldn’t find an iSmartAlarm firmware update more recent than March 2017. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/07/17/burglary_in_mind_easy_just_pwn_the_home_alarm/