
Ransomware wipes evidence, lets suspected drug dealers walk free

Six alleged drug criminals will go free thanks to a ransomware attack on a small Florida city, it was revealed this month.

Stuart is a city in Florida with a population of around 16,500. It suffered an attack involving the Ryuk ransomware in April 2019 that took city servers offline. While reports said that city emergency services, including 911 calls, were unaffected, things were a little different behind the scenes. Detective Sergeant Mike Gerwan explained:

Because we didn’t have access to the internet we were sending police officers to calls blind.

The city refused to pay the $300,000 bitcoin ransom, and instead kept its servers disconnected while it rebuilt them. At the time, city manager David Dyess said that the city’s data backups saved it from having to negotiate.

While Stuart might have saved some of its data, there were some casualties. Among them were case records that the Stuart police department was relying on for several prosecutions. It was unable to recover crucial evidence for narcotics cases involving six defendants facing a total of 28 charges.

The crimes included methamphetamine and cocaine possession, along with selling, manufacturing, and delivering narcotics. Another charge involved illegally using a two-way communication device, according to local station WPTV. Gerwan told reporters:

We lost approximately a year and a half of digital evidence. Photos, videos. Some of the cases have been dropped.

The attackers got into city systems via a spearphishing email, and lurked undetected in the network for two months before launching the Ryuk attack, Gerwan said:

We were totally crippled for the first month and a half. We all went home one day and the next day we came back to work and we were back in the year 1984. Back in 1984 if you wanted to look somebody up you had to find them in the phone book.

Electronic evidence destruction like this seems like a storyline straight out of a Breaking Bad script, but in this case, the ransomware criminals inadvertently did the defendants a favour. It’s a surprisingly common problem, according to Gerwan. He said:

I can’t recall when speaking to my federal partners, that there has been a case where data had not been lost.



Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/-YH21vT56Rs/

Clearview AI loses entire database of faceprint-buying clients to hackers

Clearview AI, the controversial facial recognition startup that’s gobbled up more than three billion of our photos by scraping social media sites and any other publicly accessible nook and cranny it can find, has lost its entire list of clients to hackers – including details about its many law enforcement clients.

In a notification that The Daily Beast reviewed, the company told its customers that an intruder “gained unauthorized access” to its list of customers, to the number of user accounts they’ve set up, and to the number of searches they’ve run.

The disclosure also claimed that Clearview’s servers hadn’t been breached and that there was “no compromise of Clearview’s systems or network.” The company said that it’s patched the unspecified hole that let the intruder in, and that whoever it was didn’t manage to get their hands on customers’ search histories.

Tor Ekeland, an attorney for Clearview, sent a statement to news outlets saying that breaches are just a fact of life nowadays:

Security is Clearview’s top priority. Unfortunately, data breaches are part of life in the 21st century. Our servers were never accessed. We patched the flaw, and continue to work to strengthen our security.

Clearview, which has sold access to its gargantuan faceprint database to hundreds of law enforcement agencies, first came to the public’s attention in January when the New York Times ran a front-page article suggesting that the “secretive company […] might end privacy as we know it.”

In its exposé, the Times revealed that Clearview has quietly sold access to faceprints and facial recognition software to more than 600 law enforcement agencies across the US, claiming that it can identify a person based on a single photo, reveal their real name and far more.

Within a few weeks of the Times article, Clearview found itself being sued in a potential class action lawsuit that claims the company amassed the photos out of “pure greed” to sell to law enforcement, thereby violating the nation’s strictest biometric privacy law – Illinois’s Biometric Information Privacy Act (BIPA).

Twitter, Facebook, Google and YouTube – among the many online sources that Clearview has scraped to gather the biometric data it’s selling (or giving away) – have ordered the company to stop its scraping, a practice that violates the social media giants’ policies.

In a followup report, the Times noted that there’s a strong use case for Clearview’s technology: finding victims of child abuse. Investigators told the newspaper that Clearview’s tools have enabled them to identify the victims featured in child abuse videos and photos, leading them to names or locations of victims whom they may never have been able to identify otherwise. One retired chief of police said that running images of 21 victims of the same offender returned 14 minors’ IDs, the youngest of whom was 13.

Following the Times’ exposé, New Jersey barred police from using the Clearview app. Canada’s privacy agencies are also investigating Clearview to determine if its technology violates the country’s privacy laws, the agencies said on Friday.

David Forscey, the managing director of the non-profit Aspen Cybersecurity Group, told the Daily Beast that Clearview’s breach should be worrying for its customers:

If you’re a law-enforcement agency, it’s a big deal, because you depend on Clearview as a service provider to have good security, and it seems like they don’t.

Put another way by tech policy advocate Jevan Hutson:

Clearview continues to give us a clear view of why biometric surveillance is an unsalvageable trash fire.



Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/UEh1lsdY7KE/

Southern Water not such a phisherman’s phriend, hauls itself offline to tackle email lure

British utility biz Southern Water was the victim of a phishing attack on Wednesday, resulting in a hurried shutdown of some of the company’s systems.

An industry insider told The Register that Southern Water’s networks, including the system responsible for Supervisory Control and Data Acquisition (SCADA), were hit. The source, who asked to remain anonymous, added the cause was an employee inadvertently opening an attachment in an email purporting to be from the company’s CEO with a subject of “Coronavirus”.

Customers may have noted a slight wobble in services on 26 February as the company’s social media orifice noted that things had dropped offline due to “essential maintenance”.

A little later, things were back up and running. No harm done. Nothing to see here.

Behind the scenes, however, the tech team were a tad busier as a spokesperson confirmed in response to a question from The Register sent on 27 February:

Yesterday, a phishing attack tried to gain access to our services. It was not successful, our information security team responded very swiftly and no customer or confidential data was accessed.

The attack did not directly cause any outages, however we did suspend a number of our internet services while we investigated. All services are now back up and running.

The Register understands that Southern Water is actually rather chuffed with the way its teams handled the incident. It’s just a shame that it happened in the first place.

Phishing, as all Register readers are all too aware, is an attack where users are tricked into doing what the UK’s National Cyber Security Centre (NCSC) delicately calls “the wrong thing”.

In this case, the phishing was via email and the use of the CEO as the sender will have made it look genuine to the recipient. Stir in some COVID-19 hysteria and we can see how an ordinary user could be persuaded to open something they might regret that slithered past the usual filters.
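Mail filters can catch some of this. As an illustration only – Southern Water’s actual defences aren’t public – here’s a minimal Python sketch of a display-name impersonation check: flag mail whose From display name matches a known executive while the address sits outside the company domain, or whose subject leans on a topical lure. The names, domain and keywords below are hypothetical.

```python
# Hypothetical display-name impersonation check: flags mail whose From
# display name matches a known executive while the sending address is
# external, or whose subject uses topical lure keywords such as "coronavirus".
from email.utils import parseaddr

EXECUTIVE_NAMES = {"jane doe"}          # hypothetical CEO name
INTERNAL_DOMAIN = "example.co.uk"       # hypothetical company domain
LURE_KEYWORDS = {"coronavirus", "covid", "urgent payment"}

def looks_like_exec_impersonation(from_header: str, subject: str) -> bool:
    display_name, address = parseaddr(from_header)
    domain = address.rsplit("@", 1)[-1].lower() if "@" in address else ""
    name_spoofed = (display_name.strip().lower() in EXECUTIVE_NAMES
                    and domain != INTERNAL_DOMAIN)
    lure_subject = any(word in subject.lower() for word in LURE_KEYWORDS)
    return name_spoofed or lure_subject

# Example: an external address borrowing the CEO's display name plus a COVID lure.
print(looks_like_exec_impersonation(
    '"Jane Doe" <ceo-office@mail-example.com>', "Coronavirus"))  # True
```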

Southern Water has outsourced chunks of its processes over the years. It renewed a managed service contract with outsourcing giant Capita back in 2018 for a cool £30m. The agreement saw Capita taking care of front and back-office duties for an initial five-year term with an option to extend for a further three years.

Perhaps fortunately for Southern Water, The Register understands Capita’s involvement with the utility is more to do with printing than external email. That said, Capita does have form with email snafus (as its Education Services tentacle will attest), so things might have turned out differently. ®


Article source: https://go.theregister.co.uk/feed/www.theregister.co.uk/2020/02/28/southern_water_phishing/

Educating Educators: Microsoft’s Tips for Security Awareness Training

Microsoft’s director of security education and awareness shares his approach to helping train employees in defensive practices.

RSA Conference 2020 – San Francisco – The process of developing and implementing a cybersecurity awareness program is tricky. How do you enforce regular trainings? How do you convince employees to change their behaviors? How do you teach best security practices when the people in your organization are using more applications and services on a daily basis?

“We’re asked to do a lot of things; we’re pulled in many directions,” said Ken Sexsmith, who heads up security education and awareness at Microsoft, in a session at this week’s RSA Conference. “We’re using the same technology, but we’re busy doing multiple things. We have to find ways to get people interested and motivated to do things differently than they’ve done.”

It’s no small task for a company with 250,000 employees and 441,000 Intune-managed devices hitting the network. Microsoft handles 630 billion authentication requests each month and hosts 1.04 million monthly Teams meetings, in addition to 1.23 million monthly Teams calls. It generates a terabyte of supply chain Internet of Things (IoT) data in a day and processes 128,000 Helpdesk chats.

“Ultimately what we’re trying to do is protect our data,” Sexsmith explained. “It’s no different from any other company.”

Microsoft started with a “digital security strategy” to give a sense of how education affected the organization. “Employees want to know how their work relates to the broader strategy,” he said. The approach covers assurance, identity management, device health, data and telemetry, information protection, and risk management, where education and awareness come into play.

“Humans are the firewall – the last line of defense for what we’re doing,” he added. Today’s attacks have shifted to the individual, with a stronger focus on credential phishing and identity-based threats. The adversaries’ level of sophistication has evolved, said Sexsmith, and the days of receiving emails with poor spelling and grammatical errors are over. “The attackers are getting smart just as we’re increasing our technology and becoming smarter,” he noted. 

Sexsmith took a deep dive into three aspects of Microsoft’s education and awareness program: role-based security and compliance training, awareness campaigns, and information platforms where best practices, education, information, and protection are shared via company intranets.

Employees across the business are required to take three internal training courses: Standards of Business Conduct, Security Foundations, and Privacy 101. Some trainings are role-dependent; for example, engineers are required to take a technical security training course called Strike.

Motivation Is Key
The key challenge is creating an engaging, relatable training course that effectively teaches employees the concepts they need to know, Sexsmith said.

Sexsmith pointed to a few tricks he uses in his programs. One of these is the “Social Proof Theory,” a social and psychological concept that describes how people copy other people’s behavior – if your colleagues are doing a training, you’ll do it, too. Gamification also helps: “People want to learn; people want to master skills, but there’s also a competitive nature around that,” he said. Some trainings use videos that make security concepts more accessible.

One problem, he said, is lessons that aren’t reinforced aren’t retained. Humans forget half of new information learned within an hour and 70% of new information within a day. “By lunchtime, you’re going to forget 50% of the stuff I’m up here saying,” he joked to his morning audience.

To fight this, Microsoft uses a training reinforcement platform called Elephants Don’t Forget to help employees build muscle memory around new concepts. During the gap between trainings, the program sends participants two daily emails with a link to questions tailored to the course. They have 60 seconds to respond to each question; if they get it wrong, they’re given more information on the topic. A customized dashboard shows their scores and progress over time.

“It’s all about the engagement and staying with that until you master it, and that’s when you become the most efficient,” said Sexsmith.

Then there is the application of concepts, which is done through phishing simulations. “This is where the rubber meets the road,” he added. Fake emails, designed to appear as though they come from Microsoft, give employees a chance to apply their new knowledge. They could click on an email and still have a chance to report it, or follow through and share their credentials.

In these test campaigns, Sexsmith blends personal and professional by creating themed phishing emails around Tax Day, spring cleaning, cybersecurity awareness month, and Black Friday. Cybercriminals do the same, and this helps employees gain a sense of different types of phishing emails they might see.



Article source: https://www.darkreading.com/risk/educating-educators-microsofts-tips-for-security-awareness-training/d/d-id/1337192?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Your phone wakes up. Its assistant starts reading out your text messages. To everyone around. You panic. How? Ultrasonic waves

Video: Voice commands encoded in ultrasonic waves can, best case scenario, silently activate a phone’s digital assistant, and order it to do stuff like read out text messages and make phone calls, we’re told.

The technique, known as SurfingAttack, was presented at the Network and Distributed Systems Security Symposium in California this week. In the video demo below, a handset placed on a table wakes up after the voice assistant is activated by inaudible ultrasonic waves. Silent commands transmitted via these pulses stealthily instruct the assistant to perform various tasks, such as taking a photo with the front-facing camera, reading out the handset’s text messages, and making fraudulent calls to contacts.

It’s basically a way to get up to mischief with Google Assistant or Apple’s Siri on a nearby phone without the owner realizing it’s you causing the shenanigans nor why it’s happening – if, of course, they hear it wake up and start doing stuff. It’s a neat trick that could be used to ruin someone’s afternoon or snoop on them, or not work at all. There are caveats. It’s just cool, OK.

[YouTube video demo]

Eggheads at Michigan State University, University of Nebraska-Lincoln, and Washington University in St Louis in the US, and the Chinese Academy of Sciences, tested their SurfingAttack technique on 17 models of gadgets; 13 were Android devices with Google Assistant, and four were iPhones that had Apple’s Siri installed.

SurfingAttack successfully took control of 15 of the 17 smartphones. Only Huawei’s Mate 9 and Samsung’s Galaxy Note 10+ were immune to the technique.

“We want to raise awareness of such a threat,” said Ning Zhang, an assistant professor of computer science and engineering at St Louis, on Thursday. “I want everybody in the public to know this.”

Here’s one way to pull it off: a laptop, located in a separate room from the victim’s smartphone, connects to a waveform generator via Wi-Fi or Bluetooth. This generator is near the victim’s phone – perhaps on the same table in the other room – and emits voice commands, crafted by the laptop, via ultrasonic waves. Technically, a circular piezoelectric disc placed underneath the table where the phone is resting emits the pulses from the generator.

The silent ultrasonic wave is propagated through the table to cause vibrations that are then picked up by the smartphone. The signals command the assistant on the phone to do things like “read my messages” or call a contact. A wiretapping device, also placed underneath the table, records the assistant and relays the audio back to the laptop to transcribe the response.
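The researchers’ exact tooling isn’t reproduced here, but the general signal trick behind inaudible-command attacks is to amplitude-modulate an audible voice command onto an ultrasonic carrier, which the non-linearity of the phone’s microphone then demodulates back into the audible band. A rough numpy sketch of that modulation step – illustrative frequencies, no transducer hardware, and a tone standing in for a real voice command – might look like this:

```python
# Rough sketch (not the researchers' code): amplitude-modulating a baseband
# "voice command" onto an inaudible ultrasonic carrier, the general idea used
# by inaudible-command attacks. Frequencies and modulation depth are illustrative.
import numpy as np

FS = 192_000           # sample rate high enough to represent a ~25 kHz carrier
CARRIER_HZ = 25_000    # assumed ultrasonic carrier, above human hearing
DURATION_S = 1.0

t = np.arange(int(FS * DURATION_S)) / FS

# Stand-in for a recorded or synthesized voice command (here: a 400 Hz tone).
baseband = 0.5 * np.sin(2 * np.pi * 400 * t)

# Classic AM: carrier scaled by (1 + m * baseband); microphone non-linearity
# recovers the baseband, so the assistant "hears" the command.
modulation_index = 0.8
carrier = np.sin(2 * np.pi * CARRIER_HZ * t)
ultrasonic_signal = (1 + modulation_index * baseband) * carrier

print(ultrasonic_signal.shape, round(float(ultrasonic_signal.max()), 3))
```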

Tiny little small caveat: you’ll need to imitate your victim’s voice

So here’s a catch: to activate someone’s smartphone, the attacker has to imitate or synthesize the victim’s voice.

Smartphone assistants are trained on their owners’ voices so they won’t respond to strangers. A miscreant therefore has to find a way to craft realistic imitations of the victim’s voice. That’s not too difficult with some of the machine-learning technology out there already; however, fiends will have to collect enough training samples of the victim’s voice for the AI to learn from. Qiben Yan, first author of the paper and an assistant professor of computer science at Michigan State University, told The Register the team used Lyrebird to mimic voices in their experiment.

Victims must have given Google Assistant or Siri permission to control their phones. The assistants can only perform a limited number of functions unless the user has already unlocked their phones. In other words, even if you can imitate a person, and send their device ultrasonic waves, the phone’s assistant may not be able to do much damage at all anyway.

For example, if a target has not toggled their smartphone’s settings to allow the digital assistant to automatically unlock the device, it’s unlikely SurfingAttack will work.


“We did it on metal. We did it on glass. We did it on wood,” Zhang said. SurfingAttack succeeded even when the device’s microphone was placed in different orientations on the table, and even when the circular piezoelectric disc and the wiretapping device were placed underneath a table with the phone 30 feet away.

The best way to defend yourself from these attacks is to turn off voice commands, or only allow assistants to work when a handheld is unlocked. Alternatively, placing your smartphone on fabric on a table would make it more difficult for the ultrasound signals to be transmitted.

Despite all these caveats, the academics reckoned SurfingAttack posed a serious potential threat. “We believe it is a very realistic attack,” Yan told El Reg. “The signal waveform generator is the only equipment which is bulky. Once we replace it with a smartphone, the attack device can be portable.

“One great advantage of SurfingAttack is that the attack equipment is placed underneath the table, which makes the attack hard to discover. For synthesizing victims’ voice, we have to capture victims’ voice recording. However, if we want to target a specific user, it doesn’t seem to have any problem in capturing the users’ voice commands or synthesizing them after recording the victims’ voice.

“Moreover, the Google Assistant is not very accurate in matching a specific human voice. We found that many smartphones’ Google Assistants could be activated and controlled by random people’s voices. Also, many people left their phones unattended on the table, which creates opportunity for the attackers to send voice commands to control their devices.” ®


Article source: https://go.theregister.co.uk/feed/www.theregister.co.uk/2020/02/28/smartphone_ultrasonic_hack/

Tense Talk About Supply Chain Risk Yields Few Answers

RSA panelists locked horns over whether the ban preventing US government agencies from doing business with Huawei is unfairly singling out the Chinese telecom giant.


RSA CONFERENCE 2020 – San Francisco – A tense discussion over supply chain risk management at this year’s RSA Conference highlighted ongoing questions, but offered few conclusions, about how the nation can ensure the safety of foreign-made tech products used by the US government in critical infrastructure.

The Wednesday session, titled “How to Reduce Supply Chain Risk: Lessons from Efforts to Block Huawei,” saw panelists spar at times over the realities of preventing security flaws from making their way into technology during the manufacturing process. The concern is primarily around the ability of other nations that produce technology to insert back doors that can later be used to launch an attack or collect intelligence.

At the center of it was the question about whether China-based Huawei is being unfairly singled out after the Trump administration in August banned US government agencies from doing business with the telecommunications equipment manufacturer. The rule now prohibits federal purchases of telecom and video surveillance equipment and services from five Chinese companies, including Huawei. The legality of that ban was upheld by a federal district court judge earlier this month.

Katie Arrington, chief information security officer for acquisition at the Department of Defense, who oversees supply chain risk management for the agency, noted the move was made for good reason.

“The recommendation was made to take Huawei out for a very specific reason. The law is the law,” she said. “Our job in the DoD is to make you safe. We are doing our best to buy down the risk. I don’t want to be in a world where I wake up one morning and the banks don’t work, and traffic lights don’t work and break down. I want to make sure that control remains here, where I can touch you.”

But Andy Purdy, CSO for Huawei Technologies USA, argued the ban was unfair given that many other companies based in other countries pose similar risks.

“Is it true or not true that at least five nations in the world have power to implant hidden functionality in hardware and software and launch an attack?” he said.

“That’s ridiculous,” Arrington retorted. “The bottom line is we are a democracy. We’re different. When you have a product from a country that can take over, run, manipulate the most critical things in our country, why would you not want to be sure that country has all the right philosophical endeavors, which they don’t.”

Kathryn Waldron, a fellow with the R Street Institute, argued that supply chain risk is context-specific and questioned whether kicking Huawei out is a good model for both national and supply chain security going forward.

“I think we need to have a much more holistic structure approach that looks at risk of moment, but [also] looks at what sort of policies we put in place that will have positive market growth and will provide market competitors,” she said.

The panel also included Bruce Schneier, security technologist, researcher, and lecturer at Harvard Kennedy School. Schneier challenged Arrington on several points around the discussion of how the supply chain is tied to national security, arguing that conflating the two creates confusion.

“Tying national security to trade policy makes for impossible security trade-offs. Either this is a national security issue, in which case there are things we do and don’t do, or this is a trade issue, in which case we negotiate on a variety of things,” Schneier said. “It cannot be both. It just doesn’t work.”  

Schneier also noted changing attitudes among government officials regarding device security: At one time US spy agencies were using vulnerabilities to their advantage to collect intelligence. But as other nations caught up in their own spying ability, now the US is more concerned about how they might exploit vulnerabilities. Ultimately, he said, supply chain security will continue to be what he called an “insurmountably hard problem.”

“Can we build a trustworthy network out of untrustworthy parts?” Schneier said. “I don’t know if the answer is ‘yes’ yet. We are going to be living in a world of untrustworthy parts.”



Article source: https://www.darkreading.com/edge/theedge/tense-talk-about-supply-chain-risk-yields-few-answers/b/d-id/1337180?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Government Employees Unprepared for Ransomware

Data shows 73% are concerned about municipal ransomware threats but only 38% are trained on preventing these attacks.

RSA CONFERENCE 2020 – San Francisco – Nearly 75% of government employees are concerned about the potential for ransomware attacks against cities across the United States, but only 38% of state and local government workers are trained in ransomware prevention, according to a new report.

The “Public Sector Security Research” study, conducted by IBM and The Harris Poll, surveyed 690 people who work for state and local agencies in the US. One in six said their department was affected by a ransomware attack. Despite this, half saw no change in preparedness from their employers. More than half (52%) of IT and security professionals polled said their budgets for handling cyberattacks have remained stagnant this year.

Some sectors are top of mind for ransomware threats. The study found 63% of respondents are worried a cyberattack could disrupt the 2020 elections. Most government employees place their local Board of Elections among the three most vulnerable systems in their communities.

Public education is another area of concern, ranking as the 7th most targeted industry, according to IBM’s X-Force Threat Intelligence Index, up from 9th the year before. Ransomware affected school districts in New York, Massachusetts, New Jersey, Louisiana, and other states in 2019. Forty-four percent of respondents from the public education sector said they didn’t have basic cybersecurity training; 70% hadn’t received sufficient training on how to respond to an attack. 




Article source: https://www.darkreading.com/attacks-breaches/government-employees-unprepared-for-ransomware/d/d-id/1337184?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Clearview AI Customers Exposed in Data Breach

Customers for the controversial facial recognition company were detailed in a log file leaked to news organizations.

Organizations including law enforcement agencies, retailers, and US Immigration and Customs Enforcement are among those whose credentials were exposed in a massive breach at Clearview AI, the facial recognition company recently in the news for collecting more than 3 billion facial images from social media.

The log files include records of some 2,900 institutions and provide details such as the number of searches, the date of the last search, and the total number of log-ins. While most of the organizations found in the file are government institutions, approximately 200 are companies primarily in the retail and hospitality industries.

Clearview AI has been controversial for its collection of images and is the subject of a number of queries and investigations from committees and regulators at various levels of government. It has also received a series of cease-and-desist letters from social media companies including Facebook, Google, and Twitter.




Article source: https://www.darkreading.com/attacks-breaches/clearview-ai-customers-exposed-in-data-breach/d/d-id/1337189?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Slickwraps data breach earns scorn for all

Slickwraps, a Kansas company that makes vinyl wraps for phones and other electronics, announced last week that it had suffered a data breach.

This was no ordinary data breach. This was a breach that earned the deep scorn of both the hacker – who was twice blocked by Slickwraps for reporting the vulnerability – and observers after some other hacker went ahead and exploited the company’s vulnerable setup.

The Verge, for one, called the breach and the aftermath “comically bad”. One of the commenters on The Verge’s story, trost79muh, had this to say about when a company with garbage security meets a bug reporter with an attitude:

The whole thing on both sides was clownshoes, when an unpiercably large ego meets an unfathomably dense IT staff.

The initial hacker – who calls themselves a white-hat security researcher – isn’t coming out of this smelling like roses either. Slickwraps was given little time to act on the vulnerability report, and the hacker then proceeded to run amok, gaining and exploiting root access and taunting the company, instead of clearly explaining the vulnerability.

The hacker who initially found Slickwraps’ vulnerability goes by the handle Lynx0x00. They recently posted an article to Medium (here’s the archived version) detailing how they pulled off the hack and how pathetic Slickwraps’ response was.

You can read the Medium post or The Verge’s writeup for all the gory details, but in essence, Hacker 1 –  Lynx0x00 – found a vulnerability on Slickwraps’ phone case customization page that would enable anyone with the right toolkit to upload “any file to any location in the highest directory on their server (i.e. the ‘web root’).”

From there, an attacker could get at current and former employees’ resumes (including their selfies, email addresses, home addresses, phone numbers and more) and backed-up customer photos (including porn), among many other things.
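The underlying flaw, as described, is an upload endpoint that lets client-supplied filenames land anywhere under the web root. As a hedged illustration (not Slickwraps’ code), a server-side guard along these lines – strip client-supplied path components, allow-list extensions, and confine the canonicalised target to a dedicated uploads directory – would have blocked it; the directory and extensions below are hypothetical.

```python
# Hypothetical server-side guard for a file-upload endpoint: confine uploads
# to a dedicated directory, strip any client-supplied path components, and
# allow-list extensions so scripts can't land in the web root.
import os

UPLOAD_DIR = "/var/www/uploads"                 # hypothetical, outside the web root
ALLOWED_EXTENSIONS = {".png", ".jpg", ".jpeg"}

def safe_upload_path(client_filename: str) -> str:
    name = os.path.basename(client_filename)    # drops "../" and directory parts
    ext = os.path.splitext(name)[1].lower()
    if ext not in ALLOWED_EXTENSIONS:
        raise ValueError(f"disallowed file type: {ext!r}")
    target = os.path.realpath(os.path.join(UPLOAD_DIR, name))
    if not target.startswith(os.path.realpath(UPLOAD_DIR) + os.sep):
        raise ValueError("path escapes the upload directory")
    return target

print(safe_upload_path("case-art.png"))          # accepted
# safe_upload_path("../../webroot/shell.php")    # raises ValueError
```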

Then, Hacker 2 came along, read the Medium post, exploited the vulnerability, and gang-emailed 377,428 email addresses from the company’s records using a hacked Slickwraps email address. Some customers shared the hacked email on Twitter:

The responses to this breach are all over the map, but they generally fall into two camps: contempt for Slickwraps, and contempt for the way that Hacker 1 and Hacker 2 handled disclosure by breaching the company – not exactly “white hat” behavior, that. Here’s one such critique from Reddit’s r/hacking forum:

[Image: Reddit screenshot of the r/hacking thread “White hat hacker: ‘I hacked SlickWraps. This is how.’”]
[All typos are sic] Theres just so much glaringly wrong with how this person went about this. This wasnt a “oh i found a vuln” this was an “i compromised their entire company, stole customer data and then failed to properly convey the severity”

tagging someone and telling them they failed a “vibe check” is a joke. no wonder noone at the company took the disclosure seriously. and then posting a complaint email and assuming the social media person would put 2 and 2 together that they have been compromised? also not the way to go about a breach report.

Last i checked a fairly common disclosure cycle is about 90 days, not the 7 this person gave them to figure out by vague twitter posts they had been compromised. If youre going to approach a company about your findings at least tell them you have something to disclose dont just tweet about “vibe checks” and then throw a hissy fit when they dont reply right away.

As far as the breached data goes, Slickwraps CEO Jonathan Endicott said in his announcement that the “Slickwraps Family” need not worry, as passwords and financial data are safe and weren’t involved in this breach.

The information did not contain passwords or personal financial data.

The information did contain names, user emails, and addresses. If you ever checked out as “GUEST”, none of your information was compromised.

However, some commenters said that their information was compromised in spite of having registered only as guests on the site.

In their Medium post, Lynx0x00 said that they used the vulnerability to access an extensive list of sensitive information:

  • All SlickWraps admin account details, including password hashes
  • All current and historical SlickWraps customer billing addresses
  • All current and historical SlickWraps customer shipping addresses
  • All current and historical SlickWraps customer email addresses
  • All current and historical SlickWraps customer phone numbers
  • All current and historical SlickWraps customer transaction history
  • Current SlickWraps API credentials for its email marketing service provider
  • Current SlickWraps API credentials for a number of the company’s credit card and payment handlers
  • Current SlickWraps API credentials for the company’s warehouse management system
  • Current SlickWraps API credentials for the company’s customer service platform
  • Current SlickWraps API credentials for the company’s official brand Facebook account
  • Current SlickWraps API credentials for the company’s official brand Twitter account
  • Current SlickWraps API credentials for the company’s official brand Instagram account

…all of which the hacker accessed only after exploiting the vulnerability to get remote code execution (RCE), decrypting the local config file, and finding the credentials to get into the company’s database.

Readers, do the actions and disclosure style of this “white-hat” hacker pass your “vibe test”? Is that how responsible disclosure works? I’m a “No” on both counts, but please, do tell us what you think.

Slickwraps says the exploit has been fixed, and it’s working hard to win back customer trust.



Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/MqNwwvLAhW0/

How one man could have flooded your phone with Microsoft spam

Microsoft has a neat web page that helps you get Outlook set up on your phone.

You can either scan in a QR code off the web page, which takes you to the relevant download link…

…or put in your phone number and get an SMS with the link in it:

Just like Italian security researcher Luca Epifanio, our first thought was, “What if someone decides to put in someone else’s phone number and then spam them over and over and over again?”

That would be pretty darned bad – bad for the recipient, whose phone would be swamped with unwanted text messages, and bad for Microsoft, who would look like shabby and unreconstructed spammers.

(It might also end badly for the person who dishonestly triggered all the spam in the first place, if ever they were found by law enforcement or the regulators, but that is an issue for another day.)

We tested it against our own phone number, using various browsers from various countries (we used the Tor proxy so we emerged onto the internet from semi-random places), and were happy to notice, as did Luca Epifanio, that after three messages, that was that.

Microsoft’s website will accept the number a fourth, fifth, sixth time, and so on, but simply and quietly stops texting it once it’s received three messages. (We don’t know how long it takes for the block to be lifted, but it certainly stopped us spamming ourselves at will.)

We tried to send many messages from various locations.
Only the first three showed up.

Well, Luca wondered just how robust Microsoft’s “same number” detection might be, and whether it could easily be bypassed.

Using a locally-installed web proxy, he snooped on his own web traffic to see what the data looked like on the way from his browser to Microsoft.

To his surprise, he found that by replaying the original web request with a non-alphabetic character at the end, such as a star (*) or a plus (+), he’d get three more goes at texting the number.

Then he could pick another character and get three more goes, and so on, allowing him to bypass the three-message limit at high speed, just by churning out new HTTP requests with a tiny modification each time.

Only the digits matter in the phone number to which the message gets sent, but – as Luca suggested in an email he sent us – it looks as though Microsoft’s “number verification” check was done with the extraneous characters included.

In other words, the number wasn’t being trimmed to its simplest correct form (you’ll see this called canonicalisation in the jargon) before it was logged, tested and used.

As a result, numbers that were the same in practice appeared different in theory, allowing the rate limit to be bypassed.
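We don’t know how Microsoft actually implemented its check, but the fix amounts to canonicalising the number before it’s used as the rate-limit key. Here’s a minimal Python sketch under those assumptions, with an in-memory counter and a made-up 555 number, so that cosmetic variations of the same digits all count against the same three-message quota:

```python
# Minimal sketch (not Microsoft's code): canonicalise the submitted number
# before using it as the rate-limit key, so variations such as a trailing
# "*" or "+" all count against the same three-message quota.
import re
from collections import defaultdict

MAX_MESSAGES = 3
sent_count = defaultdict(int)   # canonical number -> messages sent

def canonicalise(raw_number: str) -> str:
    return re.sub(r"\D", "", raw_number)   # keep digits only

def try_send_sms(raw_number: str) -> bool:
    number = canonicalise(raw_number)
    if sent_count[number] >= MAX_MESSAGES:
        return False                        # quietly drop, as Microsoft's page does
    sent_count[number] += 1
    return True                             # here we'd hand off to the SMS gateway

# All of these map to the same key, so only the first three go through.
for attempt in ["15551234567", "1 555 123 4567", "+15551234567",
                "15551234567*", "15551234567+"]:
    print(attempt, try_send_sms(attempt))
```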

This is a similar sort of problem to one that Google experienced back in 2017, when an adware app that falsely claimed to be from the vendor WhatsApp, Inc. was able to sneak past the Play Store validation checks simply by adding a space character to the company name.

Visually, you couldn’t tell the difference, so the new app looked legitimate, but programmatically the two company names were of different lengths and contained different characters – so the new app was not recognised as an imposter and was admitted anyway.

What to do?

The good news is that you don’t have to do anything – Luca reported this responsibly to Microsoft, who fixed the problem.

We tried adding redundant characters to our own phone number today, and were unable to send any messages after the third had gone through.

Luca also received a bug bounty payout, with the ultimate result that everyone ended up a winner.

We think that the lessons to learn are:

  • Bug hunting isn’t just about machine code hacking and reverse engineering. You don’t need to crack open a debugger and a disassembler to do useful and productive cybersecurity work.
  • Bugs can be deceptively simple. In this case, a single character that would typically be ignored was enough to bypass an important rate limit. If you’re a programmer, don’t forget to test for the obvious things as well as all those complex “corner cases” you need to deal with.
  • Responsible bug reporting really works. If you find bugs, it’s tempting to make a big splash by disclosing them for shock value in a blaze of glory, but as Luca has shown here, you can do the right thing, help everyone else, and still get recognition – without turning security holes into nightmares.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/SwdrGrXbCJw/