
Palo Alto Networks Said to Buy Twistlock

Reports in Israel-based business publications say Palo Alto Networks has reached a deal to purchase the container security startup, as well as another Israeli security startup.

Business publications in Israel are reporting that Palo Alto Networks has reached a deal to purchase container security startup Twistlock and another unnamed security company, also based in Israel.

According to the reports, Palo Alto Networks will pay between $450 million and $500 million for 4-year-old Twistlock, based in Herzliya, Israel.

The news, which has not yet been confirmed by either Palo Alto Networks or Twistlock, comes on the same day Palo Alto Networks announced its new Prisma cloud security suite. Prisma combines existing, renamed Palo Alto products and newly acquired offerings into a comprehensive set of products intended to provide complete security for a customer’s cloud infrastructure.

Analysts expect the Twistlock deal and the other rumored acquisition to be significant sources of questions on Palo Alto Networks’ quarterly earnings call, scheduled for later today.



Article source: https://www.darkreading.com/application-security/palo-alto-networks-said-to-buy-twistlock/d/d-id/1334831?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Don’t Just Tune Your SIEM, Retune It

Your SIEM isn’t a set-it-and-forget-it proposition. It’s time for a spring cleaning.

Any security information and event management (SIEM) system will require a good deal of customization out of the box before the organization deploying it will see useful data. Once set up, though, the SIEM quickly becomes an invaluable tool in the engineer’s toolbox for identifying risks, trends, malicious activity, and even simple misconfigurations. But how many organizations treat the SIEM as a static solution and simply “set and forget” it?

The reality is that our environments are constantly evolving: data types, endpoints, subnets, compliance requirements, technology and business risk, and other variables we draw upon when we set up the SIEM initially are all dynamic. Of course, in larger organizations there are likely to be teams dedicated to maintaining the SIEM that remain aware of these changes and adjust as necessary. But in smaller organizations, we tend to get bogged down in day-to-day activities that prevent us from capturing all these changes. After all, the shortage of cybersecurity staff is well known and shows no sign of easing. Over time, alerts become stale and incomplete if these changes are neglected, eventually leading to the infamous alert fatigue.

If you find yourself in the latter category, consider treating your SIEM updates like a spring cleaning exercise this year: set aside one or two long days to review all your alerts, rules, variables, and other criteria in the SIEM, work out what is still relevant, what is not, and what has changed, and consider items such as the following (a scripted cross-check, sketched after the list, can help with several of them):

  • Have any application updates for the SIEM been released but not applied? Hopefully not, but take this opportunity to confirm and patch your system to the most current, stable release.
  • Has the network team stood up any new subnets or equipment, such as DMZ environments, to or from which traffic should be monitored?
  • Has the systems team deployed new servers whose logs should be monitored?
  • Have your developers released any new applications that should be incorporated?
  • Have your endpoints started using any new software (operating systems or productivity applications) that should be monitored?
  • Have any applications or services moved to new servers and/or subnets?
  • Have any new special access privileges been introduced (vendors and contractors, for example)?
  • Have any new users been granted elevated privileges?
  • If your organization uses file integrity monitoring or user and entity behavior analytics, is this configured across all nodes where necessary and is it monitoring all the right metrics?
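
Several of the inventory-style questions above lend themselves to a quick scripted cross-check. The minimal Python sketch below is not tied to any particular SIEM; the file names and the "hostname" column are assumptions, standing in for whatever export format your asset inventory and your SIEM’s log-source list actually use.

    #!/usr/bin/env python3
    """Spring-cleaning helper: flag assets that are not sending logs to the SIEM.

    Assumptions (adjust to your environment): the asset inventory and the SIEM's
    log-source export are both CSVs with a 'hostname' column. Neither the file
    names nor the column name come from any particular product.
    """
    import csv

    def hostnames(path: str) -> set[str]:
        """Return the lowercased set of hostnames found in a CSV export."""
        with open(path, newline="") as fh:
            return {row["hostname"].strip().lower() for row in csv.DictReader(fh)}

    inventory = hostnames("asset_inventory.csv")      # what exists on the network
    log_sources = hostnames("siem_log_sources.csv")   # what the SIEM actually ingests

    missing = sorted(inventory - log_sources)         # assets with no log source
    stale = sorted(log_sources - inventory)           # SIEM entries for retired assets

    print(f"{len(missing)} assets are not reporting to the SIEM:")
    for host in missing:
        print("  ", host)

    print(f"{len(stale)} SIEM log sources no longer match an inventory asset:")
    for host in stale:
        print("  ", host)

Anything in the first list is a candidate for onboarding; anything in the second is a candidate for retiring a stale rule or alert.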

Before moving on, consider one more often-overlooked question: Has the business’s risk appetite toward any of the above items changed? What was considered low risk a year ago may be high risk today, or vice versa. Having a good grasp of what is important to the business will pay dividends toward knowing what is critical and what is not — and keep in mind that what is critical to us as security professionals may not always seem critical to the business.

In the interest of being thorough, approach the review as if it were an out-of-the-box setup. Don’t assume a rule or alert is still valid just because you remember setting it up and can’t think of anything that has changed. That’s the equivalent of not vacuuming under the couch because nothing should be under there; moving the couch usually disproves that theory.

Just as you would approach spring cleaning your home, overturn the dusty items and look in all the dark corners for anything that could be working better. Engage your SIEM vendor if you can — most will offer a periodic review with your account manager to look over your shoulder and offer guidance on what you may be doing right or wrong, what you could be doing better, and any useful tips or tricks they’ve seen.

This should not be an evaluation of what else to add to your plate but an opportunity to ensure the information you are receiving is timely, accurate, and relevant. In the end, you should be left with a squeaky-clean environment and a SIEM that behaves as well as it did on Day 1.


Robin Hicks has spent the last 10 years in various TechOps roles, with the last four dedicated to information security. He specializes in network security and secure network design, audit, and GRC. He holds an undergraduate degree in Information Technology with a Security …

Article source: https://www.darkreading.com/perimeter/dont-just-tune-your-siem-retune-it/a/d-id/1334781?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Impersonation Attacks Up 67% for Corporate Inboxes

Nearly three-quarters of organizations hit with impersonation attacks experienced direct losses of money, customers, and data.

Email-based cyberattacks are on the rise, and confidence in security defenses is falling as businesses grapple with greater losses from email-borne threats, researchers report.

Sixty-one percent of 1,025 IT decision makers polled in Mimecast’s “State of Email Security” report, conducted by Vanson Bourne, say it’s likely or inevitable their businesses will suffer the negative effects of emailed cyberattacks this year. Their concerns aren’t unfounded: Impersonation, phishing, and insider threats have all become more common in the past year.

The majority (94%) of organizations say they were hit with a phishing attack in 2018, researchers found, and 54% noticed an increase in the volume of phishing attacks. Nearly half (45%) reported a rise in targeted spear-phishing emails packing malicious links, and 53% suffered a ransomware attack that directly affected business operations – up 26% from a year ago. Of the ransomware victims, 86% say they experienced at least two days of downtime; three days was the average.

Impersonation and business email compromise (BEC) attacks were up among 67% of respondents, and 73% of those hit with such attacks say they experienced direct resulting losses. Members of the C-suite, human resources, and finance teams are likely to be impersonated, according to the report.

“You’d think they’d be imitating financial officers, but they don’t necessarily have to,” says Joshua Douglas, vice president of threat intelligence at Mimecast. These are all high-ranking employees with intrinsic trust throughout the organization, he explains, and they have access to data – specifically financial data – that attackers want. They can imitate someone in HR to get bank accounts changed on payroll, or they can pose as a CEO to request gift cards. If an intruder has the authority of a trusted figure, he can shift the dynamic in social engineering schemes.

How much damage can an impersonation attack cause? Quite a bit, researchers found. Once an adversary gets a foot in the door, he can obtain credentials and transition to lateral movement. By mimicking a legitimate employee, the attacker gets more trust from other people throughout the organization, Douglas says. This makes the attacker even harder to detect.

Nearly 40% of respondents say they experienced data loss as a result of email-based impersonation attacks, 29% reported direct financial loss, 28% lost customers, and 27% admitted some employees lost their jobs. Still, 38% of those who suffered losses say data was the most valuable thing they lost.

“If you look at the loss of data, that’s the hardest for any company to recover from,” Douglas says. “Once that data’s out, it’s out.” If a rival business gains access to a victim’s intellectual property, they can mimic a product. Loss of user data causes trust issues and customer loss.

Insider threats are another growing trend, he continues, but most employees don’t know when they’re doing something wrong. “Most insider threat activities are accidental,” Douglas explains. “There’s a small portion that are, in fact, malicious,” and those are tough to identify.

Security mishaps are common but can be prevented with awareness training if conducted properly. Most awareness training (62%) happens in a group session; other popular methods include interactive videos highlighting best and worst practices (45%), formal online testing (44%), reference lists of tips (44%), and one-on-one training sessions (44%), researchers report. In large training groups, interest quickly wanes and employees fail to pay attention to content.

Douglas recommends “microtraining,” or short intervals during which employees are engaged. Rather than doing a long cybersecurity training course a few times a year, businesses should instead adopt a system that regularly reminds employees of best cybersecurity practices.


Kelly Sheridan is the Staff Editor at Dark Reading, where she focuses on cybersecurity news and analysis. She is a business technology journalist who previously reported for InformationWeek, where she covered Microsoft, and Insurance Technology, where she covered financial …

Article source: https://www.darkreading.com/perimeter/impersonation-attacks-up-67--for-corporate-inboxes/d/d-id/1334834?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Researchers uncover smart padlock’s dumb security

Remember the Balboa Internet of Things (IoT) hot tub whose security was so dire it allowed researchers to remotely tweak important settings via the internet?

A few months on, the researchers behind that exposé, Pen Test Partners, have turned their attention to another incarnation of the same IoT theme: a ‘smart’ Bluetooth padlock made by Chinese company Nokelock (not to be confused with the unrelated company Nokē).

While Nokelock might not jump out as a household name, its smart padlocks feature prominently on Amazon.com for around $40 (£30) – including one rated ‘Amazon’s Choice’ – as well as under a range of other brand names.

Obviously, the point of a traditional padlock is to stop anyone who doesn’t have a key from unlocking it. In the case of the Nokelock, the function of the key is performed by a fingerprint reader built into the shackle that is configured using a smartphone app.

This convenience means that lots of users can be enrolled to use it without having to hand out keys that cost a lot to copy and might get lost.

Unfortunately, says Pen Test Partners, the Nokelock and its API also come with some major security flaws that prospective owners might like to know about before they stump up their cash.

Such as the ability to:

  • Unlock the Nokelock within a range of 10m without needing to know anything about the registered account.
  • Discover the owner’s information from the Nokelock database, including the email address and password hash.
  • Discover the lock’s location from its GPS coordinates.
  • Assign the lock to another account, locking owners out of their Nokelock.

Frankly, it’s hard to imagine a more damning list of vulnerabilities for a security lock, compounded by the fact these flaws are now in the public domain as proofs of concept.

Even more concerning is the fact that this is not the first smart padlock Naked Security has covered that has glaring weaknesses (see previous coverage of the eerily similar Tapplock from last year), which hints at wider development problems in this category of product.

The pwn

Communication with the Nokelock happens via the Bluetooth Low Energy (BLE) protocol, which is encrypted using AES.

However, the researchers found the key could be obtained either by getting hold of an API token (supplied when creating a new user with a temporary email address) or by getting the lock to respond to a getDeviceInfo call, which helpfully returned the key. Meanwhile, the API calls were sent over plain HTTP with no authentication applied, which could:

Allow an attacker to read information about a user or lock, including email address, password hash and the GPS location of a lock.

The user password hash was also stored as unsalted MD5, a crushingly obsolete hashing algorithm that makes security people groan when they encounter it.
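
To see why unsalted MD5 makes security people groan, consider this small Python sketch (purely illustrative, and in no way Nokelock’s code): identical passwords produce identical hashes, and because MD5 is designed to be fast, a leaked hash can be tested against a wordlist almost instantly.

    import hashlib

    # Two users who pick the same weak password get identical unsalted MD5 hashes,
    # so cracking one hash unlocks every account that shares it.
    stored = {
        "alice": hashlib.md5(b"sunshine1").hexdigest(),
        "bob":   hashlib.md5(b"sunshine1").hexdigest(),
    }
    print(stored["alice"] == stored["bob"])   # True: no salt, no per-user uniqueness

    # A miniature dictionary attack: hash candidate passwords until one matches.
    # Real wordlists run to millions of entries, and MD5 is fast enough to try
    # them all in seconds on ordinary hardware.
    wordlist = ["password", "123456", "qwerty", "sunshine1"]
    for guess in wordlist:
        if hashlib.md5(guess.encode()).hexdigest() == stored["alice"]:
            print("cracked:", guess)
            break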

Wall of silence

When Pen Test Partners tried to disclose these weaknesses to Nokelock, it was met with… no response at all. Writes David Lodge:

So, let’s get on to the fun bit: the API. We found a number of vulnerabilities in the API, for which we tried to disclose to the vendor, from January 2019, through many mechanisms, including email, phone and WeChat. We even tried to get a Mandarin speaker to talk to them.

That’s damning because it suggests that the company is unable or unwilling to fix the problems in its products despite them being on sale.

But doesn’t revealing these flaws make the locks less secure?

That argument is called security by obscurity and it’s based on the fallacy that nobody else will find out that the weaknesses exist.

On the contrary, the ethos of responsible disclosure demands that companies are given time to fix the flaws or to agree to a timescale by which that will be achieved. If researchers receive no response despite their best efforts, it is their job to make the wider world aware of their findings. Cautions Lodge:

Even though the idea of a Bluetooth padlock is a great one, I cannot advise anyone to use a Nokelock (or clone) and expect their stuff to be safe.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/lytxdK-aJXA/

Three tech-support scammers charged with ripping off the elderly

Three alleged tech-support scammers have been charged with bilking the elderly out of at least $1.3 million for tech support services they didn’t need and never got.

The US Attorney’s Office for the Southern District of New York announced on Friday that the three had been arrested the day before.

According to a complaint filed by FBI Special Agent Carie Jeleniewski, the trio would allegedly cold-call their victims, running through the standard tech support scammer’s ruse of claiming to be from one of the big computer companies and warning the victims that their computer was infected with a virus. This went on for years, starting in at least 2013 and continuing up until this month.

In fact, while investigators were interviewing one of the defendants, Gurjet Singh, at his home in Queens, New York, a carrier truck pulled up to deliver a check made payable to NY IT Solutions Inc. – one of the companies the alleged fraudsters set up to deposit money mailed in by their victims. According to the criminal complaint, Singh had been in the midst of explaining to officers that he collected checks and then wired the money to Gunjit Malhotra, from India. Singh’s cut of the allegedly swindled funds: 8%.

The defendants are Malhotra, 30, of Ghaziabad, India; Singh, 22, of Queens, New York, and Jas Pal, 54, also of Queens. They’ve each been charged with one count of conspiracy to commit mail fraud, which carries a maximum sentence of 20 years in prison. They’ve each also been charged with one count of conspiracy to access a protected computer in furtherance of fraud, which carries a maximum sentence of five years in prison. Maximum sentences are rarely handed out.

Singh was also charged with aggravated identity theft, which carries a mandatory minimum sentence of two years in prison.

You have a virus! That will be $662.99, please

The first victim described in the complaint reported to police in March 2018 that her computer stopped working. She got a pop-up that instructed her to call a phone number for repairs, so she did. She was connected to someone who claimed to be from a well-known tech company and who told her that she needed extra security. A private carrier would drop by her New Jersey home and pick up a check for approximately $662.99, she was told.

Then, somebody remotely took over her computer and, purportedly, “repaired” it.

When the woman gave investigators her supposedly infected, supposedly repaired computer, they saw no repairs. What they did see was a “Google search engine” saved to its desktop… except that it wasn’t a search engine. Rather, when investigators accessed it, it popped up the phone number to call for “repairs.”

The scammers got to other victims by cold-calling their targets and telling them their computers were infected with viruses. Sometimes, they laid it on thick by throwing in the specter of Russian hackers who had planted “multiple viruses.” You’d best download our software so we can remotely “fix” your problem, the victims were told. Plus, we should set up a service plan in order to keep this “service” (and, of course, the victim’s checks) coming.

Sometimes, the victims called the scammers, after having hit upon fake tech support ads that came up when they ran searches for help.

The fees varied – $225, $350, $399, or 5-year “plans” for $799.99. One victim sent in eight or so checks that added up to the shocking sum of $65,810… and then sent in yet another 10 checks that totaled about $70,805.

The typosquatting/malvertising tool kit

Pop-up windows, cold calls, malvertising and fake ads are all well-known tools in the tech-support scammer’s kit. In 2017, researchers at Stony Brook University rigged up a robot to automatically crawl the web searching for tech support scammers and to figure out where they lurk, how they monetize the scam, what software tools they use to pull it off, and what social engineering ploys they use to weasel money out of victims.

What they found describes how the victims in this recent swindle got caught.

They found that users often get exposed to these scams via malvertising that’s found on domain squatting pages: the pages that take advantage of typos we make when typing popular domain names. For example, a scammer company will register a typosquatting domain such as twwitter.com.
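
As a purely defensive illustration (this is not the Stony Brook researchers’ tooling), a few lines of Python can enumerate the simplest typo variants of a brand’s domain (doubled letters, dropped letters, swapped neighbours) so defenders can check whether any of them have been registered:

    def typo_variants(domain: str) -> set[str]:
        """Generate simple typo permutations of a domain's first label,
        for defensive checks on look-alike registrations."""
        label, _, rest = domain.partition(".")
        variants = set()
        for i, ch in enumerate(label):
            variants.add(label[:i] + ch + ch + label[i + 1:])   # doubled letter: twwitter
            variants.add(label[:i] + label[i + 1:])             # dropped letter: twtter
            if i < len(label) - 1:                              # swapped neighbours: twtiter
                swapped = list(label)
                swapped[i], swapped[i + 1] = swapped[i + 1], swapped[i]
                variants.add("".join(swapped))
        variants.discard(label)                                 # ignore the genuine name
        return {f"{v}.{rest}" for v in variants}

    print(sorted(typo_variants("twitter.com"))[:10])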

Studies have shown that visitors who stumble into the typosquatting pages often get redirected to pages laced with malware, while a certain percentage get shuffled over to tech support scam pages.

Once there, a visitor is bombarded with messages saying their operating system is infected with malware. Typically, the site is festooned with logos, trademarks, and mimicked user interfaces from well-known software and security companies.

A popular gambit has been to present users with a page that mimics the Windows blue screen of death.

The frequency of fake blue screens of death has over the years turned “Microsoft” into a red-alert word. According to Microsoft’s 2018 global survey, three out of five Windows users had encountered a tech support scam in the previous year. The number’s dropping, Microsoft said, but not fast enough: the scams are still going strong, targeting all ages and all geographies.

And no, you’re not immune from the siren call of tech support scammers if you don’t use Windows. The wolves pull on plenty of other sheepskins, such as pretending to be calling from Apple or other big-name tech companies, and festooning their sites with such companies’ logos.

But Microsoft has waged a particularly long-drawn-out battle, having been at war with these scammers since 2014, when it dragged multiple US companies into court. That’s also when it began to collect customer complaints about the scams via its Report a technical support scam portal.

What to do

Many elders are sitting ducks for these fraud slingers. Two years ago, when the Federal Trade Commission (FTC) launched a crackdown on tech support scammers, it released a 48-minute scam call featuring an actor portraying one of these scammers’ preferred prey: a tentative, gullible, easily sweet-talked, elderly man.

As part of its Operation Tech Trap – a broad crackdown on tech support scams both in the US and elsewhere – it passed along these tips on what to do if you get an unexpected tech-support call or pop-up:

  • Hang up on callers. They’re not real tech-support staffers. And don’t rely on caller ID to prove who a caller is. Criminals can spoof calls to make it seem like they’re calling from a legitimate company or a local number.
  • If you get a pop-up message that tells you to call tech support, ignore it. While there are legitimate pop-ups from your security software to do things like update your operating system, you shouldn’t call a number that pops up on your screen in a warning about a computer problem.
  • If you’re concerned about your computer, call your security software company directly – but don’t use the phone number in the pop-up or on caller ID. Instead, look for the company’s contact information online, or on a software package or your receipt.
  • Never share passwords or give control of your computer to anyone who contacts you. Doing so leaves your computer open to malware downloads and backdoors.
  • Get rid of malware. Update or download legitimate security software and scan your computer. Delete anything the software says is a problem.
  • Change any passwords that you shared with someone. Change the passwords on every account that uses passwords you shared.
  • If you paid for bogus services with a credit card, call your credit card company and ask to reverse the charges. Check your statements for any charges you didn’t make, and ask to reverse those, too. In the US, report it to ftc.gov/complaint.

Tips are all well and good for those of us who have the wherewithal to absorb them. But the elderly, all too often, don’t have that capacity.

With great power comes great responsibility: If you’re one of the tech-literate, please do keep an ear out for any friends, relatives and neighbors who get flustered with technology and bewildered by pop-ups. Let’s all try our best to protect loved ones from scammers who are more than happy to sweet-talk or techno-babble-bedazzle their life’s savings out of them.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/T6IrRsq5NpY/

New research generates deepfake video from a single picture

You’ve all seen the deepfake video of a digital Barack Obama sockpuppet controlled by Jordan Peele, but we bet you haven’t seen an animated video of the Mona Lisa talking before. Well, thanks to the magic of AI, now you can.

Deepfake AI produces realistic videos of people doing and saying fictitious things. It’s been used to create everything from fake celebrity porn through to creepy video amalgams of Donald Trump and Nick Cage.

According to the team at Samsung Research’s Moscow-based AI lab, the problem with existing deepfakes is that the convolutional neural networks they train munch through a huge amount of material. When it comes to deepfakes, that means either lots of photos of the target, or several minutes of video footage.

That’s fine if you’re mimicking a public figure, but it’s problematic if you don’t have that much footage. The Samsung AI researchers came up with an alternative technique that let them train a deepfake using as little as a single still image, in a technique they call one-shot learning. The quality improves if they use more images (few-shot learning), they say, adding that even eight frames can create a marked improvement.

The technique works by conducting the heavy training on a large set of videos depicting different people. This stage, which the researchers call ‘meta-learning’ in their paper, helps the system identify key facial ‘landmarks’ that it can then use as anchors when creating deepfake videos of new targets.
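
To make the two-stage idea concrete, here is a toy PyTorch sketch of the general few-shot pattern: broad pre-training across many identities, then a brief adaptation pass on a handful of frames of a new face. It only illustrates the training flow; the paper’s actual system uses an embedder, a generator and a discriminator trained with adversarial and perceptual losses, and the random tensors below are stand-ins for real landmark and frame data.

    import torch
    import torch.nn as nn

    # Toy stand-in: the real model maps facial-landmark images to photo-realistic
    # frames; a tiny MLP and random tensors just illustrate the training flow.
    generator = nn.Sequential(nn.Linear(68 * 2, 256), nn.ReLU(), nn.Linear(256, 64 * 64))

    # Stage 1: "meta-learning" on a large corpus of talking-head videos.
    meta_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
    for _ in range(100):                      # many identities, many frames
        landmarks = torch.randn(32, 68 * 2)   # stand-in landmark coordinates
        frames = torch.randn(32, 64 * 64)     # stand-in ground-truth frames
        loss = nn.functional.mse_loss(generator(landmarks), frames)
        meta_opt.zero_grad()
        loss.backward()
        meta_opt.step()

    # Stage 2: few-shot adaptation to a new face from K frames (even K=1).
    K = 8
    new_landmarks = torch.randn(K, 68 * 2)
    new_frames = torch.randn(K, 64 * 64)
    fine_opt = torch.optim.Adam(generator.parameters(), lr=1e-4)
    for _ in range(50):                       # brief fine-tuning pass
        loss = nn.functional.mse_loss(generator(new_landmarks), new_frames)
        fine_opt.zero_grad()
        loss.backward()
        fine_opt.step()

    # The adapted generator can now be driven by landmarks from any source video.
    driven = generator(torch.randn(1, 68 * 2))
    print(driven.shape)   # torch.Size([1, 4096])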

What does one-shot or few-shot learning mean for deepfakes? It means that you can animate paintings. In their video explaining the technique, the researchers bring the Mona Lisa to life:

(Watch directly on YouTube if the video won’t play here.)

They also animate Marilyn Monroe and Salvador Dali. Perhaps we’ll see this kind of thing appearing in movies soon?

The researchers also see a bright future for this technology in virtual reality (VR) or augmented reality (AR) applications. We might see these AI-generated images replace the blocky avatars that plague current VR, for example. They say:

In future telepresence systems, people will need to be represented by the realistic semblances of themselves, and creating such avatars should be easy for the users.

This few-shot training also has darker implications, though. It makes it easier for attackers to produce new deepfakes, even if the target isn’t very prominent and doesn’t have much existing footage. This could include an executive VP at a company whose share price you want to move, or a mid-level government official.

We don’t condone that, and neither do the researchers, who point out that every new multimedia technology creates opportunities for misuse:

In each of the past cases, the net effect of democratization on the World has been positive, and mechanisms for stemming the negative effects have been developed. We believe that the case of neural avatar technology will be no different. Our belief is supported by the ongoing development of tools for fake video detection and face spoof detection alongside with the ongoing shift for privacy and data security in major IT companies.

What this shows us is that AI is evolving at breakneck speed, and we should expect more convincing deepfakes as the barrier to entry falls.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/lKDQ9r5S9jY/

Guilty of hacking in the UK? Worry not: Stats show prison is unlikely

Analysis Nearly 90 per cent of hacking prosecutions in the UK last year resulted in convictions, though the odds of dodging prison remain high, an analysis by The Register has revealed.

Government data from the last 11 years revealed the full extent of police activity against cybercrime, with the number of prosecutions and cautions for hacking and similar offences being relatively low.

Figures from HM Courts and Tribunals Service revealed there were a total of 422 prosecutions brought under the Computer Misuse Act 1990 (CMA) over the last decade, with the figure rising to 441 including the year 2007.

Criminals convicted of CMA offences were quite likely to avoid prison in 2018, with just nine (including young offenders sent to youth prisons) receiving custodial sentences out of 45 convictions. Among those were Mustafa Ahmet Kasim, the first person ever to be prosecuted under the CMA by the Information Commissioner’s Office. A further dozen CMA convicts received suspended sentences in 2018.

Between 2008 and 2018, 79 people – 24 per cent of the total prosecuted in that period – were found not guilty at court or otherwise had their cases halted. Of the guilty, 16 per cent were given immediate custodial sentences. That figure rises to 45 per cent if suspended sentences are included.

The CMA is the main statute used to prosecute hackers, as well as some data-related crimes such as securing unlawful access to computers and their contents.

The odds of getting off with a police caution instead of a full-blown prosecution for a CMA offence were exactly 50:50 in 2018, with 51 cautions being issued as well as 51 criminal court cases. In those 51 prosecutions, 45 defendants were found guilty, a rate of around 90 per cent – slightly above the usual average across all criminal offences of around 75-80 per cent of prosecutions resulting in a guilty verdict.

[Chart: Computer Misuse Act 1990 cautions and charges, 2007-2018]

The 2013 jump in prosecutions could be explained by that being the statistical year after Theresa May, as Home Secretary, withdrew her extradition order against accused hacker Gary McKinnon, signalling a greater willingness to prosecute at home rather than extradite.

Among the lucky six to be found not guilty in 2018 or who otherwise had their cases stopped was Crown court judge Karen Jane Holt, aka Karen Smith, whose prosecution under the Computer Misuse Act was halted by order of another Crown court judge.

The most common range of fines fell between £300 and £500, with one criminal having been fined more than £10,000 last year – the only one to be so punished since 2012. In general, around five fines were issued per year for the last 11 years.

In the 11 years’ worth of data analysed by The Register, just one person got away free with an absolute discharge from court (in 2017) after being found guilty. A maximum of six people per year received conditional discharges, with last year featuring just three. Community sentences accounted for a total of 95 disposals from court since 2007, with 15 of those having been handed out last year.

Don’t worry about rotting behind bars

Even when a prison sentence was handed down by judges, the duration was relatively short. Over the past decade, the most frequent sentence lengths fell into the 6-9 month and 18-24 month bands. Current UK sentencing laws automatically halve prison sentences in favour of release on licence, with release from prison usually coming a little earlier still than half the headline figure, as a criminal barrister explains on his website.

The figures could be interpreted to show that Britain is a relatively forgiving jurisdiction for computer hacking crimes, something this story advertising an IT security startup staffed with young grey hats may or may not bear out.

Not every CMA prosecution is started because of hacking, it is important to note, though the law is often used for hacking cases. In 2017, a former Harrods IT worker, Pardeep Parmar of Hitchin, pleaded guilty to a CMA offence after being let go from the posh department store and taking his work laptop to a local computer shop, asking to have it taken off their domain. Similarly, a former Santander bank clerk, Abiola Ajibade of Consort Road, Southwark, pleaded guilty after having been caught accessing and sending customers’ details to her then boyfriend.

“Cybercrime has become accepted as a low-risk, potentially high-reward activity for organised criminals. If they act professionally, they can make substantial sums of money with very little chance of being caught,” opined Richard Breavington of law firm RPC, which also obtained some of the data. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/05/29/computer_misuse_act_prosecutions_analysis/

Infosec bloke claims: Pornhub owner shafted me after I exposed gaping holes in its cartoon smut platform

An irate infosec researcher has accused Pornhub owners Mindgeek of out-of-scoping what he described as “critical” vulns in a cartoon pornography-themed mobile games site.

John Sawyer, a mobile app security specialist, had a poke around some of the APKs (Android application packages) listed on Nutaku, a highly NSFW Mindgeek site dedicated to free browser games featuring lots of – well, there’s no other way to put this – bonking Japanese-style cartoon characters.

In a Reddit AMA (ask me anything), a Nutaku functionary described the site as “a distribution platform much like GooglePlay, Steam, the AppStore” for adult-themed apps, though it was the APK for Nutaku itself that Sawyer was examining.

Sawyer was not impressed with what he found, telling The Register that he uncovered a slack handful of remote code execution (RCE) vulns, weak password hashing, sending login credentials over plain HTTP (no S), credentials ending up in logfiles and more.

He reported these to Nutaku, which directed him to the Pornhub bug bounty scheme. Even so, Sawyer said, Mindgeek didn’t take them seriously – to the point where some of the bugs were declared out of scope of its bounty scheme after submission and so not eligible for a payout.

“Technically, they’re right,” he conceded. At the time of writing the Pornhub HackerOne entry states: “The scope of this program is limited to security vulnerabilities found on the Pornhub and Pornhub Premium websites as well as in the Pornhub Android application. Vulnerabilities reported on other properties or applications are currently not eligible for monetary reward.” It does add: “High impact vulnerabilities outside of this scope might be considered on a case by case basis.”

Sawyer told us that RCEs ought to be patched, whether or not they’re declared as out of scope. He also added that he had made it clear from the outset that he wasn’t chasing the bug bounty cash, and had asked for that to be sent to a charity rather than dropped into his pocket.

“You don’t need full control of the network or device to grab credentials, you just need to be on the same network,” he said.

The researcher’s concern was not isolated. Others have mentioned on Twitter that they have had dissatisfying experiences with Mindgeek’s bug bounty scheme.

Mindgeek, owners of Nutaku and Pornhub, as well as the future operators of a large chunk of Britain’s upcoming porn ID card scheme, told El Reg it “takes the security of its users very seriously.

“We review each submission to our bug bounty programs manually and reward them according to their severity. In this case, the security researcher submitted reports that we consider out-of-scope as per the rules published publicly on the program’s pages,” adding that even if it had misidentified some of the submitted reports, “the outcome would have been the same nonetheless”.

Mindgeek’s spokesman added: “None of the reports regarding the Android APK for Nutaku demonstrated a means to remotely capture login credentials without having full control of the user’s device and or its network connection.” ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/05/29/nutaku_app_mindgeek_infosec_bug_bounty/

News aggregator app Flipboard breached: All passwords reset after hackers pinch user data

News aggregation app Flipboard has publicly confessed that hackers accessed personal data about its members.

Although the company did not say how many customers had been affected, the app has been installed more than half a billion times, according to its Google Play Store listing.

The databases that got away, according to a Flipboard statement, included account credentials, names, hashed and salted passwords, and email addresses. Some of these were SHA-1 hashed, while those created after March 2012 were hashed and salted with the more modern and tougher-to-crack bcrypt function.
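
For the curious, here is a small illustrative Python comparison (not Flipboard’s code) using the third-party bcrypt package: SHA-1 is fast and carries no salt of its own, while bcrypt bakes a random salt into every hash and lets the work factor be raised as hardware gets faster.

    import hashlib
    import bcrypt  # third-party: pip install bcrypt

    password = b"correct horse battery staple"

    # Legacy-style SHA-1: fast and unsalted, so identical passwords collide and
    # offline guessing is cheap once the hash database leaks.
    print(hashlib.sha1(password).hexdigest())

    # bcrypt: a random salt is embedded in every hash, and the work factor
    # (rounds) makes each guess deliberately slow for an attacker.
    hashed = bcrypt.hashpw(password, bcrypt.gensalt(rounds=12))
    print(hashed)                                  # different on every run
    print(bcrypt.checkpw(password, hashed))        # True
    print(bcrypt.checkpw(b"wrong guess", hashed))  # False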

The app’s makers do not collect financial data or government ID card information.

Flipboard is a news aggregator. Rather than visiting your favourite news website and reading their glorious headlines, beautiful stock images and cutting-edge captions the way the gods of journalism intended, Flipboard allows you to create a personalised “news magazine” that you swipe your way through.

It’s not just Flipboard accounts that may be vulnerable, the company warned. “If users connected their Flipboard account to a third-party account, including social media accounts, then the databases may have contained digital tokens used to connect their Flipboard account to that third-party account.”

All such tokens have been deleted or replaced.

All passwords have been reset, though Flipboard insisted that not all of its users had been compromised and that it was still “identifying the accounts involved”. Law enforcement agencies have, it added, been told of the breach and an unidentified third-party security firm is analysing what happened.

The fallout from this hack is likely to persist. With such a large userbase, the number of affected accounts seems likely to fall into the six-figure bracket – or, if luck is not on their side, a heck of a lot more. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/05/29/flipboard_hacked/

Level Up Your Data Forensics Game at Black Hat USA

Learn about the latest supply chain attacks, red team threats, and “deep fake” detection tricks at the premier cybersecurity event in Las Vegas this August.

The threat landscape is always shifting in cybersecurity and there’s no better place to check in with colleagues and learn the latest tricks and techniques than Black Hat USA in Las Vegas.

Attend this premier cybersecurity event and check out the Briefings in the Data Forensics and Incident Response track, which offers techniques that will help you understand how an attack unfolded, whether and when a breach occurred, and how it can be prevented in the future. Fantastic Red-Team Attacks and How to Find Them promises to reveal prevalent and ongoing gaps across organizations, uncovered by testing defenses against a broad spectrum of attacks via Red Canary’s Atomic Red Team testing framework. Plus, you’ll learn about the open-sourced Event Query Language for creating high signal-to-noise analytics. In a live demonstration, you’ll see how these powerful but easy-to-craft analytics can catch adversarial behaviors that are commonly missed in organizations today.

In Detecting Deep Fakes with Mice you’ll see how researchers worked to train different machines and creatures to detect real vs. fake speech in “deep fake” videos. For machines, you’ll look at two approaches based on machine learning: one based on game theory called generative adversarial networks (GAN), and one based on mathematical depth-wise convolutional neural networks (Xception).

For biological systems, researchers gathered a broad range of human subjects as well as mice, which don’t understand the words, but respond to the stimulus of sounds and can be trained to recognize real vs. fake phonetic construction. Researchers theorize that this may be advantageous in detecting the subtle signals of improper audio manipulation, without being swayed by the semantic content of the speech. In this 25-Minute Briefing you’ll learn how they evaluated the relative performance of all four discriminator groups (GAN, Xception, humans, and mice) using a “deep fakes” data set recently published by Google.

The Enemy Within: Modern Supply Chain Attacks takes you behind the scenes of today’s cloud-powered industries with a Microsoft security expert. You’ll learn about previously undisclosed supply chain attacks, including the techniques and objectives of adversaries, mechanisms that were effective in blunting attacks, and the sometimes-comical challenges of dealing with one of the most complex assets to defend: developers.

For more information about these Briefings and many more, check out the Black Hat USA Briefings page, which is regularly updated with new content as we get closer to the event!

Black Hat USA will return to the Mandalay Bay in Las Vegas August 3-8, 2019. For more information on what’s happening at the event and how to register, check out the Black Hat website.

Article source: https://www.darkreading.com/black-hat/level-up-your-data-forensics-game-at-black-hat-usa/d/d-id/1334820?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple