STE WILLIAMS

Employee allegedly stole government spyware and hid it under his bed

A former, unnamed programmer for spyware maker NSO Group was indicted last week for allegedly stealing source code, disabling company security so they could load it onto a storage drive, and trying to sell it on the Dark Web for USD $50m.

Actually, that would have been a bargain: According to a translated version of the indictment (PDF), the powerful spyware’s capabilities are estimated to be worth “hundreds of millions of [US] dollars.”

The company’s products have made headlines on multiple occasions.

NSO Group, an Israeli company, sells off-the-shelf spyware that’s been called History’s Most Sophisticated Tracker Program.

One of its products, codenamed Pegasus, enables governments to send a personalized text message with an infected link to a blank page. Click on it, whether it be on an iOS or Android phone, and the software gains full control over the targeted device, monitoring all messaging, contacts and calendars, and possibly even turning on microphones and cameras for surveillance purposes.

Pegasus is supposed to be used solely by governments, to enable them to invisibly track criminals and terrorists. But once software blinks into existence, keeping it out of the hands of the wrong people can be very difficult.

One case in point came last year, when Pegasus was reportedly used to target Mexico’s “most prominent human rights lawyers, journalists and anti-corruption activists, in spite of an explicit agreement that it be used only to battle terrorists or the drug cartels and criminal groups that have long kidnapped and killed Mexicans,” as the New York Times reported.

According to Amnesty International, Pegasus has also been used in the United Arab Emirates, where the government targeted prominent activist and political dissident Ahmed Mansoor. Last month, Mansoor was sentenced to 10 years in jail and a fine of 1,000,000 Emirati Dirham (USD $272K) on charges including “insulting the UAE and its symbols.”

In short, in this epoch of epic law enforcement frustration over the encryption that increasingly bars investigators from cracking suspects’ (and surveillance targets’) devices, such powerful spyware translates into intellectual property gold.

The indictment of the alleged spyware thief was first picked up by Israeli news outlets. One of them, Globes, compared it to a Hollywood thriller:

Software worth hundreds of millions of dollars is stolen by an employee of a leading cyber security company. All the warning lights turn on during the theft and no one does anything. For about three weeks, the worker keeps the powerful weapons under the mattress in his apartment in Netanya—and no one does anything. During the period, he checks Google (yes, Google) [to find out] how he can sell the secret software, and after the test he offers to sell his weapons to a foreign party on the ‘Dark Net’—for $50 million.

That is, in truth, exactly what the indictment alleges. The employee isn’t named in the document, but the English translation uses male pronouns to refer to the defendant, so we’ll follow suit: according to the indictment, he started working as a senior programmer in NSO Group’s offices in Herzliya in November 2017.

Years earlier, in August 2012, he had allegedly searched the internet for ways to disrupt the company’s security software. Later, he allegedly disrupted the security software so that he could transfer data between his workstation and an external drive without authorization. Then, on 29 April 2018, the programmer was summoned to a conversation with his direct manager to chat about the company’s dissatisfaction with his performance. His boss invited him to a hearing scheduled for 2 May.

After that, he allegedly made his move: according to the indictment, he copied the spyware, which Globes reports was, specifically, the infamous Pegasus tool. Then, he allegedly took the storage device and stuck it under his mattress. He Googled how to sell the hot commodity, as well as who might be interested, according to the indictment.

Then, he allegedly used the encrypted, anonymous Mail2Tor email service to hide his tracks on the dark net as he listed Pegasus for sale. The programmer allegedly tried to blur the way the tool was obtained by posing as one of a group of hackers that managed to break into NSO’s systems.

At one point, he heard from an interested, also unnamed buyer. He was no buyer, though: suspicious of the seller, the “buyer” instead reported it all to NSO.

The programmer then allegedly requested payment be made in the virtual currencies Monero, Verge and Zcash. Three days after the “buyer” requested additional details about the sale being exclusive, Israeli police arrested the programmer, before he had a chance to sell the spyware.

The government is charging the ex-employee with attempting to “maliciously cause damage to property used by armed forces” and with actively trying to harm the security of the country. He is also charged with trying to sell the software without a security marketing license, with disrupting NSO’s security operations, and with theft by an employee.

Regardless of what you think of spyware used to target a) criminals, terrorists, and drug cartels or b) anti-corruption activists or other persecuted groups, this case illustrates (like the CIA’s Vault 7 leak and the NSA’s hack by the Shadow Brokers before it) just how hard it is to keep vulnerabilities, and the tools that exploit them, under wraps.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/_-qy36jDhow/

Chrome and Firefox pull history-stealing browser extension

One minute that favourite browser plug-in is your friend, the next it’s quietly turned into a privacy “Chernobyl” that’s profiling your browsing in the most intimate way possible.

Browser makers should be on top of this phenomenon and yet, here we are reporting on the latest example, this time spotted by software engineer Robert Heaton.

He’d been using a Chrome and Firefox extension called Stylish for years to re-skin websites and hide their “distracting parts” such as Facebook and Twitter feeds. (Safari and Opera versions are also available.)

Usefully, it even:

Added manga pictures to everything that wasn’t a manga picture already.

Not hard to see why Heaton and two million others might want to use it then.

Unbeknownst to him, however, in January 2017 the extension was sold to new owners, SimilarWeb, who changed its privacy policy – and outlook.

This came to his attention when he noticed Stylish had started sending obfuscated data back to its website as part of what looked like data gathering.

Sure enough, after more research:

When I looked at the contents of the decoded payload, I realized that Stylish was exfiltrating all my browsing data.

From inside his browser, Stylish could monitor every website he visited. Worse, because Heaton had an account login for the extension, it could relate his activity to his identity.

Stylish and SimilarWeb still have all the data they need to connect a real-world identity to a browsing history, should they or a hacker choose to.

Extensions acquiring new owners and undesirable, unexpected behaviour isn’t a new phenomenon, and this particular change wasn’t exactly a secret: as Heaton readily admits, the change of ownership and its implications were widely reported in the tech press at the time.

Unaccountably, it seems browser makers didn’t pick up on the implications of the change in ownership at the time. This week, however, Mozilla abruptly removed the extension from its list of Firefox Add-Ons, writing:

We decided to block because of violation of data practises outlined in the review policy.

Given that the Stylish page on Chrome’s extensions listing returns a 404 error, it seems that Google, too, has had second thoughts, closely followed by the same for Opera.

Of course, none of this will help the two million users who already run the extension and aren’t aware that it changed.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/l_vsdMLUP3I/

Linux experts are crap at passwords!

Remember the Gentoo data breach story last week?

Someone broke into the Linux distro’s GitHub repository, took it over completely by kicking out all the Gentoo developers, infected the source code by implanting malicious commands (rm ‑rf) all over the place, added a racist slur, and generally brought a week of woe to the world of Gentoo.

In case you’re wondering, rm ‑rf is the Unix/Linux command for removing files (rm) recursively (‑r), meaning “including any subdirectories”, and forcibly (‑f), meaning the user sees no warnings or prompts. The Windows equivalent is DEL /S /F /Q, a command you often regret almost immediately after you hit [Enter].
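The same behaviour is easy to reproduce programmatically. As a minimal illustration (safe here, because it only deletes a throwaway temporary directory), Python’s shutil.rmtree is effectively rm ‑rf for a directory tree:

```python
import os
import shutil
import tempfile

# Build a small throwaway directory tree that we can safely destroy.
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "sub", "deeper"))
with open(os.path.join(root, "sub", "deeper", "file.txt"), "w") as f:
    f.write("doomed data")

# shutil.rmtree removes the directory and everything beneath it,
# recursively and without prompting -- the rm -rf of the Python world.
shutil.rmtree(root)

print(os.path.exists(root))  # False: the entire tree is gone, no questions asked
```

As with rm ‑rf, there is no confirmation step and no recycle bin, which is exactly what made it attractive to the attacker.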

Fortunately, Gentoo’s GitHub repository wasn’t the primary source for Gentoo code, and few, if any, Gentoo users were relying on it for software updates.

Phew.

Other good news is that the stolen GitHub account is back under Gentoo’s control now; the hacked files have all been identified and removed; and Gentoo has learned (and, at the same time, taught the rest of us) three main lessons.

Lesson 1. A prompt notification goes a long way.

At first, Gentoo knew merely that something bad had happened – it was locked out of its own GitHub account, which was a bit of a giveaway – but not how or why.

Nevertheless, the organisation didn’t beat around the bush in preparing a breach notification message, and it didn’t waste time trying to work a marketing spin into its initial report.

As a result, the issue got widespread attention and community help right away.

Lesson 2. Pick a proper password.

Gentoo’s final summary of the incident says:

The attacker gained access to a password of an organization administrator. Evidence collected suggests a password scheme where disclosure on one site made it easy to guess passwords for unrelated webpages.

In other words, the user whose password was guessed had fallen into the trap of using different but nevertheless obviously related passwords on multiple sites.

It’s an easy thing to do – pick a core password (for example, pASS//orD) and then use some easily-derived additional text each time you need a new password, for example like this:

   pASS//orD-FB   
   pASS//orD-TW
   pASS//orD-G+
   pASS//orD-Y!

Technically, this means you are complying with the rule that says, “One site, one password – never use the same password on different sites.”

But if I were to figure out, or even just to guess, that -Y! in the last password was meant to denote Yahoo!, it would be an easy jump to try suffixes like -FB, -TW and -G+ for Facebook, Twitter and Google Plus respectively.

Don’t use a core password with tweaks or suffixes for each site – the crooks will figure out your pattern sooner or later.

Use a password manager and let it choose a totally different password for each site.
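This is precisely what a password manager automates. A minimal sketch of the idea in Python, using the standard library’s secrets module (the site names, character set and length here are illustrative assumptions, not anyone’s actual policy):

```python
import secrets
import string

# Illustrative character set and length -- real managers let you tune these.
ALPHABET = string.ascii_letters + string.digits + "!@#$%^&*"

def random_password(length=20):
    """Generate a password with no relationship to any other password."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

# One independent password per site: no shared core, no guessable suffix.
vault = {site: random_password() for site in ("facebook", "twitter", "yahoo")}
for site, password in vault.items():
    print(site, password)
```

Because each password is drawn independently from a cryptographically secure source, learning the Yahoo! password tells an attacker precisely nothing about the Facebook one.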

Lesson 3. 2FA is your friend.

Apparently, Gentoo wasn’t using any form of two-factor authentication (2FA) before the breach.

It is now!

2FA, also known as two-step verification, usually means you have to put in your regular username and password and then follow it up by typing in a one-time code that works only for the session you are trying to set up.

Those one-time codes generally come either from an app on your phone, or via an SMS or other text message sent by the service provider.

2FA isn’t perfect, but it does make things harder for the crooks, because they can’t just steal or guess your password – they typically need your phone (and its unlock code), too.
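Those app-generated codes are typically time-based one-time passwords (TOTP, standardised in RFC 6238). The whole scheme fits in a few lines of standard-library Python – a sketch to show the mechanism, not a production implementation:

```python
import hashlib
import hmac
import struct
import time

def totp(secret, timestep=30, digits=6, now=None):
    """Time-based one-time password (RFC 6238) using HMAC-SHA1,
    the variant most authenticator apps default to."""
    counter = int((time.time() if now is None else now) // timestep)
    msg = struct.pack(">Q", counter)            # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                  # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: the ASCII secret "12345678901234567890" at
# Unix time 59 must produce the 8-digit code 94287082.
print(totp(b"12345678901234567890", now=59, digits=8))  # 94287082
```

Server and phone share the secret once (usually via a QR code) and thereafter only compare codes, so a stolen or guessed password alone no longer opens the account.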

What to do?

The Gentoo breach turned out to have a root cause that wasn’t about malware attacks, phishing emails, social engineering calls, exploits, zero-days, or any other technological trickery.

This story is a straightforward reminder that cybersecurity basics matter – and that making it very slightly less convenient for legitimate users to log in every time makes it very much harder for crooks to log in at any time.

If you’re asked to trade a tiny bit of personal convenience for a lot of extra cybersecurity for your company…

…take one for the team!


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/zKUbFqtJ8e0/

Welsh firm fined £60k for pummelling phones with 270k pay-day loan texts

A Welsh firm has been handed a £60,000 fine for spamming more than 270,000 pay-day loan texts around Christmas 2016.

The UK’s data protection watchdog doled out the penalty to STS Commercial Limited – which is registered as an IT service provider – after finding that the biz didn’t have consent to send the messages, which elicited 268 complaints.

The 274,423 messages, sent between November 2016 and January 2017, riffed on the festive season, with offers along the lines of: “This is Mia from CashKittyYou’re APPROVED to apply TODAY100 for 11/weekGet paid before xmas, repay in JaneSee [URL]” [sic].

However, under the Privacy and Electronic Communications Regulations (PECR), firms can only send such direct marketing bumph if they have the right consent from the would-be recipients.

In this case, STS tried to rely on consent gained by a third party, but it had failed to carry out sufficient due diligence checks to ensure the data complied with PECR – nor could it provide evidence to support any such consent.

The Information Commissioner’s Office was alerted to this incident after an unrelated meeting with telephone networks revealed that unsolicited marketing activity had been identified across a network originating from Bridgend.

But it isn’t the first time the firm has been on the ICO’s radar – it was investigated in 2015 for sending masses of unsolicited texts.

The ICO said the firm and its directors had been “repeatedly reminded” of their obligations under PECR, sent direct marketing guidance and told about the commissioner’s powers.

For failing to heed this advice, and for spamming the 270,000-plus people, the ICO handed down a £60,000 fine – slightly below the median value of £75,000 of all PECR fines handed out between 2010 and April 2018.

If it pays by 7 August, it can pay a reduced rate of £48,000. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/07/06/spam_pay_day_loan_texts_get_welsh_firm_60k_penalty/

Trading Platforms Riddled With Severe Flaws

Despite routing trillions of dollars of stock and commodity trades every day, these financial cousins of online banking applications are written very insecurely.

While many banking application developers have made great strides in hardening their software from attacks, much of the rest of the fintech application field is wide open for ownage through very basic but severe vulnerabilities reminiscent of the kind we saw nearly a decade ago.

Next month at Black Hat USA, a researcher from IOActive will detail some stark examples of this during a presentation that will show the depth of the flaws present in stock-trading platforms used by millions of traders around the globe.

“When I’m testing a web platform or mobile platform, it is as if I’m testing an application from 2010 or 2012,” says Alejandro Hernandez, senior consultant for IOActive. This is a follow-on from initial research he presented last year on a limited number of mobile-trading applications. This year, he expanded the scope to desktop, web application, and mobile-trading software offered by a wide range of financial institutions. 

Hernandez found a universal lack of security controls up and down the list of 79 applications he tested. Across all three categories, he saw examples of decades-old protocols being used for communication that were sending transmissions unencrypted. He also found examples of unencrypted storage, unencrypted log files, and no enforcement for strong password policies or automatic logout.

“You’d think by the nature of the technology that these kind of technologies would be super secure,” Hernandez says. “You assume, well, if my mobile-banking app or home-banking websites are secure – or at least somewhat secure – that trading applications where trillions of dollars are traded per day would also be secure. But that’s not the case.”

Specifically in mobile-trading applications, Hernandez found many did not even perform SSL certificate validation to prevent man-in-the-middle attacks. Additionally, few of them had any kind of anti-reversing mitigations; upon reverse-engineering them, he found that most included hard-coded secrets in the code. In the web applications, he found session cookies still created without security flags and very few of them using HTTP security headers. 
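For comparison, turning that validation on is practically free in a modern TLS stack. A hedged sketch in Python (example.com is just a placeholder host):

```python
import socket
import ssl

def open_verified_tls(hostname, port=443):
    """Dial a host over TLS with full certificate validation enabled."""
    # create_default_context() enables exactly the checks the flawed apps
    # skipped: CERT_REQUIRED (the chain must validate against trusted CAs)
    # and check_hostname (the leaf certificate must match the dialled host).
    context = ssl.create_default_context()
    sock = socket.create_connection((hostname, port), timeout=10)
    return context.wrap_socket(sock, server_hostname=hostname)

# The secure settings are the defaults -- a man-in-the-middle presenting a
# self-signed certificate now fails the handshake instead of silently winning.
context = ssl.create_default_context()
print(context.verify_mode == ssl.CERT_REQUIRED, context.check_hostname)
```

Calling open_verified_tls("example.com") performs a validated handshake; with a bad certificate the failure mode is a loud ssl.SSLCertVerificationError rather than a quietly intercepted session.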

Meantime, because the desktop applications include full feature sets, they have even more interesting flaws thanks to a larger attack surface. For example, the desktop applications frequently use a customized programming language that makes it easy for traders to install add-ons and plug-ins. But this feature also opens them up to being tricked into installing malicious plug-ins, such as a malicious trading robot or a malicious financial indicator function. Similarly, because of the way these desktop applications are designed to interact and communicate with other trading technologies, they’re frequently wide open to denial-of-service (DoS) conditions.

The applications tested are primarily used by consumer investors, but Hernandez says a number of institutional investors also rely on them. The malicious uses of all these flaws are seemingly limitless. In addition to run-of-the-mill identity theft, bad actors could use information stolen from well-known traders to shape the way they invest. They could perform DoS attacks, and these flaws could be used to install backdoors and other hostile code that most traders wouldn’t ever be able to detect.

“And this research I did only scratches the surface,” Hernandez says. “It’s only the end-user platforms. We haven’t tested the back-end services, the back-end networks, or the back-end protocols they use in exchanges.”


Ericka Chickowski specializes in coverage of information technology and business innovation. She has focused on information security for the better part of a decade and regularly writes about the security industry as a contributor to Dark Reading.

Article source: https://www.darkreading.com/application-security/trading-platforms-riddled-with-severe-flaws/d/d-id/1332227?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Reactive or Proactive? Making the Case for New Kill Chains

Classic kill chain models that aim to find and stop external attacks don’t account for threats from insiders. Here’s what a modern kill chain should include.

The kill chain model is not new to most security professionals. Created in 2011 by Lockheed Martin, the model highlights the seven stages bad actors typically go through to steal sensitive information. In case you need a refresher, the steps include reconnaissance, weaponization, delivery, exploitation, installation, command and control, and actions on objective. The goal for security analysts and investigators is to disrupt the chain early, before sensitive data slips out the door. Although the model works for certain kinds of attacks, in many others, it doesn’t.

Using more sophisticated techniques than ever before, attackers are coming from both the inside and outside, whether they’re employees seeking to do harm, compromised users, or external bad actors. The classic kill chain model was designed to help organizations combat external threats by bad actors. Some organizations try to squeeze other types of threats, such as those posed by insiders, into the classic model, which doesn’t work because the behavior of insider threats is not the same as that of outsiders.

Reactive versus Proactive
Kill chain models are reactive by nature. The goal is to stop a potential attack in progress before damage is done. The traditional kill chain aligns with that goal, but there are other models for threats, like malicious insiders, that also fit reactive cyber-risk models. A second type of cyber-risk model that can be extremely effective against threats is a proactive model. That model flips the recipe on its head and seeks to reduce the attack surface before an attack occurs. Let’s first look at examples of reactive cyber-risk models, which commonly fall into one of two categories:

Flight Risks: Employees looking to leave the company elevate the risk of data loss. They tend to be less sophisticated and exhibit less cautious behavior on their way out. The kill chain–style reactive risk model begins with looking for early indicators — for example, if an employee frequently visits job search websites, something he or she typically would not do. However, even if employees are visiting those kinds of websites, that doesn’t necessarily mean they are a threat. They become a potential threat when they move to the next stage when, for example, they upload unusually large encrypted files to cloud storage at odd working hours.

A combination of those two stages — an employee has repeatedly visited job search websites and has uploaded an unusually large file at odd working hours — is a good indication that the person is a flight risk and must be closely monitored. The next stage entails the employee aggressively trying to pull sensitive data off the network. He may attempt to email sensitive data to an outside address, get blocked, and continue to try other methods until he succeeds.

The goal of this kill chain–style risk model is to identify people who are flight risks and approach them before the exfiltration occurs. Or if they do exfiltrate data, identify the activity and stop them before they cause real damage to the company. 
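Stripped to its essentials, this kind of staged model is a weighted combination of signals. A toy Python sketch of the idea (the event names, weights, and threshold are all invented for illustration – real products build far richer behavioral baselines):

```python
# Invented indicator weights -- illustrative only, not a real rule set.
WEIGHTS = {
    "job_site_visit": 1,           # early indicator, weak on its own
    "large_upload_odd_hours": 3,   # big encrypted upload in the small hours
    "blocked_exfil_attempt": 5,    # tried to email sensitive data out, got blocked
}
REVIEW_THRESHOLD = 4               # invented cut-off for analyst review

def risk_score(events):
    """Sum the weights of the indicators observed for one employee."""
    return sum(WEIGHTS.get(event, 0) for event in events)

observed = {
    "alice": ["job_site_visit"],
    "bob": ["job_site_visit", "large_upload_odd_hours", "blocked_exfil_attempt"],
}

# One weak signal is noise; the combination crosses the review threshold.
for user, events in observed.items():
    score = risk_score(events)
    print(user, score, "REVIEW" if score >= REVIEW_THRESHOLD else "ok")
```

The point of the staged chain is visible in the output: a lone job-site visit scores below the threshold, while the combination of stages flags the account for review before exfiltration succeeds.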

Persistent Insiders: Unlike flight risks, these threats are more sophisticated insiders who have no intention of leaving the organization. They repeatedly look for whatever sensitive data they can get their hands on to hurt the organization and/or sell for profit. Organizations won’t see these employees looking at job search websites. Instead, they will visit websites where they can circumvent web proxies. These are websites that allow them to hide, and then jump to the Dark Web, for example, to move data and bypass controls.

The next stage of the chain is when they persistently try logging into systems to which they typically do not have access. They quietly “jiggle doors”, looking for sensitive data that is outside the scope of their own role, their peers’ roles, and their team’s overall remit.

The combination of these two stages — visiting suspicious websites and jiggling doors — is a good indication that a person may be a persistent threat. The next stage is when the person acts. For example, on a regular basis, s/he may encrypt small amounts of sensitive data and exfiltrate it outside the network. By breaking the data down into small amounts, the person aims to evade detection; by encrypting it, he or she makes detection even harder, because the company cannot see what’s inside.

Obviously, the goal is to stop the person before getting to the final stage of exfiltration. The chain shows the progression of events so that organizations can stop the threat before damage is done.  

Insider threat models are an example of a reactive chain of events. Many organizations have tried to squeeze these into the original kill chain model only to find they need to skip stages, often feeling like they’re trying to put a square peg in a round hole. Leveraging the principle that emerged with, and was popularized by, the kill chain is very important, but being flexible enough to adapt to today’s threat landscape is critical to success.

To take the leap to proactive cyber-risk management, consider a predictive model for combatting ransomware. Instead of looking for indicators of a threat in progress, the chain begins with identifying which machines, applications, and systems are susceptible to ransomware, and then determining which ones contain sensitive data. From there, organizations can easily understand which assets need better patching or tighter controls, and finally see which of these machines are actively being attacked and how effective their response has been. Together, this provides predictive, proactive visibility to reduce the attack surface and get ahead of the attackers.
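That proactive chain is essentially an exercise in ranking assets before anyone attacks them. A toy sketch in Python (the inventory fields and hostnames are invented for illustration – in practice this data comes from vulnerability scanners, data-classification tools, and live attack telemetry):

```python
# Invented asset inventory -- fields mirror the three questions in the text:
# is it susceptible (unpatched), does it hold sensitive data, is it under attack?
assets = [
    {"host": "hr-file-server", "unpatched": True,  "sensitive_data": True,  "under_attack": True},
    {"host": "dev-sandbox",    "unpatched": True,  "sensitive_data": False, "under_attack": False},
    {"host": "payroll-db",     "unpatched": False, "sensitive_data": True,  "under_attack": False},
]

def priority(asset):
    """Actively attacked machines first, then sensitive ones, then unpatched ones."""
    return (asset["under_attack"], asset["sensitive_data"], asset["unpatched"])

# Work the queue from the top: these are the machines that need patching or
# tighter controls before any attacker gets to choose the order for you.
for asset in sorted(assets, key=priority, reverse=True):
    print(asset["host"])
```

Even this crude ranking captures the proactive idea: effort goes to the machines where susceptibility, sensitive data, and active attack overlap, shrinking the attack surface before the kill chain ever starts.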

Whereas reactive kill chain models aim to find threats and stop them before it’s too late, proactive models aim to reduce attack opportunities before attackers strike. If companies adopt this broader set of models, in addition to applying the classic one, they will spend less time and staff effort hunting threats and will stay ahead of attackers before they cause harm.


Ryan Stolte is co-founder and CTO at Bay Dynamics, a cyber risk analytics company that enables enterprises and government agencies to prioritize and mitigate their most critical threats. Ryan has spent more than 20 years of his career solving big data problems with …

Article source: https://www.darkreading.com/attacks-breaches/reactive-or-proactive-making-the-case-for-new-kill-chains-/a/d-id/1332200?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Gentoo GitHub repo hack made possible by these 3 rookie mistakes

The developers of Gentoo Linux have revealed how it was possible for its GitHub organization account to be hacked: someone deduced an admin’s password – and perhaps that admin ought not to have had access to the repos anyway.

The distro’s wiki has added a page describing the SNAFU, which gives the root cause of the cockup as follows:

The attacker gained access to a password of an organization administrator. Evidence collected suggests a password scheme where disclosure on one site made it easy to guess passwords for unrelated webpages.

Oops! Sounds like someone has a core password with predictable variations!

The wiki page also reveals that the project got lucky. “The attack was loud; removing all developers caused everyone to get emailed,” the wiki reveals. “Given the credential taken, its likely a quieter attack would have provided a longer opportunity window.”

Also helpful was that “Numerous Gentoo Developers have personal contacts at GitHub, and in the security industry and these contacts proved valuable throughout the incident response.”

But the project’s critical of itself for the following reasons:

  • Initial communications were unclear and lacking detail in two areas.
    • How can users verify their tree to be sure they had a clean copy?
    • Clearer guidelines that even if users got bad copies of data with malicious commits, that the malicious commits would not execute.
  • Communications had three avenues (www.gentoo.org, infra-status.gentoo.org, and email lists.) Later we added a wiki page (this page) and were inconsistent on where to get updates.
  • GitHub failed to block access to the repositories via git, resulting in the malicious commits being externally accessible. Gentoo had to force-push over them as soon as this was discovered.
  • Credential revocation procedures were incomplete.
  • We did not have a backup copy of the Gentoo GitHub Organization detail.
  • The systemd repo is not mirrored from Gentoo, but is stored directly on GitHub.

The project’s fix has a few elements. Two-factor authentication is now on by default in the project’s GitHub Organization and will eventually come to all users of the project’s repos. A password policy that mandates password managers is planned. Also on the agenda are a review of who needs access to repos and a cleanout of those who don’t, proper backups, and an incident plan so that the project won’t need to rely on its luck if it’s popped again. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/07/05/gentoo_github_hack_weak_password_no_2fa/

Cyber boffins drill into World Cup cyber honeypot used to cyber lure Israeli soldiers

Security researchers have unpicked mobile apps and spyware that infected the mobile devices of Israeli military personnel in a targeted campaign which the state has claimed Hamas was behind.

Earlier this week, Israeli military security officials revealed that hackers whom they claim were Hamas-affiliated* had installed spyware on Israeli soldiers’ smartphones.

The officials didn’t say how it was determined that the Gaza ruling party was behind the malware lure.

About 100 individuals fell victim to the attack, which came in the form of a malicious World Cup score-tracking app and two fake online dating apps. The snoopy mobile code had been uploaded to the official Google Play Store.

The bogus apps have reportedly been removed from the Google Play Store. Google has yet to respond to a request from El Reg to discuss the incident.

“Golden Cup,” the bogus World Cup app, actually bundled functionality to provide live scores as well as full spectrum snooping.

Israeli military officers told Reuters that “Hamas operatives, using false identities, contacted soldiers on social media and encouraged them to download the apps”.

Scores of soldiers were duped – a number the military said was “under 100”. All had since either self-reported the issue or been given a tap on the shoulder – victims of the infection were tracked down by security analysts in the Israeli military. “We know of no damage that was done,” one of the Israeli military officers said.

How bad was it?

Once the apps were installed onto the victims’ phones, the spyware was then able to carry out a number of malicious activities including, but far from limited to, recording a user’s phone calls. The software nasty was also capable of stealing a user’s contacts, SMS messages, images and videos stored on the mobile device alongside information on where they were taken.

Other exploits – including taking a picture when the user receives a call and capturing the user’s GPS location – were also on the menu once a user installed the mobile spyware. The malicious software was also capable of taking recordings of the user’s surroundings.

“This attack involved the malware bypassing Google Play’s protections and serves as a good example of how attackers hide within legitimate apps which relate to major popular events and take advantage of them to attract potential victims,” according to security researchers at Check Point, the Israeli software security firm.

Check Point is due to publish its research today.

The mobiles of dozens of soldiers were compromised by malicious code posing as dating apps after hackers posed as attractive young women in a similar incident back in January last year. ®

Bootnote

*Standard disclaimers about the difficulties of attribution in cyberspace apply. The attack in play made use of clever social engineering attacks, a hallmark of malware from the Middle East in general.

The IDF regards Hamas as a proxy for Iran, its principal enemy both on the ground and in cyberspace.


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/07/05/world_cup_mobile_malware_trick/

Don’t fear 1337 exploits. Sloppy mobile, phishing defenses a much bigger corp IT security threat

AppSec EU IT admins should focus on the fundamentals of network security, rather than worry about sophisticated state-sponsored zero-day attacks, mobile security expert Georgia Weidman told London’s AppSec EU conference on Thursday.

Weidman, founder and CTO of mobile security testing firm Shevirah, cut her teeth in the industry six years ago mingling with the black hat crowd, where elite security researchers tried to outdo each other with exotic exploits, and looked down their noses at attacks based on phishing emails and links.

Since she started helping enterprise customers test their mobile device management and other technologies, Weidman realized it’s the simple stuff that causes the vast majority of problems.

Rather than fear a nation’s top government hackers abusing an unheard-of vulnerability, you should keep an eye on defenses that block phishing links and dodgy apps, and that stop files leaking over Bluetooth from handhelds. These attacks are much, much more likely.

“It’s patching or getting phished,” Weidman told The Register after her keynote. “It’s not nation states spending God knows what on zero days. We still haven’t gotten the basics right.”

Enterprises seeking signs of exploitation in the mobile devices used by their workers often look for Cydia, an alternative to Apple’s App Store for jailbroken iOS iThings. Jailbreaking devices in violation of enterprise security policies can be an issue, but checking for the presence of Cydia alone is not enough. Data-stealing apps are far more of a threat.

Similarly, Weidman said employees should be trained to be wary of mobile phishing attacks, which can hit devices through multiple communication channels as well as email.

The controls are out there, so use them


During her presentation, Weidman ran through enterprise-grade security controls available on the market – such as mobile threat defense and mobile application management – while offering examples of how they may fall short under attack.

Weidman has presented or conducted training at venues around the world including for the NSA, West Point and the Black Hat security conference. She was awarded a DARPA Cyber Fast Track grant to continue her work in mobile device security, culminating in the release of the open-source Smartphone Pentest Framework (SPF).

These days she is developing proof-of-concept iOS and Android exploits for testing purposes. Android is so fragmented that it’s hard to develop reliable exploits, Weidman said during her presentation.

Privately, after her talk, Weidman praised Google for open-sourcing its mobile exploit research, through its Project Zero initiative and other conduits. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/07/05/mobile_enterprise_security_appseceu/

NSO Group bloke charged with $50m theft of government malware

A former worker at NSO Group – the Israeli biz infamous for selling zero-day exploits to governments nice and nasty – has been charged with stealing his employer’s spyware, and trying to sell it for $50m on the black market.

The 38-year-old former bod was reportedly told he was going to be fired by his bosses at NSO, and apparently decided on a novel form of golden parachute. Israel’s State Attorney’s Office claims he took software nasties and vulnerability information worth an estimated $90m and tried to sell it on the dark web for $50m in crypto-currencies.

“The accused committed these crimes out of greed, despite knowing, even if he shut his eyes from seeing it, that his crimes might damage state security and lead to the collapse of a firm employing 500 workers,” the State Attorney’s Office said, the Jerusalem Post reports.


However, he came a cropper when the apparent buyer of the hacking toolkits double crossed him, and informed NSO of the transaction. The company went to the cops, who raided the employee’s home and found the missing data, reportedly under the suspect’s mattress.

It appears that the surveillance-ware in question was NSO’s flagship Apple-hacking software, codenamed Pegasus. This exploited what were then zero-day holes in iOS and OS X, until the code was identified by Canadian non-profit Citizen Lab and the flaws were fixed by Apple. There’s a similar package for Android, named Chrysaor.

The employee has now been charged with alleged theft and damaging private assets in a manner that would jeopardize state security interests. It could be an interesting trial, if the proceedings are ever made public. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/07/06/nso_group_employee_charged/