
How do we stop facial recognition from becoming the next Facebook: ubiquitous and useful yet dangerous, impervious and misunderstood?

Facial recognition is having a rough time of it lately. Just six months ago, people were excited about Apple allowing you to open your phone just by looking at it. A year ago, Facebook users joyfully tagged their friends in photos. But then the tech got better, and so did the concerns.

In May, San Francisco became the first major city in the world to effectively ban facial recognition. A week later, Congress heard how it also needed to ban the tech until new rules could be drawn up to cover its safe use.

That same week, Amazon shot down a shareholder proposal to block the sale of its facial recognition technology to law enforcement. Then, in June, the state of Michigan started debating a statewide ban, and the FBI was slammed by the Government Accountability Office (GAO) for failing to measure the accuracy of its face-recognition technology.

The current sentiment, especially given a contentious political environment where there is an overt willingness, even determination, to target specific groups, is that facial recognition is dangerous and needs to be carefully controlled. Free market America has hit civil rights America.

It hasn’t helped that China’s use of the technology has created a situation previously only imagined in dystopian sci-fi movies: where a man who jaywalked across a road is identified several weeks later while walking down a different street, arrested and fined.

This turn is especially frustrating for one CEO of a facial recognition firm, Shaun Moore of Trueface, a company that until recently was based in the city that voted to ban its product, San Francisco.

Moratorium

Moore is keen to point out that San Francisco didn’t actually ban the technology; it can still be used if the authorities get a warrant. This is true.

The decision is more of a moratorium: any local government authority that wants to use facial recognition will need to apply to do so, and be approved, before it can. That system will only be lifted when new rules designed to balance privacy and accuracy with technological ability are drawn up.

He is, unsurprisingly, not happy about his company’s product being blocked by legislation. “It is not the right way to regulate,” he complains, especially since it has led to a broader sense that the technology is inherently dangerous. “We risk creating a Facebook situation,” he warns – where Congress feels obligated to act against a specific technology based on fears but with little or no understanding of how it works.

For one, Moore argues, he doesn’t know of any law enforcement agency that wants to use the technology for real-time surveillance. They want to use it as an investigative tool after the fact by scouring footage. “It can take five to seven days off investigation time,” he told us. “It is one piece of evidence that can be used to search for other evidence.”

In other words, fear of what facial recognition could be used for is limiting its usefulness in current investigations. Faster, more effective investigations mean better results and more available police time to cover more crimes: a win-win.

He also argues that ubiquitous surveillance is simply not possible, at least not yet. “We don’t have the processing power, we can’t physically do that,” he says of the fear that widespread cameras could be turned into tools of constant surveillance.

But as we dig into the concerns around facial recognition, it increasingly feels like the proposed moratoriums make a lot of sense.

One of the biggest concerns is around accuracy: how confident can we be that someone on a camera, identified as a specific individual through facial recognition, is really that person? The answer is always given as a percentage likelihood. But that raises a whole host of other questions: what level of accuracy is sufficient for someone – like a police officer – to act?

Color blind

Combine that with the well-recognized problem that the datasets used to train these systems are heavily skewed toward white-skinned men – which produces more accurate results for them and less accurate results for anyone who isn’t white or male – and you have a civil rights nightmare waiting to happen.

Moore says that his company – and the facial recognition industry as a whole – is “absolutely” aware of that dangerous bias. While stressing that the technology itself is not racist, he acknowledges that there are “lots of racist people” and that the data itself can introduce bias, and he says the industry is working hard to fix those biases.

Trueface is paying people in countries across Asia and Africa to send in photos of their faces in order to build a much larger database of faces with different features and darker skin tones, an approach that is “actively pushing the bias down.”

He says that, combined with improvements in the technology, within two to three years facial recognition accuracy will be in the “high 90s” for all types of people – broadly the same as other forms of identification that we already accept in society, such as those used in banking.

He even argues that level of accuracy could help counteract human biases: it would be harder for a police officer to justify, say, stopping a black man because he thought he looked like a suspect if there was a facial recognition result that said it was only 80 per cent accurate.

But then, of course, we delve into the complex and fraught world of what is supposed to happen versus what really happens on the street. Moore admits that if there isn’t a clear picture of someone or the individual in question is wearing a hat, then it is never going to be possible to get a high-90s accuracy.

Except he describes it in a way that many of his clients are likely to see it: “If someone is actively avoiding cameras, or pulls on a hat, then there’s nothing we can do.” We relay the recent story from London where a man was stopped, questioned and fined £90 ($115) for “disorderly behavior” because he tried to hide his face. He didn’t want to be on camera; the cops immediately assumed he was up to no good.

Abuse

Moore admits that facial recognition use is going to be based on a “social contract” and that “to me, that was inappropriate” to stop and fine the man. It was “probably his right” to avoid the cameras, he notes, but then quickly adds that he “would like to assume that the police officers are trained to recognize behavior.” And, he points out, the issue only got a “spotlight on it because facial recognition was in the same sentence.”

Which is a fair point. Like any new technology, the initial sense of amazement at what has become possible is soon replaced by a fear of the new and of its possible abuses. And when abuses do come to light, they are given disproportionate weight, creating a sense of crisis that drives lawmakers to believe they need to act and pass new laws.

This technology journalist often cites the wave of newspaper headlines in the 1980s that surrounded the terror that once was “mobile phones.” There were even calls to ban them entirely because they were being used by football supporters to organize fights.

Facial recognition has already proven its worth, Moore argues. One recent example was how a man traveling on a false ID was identified and arrested at a Washington DC airport thanks to their facial recognition system. And, faced with the unpleasant reality of gun violence and mass shootings in the US, its use at live events could end up saving lives and keeping everyone safer. “Guns are a serious problem,” he notes. “This technology is there to make better decisions.”

Which gets us back to the rules and regulations. Which don’t exist yet. Moore feels strongly that this is one area where federal – rather than local – regulation is needed. And that regulation should include restrictions on use.

How then?

The question is what do those rules look like, how are they applied, and around what specific issues can they be drawn. Moore says he doesn’t have the answers, but he does help identify some key building blocks:

  • Government versus commercial use
  • Real-time use versus analysis of recorded footage
  • Opt-in use (where identification is used to provide access) versus recognition (where identification is used to stop, prevent or limit someone)
  • Transparency and benchmarks

The use of facial recognition is always going to be “situational,” Moore argues. And, he notes, it may well be necessary for the use of facial recognition within the US to be reliant on the use of technology that is created within the US, in order to make sure that the new rules are baked into hardware and software.

Even assuming new federal rules, a bigger question then is: how do you stop companies and/or specific police departments from abusing the technology?

Moore seeks to reassure us. “There are bad people. We have turned down multiple clients where their use of the technology was not aligned with what we wanted to do.” It would be hard for companies to hide their planned use of such technology, he argues, because “we spend six months at a minimum with clients. If they were trying to deceive us, we would know it, and just shut it down.”

But what was intended as reassurance in some respects only serves to highlight the concern: this technology can be used in wrong and dangerous ways, and there are already people willing to spend money on systems whose intended use makes even the company selling the technology uncomfortable enough to walk away.

Moore is right when he says that facial recognition is an “inevitability.” The big question is: is this the sort of technology that should be introduced and then scaled back – like ride-sharing or social media – or is it the sort of technology that should be forced to argue its case before it is introduced?

Moore thinks it’s the former. His former hometown thinks the latter. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/07/03/how_do_we_stop_facial_recognition_from_becoming_the_next_facebook_ubiquitous_and_useful_yet_dangerous_impervious_and_misunderstood/

Sodin Ransomware Exploits Windows Privilege Escalation Bug

Exploitation of CVE-2018-8453 grants attackers the highest level of privileges on a target system.

In a world where ransomware runs rampant, Sodin stands out. The newly discovered malware exploits Windows vulnerability CVE-2018-8453 to elevate privileges — a rarity for ransomware.

Kaspersky Lab researchers have been watching Sodin, also known as Sodinokibi and REvil, since they spotted it in April. Sodin captured their attention because it exploits Windows privilege escalation vulnerability CVE-2018-8453, says senior malware analyst Fedor Sinitsyn.

CVE-2018-8453, also discovered by the Kaspersky Lab team, was under active attack when Microsoft released a patch back in October. Researchers saw the FruityArmor APT using the vulnerability in a small number of targeted attacks, primarily against victims in the Middle East. The exploit was packaged into a malware installer, which required system privileges to install a payload that would grant the attackers persistent access to victims’ machines, they reported.

Now researchers have spotted the same exploit being used by Sodin, which they say is a rarity for ransomware. Statistics show detections across Asia, Europe, North America, and Africa, though they point out most are in Asia-Pacific — specifically Taiwan, Hong Kong, and South Korea. Sinitsyn says researchers did not notice a pattern among the industries or organizations targeted.

Each Sodin sample has an encrypted configuration block with the settings it needs to work. After launch, it checks the configuration block to verify whether the option to use the exploit is enabled, Sinitsyn explains. If it is, Sodin checks the architecture of the CPU it’s running on and passes execution to one of the two variants of shellcode contained inside the Trojan’s body.

“The shellcode will then attempt to call a specific sequence of WinAPI functions with malicious crafted arguments in order to trigger the vulnerability,” Sinitsyn says. “As a result, the running Trojan’s process gains the highest privileges in the system. The goal here is to make it harder for security solutions to counteract this malware.”

Sodin uses a hybrid scheme to encrypt victim files. Its implementation of cryptographic operations is quite sophisticated, he adds. The ransomware employs a combination of asymmetric elliptic curve cryptography and a modern symmetric stream cipher.
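For readers unfamiliar with the hybrid pattern, here is a minimal sketch of the general idea: an ephemeral elliptic-curve key exchange used to derive a per-file symmetric key, shown with X25519 and ChaCha20-Poly1305 from Python’s third-party cryptography package. This is purely illustrative of the scheme’s shape, not Sodin’s implementation, and the curve and cipher choices are stand-ins.

```python
# Illustrative only: the general hybrid pattern described above, not Sodin's code.
# Requires the third-party "cryptography" package.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# In a real scheme the long-term public key ships inside the sample;
# here both sides are generated locally just to show the flow.
recipient_priv = X25519PrivateKey.generate()
recipient_pub = recipient_priv.public_key()

# A fresh ephemeral key pair per file means the shared secret never has to be stored.
ephemeral_priv = X25519PrivateKey.generate()
shared_secret = ephemeral_priv.exchange(recipient_pub)

# Derive a 256-bit symmetric key from the ECDH shared secret.
file_key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                info=b"per-file key").derive(shared_secret)

nonce = os.urandom(12)
ciphertext = ChaCha20Poly1305(file_key).encrypt(nonce, b"file contents", None)

# Only the ephemeral *public* key, the nonce and the ciphertext need to be kept;
# recovering file_key again requires the recipient's private key.
```

The point of the pattern is that only the holder of the long-term private key can re-derive the per-file keys, which is exactly what makes recovery without paying so difficult.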

“Overall, this Trojan leaves the impression that the criminals behind its development know what they are doing,” he continues.

Adding to the attackers’ sophistication is their use of Heaven’s Gate, a technique that allows the Trojan’s 32-bit process to execute pieces of 64-bit code. Many debuggers don’t support this architecture switch; as a result, it’s difficult for researchers to analyze the malware. Further, says Sinitsyn, Heaven’s Gate may impede detection for some security tools or analysis systems.

Heaven’s Gate has been seen in different types of malware, including coin miners, but this is the first time Kaspersky Lab researchers saw the technique used in a ransomware campaign.

Sodin is designed as ransomware-as-a-service (RaaS), meaning operators can choose the way it spreads, and Sinitsyn anticipates this scheme will allow attackers to continue distributing the ransomware across channels. “It is already propagating to vulnerable servers via vulnerable server software as well as to endpoints via malvertising and exploit kits,” he says.


Article source: https://www.darkreading.com/perimeter/sodin-ransomware-exploits-windows-privilege-escalation-bug/d/d-id/1335145?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

New ‘WannaHydra’ Malware a Triple Threat to Android

The latest variant of WannaLocker is a banking Trojan, spyware tool, and ransomware.

WannaLocker, a mobile lookalike of the infamous WannaCry ransomware, just got a little more dangerous.

Researchers at Avast this week reported observing a new version of the malware that combines WannaCry’s user interface with new spyware, a banking Trojan, and remote administration functions.

The three-pronged threat, which the security vendor calls WannaHydra, is currently targeting users of four major banks in Brazil. But if it takes off, the malware could prove to be a major issue for Android users everywhere, Avast said.

WannaLocker surfaced in June 2017 around the same time as WannaCry. Avast first observed it targeting users of Chinese gaming forums disguised as a plug-in for a popular game.

When installed, the malware encrypted certain files stored on the device’s external storage while leaving other files untouched, including files smaller than 10KB and files containing “DCIM” or “download” in their path, Avast noted at the time. It then demanded a ransom of 40 Renminbi (currently around $5.80) for the decryption key. Trend Micro described the malware as a variant of SLocker, one of the earliest known ransomware tools, with a copycat GUI of WannaCry.
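As a rough illustration of that selection rule (and only the rule; nothing here encrypts anything), the logic Avast described can be mirrored in a few lines. The mount point and case handling are our assumptions for the sake of the example.

```python
from pathlib import Path

def would_be_targeted(path: Path) -> bool:
    """Mirror the 2017 WannaLocker selection rule as described by Avast:
    skip files under 10KB and anything with 'DCIM' or 'download' in its path."""
    lowered = str(path).lower()
    if "dcim" in lowered or "download" in lowered:
        return False
    try:
        return path.stat().st_size >= 10 * 1024
    except OSError:
        return False

# Example: enumerate which files on a copy of an SD card would have matched.
sdcard = Path("/mnt/sdcard")  # hypothetical mount point
at_risk = [f for f in sdcard.rglob("*") if f.is_file() and would_be_targeted(f)]
```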

The latest version of WannaLocker works by presenting users of the four targeted bank apps with a fake message urging them to sign into their accounts to address some account-related issue. Once installed, the malware collects a variety of information including the name of the device manufacturer and other hardware information, the phone number, text messages, call log, photos, contact list, microphone audio data, and GPS location information.

Like previous strains, the latest version of WannaLocker has the capability to encrypt files on the infected Android device’s external storage. But that particular feature appears to be still a work in progress, Avast said.

“The new malware, WannaHydra, shares the same UI [as WannaCry] in its ransomware module, but contains more capabilities including spyware and a banking Trojan,” says Nikolaos Chrysaidos, mobile threat researcher at Avast.

“It has quite wide-ranging abilities to collect information and could be used to extract personal and financial information in addition to delivering the ransomware package,” he says. It’s unclear, though, how much money, if any, the attackers might be asking as ransom, he notes.

WannaHydra appears to be still in development, so additional features could be added or existing ones removed at a later date, he adds. Such mobile malware typically poses more of a threat to Android users because attackers continue to find it easier to deliver malicious code on Android than on iOS devices, he says.

“In this instance, the attack is targeted toward users in Brazil, but anyone can be a victim of mobile malware,” he cautions.  That is why it is important for Android users to only download apps from trusted developers on certified app stores like Google Play. 

Sam Bakken, senior product marketing manager at cybersecurity vendor OneSpan, says it is quite likely that the ransomware functionality in the new WannaLocker variant exists as a fallback for situations where an infected device does not have any of the four targeted banks’ apps installed. “It would then fall back to the ransomware attack” in those situations, he says.

Android users should ensure they are running the most up-to-date version of the operating system possible, Bakken notes. That can often be difficult because Android users are frequently tied to the OS upgrade schedules of their device manufacturer or cellphone service provider, he says.

In addition, even when downloading apps from Google’s official store, users should make sure to check the number of downloads the app has and pay special attention to negative reviews, Bakken advises.


Article source: https://www.darkreading.com/attacks-breaches/new-wannahydra-malware-a-triple-threat-to-android/d/d-id/1335148?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

US Military Warns Companies to Look Out for Iranian Outlook Exploits

Microsoft patched a serious vulnerability in the Microsoft Outlook client in 2017, but an Iranian group continues to exploit the flaw.

The US Cyber Command, the military agency tasked with US online operations, has warned companies and government agencies that malware linked to state-sponsored groups from Iran uses a flaw in Microsoft’s Outlook mail client to turn off security features and gain access to users’ credentials. 

The vulnerability, patched in October 2017 by Microsoft, continues to be a threat because many companies do not regularly patch their systems. The last attacks used the vulnerability less than three weeks ago, says Nick Carr, senior manager investigating adversary methods at security service provider FireEye. 

In addition, there are some signs that the patch, which turned off the vulnerable feature in Outlook, could be reversed by attackers, he says. 

“This is a really interesting infection vector that we think will continue to be an issue,” Carr says. “We are aware of, and our red team and other red teams have exploited, the brittleness of this patch. It can basically be disabled by modifying the registry key to roll back the patch entirely.”

The warning comes as political tensions between the Trump administration and Iran continue to ratchet up, with both sides claiming to have launched cyberattacks against the other nation’s networks. Security experts have linked the use of the Outlook exploit to two Iranian-sponsored groups, known as APT34, which attacks targets in the Middle East, and APT33, which targets  organizations in the US, Europe, and the Middle East. 

With Iran willing and able to use destructive malware, such as the data-destroying Shamoon attack, companies need to bolster their defenses, says Brandon Levene, head of applied intelligence at Chronicle, the threat intelligence arm of Alphabet, Google’s parent company.

“Patch your systems or at least mitigate the outward access of these systems against exploitation if you cannot patch,” he says. “The second is that understand if you are a viable target for Iranian interests, then these are things you need to understand as part of your threat models.”

The Outlook flaw allows attackers to use the home page feature of the e-mail client to inject their own HTML and VisualBasic code, escaping from the secure sandbox. The vulnerability, CVE-2017-11774, can be triggered remotely, according to security firm SensePost, which discovered the flaw and reported it to Microsoft.

“This does have the downside of not allowing you to easily trigger the home page straight away, but you gain a stealthy persistence method,” SensePost stated in its analysis. “I can also recommend you build some ‘shell checks’ into your exploit, as the home page gets cached by Outlook, so the exploit may trigger even after you have unset the home page value.”
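For defenders who want to check their own estate, the Outlook folder home page that this technique abuses is stored in the user registry hive. Below is a minimal defensive sketch in Python; the registry path is an assumption drawn from public write-ups of CVE-2017-11774, so verify it against your Office version before relying on it.

```python
# A defensive sketch, not an official detection: look for a populated Outlook
# Inbox home page URL, the setting abused in CVE-2017-11774 attacks.
# The key path below is an assumption based on public research.
import winreg

OFFICE_VERSIONS = ["16.0", "15.0", "14.0"]

def outlook_homepage_urls():
    found = {}
    for version in OFFICE_VERSIONS:
        subkey = rf"Software\Microsoft\Office\{version}\Outlook\WebView\Inbox"
        try:
            with winreg.OpenKey(winreg.HKEY_CURRENT_USER, subkey) as key:
                url, _ = winreg.QueryValueEx(key, "URL")
        except OSError:
            continue  # key or value absent: nothing configured for this version
        if url:
            found[version] = url  # any populated home page URL deserves a closer look
    return found

if __name__ == "__main__":
    for version, url in outlook_homepage_urls().items():
        print(f"Office {version}: Inbox home page set to {url}")
```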

While 20 months should be enough time for a company to fix such a flaw, often such issues slip through the security process. The attack has been in use by Iran since at least 2018, security experts say.

On Tuesday USCYBERCOM submitted five files to VirusTotal that the military agency identified as part of an ongoing attack targeting a vulnerability in Microsoft Outlook patched in a regularly scheduled fix in October 2017.  

“USCYBERCOM has discovered active malicious use of CVE-2017-11774 and recommends immediate #patching,” the organization stated on Twitter. “Malware is currently delivered from: ‘hxxps://customermgmt.net/page/macrocosm’ #cybersecurity #infosec”

It’s the first time USCYBERCOM has warned companies of a non-Russian attack, Levene says.

Some of the files used in the attack date back to 2016 and 2018. The malicious website, however, is only a couple of weeks old. Overall, the warning from USCYBERCOM is not very timely, but it gives a sense of what the military considers a threat, Levene says.

“Are these technical indicators really useful? Not really,” he says. “These are historical indicators. It does set an interesting precedent for allowing us to get a better idea of the TTPs [tactics, techniques, and procedures] and behavior sets that CYBERCOM believes, at least, are relevant even now.”


Article source: https://www.darkreading.com/us-military-warns-companies-to-look-out-for-iranian-outlook-exploits/d/d-id/1335150?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Serious Security: Beware eBay scrapers promising to help you

Thanks to Andrew Last and Peter Mackenzie of Sophos Support for their help with this article.

If you’re an eBay user, the question you’ve probably been asked most often by friends and family – notably by those who’ve used it either never or rarely – is, “Don’t you get ripped off a lot?”

When you’re buying and selling at a distance, so that one of you has to ship before getting paid or to pay before it’s shipped, there are plenty of ways for the transaction to go wrong.

But for all the horror stories, most eBay transactions work out just fine – no one gets ripped off and there’s a net positive outcome for everyone.

Of course, it’s not just buyers and sellers on online trading sites that can lead you astray.

Here’s an example of a recent eBay spam that we investigated.

Judging by the automated look of the message, the spammer is scraping the details of items soon after sellers publish them and then offering eBay-related “viral promotion” services.

The text even includes some machine-generated – though admittedly not very convincing – flattery claiming a special interest in the product being advertised:

The message actually has no spaces in it – the gaps between the words are filled with a Unicode character known officially as MODIFIER LETTER UP TACK (presumably because it looks like a drawing pin or tack with the point facing upwards).

We’re guessing this is a simple trick intended to stop basic text analysis tools from splitting the message into words, given that most European languages use the regular space character (Unicode value U+0020; ASCII 32) to denote word endings, rather than an obscure TACK character (Unicode value U+02D4).
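If you run your own message filtering, a tiny normalisation pass defeats this trick: map known “space filler” characters back to ordinary spaces before any word-splitting. A minimal sketch follows; the filler list is just a starting point.

```python
# Map look-alike word separators back to real spaces before tokenising.
SPACE_FILLERS = {"\u02d4"}  # MODIFIER LETTER UP TACK; extend as new fillers appear

def normalise(text: str) -> str:
    return "".join(" " if ch in SPACE_FILLERS else ch for ch in text)

message = "Love\u02d4your\u02d4listing,\u02d4check\u02d4this\u02d4out!"
print(normalise(message).split())
# ['Love', 'your', 'listing,', 'check', 'this', 'out!']
```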

eBay doesn’t allow links in messages, both to reduce fraud and to discourage buyers and sellers from moving off-market, so the spammer has to fall back on embedding their advert in an image instead:

The URL in this case is a short vanity name in the .ME domain, hooked up to a URL redirector service that lets the owner of the domain change the final destination of the URL any time they like.

That makes it cheap and technically simple for the spammer to make an easily typed URL such as…

buyme.example

…redirect immediately to a much more complex URL such as:

online.service.example/showads?affiliate=14432&search=custom%20stuff&rating=3
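If you’re wondering what that redirect looks like under the hood, it’s nothing more exotic than an HTTP 301 response with a Location header. Here’s a minimal sketch in Python (hostnames and parameters are placeholders, as above); a hosted redirector service simply does this on the customer’s behalf.

```python
# A bare-bones 301 redirector: every request is bounced to an affiliate-tagged URL.
from http.server import BaseHTTPRequestHandler, HTTPServer

DESTINATION = ("http://online.service.example/showads"
               "?affiliate=14432&search=custom%20stuff&rating=3")

class Redirector(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(301)                  # "moved permanently"
        self.send_header("Location", DESTINATION)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8080), Redirector).serve_forever()
```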

In other words, the spammer effectively has their very own URL shortening service, and once they own buyme.example, they also get the right to use any and all subdomains, too.

So they can set up a number of different redirects for different online sites, such as ebay.buyme.example, amazon.buyme.example, alibaba.buyme.example and more.

Each one then takes you to a different advertising URL, tagged with the spammer’s affiliate code so that they get a modest click-through fee any time someone uses the link, and with search terms vaguely relevant to the product you’re selling right now, or related to the trading site you’re using.

Bingo!

The spammer now has their very own targeted advertising service – admittedly with very modest revenue, but on an even more modest budget.

Better yet for the spammer, they can run the whole thing largely automatically, and run dozens of these schemes at the same time, too.

The spammer gets to use the cloud for everything, and doesn’t need to set up any servers or services of their own – they don’t need to know a thing about how to operate DNS, how to run a web server, or how to format HTTP 301 redirects.

The whole campaign can be run using little more than a web browser, a few site-scraping scripts and a low-value pre-paid credit card.

But where’s the harm?

Apart from the annoyance of eBay message spam, is there any harm done or risk posed by this sort of “service”?

In the example we checked out, the spam took us to a set of online ads on popular “gig outsourcing” site Fiverr, offering the very sort of services that lots of home-business eBay users might actually find useful.

Cheap and cheerful photo-editing of product shots to make them sell for a few more dollars – what’s the harm in paying someone $5 to help you realise $15 more in your product’s auction?

Product videos, ready to use – what’s the harm in paying $30 to someone to do the work for you in a country where that’s serious money, and where the “gig worker” has no realistic local prospects of a job at all?

Actually, there are risks, regardless of what you think of the ethical and moral righteousness of zero-hour contracts and the so-called gig economy.

We tried the link in this spam many times from various parts of the world, in different browsers, and at different times of the day, and although we almost always ended up with the same ads, tagged with the same affiliate codes…

…we occasionally also received the added bonus of a COOL FREE DOWNLOAD that turned out to be malware, including one sample that tried to trick us into installing a cryptocurrency miner.

The cryptominer foisted on us was, of course, preconfigured to mine for someone else so that we’d be paying for the electricity but they’d get the cryptocoins.

Your mileage may vary

By the way, don’t forget that the malware we bumped into while investigating this message isn’t necessarily the same as what you might experience, even if you followed exactly the same link from exactly the same spammer.

In fact, it’s not merely possible, but actually quite likely that your experience would be different to ours.

The crooks use the cloud these days to deliver malware on demand, instead of packaging it into self-spreading viruses or worms like they used to, because that means they can alter the details of each malware attack at will.

They can infect only every fifth visitor, or hit German users with keyloggers but everyone else with ransomware, or try to infect you only during office hours, and so on.

How bad can it be?

When you type in unsolicited links – especially links like the one in this spam, which was deliberately presented in a way to sidestep the accepted policies on links in eBay messages – you’re putting an awful lot of trust where it doesn’t belong.

Let’s assume that the original spammer is essentially honest, and is aiming merely to make a humble income out of advertising other people’s attempts to make a humble income out of your own attempts to make a modest income out of selling stuff on eBay, who will in turn make a modest income out of your sale.

Well, there’s still a lot that can go wrong, including:

  • The original URL is an HTTP link. Anyone in the path between the spammer and you can not only detect that you’ve clicked an ad, but also modify or completely rewrite the reply that goes back to you.

Remember that even though an HTTPS link doesn’t say much about the truth or trustworthiness of the content that comes back from a secure web server, HTTPS nevertheless makes it very much harder for other people to mess with the content you do see, whereas HTTP links provide no protection against in-transit modifications at all.

  • The spammer could have chosen a poor password. Anyone who can guess or recover the password to the spammer’s internet account can modify the redirection URL at any time, and carry out a DNS hijack or a redirection hijack.

DNS hijacks are where a crook changes the signpost that points to your web server, so that some, many or all of the future requests to visit your server end up taking the wrong route and reaching the wrong destination.

Redirection hijacks are very similar, but they let the original web request get through to the usual server – with any encryption done correctly – and then trick the web server itself into farming off the request to a new site, once again with any encryption done correctly.

Worse still, crooks can turn the redirections on and off at will, thus revealing their treachery (or pushing out their malware) only occasionally and unpredictably, allowing them to stay unnoticed for longer.

  • Ad spammers could end up taking you to untrustworthy or badly run ad-serving services. When a spammer is aiming to make a few cents out of other people making a couple of dollars out of you making a few tens of dollars selling an unwanted gift, there isn’t a whole lot of time for due diligence.

When an ad server is compromised and rogue ads are added into the queue, the result is known as malvertising: malicious advertising that may lead to malware.

Malvertising is a tricky problem because most ad services deliberately target their ads for every visitor, depending on who’s visiting, where they’re coming from, and which advertisers are bidding at that moment; as a result, rogue ads may show up only occasionally, and reproducing the rogue ad may be difficult or even impossible.

What to do?

  • Don’t type in unsolicited links just to take a look. At best you will come out OK, and you’ll have lost nothing but time. At worst you could end up under attack from malware.
  • Run an anti-virus and web filtering tool. Sophos Home is free, and protects you not only from malware downloads but also from visiting known-dodgy URLs in the first place, whether by accident or design.
  • Report messages that are obviously constructed to look like something they are not. URLs in weird fonts embedded into images are there because the sender already knows they aren’t supposed to be there at all, so report this sort of thing to the provider of the service.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/HN6XESBEzMc/

D-Link must suffer indignity of security audits to settle with the Federal Trade Commission

Taiwanese networking equipment vendor D-Link will have to submit to a decade of product security audits after agreeing to settle a lawsuit brought by the US Federal Trade Commission.

It has also pledged to maintain a “comprehensive software security programme” for the next 20 years, designed to make its IP cameras and routers safe for consumers.

You can find the settlement order on the US antitrust body’s website, here (PDF).

“Notably,” the company said in a statement, “the order does not find D-Link Systems liable for any alleged violations.”

Back in 2017, the FTC accused D-Link of a long list of shoddy security practices, including, but not limited to, the use of non-removable default passwords in its IP cameras, command-injection flaws, leaked router security keys and the use of plain-text password storage in its mobile app.

The FTC said D-Link failed to take “reasonable steps” to secure its products, putting the privacy of customers everywhere at risk. The trade watchdog interpreted this as a consumer rights issue.

“When manufacturers tell consumers that their equipment is secure, it’s critical that they take the necessary steps to make sure that’s true,” FTC Consumer Protection Bureau director Jessica Rich wrote at the time.


The suit alleged six violations of the FTC Act of 1914: one count of unfairness and five counts of misrepresentation. After years of legal wrangling, the trial finally kicked off in January 2019.

During the case, D-Link argued that it shouldn’t be on trial, since no actual customers had been harmed, and it even managed to get the unfairness claim dismissed.

The company has not admitted its guilt, but agreed the security of devices is very important and said it would make sure to comply with the FTC’s requirements.

“We chose to defend against this litigation based on our strong belief in the quality and security of our products and practices,” D-Link said. “This settlement allows D-Link Systems to vigorously continue with its current comprehensive software security program and sets a new standard for secure software development practices for IoT devices.”

“This case will have a lasting impact and, we hope, positively shape public policy in the important areas of technology, data security, and privacy,” added John Vecchione, lead trial counsel for D-Link Systems and CEO of Cause of Action Institute.

Cause of Action, formerly known as Freedom Through Justice Foundation, is a “government accountability” nonprofit that is working for “economic liberty unencumbered by overregulation” and it took on the D-Link case pro bono.

Besides bi-annual audits by an agency appointed by the FTC, the 32-page settlement outlines a security programme for D-Link, which will have to be documented in writing, with annual reports to the board of directors and annual device security assessments.

Most of it reads like best practice: for example the settlement mandates performing threat modeling, using automatic firmware updates when possible and conducting pre-release vulnerability testing of every release of software – things a responsible hardware vendor should be doing anyway.

There’s also a requirement for a process for accepting vulnerability reports from security researchers, and biennial security training for personnel and vendors responsible for developing, implementing, or reviewing router or IP camera software.

It’s not clear if D-Link will be using it, but the vendor has been granted a two-year “safe harbor” period to get its house in order. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/07/03/dlink_to_suffer_the_indignity_of_security_audits_to_settle_with_the_ftc/


TA505 Group Launches New Targeted Attacks

Russian-speaking group has sent thousands of emails containing new malware to individuals working at financial institutions in the US, United Arab Emirates, and Singapore.

Russian-speaking threat group TA505 has begun targeting individuals working at financial institutions in the US, United Arab Emirates, and Singapore with new malware for downloading a dangerous remote access Trojan (RAT).

Researchers at Proofpoint say they have observed TA505 sending out tens of thousands of emails containing the downloader — dubbed “AndroMut” — to users in the three countries. The group is also targeting users in South Korea in a separate but similar campaign.

In both campaigns, TA505 is using AndroMut to download “FlawedAmmyy,” a full-featured RAT that allows the attackers to gain administrative control of an infected device to monitor user activity, profile the system, and steal credentials and other data from it.

FlawedAmmyy is malware that first surfaced in 2016 and is based on the leaked source code of a legitimate remote admin tool called Ammyy. TA505 has used it in previous campaigns.

“We regularly see the group Cobalt using the legitimate Cobalt Strike penetration testing software in their attacks,” says Chris Dawson, threat intelligence lead at Proofpoint. Threat actors also frequently abuse other remote admin tools, such as TeamViewer and VNC, in attacks. It is less common for a legitimate tool to be converted into standalone malware like FlawedAmmyy, he says.

The AndroMut downloader itself is new and first surfaced last month. The malware is written in C++ and appears to bear some resemblance to another downloader called Andromeda. AndroMut uses encryption and obfuscated API calls to evade detection and includes several anti-analysis features, including checks for sandboxes and emulators, checks for mouse movement, and checks for debuggers, Proofpoint said.
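To give a flavour of the mouse-movement check the researchers mention, here is an illustration of the general technique (not AndroMut’s code): a Windows sample only needs to poll the cursor and refuse to detonate if it never moves, since automated sandboxes often leave the cursor parked.

```python
# Windows-only illustration of a cursor-movement sandbox check, via the
# documented GetCursorPos API. Analysts defeat it by jittering the cursor.
import ctypes
import time

class POINT(ctypes.Structure):
    _fields_ = [("x", ctypes.c_long), ("y", ctypes.c_long)]

def cursor_position():
    pt = POINT()
    ctypes.windll.user32.GetCursorPos(ctypes.byref(pt))
    return pt.x, pt.y

def cursor_moved(window_seconds=30, poll_interval=1.0):
    start = cursor_position()
    deadline = time.time() + window_seconds
    while time.time() < deadline:
        if cursor_position() != start:
            return True          # a human (or a clever sandbox) is at the keyboard
        time.sleep(poll_interval)
    return False                 # cursor never moved: likely an analysis environment
```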

“TA505 has evolved over the last few years from an extremely high-volume actor dealing in global ransomware and banking Trojan campaigns to a targeted actor focused on regional campaigns and malware,” Dawson says.

Its current malware portfolio includes multiple downloaders and sophisticated RATs. “Infections with tools like these are quiet,” he notes. “Individuals and organizations often don’t know that they are infected until the actor decides to install additional malware, steal credentials and identity information, or launch further attacks inside an organization.”

The evasion and anti-analysis capabilities built into modern malware tools like AndroMut highlight the need for multilayered protections. In addition to securing emails and endpoint devices, organizations need to monitor for malware communication with command-and-control systems, Dawson notes.

For enterprises, the threat posed by TA505 appears to be growing, according to Proofpoint. The group is behind some of the largest email campaigns ever, including one to distribute the Locky ransomware.

Through 2017 and the first half of 2018, TA505 launched such massive campaigns that they dramatically affected global malicious email volumes, Dawson says. “The group saturated organizations with Locky ransomware and the Dridex banking Trojan,” he notes.

When TA505 shifted to smaller — though still relatively large — campaigns distributing RATs and other malware, it triggered a similar shift in this direction among other attackers that continues today, Dawson says.


Article source: https://www.darkreading.com/attacks-breaches/ta505-group-launches-new-targeted-attacks/d/d-id/1335136?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Black Hat Q&A: Understanding NSA’s Quest to Open Source Ghidra

National Security Agency researcher Brian Knighton offers a preview of his August Black Hat USA talk on the evolution of Ghidra.

The National Security Agency (NSA) made a splash in the cybersecurity industry this year when it released its Ghidra software reverse-engineering framework as open source for the community to use. Now that the tool is in the public’s hands, NSA senior researcher Brian Knighton and his colleague Chris Delikat will be presenting a talk at Black Hat USA about how Ghidra was designed and the process of rendering it open source.

We recently sat down with Brian to learn more about Ghidra and his Black Hat Briefing.

Alex Wawro: Can you tell us a bit about who you are and your recent work?

Brian Knighton: I’ve worked at NSA for about 20 years. The past 18 years I’ve been a member of the GHIDRA team, developing various aspects of the framework and features. My focus these days is applied research, utilizing Ghidra for cybersecurity and vulnerability research of Internet of Things (IoT) devices from smartphones to autonomous and connected vehicles.

My educational background includes a BS in Computer Science from University of Maryland and an MS in Computer Science from Johns Hopkins University.

Alex: What are you planning to speak about at Black Hat, and why now?

Brian: I’m going to use this opportunity to discuss some implementation details, design decisions, and the evolution of Ghidra from version 1.0 to version 9.0, and of course open source.

Alex: Why do you feel this is important? What are you hoping Black Hat attendees will learn from your presentation?

Brian: It’s important to describe how Ghidra came about, why certain things are implemented the way they are, why we selected Java, and why it’s called a framework. In the end, I hope it will allow the community to better utilize Ghidra for cyber-related research.

Alex: What’s been the most interesting side effect, so far, of taking Ghidra from internal tool to open-source offering?

Brian: The entire team is amazed and humbled by the overwhelming interest and acceptance of Ghidra. I knew it would be well received, but I’m surprised by how much. I feel honored to have been a part of it. For me personally, two specific things jump out.

The first was being on the floor at RSA and experiencing the energy, the excitement, and the positive interactions with so many folks during the three-day conference. The second was delivering a Ghidra lecture at a local university. One of the many reasons for releasing Ghidra was to get it into the hands of students and ultimately help advance cyber proficiency, and now I was actually doing it first-hand.

For more information about this Briefing check out the Black Hat USA Briefings page, which is regularly updated with new content as we get closer to the event! Black Hat USA returns to the Mandalay Bay in Las Vegas August 3-8, 2019. For more information on what’s happening at the event and how to register, check out the Black Hat website.

Article source: https://www.darkreading.com/threat-intelligence/black-hat-qanda-understanding-nsas-quest-to-open-source-ghidra/d/d-id/1335123?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Disarming Employee Weaponization

Human vulnerability presents a real threat for organizations. But it’s also a remarkable opportunity to turn employees into our strongest cyber warriors.

Employee awareness has become a critical necessity for modern organizational security. While the human factor has always presented an “inside threat” for companies, that threat is growing fast: the more social, hyperconnected, and fast-paced our culture becomes, the greater the risks employees bring into the organizational cyber space.

Worse, no matter how robust today’s cyber defense systems are, it seems that attackers always remain one step ahead. With vast data publicly available on any employee, “bad guys” easily gather and utilize personal information to target specific employee groups. These sophisticated tactics instantly expose employees’ vulnerabilities and turn them into human weapons, which in some recent global cyberattacks have had a destructive impact on the entire organization.

Almost any sizable company today implements some sort of security awareness training program, from lectures and posters to computer-based training modules, videos, and articles. These tools offer mostly static, dated content designed to be passively consumed by employees. The lack of context and relevance to employees’ daily routines breeds disengagement and creates friction between employees and the IT and HR teams who are constantly chasing them to complete the training.

Adopting a Secure Cyber Lifestyle
There are better ways to engage employees and transform their behavior, simply by leveraging the tremendous opportunity that modern reality offers. After multiple breaches of social networks, employees are gradually realizing just how vulnerable they are and how exposed, and easy to breach, their personal data is. They are also starting to understand that they carry that risk home, to their families, home computers, and personal email.

If we address employees’ underlying concerns, we can recruit them to play an active role in the cyber awareness mission and build a secure cyber lifestyle that goes well beyond the organizational environment alone. But to be effective, we need to assume a hacker mindset and customize the training to specific employee clusters and individuals. When it comes to training, there is no one-size-fits-all; the better we understand employees’ cyber behavior, the better we can tailor the training program to them. Utilizing innovative training solutions with advanced performance analytics allows us to test, analyze, and adapt the program itself to each employee and where they are in the learning curve.

Smart Phishing Awareness
Phishing accounts for 90% of data breaches, and roughly 30% of phishing messages get opened by targeted users, according to Verizon’s 2019 Data Breach Investigations Report. Training employees to identify phishing email and avoid falling prey to attacks has become mandatory, and phishing simulations are the best way to train employees on “real-life” scenarios in their own inbox. To plan and manage an effective phishing simulation campaign, you first need to segment employee groups by department and role and select the right message for each group. C-level executives are known to be a high-value target for attackers, so the C-suite will need additional, customized training.

Next, employees need to be clustered by their actual response to the phishing email, which conveys the risk level they present to the organization. The messages and training frequency need to be adjusted continuously, while employee progress and overall organizational resilience are assessed, analyzed, and reported back. Only consistent, customized, and adaptive training will transform employee behavior and build lasting organizational resilience to phishing attacks.
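As a concrete (if simplified) example of that clustering step, the results of a phishing simulation can be scored per employee and bucketed into risk tiers that drive training frequency. The column names, scores, and thresholds below are illustrative assumptions, not an industry standard.

```python
import csv
from collections import defaultdict

# How each simulated-phish outcome contributes to an employee's risk score.
ACTION_SCORES = {"reported": 0, "ignored": 1, "clicked": 3, "entered_credentials": 5}

def risk_tiers(results_csv):
    """Expects a CSV with 'employee' and 'action' columns, one row per simulation."""
    totals = defaultdict(int)
    with open(results_csv, newline="") as fh:
        for row in csv.DictReader(fh):
            totals[row["employee"]] += ACTION_SCORES.get(row["action"], 0)

    tiers = {"high": [], "medium": [], "low": []}
    for employee, score in totals.items():
        if score >= 5:
            tiers["high"].append(employee)    # retrain soon, with simpler lures
        elif score >= 2:
            tiers["medium"].append(employee)
        else:
            tiers["low"].append(employee)     # train less often, with harder lures
    return tiers
```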

Educational Apps
Social networks and mobile apps have become another strong attack vector taking advantage of employees’ false sense of security. Organizations must understand how employees interact with apps across different platforms and cultures, and then use the same tools and behavior patterns to build interactive training experiences. Interactive mobile games utilizing virtual reality, for example, can simulate a cyberattack on a social network and train employees for a safer behavior on these social platforms. These training apps should be accessible to employees via their personal mobile devices, just like social apps are present in every aspect of their social and professional lives.

Virtual Reality
Virtual reality can also be used to train specific or sensitive employee groups. These 3D-enabled scenarios leverage the gaming element to deliver a strong learning experience. Splitting employees into groups, such as a red team versus a blue team, can create a multisensory learning opportunity that leaves a strong mark on employees’ awareness and changes their behavior in the long term.

These are just a few examples of commercially available advanced training methods that can empower employees with the knowledge and tools needed to adopt a cyber secure lifestyle. Employee awareness is an essential tool in our cyber ecosystem. Only smart, engaging training programs that consider employees’ weaknesses and tailor the training to their professional profile, culture, and learning rhythm will convert employees from an organizational threat into a robust defensive workforce.

 



Article source: https://www.darkreading.com/perimeter/disarming-employee-weaponization/a/d-id/1335076?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple