STE WILLIAMS

If for some reason you’re still using TKIP crypto on your Wi-Fi, ditch it – Linux, Android world bug collides with it

It’s been a mildly rough week for Wi-Fi security: hard on the heels of a WPA2 weakness comes a programming cockup in the wpa_supplicant configuration tool used on Linux, Android, and other operating systems.

The flaw can potentially be exploited by nearby eavesdroppers to recover a crucial cryptographic key exchanged between a vulnerable device and its wireless access point – and decrypt and snoop on data sent over the air without having to know the Wi-Fi password. wpa_supplicant is used by Linux distributions, Android, and a few other platforms to configure Wi-Fi on computers, gadgets, and handhelds.

This key is used in networks that employ EAPOL (Extensible Authentication Protocol over LAN). The good news is that no more than around 20 per cent of wireless networks will be vulnerable, it is estimated, because the attack requires TKIP and WPA2 to be in use – and no one should be using TKIP in 2018.

In this paper [PDF], “Symbolic Execution of Security Protocol Implementations: Handling Cryptographic Primitives,” to be presented at the USENIX Workshop on Offensive Technologies (WOOT) next week, Mathy Vanhoef and Frank Piessens of the Katholieke Universiteit Leuven in Belgium explained how a decryption oracle can be used to perform unauthorized decryption of wireless network traffic.

Doing back, er, bit flips

In this Twitter thread, Vanhoef summarized what’s going on: “The problem is that data is decrypted with RC4, and then processed, without its authenticity being checked. So you can flip bits, see how the client reacts, and based on that, recover plaintext.”
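The root issue is that RC4, like any stream cipher, simply XORs a keystream into the data, so an attacker who flips a ciphertext bit flips exactly that plaintext bit after decryption – which is why decrypting before checking the MIC is dangerous. A minimal Python sketch of that malleability (the key and message here are invented; this illustrates the cipher property, not the EAPOL attack itself):

```python
def rc4_keystream(key: bytes, n: int) -> bytes:
    # Key-scheduling algorithm (KSA)
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    # Pseudo-random generation algorithm (PRGA)
    i = j = 0
    out = bytearray()
    for _ in range(n):
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(S[(S[i] + S[j]) % 256])
    return bytes(out)

plaintext = b"GTK handshake data"
key = b"per-session RC4 key"  # made-up key for illustration
ks = rc4_keystream(key, len(plaintext))
ciphertext = bytes(p ^ k for p, k in zip(plaintext, ks))

# Flip one bit of the ciphertext; the same bit of the recovered
# plaintext flips, with no error propagation anywhere else.
tampered = bytearray(ciphertext)
tampered[0] ^= 0x01
decrypted = bytes(c ^ k for c, k in zip(tampered, ks))
assert decrypted[0] == plaintext[0] ^ 0x01
assert decrypted[1:] == plaintext[1:]
```

Because nothing authenticated the ciphertext before decryption, the receiver happily processes the tampered plaintext – exactly the property the decryption oracle leans on.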

And as wpa_supplicant maintainer Jouni Malinen explained in his advisory on Wednesday: “It is possible for an attacker to modify the [EAPOL] frame in a way that makes wpa_supplicant decrypt the key data field without requiring a valid MIC [Message Integrity Code] value in the frame, ie: without the frame being authenticated.”

Malinen added that WPA2 shouldn’t be set up with TKIP as the latter is known to be weak anyway. However, as Vanhoef noted, there are still people out there using this combination. So, in short, just ensure TKIP is disabled. Malinen also noted that to recover group encryption keys, a snooper would have to make 128 connection attempts per octet, because an attacker’s bit-flips will make the four-way authentication handshake fail. Not only is this slow, it could crash the access point under attack.
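The 128-attempts figure is just the expected cost of guessing one byte through a yes/no oracle: a byte has 256 possible values, so on average the right guess turns up after about 128 tries. A toy Python simulation, with an invented `oracle` function standing in for the real handshake side channel:

```python
import secrets

SECRET = secrets.token_bytes(16)  # stands in for the group key

def oracle(pos: int, guess: int) -> bool:
    # Stand-in for the side channel: in the real attack, the client's
    # reaction to a tampered frame reveals whether a guess was right.
    return SECRET[pos] == guess

recovered = bytearray()
attempts = []
for pos in range(len(SECRET)):
    for guess in range(256):
        if oracle(pos, guess):
            recovered.append(guess)
            attempts.append(guess + 1)  # number of tries for this byte
            break

assert bytes(recovered) == SECRET
# Worst case is 256 tries per byte; the average works out to roughly
# 128 -- matching Malinen's "128 connection attempts per octet".
```

Each failed guess costs a full failed handshake, which is what makes the attack slow and noisy in practice.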

Vanhoef conceded that an attack would be slow to pull off, taking around 20 minutes per byte recovered. “Several clients can be attacked in parallel, but it’s still a non-trivial attack. Patch, but don’t worry too much,” he said.

Nonetheless, the wpa_supplicant team has taken the bug seriously, and developed a fix that should eventually trickle down to netizens. Access points and devices will need an update for networks to be free from the flaw. Malinen said that, if possible, affected users should just kill off TKIP on their networks.
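On the client side, disabling TKIP with wpa_supplicant comes down to restricting the network block to CCMP (AES) ciphers. A minimal illustrative wpa_supplicant.conf entry – the SSID and passphrase here are placeholders:

```
network={
    # Placeholder SSID and passphrase
    ssid="ExampleNet"
    psk="correct-battery-staple"
    # WPA2 (RSN) only, with AES-CCMP for both the pairwise and group
    # ciphers: no TKIP anywhere
    proto=RSN
    pairwise=CCMP
    group=CCMP
}
```

With `group=CCMP` set, the client simply refuses to join a network that hands out TKIP group keys.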

The wpa_supplicant maintainers have pushed out a hotfix here, and the next version, 2.7, will carry the fix. Vanhoef has a proof-of-concept of his attack over on GitHub. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/08/09/wifi_eapol_oracle_attack/

Google Engineering Lead on Lessons Learned From Chrome’s HTTPS Push

Google engineering director Parisa Tabriz took the Black Hat keynote stage to detail the Chrome transition and share advice with security pros.

BLACK HAT USA 2018 – Las Vegas – Parisa Tabriz, director of engineering at Google, today shared the story of how the company made the switch to enforce HTTPS in Chrome, as well as lessons she learned in collaboration, management, and perseverance over the course of the multiyear project.

As Black Hat founder Jeff Moss put it in his introduction, there are “maybe 20 companies in the world who are in a position to actually do something about raising the level of security and resiliency for all of us.”

Google is one of them. “Anything Google does that is an improvement essentially impacts us all,” Moss added. Recently, for example, the company reached the end of a years-long initiative to label HTTP websites as “not secure” in an effort to push Web developers to embrace HTTPS encryption. The change went into effect last month with the release of Chrome 68; now HTTP sites display an alert to visitors.

The change wasn’t an easy one, as conference attendees learned today during Tabriz’s keynote. She began her talk by detailing what she believes are the key steps to success in information security, and then weaved these points into the story of how HTTPS enforcement went from bold idea to reality.

Tabriz’s first – and foremost – point: “Identify and tackle the root cause of the problems we uncover and not just be satisfied with isolated fixes.”

She pointed to Project Zero, a team of Google security analysts tasked with finding zero-day bugs. The group was created in 2014 to hunt vulnerabilities in operating systems, browsers, antivirus software, and other tech. Its goal is to better understand offensive security and the exploitation techniques used against defenders, ultimately leading to structural improvements and better security, Tabriz explained.

Her second point: “We have to be more intentional in how we improve long-arc defensive projects.”

Staying motivated to see an effort through over a long period of time, as certain projects can take years depending on their scale, is crucial. As part of this, Tabriz emphasized the importance of identifying milestones, working toward those milestones, and celebrating progress along the way. 

Her third point: build a coalition of champions and supporters outside security so the team’s efforts are successful. “There’s an understanding we’re generally working on similar goals,” she said, adding that the cost of building exploits against high-profile targets has increased due to the efforts of people working together to address root causes of bad security.

Enforcing HTTPS: Idea to Implementation
Tabriz’s ideas are evident in the story of Google’s move to enforce HTTPS on the Chrome browser. 

“We wanted to make the risk of unencrypted traffic more comprehensible and also more consistent,” she explained. The team had a clear vision of what that state should look like but knew it would take many years to achieve it. After all, the Web isn’t an entity owned by any one person or institution, she noted, and they had to navigate an “ecosystem of constraints and incentives” – including industry organizations and browsers with different standards, proprietary ideas, and technology – in the process.

“To make a change like this it had to be gradual, and it had to be very intentional,” Tabriz said. The process began back in 2014 with brainstorming how to change the browser’s user interface, and continued in 2015 with a public UI proposal. Going public with the idea led to support and sentiment from the broader infosec community, which helped convince Chrome leadership this was worth pursuing.

Over the course of the project, Tabriz emphasized the importance of celebrating milestones to keep up morale. The team created stickers of icons from its UI designs, for example, when those were finalized.

Tabriz also pointed to the myriad ways in which their work could have failed to demonstrate lessons learned. Management could have killed the project, for example, when the timeline turned out to be years longer than anticipated. Because the team was able to regularly articulate progress and demonstrate positive impact in terms of overall code health, they were never told to discontinue the project. When the benefits aren’t immediately clear, it’s important to get people invested in the project’s success.

The HTTPS push also could have failed without broader support from Google’s leadership and external parties. The team worked with several Web standards groups to communicate the change, she explained, and the ability to kill progress could have come from outside the company.

“We rely on everyone working in technology to clear the path for a safer future,” Tabriz said.


Kelly Sheridan is the Staff Editor at Dark Reading, where she focuses on cybersecurity news and analysis. She is a business technology journalist who previously reported for InformationWeek, where she covered Microsoft, and Insurance Technology, where she covered financial … View Full Bio

Article source: https://www.darkreading.com/endpoint/google-engineering-lead-on-lessons-learned-from-chromes-https-push/d/d-id/1332519?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Researchers Release Free TRITON/TRISIS Malware Detection Tools

Team of experts re-creates the TRITON/TRISIS attack to better understand the epic hack of an energy plant, an attack that ultimately failed.

BLACK HAT USA – Las Vegas – A team of ICS experts who spent the past year studying and re-creating the so-called TRITON/TRISIS malware that targeted a Schneider Electric safety instrumented system (SIS) at an oil and gas petrochemical plant has developed open source tools for detecting it.

Researchers from Nozomi Networks, along with independent ICS expert Marina Krotofil, previously with FireEye, today demonstrated how the malware works, as well as a simulation of how it could be used to wage a destructive attack. TRITON/TRISIS was discovered in 2017 in a Middle Eastern plant after an apparent failure in the attack shut down its Triconex safety systems. No official payload was ever deployed or found, so researchers, to date, have only been able to speculate about the attackers’ ultimate plans.

Nozomi Networks recently released the TriStation Protocol Plug-in for Wireshark that the researchers wrote to dissect the Triconex system’s proprietary TriStation protocol. The free tool can detect TRITON malware communicating in the network, as well as gather intelligence on the communication, translate function codes, and extract PLC programs that it is transmitting.

The researchers today added a second free TRITON defense tool, the Triconex Honeypot Tool, which simulates the controller so that ICS organizations can set up SIS lures (honeypots) to detect TRITON reconnaissance scans and attack attempts on their safety networks.

Andrea Carcano, founder and chief product officer at Nozomi, says he and his team re-created the attack in their lab, and found that writing custom malware like TRITON/Trisis would not be as difficult or as expensive as you’d think. TriStation software sells for about $3, for example, on one e-commerce website in China, and the hardware sells for anywhere from $5,000 to $10,000 on eBay and Alibaba. The TriStation engineering software file names have information on the software architecture and structure, Carcano said.

“Everyone believed this malware was more sophisticated,” he says. “But if you have a little knowledge and patience, you can go online and download the manuals, software, and buy a Triconex controller on eBay like we did … and build the attack in your house.”

Schneider Electric, meanwhile, maintains that a TRITON-type attack would require a skilled threat actor. Andy Kling, director of cybersecurity and system architecture for Schneider, notes that even if an attacker had the Triconex equipment on hand, the attacker would still need to understand how it works. “You’d still need to have a certain level of understanding of how not to step on the safety program and how to stealthily inject this malware into the device,” he says. “Those skills, we believe, are still pretty high.”

Kling says Schneider is still operating under the same working theory that the TRITON malware was “in development” and that the attackers likely hadn’t yet completed their malware arsenal for fully deploying the remote access Trojan. “Or perhaps they had another team and they had not gotten to the final payload piece,” he said.

Liam O’Murchu, director of security technology and response at Symantec, says there are still plenty of questions surrounding the intention of the attack and whether it was a test-run or a failed attack. “Was it a competitor trying to take over another competitor? That’s one suspicion about it,” he says. “They were specifically focused on that particular target.”

During its forensic investigation of the attack, Schneider wrote its own malware detection tool for TRITON. “We have been running the program to help our customers see if they were infected by it,” Kling says. “We’ve been running it for months, and there have been no ‘positive’ [detection] results worldwide, and no new indicators of compromise.”

Another Hole
While analyzing TRITON, the Nozomi researchers also stumbled on a built-in backdoor maintenance function in the Triconex TriStation 1131 version 4.9 controller.

“We also found two undocumented power users with hard-coded credentials,” Nozomi wrote in a blog post today. “One of the power user’s login enabled a hidden menu, which from an attacker’s perspective, could be useful.”

But Carcano says the feature was not part of the TRITON malware attack. That maintenance support feature has basically been phased out of later versions of the TriStation.

“Fifteen years ago, when those products were developed, you would have a support account built into the products. That was the norm,” says Schneider Electric’s Kling. But later versions of the software, from 4.9.1 and up, included recommendations and the ability to delete the account.

“As time went on, [we added] a secure version of this product, and this [support] account no longer exists,” he says.

Kling says Triconex users should update their software if they’re still running the older 4.9 version.

Kelly Jackson Higgins is Executive Editor at DarkReading.com. She is an award-winning veteran technology and business journalist with more than two decades of experience in reporting and editing for various publications, including Network Computing, Secure Enterprise … View Full Bio

Article source: https://www.darkreading.com/threat-intelligence/researchers-release-free-triton-trisis-malware-detection-tools/d/d-id/1332520?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

No, The Mafia Doesn’t Own Cybercrime: Study

Organized crime does, however, sometimes provide money-laundering and other expertise to cybercriminals.

BLACK HAT USA 2018 – Las Vegas – Organized crime organizations play less of a role in cybercrime than you’d think. Instead, a new generation of criminal entrepreneurs runs much of the cyberattack operations worldwide, according to new research presented here today.

Over a seven-year period, Jonathan Lusthaus, director of the human cybercriminal project at Oxford University’s sociology department, studied the role of mafia organizations in cybercrime in 20 different countries, including Russia, Ukraine, Romania, Nigeria, Brazil, China, and the US. He found that while organized crime provides help or guidance to cybercrime gangs or campaigns in some cases, the bulk of these hacking enterprises are conducted by a new breed of criminal.

“I was a little surprised by the limited role that organized crime appears to play in cybercrime,” Lusthaus said. “I was particularly surprised that I didn’t find more cases where these groups were protecting cybercriminals.”

But that actually makes sense, he said. “Cybercriminals often aren’t competing with each other in a traditional territorial way, so they don’t always need gangsters and strongmen to keep them safe or resolve disputes between them,” he explained. “There are some examples of this but many others where mafias just aren’t present.”

The premise that organized crime and cybercrime are one and the same has been based mainly on “innuendo” and assumptions, according to Lusthaus, who presented his findings here in his talk “Is the Mafia Taking Over Cybercrime?”

Organized crime organizations’ cybercrime activity is “more nuanced,” according to Lusthaus, happening in a more “organic” fashion. “They tended to get involved in ways that matched their traditional skill sets and where there was a genuine need for what they could provide, such as in running money-mule or money-laundering operations,” he said. “It’s also clear that they are using technology to enhance their other criminal operations, though this isn’t cybercrime per se.”

Where organized crime organizations intersect with cybercrime falls into four activities, according to Lusthaus’ research: providing protection for cybercriminals, investing in cybercrime ops, acting as “service providers” for a cybercrime scheme, and helping guide cybercriminals in their activities.

Lusthaus interviewed hundreds of law enforcement officials, former cybercriminals, and other experts and individuals in the private sector for his research, which is based on his newly published book, “Industry of Anonymity: Inside the Business of Cybercrime.”

Lusthaus found an interesting paradox: While many of the people he interviewed believed organized crime plays a major role in cybercrime, few were able to provide examples. “Many participants in this study believed that organized crime involvement in cybercrime was substantial. But when pressed, this appeared to be a theoretical rather than an empirical view,” he wrote in a white paper he released in conjunction with his Black Hat presentation.

‘Service Provider’
That said, Lusthaus found several examples where organized crime and cybercrime work together.

In some cases, organized crime groups are investing financially in cybercrime, mainly as a way to leverage outside hacking expertise to make money. In one case shared by a UK law enforcement official, a cybercriminal got funding from a “well-established” organized crime syndicate to fund the work of a programmer to write software that would allow the group to obtain payment card information from banks. That deal backfired after a dispute between the cybercriminal and the group, and the cybercriminal had to go on the run after his life was threatened.

Lusthaus also said there are cases of organized crime groups offering their own services to cybercrime operations, including the “offline” money-laundering of stolen money. One high-profile case was the 1994 breach of Citibank by Vladimir Levin, who had millions of dollars from the hack laundered in illegal money transfers. A US law enforcement official said a Russian mafia group in St. Petersburg called the Tambov Gang financed and handled the flow of those stolen funds to Russia.

Lusthaus found other examples as well – most recently, mafia groups smuggling card skimmers and blank cards, used to make counterfeit credit and debit cards from stolen account information, between Eastern and Western Europe.

Organized crime also can serve as a coordinator for specific cybercrime operations. “This usually involves recruiting those with technical skills, among others, to carry out the jobs,” Lusthaus wrote in his paper.

Interestingly, mafia groups rarely provide protection for cybercriminals, his research showed. That role typically gets filled by law enforcement or political figures for a monetary price.

Meanwhile, Lusthaus pointed out a way to deter cybercrime: by recruiting and hiring cybersecurity professionals in regions where cybercrime ops are rampant and often the only option for these individuals, such as in some Eastern European countries.



Article source: https://www.darkreading.com/threat-intelligence/no-the-mafia-doesnt-own-cybercrime-study/d/d-id/1332488?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

White Hat to Black Hat: What Motivates the Switch to Cybercrime

Almost one in ten security pros in the US have considered black hat work, and experts believe many dabble in criminal activity for financial gain or employer retaliation.

A new report aims to shed light on what motivates security professionals to choose black hats over white ones as part of a broader study on the overall cost of cybercrime. 

To learn more about the organizational cost of cyberattacks and what lures hackers to the “dark side,” Malwarebytes and Osterman Research teamed up and polled 200 security pros between May and June 2018. They found security-related costs are enormous and growing, partly due to a spike in breaches and partly due to a proportion of industry experts donning “gray hats” and dabbling in cybercrime for money.

The cost of crime can be broken into three parts: budgeted costs for security infrastructure, services, and labor; off-budget costs related to major security incidents; and the cost of handling insider breaches. An organization with 2,500 employees can spend about $1.9 million on security, researchers write in a new report, “White Hat, Black Hat and the Emergence of the Gray Hat: The True Costs of Cybercrime.”

It’s little surprise to learn most organizations in the US have suffered some type of security breach in the 12 months preceding the survey. Phishing is the most common, though respondents also listed adware/spyware and spearphishing. Organizations reported an average of 1.8 “major” attacks — those that lead to significant operational disruption or shutdown — during 2017.

Mid-market companies, which usually have 500-1,000 employees, are hit hardest. Small businesses don’t have a wealth of valuable data; large ones have ramped up their defenses. Those in the middle are hit with more attacks than their smaller counterparts and have similar rates of attack as larger enterprises; however, they have fewer resources to defend themselves and fewer staff to combat threats.

“Midsize businesses are the perfect target if you’re a cybercriminal,” explains Malwarebytes intelligence director Adam Kujawa.

It’s tough to stay safe when security is expensive. The average starting salary for an entry-level security pro in the US is $65,578, slightly above the global average of $60,662. Top security professionals in the States make an average of $133,422, the second highest among nations surveyed. One of the biggest costs to organizations, in addition to hiring talent, is retaining it.

But is it enough to keep security experts away from cybercrime? More than half of respondents know, or have known, someone who has engaged in black hat activity, the highest rate among the five nations polled. Twenty-two percent have been approached to participate in cybercrime; 8% considered it.

Researchers asked about respondents’ willingness to become “gray hats,” or folks who maintain their roles as security professionals while becoming a black hat hacker on the side. Those in the US think 5.1% of their infosec colleagues are gray hats; in the UK, 7.9% of security pros are believed to be gray hats. 

Most people in security think cybercrime is more lucrative and easier to enter than white hat security roles, they report. Nearly three in five security pros in the US think people become black hats because it’s more financially rewarding than being a security professional. More than half think it’s to retaliate against an employer, and half think black hats are driven by “some sort of cause or philosophical reason.”

Security pros in the US are most likely to think employer retaliation is the driver, with more than half (53.3%) reporting that as the reason, compared with the global average of 39.7%. Malicious insiders are harder to find and have the potential to cause deeper damage than external attackers.

“That’s a very expensive attack,” Kujawa points out. “The value of the data is probably more than what a regular cybercriminal could gather or accomplish.” Further, the company loses its trust in network infrastructure, which requires more work to address than securing the doors against outside threats.

“Cybercrime is more available to everybody than it ever has been,” he continues. Survey respondents believe these days, it’s easier for anyone to wear a black hat. “One of the big lures we got from the survey is a lot of global mid market companies suggest it’s easier to get into cybercrime without getting caught.”



Article source: https://www.darkreading.com/threat-intelligence/white-hat-to-black-hat-what-motivates-the-switch-to-cybercrime/d/d-id/1332521?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Google Project Zero boss: Blockchain won’t solve your security woes – but partying just might

Black Hat Parisa Tabriz, a director of engineering at Google and head of the web giant’s Project Zero bug-hunting squad, today opened this year’s Black Hat USA conference with a reminder that partying is key to securing software.

There’s more to it than that, of course: clear goals and targets have to be set, management and staff have to be in agreement and reading from the same page, and the root causes of bugs need to be identified and addressed rather than sticking plaster slapped over holes.

Writing secure code and protecting systems is an arduous task, so employees need to stay motivated – and celebrating successes regularly, with a little party or two, encourages folks to get things done.

Oh, and don’t be distracted by fads like blockchain databases…

“Blockchain is not going to solve security problems,” she told the crowd, much to the chagrin of vendors who have signs up in the expo hall proclaiming the opposite. “We have made great strides in the past decade, but the threat landscape is becoming increasingly complex and our current approach is insufficient.”

By way of example, she discussed Google’s four-year project, completed in July, to have Chrome label non-HTTPS webpages as insecure. There was significant pushback when the naming’n’shaming move was proposed; however, by setting out clear goals and working to get management to buy into it, the project was launched.

The Googlers working on the move even held a poetry slam to write haikus describing where they wanted to go, including this gem:

Secrets in the tubes

People in the middle snoop

Protect with crypto

By 2015, a section detailing the push was added to and developed on the Chromium wiki. This was used to push the case to management to make the switch. Each milestone was celebrated within the team, sometimes as simple as baking a cake and having a bit of a party.


Another key to success is setting firm and clearly defined deadlines. Project Zero has come in for some flak for enforcing a 90-day disclosure rule: no more than three months after the vendor has been notified of a vulnerability in its product or project, Google will go public with the details.

There are exceptions, notably with the computer processor world’s Spectre and Meltdown flaws – which involved six months of behind-the-scenes work – however, in general the rules have encouraged the industry to speed up the issuing of security bug fixes, Tabriz opined. We’re told 98 per cent of vulnerabilities are patched within the 90-day deadline, a marked improvement from the long delays for patches that were previously the norm.

You can watch her hour-or-so-long keynote on YouTube, from the nine-minute mark.

In his introductory remarks, Black Hat founder Jeff Moss echoed Tabriz’s calls for a secure-by-default world. There are about 320 companies, he said, that control the online safety of billions of us, typically operating system, browser, and key hardware manufacturers.

“We have to build a culture around defense,” he said. “It’s up to us to put pressure on companies. We can change the security posture for the entire world.” ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/08/08/black_hat_keynote_parisa_tabriz/

Understanding Firewalls: Build Them Up, Tear Them Down

A presentation at Black Hat USA will walk attendees through developing a firewall for MacOS, and then poking holes in it.

Firewalls traditionally focus on traffic coming into a network (or endpoint) from the outside. Advanced threats use a number of techniques to get around that focus – and those techniques aimed at MacOS are at the heart of research being presented at Black Hat this week.

Patrick Wardle, chief research officer at Digita Security and founder of Objective-See, decided that the best way to understand the limitations and possibilities of a firewall was to build his own. The first part of his presentation at Black Hat (and a subsequent talk at DEF CON) will be about how one goes about building a firewall that looks at traffic flowing in both directions and precisely what such a firewall can be expected to stop.

(See Wardle’s session, “Fire & Ice: Making and Breaking macOS Firewalls,” on Thursday, August 9, at Black Hat USA.)

The second part of the presentation will look at how an attacker would go about breaking through the firewall to reach the target within. Wardle says existing third-party firewalls for MacOS protect traffic in both directions and can be quite effective.

“There are some Mac malware samples that, the first thing they do when run, is enumerate the installed software and look for one of these firewall products,” Wardle says. “And if they see one of these firewall products, they will actually not infect the system because they know that the firewall will basically detect them and then give away their presence to the user.”

But even good firewalls are at a disadvantage to attackers because, in the Internet era, certain communications simply must be allowed. “I run through a variety of hacks where we can basically abuse trusted protocols, trusted processes. And even though the firewalls will see these connections, they will allow them because they have no way of telling that they’re actually malicious,” Wardle says.
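The gap Wardle describes can be modeled in a few lines: an allowlist firewall decides based on which process (or protocol) is talking, not what it is saying, so traffic riding a trusted process sails straight through. A deliberately simplified Python sketch – the process names and destination are made up:

```python
# Toy model of an allowlist-based outbound firewall.
ALLOWED_PROCESSES = {"Safari", "curl"}  # hypothetical trusted binaries

def outbound_allowed(process: str, dest: str) -> bool:
    # The firewall sees only (process, destination); note that `dest`
    # is never inspected, and nothing examines the payload, so it
    # cannot tell a legitimate request from exfiltration.
    return process in ALLOWED_PROCESSES

# Malware talking over its own process is blocked...
assert not outbound_allowed("malware", "attacker.example")
# ...but the same exfiltration via a trusted tool is waved through.
assert outbound_allowed("curl", "attacker.example")
```

This is the structural reason even well-built bidirectional firewalls can be bypassed: once a process or protocol must be trusted, anything that can abuse it inherits that trust.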

Many Mac users are more trusting than they should be because of the Mac’s reputation for security. It’s a reputation that Wardle says is based on history and aggressive marketing – and is less deserved than was once the case.

“In my expert professional opinion, if you look at the latest version of Windows – Windows 10 – and compare it to the latest version of OS X, there’s really no comparison in terms of security. The Windows operating system is just so much more secure,” Wardle says. “Any attacker who wants to infect your Mac computer, if they’re advanced and sophisticated enough, they are going to have no problem hacking in.”

The firewall that Wardle developed for his presentation will be available on GitHub at the end of his session. The software will be free and open source.


Curtis Franklin Jr. is Senior Editor at Dark Reading. In this role he focuses on product and technology coverage for the publication. In addition he works on audio and video programming for Dark Reading and contributes to activities at Interop ITX, Black Hat, INsecurity, and … View Full Bio

Article source: https://www.darkreading.com/black-hat/understanding-firewalls-build-them-up-tear-them-down/d/d-id/1332516?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

10 Threats Lurking on the Dark Web

Despite some high-profile takedowns last year, the Dark Web remains alive and well. Here’s a compilation of some of the more prolific threats that loom.

Image Source: Shutterstock via Who is Danny

Security pros can never rest. Even with the operation last year that took down AlphaBay and Hansa, industry experts say many groups continue to trade in malware, ransomware, and stolen credentials on the Dark Web, and that the criminals who were caught simply reorganized.

“People need to understand that there’s an underground economy – a marketplace where all these things are being traded and sold,” says Munish Walther-Puri, chief research officer and head of intelligence analytics at Terbium Labs.

And it’s a global market, points out Jon Clay, director of global threat communications at Trend Micro. “These markets are all over the place; they can be from Russia, China, the United States, France, Eastern Europe, Africa, wherever,” Clay says. “And in a lot of these developing countries, the median wage is low and there are not always jobs available.”

Getting started is as easy as viewing tutorials on the Dark Web on how to become a cybercriminal, how to write code, and how to launch attacks, he says. “You don’t always even need the knowledge or know-how. Threat actors can go right ahead and launch a ransomware attack using full attack services available to them,” Clay explains.  

Terbium’s Walther-Puri says companies need to understand that the threat on the Dark Web is ongoing and ever-shifting – and they have to keep up. To do so, they must first identify their crown jewels and determine a baseline of exposure. Only then can they develop an ongoing monitoring strategy, he says.

But before companies get too proactive, they also have to understand the threat landscape. That’s where this compilation of leading threats on the Dark Web comes in. 

Steve Zurier has more than 30 years of journalism and publishing experience, most of the last 24 of which were spent covering networking and security technology. Steve is based in Columbia, Md.

Article source: https://www.darkreading.com/threat-intelligence/10-threats-lurking-on-the-dark-web/d/d-id/1332514?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Could deliberately adding security bugs make software more secure?

The best way to defend against software flaws is to find them before the attackers do.

This is the unshakeable security orthodoxy challenged by a radical new study from researchers at New York University. The study argues that a better approach might be to fill software with so many false flaws that black hats get bogged down working out which ones are real and which aren’t.

Granted, it’s an idea likely to earn you a few incredulous stares if suggested around the water cooler, but let’s do it the justice of trying to explain the concept.

The authors’ summary is disarmingly simple:

Rather than eliminating bugs, we instead add large numbers of bugs that are provably (but not obviously) non-exploitable.

By carefully constraining the conditions under which these bugs manifest and the effects they have on the program, we can ensure that chaff bugs are non-exploitable and will only, at worst, crash the program.

Each of these bugs is called a ‘chaff’, presumably in honour of the British WW2 tactic of the same name: confusing German radar by filling the sky with clouds of aluminium strips.

Arguably, it’s a distant version of the security by obscurity principle which holds that something can be made more secure by embedding a secret design element that only the defenders know about.

In the case of software flaws and aluminium chaff clouds, the defenders know where and what they are but the attackers don’t. As long as that holds true, the theory goes, the enemy is at a disadvantage.

The concept has its origins in something called LAVA, co-developed by one of the study’s authors to inject flaws into C/C++ software to test the effectiveness of the automated flaw-finding tools widely used by developers.

Of course, attackers also hunt for flaws, which is why the idea of deliberately putting flaws into software to consume their resources must have seemed like a logical jump.

To date, the researchers have managed to inject thousands of non-exploitable flaws into real software using a prototype setup, which shows that the tricky engineering of adding flaws that don’t muck up programs is at least possible.

Good idea, mad idea?

Now to the matter of whether this idea would work in what humans loosely refer to as the real world.

The standout objection is that the concept is a non-starter for the growing volume of the world’s software that is open source (secret code and open source are incompatible ideas).

The next biggie is that even applied to proprietary software, adding bogus flaws would tie down legitimate researchers who take the time to find and report serious security vulnerabilities.

While it’s true that attackers would also be bogged down, adding the same layer of inconvenience to the job of the good guys might negate this benefit.

The worst-case scenario is that attackers eventually fine-tune their flaw-hunting rigs to spot the bogus code and you end up back at square one. In this world, injecting new chaff to defeat this would become a full-time job.

It’s not as if the presence of chaff would be hard for anyone to discover – all they’d have to do is compare the size of a new version with an old one and make an educated guess about how much of the difference is new features and how much is chaff.

More likely, developers would run a mile for fear that the process of injecting chaff would in itself risk creating new and possibly real flaws, even if those were simply denial of service states caused by a program crashing.

In the end, intriguing though the chaff concept is, the best way to cope with security flaws remains the proven method – find and efficiently mitigate or patch them before the attackers find out.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/FT2vuDhHE7U/

Facebook wants to be the future of online banking

Here’s what the Wall Street Journal reported on Monday: Facebook has asked big banks to share their customers’ personal financial data, including card transactions and checking-account balances.

And here, basically, was the response from anybody who’s ever heard of Cambridge Analytica: Hysterical laughter with a bit of “Oh, hell NO. We should trust Facebook with our financial data why!?”

And here, in essence, was Facebook’s response, as it tried once again to convince everybody that it knows how to spell the word “privacy”: No, we aren’t asking for financial data! We just want to insert ourselves between you and your bank and keep you from waiting on the phone so long. Because bots! Chatbots! In Messenger!

Facebook has, in fact, approached big banks, including Wells Fargo, JPMorgan Chase, Citigroup and US Bancorp, with an eye toward partnering. According to the WSJ, this is how it envisions the swap: the banks would give Facebook their customers’ banking data, and the platform would give bank customers the ability to conduct business within Facebook itself – specifically, within Messenger.

People familiar with the talks told the newspaper that one feature Facebook has discussed would show its users their checking-account balances. It’s also pitching fraud alerts, some insiders have said. The WSJ also reports that the banks have been hit up by Google and Amazon on the data-sharing front: they reportedly want to provide basic banking services on applications such as Google Assistant and Alexa.

A spokesperson for Facebook told The Next Web that no, Facebook hasn’t asked banks for users’ transaction data. Rather, this is all about getting banking chatbots into Messenger to chat us up.

Facebook’s statement:

Like many online companies with commerce businesses, we partner with banks and credit card companies to offer services like customer chat or account management. The idea is that messaging with a bank can be better than waiting on hold over the phone – and it’s completely opt-in. We’re not using this information beyond enabling these types of experiences – not for advertising or anything else.

Unfortunately for Facebook’s bot plans, users are still steaming over the Cambridge Analytica revelations, and banks have caught the user data heebie-jeebies.

After all, it’s been a scarce six months since news emerged about Facebook losing control of 50m users’ data – data that wound up getting sucked up by a developer and sold to the data analytics firm so it could flesh out a tool to sell to Steve Bannon.

That tool was designed to use Facebook users’ personalities and other data so as to target Americans’ inner demons and influence their behavior in the 2016 US presidential election. Cambridge Analytica founder Christopher Wylie described it as “Steve Bannon’s psychological warfare mindf**k tool.”

The data crisis might have been sparked by Cambridge Analytica, but it’s spread well beyond it to reveal that Facebook’s been sloppy with plenty more companies that have been lapping up its user data. There was CubeYou, yet another firm that dressed up its personal-data snarfing as “nonprofit academic research,” in the form of personality quizzes, and then handed over the data to marketers, a la Cambridge Analytica. Facebook suspended CubeYou in April.

Then, in June, Facebook suspended AggregateIQ, an analytics firm, for collecting and storing data on thousands of Facebook users. The company is reportedly tied to Cambridge Analytica and allegedly left CA’s code lying around, open for all to access.

Most recently, also in June, Facebook suspended Crimson Hexagon while it investigates whether the data firm’s contracts with the US government and a Russian nonprofit tied to the Kremlin violated its policies.

All of this user data bungling has led to multiple grillings by Congress (here’s Day One, with CEO Mark Zuckerberg’s questioning by the US House of Representatives, and here’s Day Two, when he was questioned by the Senate) …plus, in March, the Federal Trade Commission (FTC) launched an investigation into Facebook and how Cambridge Analytica used Facebook user data.

At this point, to put it lightly, many won’t be soothed by Facebook’s assurance that it’s not out to strip our financial data from banks and do lord knows what new marketing/data crunching/fumbling with it.

Meanwhile, the banks that Facebook’s cozying up to are reportedly keeping it at arm’s length, citing concerns about data privacy. People familiar with the talks say that it’s a sticking point, the WSJ reports, and a spokesperson for Wells Fargo told NY Daily News that it’s just not going there:

Maintaining the privacy of customer data is of paramount importance to Wells Fargo. We are not actively engaged in data-sharing conversations with Facebook.

…while Trish Wexler, a spokeswoman for JPMorgan Chase, told the Daily News and WSJ that the bank isn’t sharing such “off-platform” transaction data with Facebook, and it’s had to “walk away from some opportunities as a result.”

Another bank recoiled on Monday: multibillion-dollar Italian banking conglomerate UniCredit said it had stopped advertising on Facebook last week, given that “it’s not acting in an ethical way.”

So that’s what the banks are saying …Publicly, at any rate. All this talk about customers’ privacy might be just a teensy bit coy, though.

At least one of us here at Naked Security is taking Facebook at its word on this one. Mark Stockley says that he’s come away from having written thought leadership pieces for a banking software company well-assured that banks are…

SUPER EXCITED about banking over chat bots, for good reason. The customers they really want are buried in WhatsApp and Facebook Messenger and they a) don’t understand it and b) desperately want to be there. They think it’s the next big thing. And the Blockchain. And Fintech. And PSD2. And digital banking. And the marketplace bank.

You can see the push, coming as it is from the financial whippersnappers who own the next generation of mobile banking. As “finance and tech guy” Segun Adeyemi wrote in a Medium article last November, money conversations are happening on social media, but the transactions skip out and happen somewhere else: on Venmo, for example.

PwC has been surveying consumers about their banking habits for several years. In this year’s survey, its No. 1 “big theme” was blunt and very Messenger-friendly: “Think mobile-first, or else.”

The banks obviously believe that they’ve got to get mobile or get out. Given the financial landscape, it’s hard to imagine they’re not excited to be talking to Facebook about how they can inject themselves into the mobile world, regardless of what they say about users’ financial data privacy.

Whatever happens in the push to get banking mobile, let’s just hope that the banks, and Facebook, actually mean what they say about the importance of privacy. The last thing we need, or that Facebook needs, is a Cambridge Analytica-style pratfall with our banking data.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/24gfm3P-Xno/