
The Y2K Boomerang: InfoSec Lessons Learned from a New Date-Fix Problem

We all make assumptions. They rarely turn out well. A new/old date problem offers a lesson in why that’s so.

(image by kwarkot, via Adobe Stock)

Twenty years ago, the IT world collectively thought it had dodged a millennium-sized bullet when years of preparation saw the dawn of January 1, 2000 without a world-wide computer catastrophe. For some, though, the Y2K bullet has turned out to be a boomerang. And it’s a boomerang that carries lessons for security professionals.

The date-change problem that was dodged in 2000 has reared its ugly head in 2020. Why? Because in 1999, software developers desperately trying to keep computer systems that used a two-digit year field from becoming confused about the century made some assumptions.

Their solution was called “windowing,” or relying on a “pivot year”: a simple cutoff meant that two-digit years in a certain range (typically 20-99) were assumed to fall in the 20th century, while those in the range 00-19 were assumed to fall in the 21st.
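
To make the assumption concrete, here is a minimal sketch of windowing in Python with a hypothetical pivot of 2020; the cutoff value and function name are illustrative, not any particular vendor’s routine:

```python
# Minimal sketch of date "windowing" with an assumed pivot of 2020.
# Real systems picked their own cutoffs; 20 is used here for illustration.
PIVOT = 20  # two-digit years below this value are treated as 20xx

def expand_year(yy: int) -> int:
    """Expand a two-digit year to four digits using a fixed pivot."""
    if not 0 <= yy <= 99:
        raise ValueError("expected a two-digit year")
    return 2000 + yy if yy < PIVOT else 1900 + yy

print(expand_year(99))  # 1999 -- works as intended
print(expand_year(5))   # 2005 -- works as intended
print(expand_year(20))  # 1920 -- a date written in 2020 lands a century off
```

The last call is the boomerang: once the calendar catches up with the pivot, the expansion silently lands a century in the past.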

It was further assumed, in those heady days when programmers were partying like it was 1999, that any system in use at the turn of the millennium would have been replaced (by a system with new, more date-inclusive software) within twenty years.

And now you can see where the problem comes in.

Long-lasting systems

Assumptions like “surely they won’t be using this system 20 years from now” almost always turn around to bite IT professionals, because they forget that companies will undoubtedly choose the cheapest option…which might very well mean forcing the IT department to keep scotch-taping together the same old system year after year after year.

In this particular case, not only are many of those millennium-era systems still in service, but time-squeezed programmers have also grabbed those two-digit date routines and reused them, unaltered, in newer systems. All of this meant that, on January 1, 2020, more than a few pieces of business-critical software stopped working, or stopped working correctly.

In some cases the issue was discovered and patched before the end of 2019. In others, emergency patches were rushed into service at the New Year.

And in still others, companies spent days (or more) waiting to print receipts, accept money from customers, or carry on other critical business processes while software developers worked to swat all of the date-related issues.

This is far from the first case of assumptions turning around to bite industry professionals. Some of us can remember “no one will ever need more than 64k of memory” as a particular set of teeth, and I have no interest in re-fighting the battle of IPv4 address space.

For security professionals, the real lesson in all of this is that architects, system engineers, and software developers are going to make assumptions in the work they do. Most of them will seem reasonable at the time they’re made. Some will continue to seem so, but many will ultimately turn out to be the cause of a headache, or worse. So what can a security professional do to reduce the assumption-based risk?

Always ask

The first step is to have a seat at the table with those architects and developers so that you can understand some of those pesky assumptions. Another reason for the seat is to make sure that as many assumptions as possible are documented. Your successors will thank you for that.

Next, you really should go through existing systems (especially those that have been on the job longer than you have) and get some serious visibility into what they’re made of and how they work, with a particular emphasis on identifying the assumptions that earlier developers might have made. It’s easy to be lulled into believing that a regular patching and updating process will have cleared out the irrational underbrush, but it takes a serious update to solve an architectural limit designed into the system.

Finally, don’t be afraid to look at your systems with beginners’ eyes. When we’re in a hurry, under pressure, trying to show colleagues how knowledgeable we are, or all three, we tend to make our own assumptions about how things work. (“I don’t see a cable — all the network access MUST be over WiFi.”) Don’t be afraid to ask the dumb-sounding, most basic questions in order to verify precisely how things work. And when you ask those questions, don’t let systems or people give rushed, unexamined answers: Make sure they’ve verified the obvious facts they’re handing you.

The assumptions your organization makes may not have the headline-worthy impact of millennial issues, but they can still raise serious vulnerabilities and major system resilience issues. We’ve all heard the old saw, “When you assume you make an a** of u and me.” Just because it’s hoary and clichéd doesn’t mean it’s not true.


Curtis Franklin Jr. is Senior Editor at Dark Reading. In this role he focuses on product and technology coverage for the publication. In addition he works on audio and video programming for Dark Reading and contributes to activities at Interop ITX, Black Hat, INsecurity, and … View Full Bio

Article source: https://www.darkreading.com/edge/theedge/the-y2k-boomerang-infosec-lessons-learned-from-a-new-date-fix-problem/b/d-id/1336837?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

7 Tips for Infosec Pros Considering A Lateral Career Move

Looking to switch things up but not sure how to do it? Security experts share their advice for switching career paths in the industry.

(Image: Punsa – stock.adobe.com)

Cybersecurity professionals have their pick from a diverse range of specialties within the industry, from network security to penetration testing to incident response. It’s not uncommon to switch specialties over the course of a career. The question is, how do you go about changing?

“As part of normal [career] growth, I’ve noticed people want to move into different areas,” says (ISC)2 CIO Bruce Beam. Some people may not make the jump from offense to defense but instead switch from security operations roles to positions more focused on compliance.

A lateral career jump can be beneficial not only for security pros, but for the industry overall. The ability to move from job to job is needed because it introduces different perspectives into the workplace, says Kayne McGladrey, member of IEEE and CISO at Pensar Development.

“Right now we have an unprecedented challenge in hiring a diverse workforce in cybersecurity,” he explains. Still, it’s more difficult for some practitioners to make a transition because of obstacles in the hiring process. It may be easy for a Certified Ethical Hacker to apply for a job seeking the CEH, for example, but someone without that certification may be filtered out.

“Human resources, in a lot of organizations, has become a regulatory control function and inhibits hiring because of its focus on certifications,” McGladrey says. This is partly why it’s difficult for blue teamers to jump to the red team, a process that “looks to be an insurmountable and very difficult series of certifications,” he points out.

Another challenge for infosec pros seeking a lateral career move is the lack of time spent in their desired area of expertise. If HR sees two applicants with the same skills, but one has been in the related role for two to five years, they’re more likely to pick the one with more experience.

“In cybersecurity we have a slightly more pronounced competition for talent, but also people change jobs more frequently in cybersecurity,” McGladrey says. It’s not unusual to meet a CISO who has held three different jobs in the past five years, he points out. In an industry where professionals commonly love learning and seeking new challenges, it’s likely they’ll also want to test new career paths.

For security practitioners who want to work in a new area of the industry but don’t know how to go about doing it, McGladrey and Beam share their steps and advice. How about you? Have you made a lateral career move? What tips would you offer security pros? Feel free to share your thoughts in the Comments section below.

 

Kelly Sheridan is the Staff Editor at Dark Reading, where she focuses on cybersecurity news and analysis. She is a business technology journalist who previously reported for InformationWeek, where she covered Microsoft, and Insurance Technology, where she covered financial … View Full Bio

Article source: https://www.darkreading.com/careers-and-people/7-tips-for-infosec-pros-considering-a-lateral-career-move/d/d-id/1336839?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Elaborate Honeypot ‘Factory’ Network Hit with Ransomware, RAT, and Cryptojacking

A fictitious industrial company with phony employee personas, a website, and PLCs sitting on a simulated factory network fooled malicious hackers – and raised alarms for at least one white-hat researcher who stumbled upon it.

S4x20 CONFERENCE – Miami – For seven months, researchers at Trend Micro ran a legitimate-looking phony industrial prototyping company with an advanced interactive honeypot network to attract would-be attackers.

The goal was to create a convincing-looking network that attackers wouldn’t recognize as a honeypot so the researchers could track and study attacks against the phony factory in order to gather intel on the real threats to the industrial control system (ICS) sector today.

The faux company’s factory network, which the researchers purposely configured with some ports exposed to the Internet from May through December of last year, was mostly hit with the same types of threats that IT networks face: ransomware, remote access Trojans (RATs), malicious cryptojacking, and online fraud, as well as botnet-style beaconing malware that infected its robotics workstation for possible lateral movement.

But there also were a few more alarming incidents with shades of more targeted intent. In one attack on Aug. 25, 2019, for instance, an attacker worked their way around the robotics system, closed the HMI application, and then powered down the system. Later that month, an attacker was able to start up the factory network, stop the phony conveyor belt – and then shut down the factory network. Another attacker shut down the factory via the HMI and locked the screen, while yet another opened the log view of the robot’s optical eye.

“Yes, your factories will be attacked if they are directly connected to the Internet,” says Stephen Hilt, who will present Trend Micro’s findings from the research here today. But the majority of the attacks and incidents the pretend factory suffered were the same-old, same-old threats facing other industries, notes Hilt, who previously worked on a honeypot for Trend Micro called GasPot in 2015 that was set up to simulate a gas-tank monitoring system and study what attacks would hit those gas tank systems.

“From our stance, traditional IT threats are what we saw, plain and simple,” says Hilt. But, he says the team can’t be sure they didn’t face any targeted ICS threats that didn’t get very far. Either way, it’s a far cry from Trend Micro’s 2013 water utility honeypot that counted 39 attacks, 12 of which were identified as targeted.

The phony factory network was made up of real ICS hardware and a combination of physical hosts and virtual machines, including two Allen-Bradley MicroLogix 1100 PLCs, a Siemens S7-1200 PLC, and an Omron CP1L PLC. The engineering workstation was kept inside the factory network, but the researchers used the same administrative password for the HMI and robotics workstation, which were left exposed on the public Internet – a common mistake made in ICS environments – as a lure.

The PLCs were configured to run the tasks for the “plant”: agitator, burner control, conveyor belt control, and palletizer for stacking pallets using a robotic arm. Hilt says the PLCs were programmed to vary the start and stop of a motor, for example, to emulate real processes. The plant network included three VMs – an HMI for controlling the factory, a robotics workstation to control the palletizer, and an engineering workstation to program the logic for the PLCs. The file server ran on a physical machine.

After about a month, they opened Remote Desktop Protocol (RDP) and Virtual Network Computing (VNC) ports, as well as EtherNet/IP, to attract would-be attackers to the phony industrial prototyping company – dubbed MeTech – which on its website said it serves large military, avionics, and manufacturing clients.

One of the first attacks Hilt and his fellow Trend Micro researchers Federico Maggi, Charles Perine, Lord Remorin, Martin Rösler, and Rainer Vosseler spotted was a RAT, but it turned out to be for Monero cryptocurrency mining, not a nation-state cyberattack. Even so, the cryptojacking attack heavily drained the server capacity.

Crysis
The network was hit with the infamous Crysis ransomware in September, and the entire process of negotiating with the attackers on the ransom kept the network down for four days. Hilt says that’s an example of how devastating everyday ransomware could be for a real ICS network. “The system was down for a week because it [the malware] spread,” he says. While the PLCs weren’t affected, they lost visibility into the plant operations while the HMI files were locked down by the attackers.

“A loss of production for four days is really bad. We had backups we could restore to, so we restored our VMs and had to rebuild our file server,” he says. But they were able to document all of the steps of the attack as well as the back-and-forth ransom negotiation, where the Trend researchers posed as one of the phony employees on MeTech’s website, Mike Wilson, who emailed the attackers with a subject line that read, “MeTech: THIS IS NOT COOL” and lamented: “Not cool we are running a production run of something for an important client and can’t use our robot with out that machine, also all of our files on our file server” and “give us back our files.”

Hilt says they negotiated the ransom down from $10,000 in bitcoin to $6,000 in the email exchange, but didn’t actually pay the ransom. They were able to reset the systems via their backups and eradicate the malware.

There was a second ransomware attack later that month, this time with Crysis variant Phobos, and then in November, a “fake” ransomware attack that was basically a data destruction attack. In that latter attack, the hacker wrote scripts targeting the files in MeTech’s ABB Robotics folder, renaming them and demanding a ransom. “What I found was more interesting is when we didn’t pay it,” Hilt says, noting that the attacker deleted the entire folder, “then he wrote a script so that on startup, it opens a bunch of browsers to porn sites.”

Hilt’s theory was that the attacker had hoped to resell access to the network on the Dark Web, and that when the ransom wasn’t paid, the attacker lost some potential revenue and decided to destroy the data.

White-Hat to the Rescue
It’s not uncommon for researchers to run into other researchers in the trenches of their honeypots or other work, and Trend Micro’s phony, unsecured ICS network caught the attention of researcher Dan Tentler, aka @Viss on Twitter, who is well-known for finding exposed ICS equipment via Shodan searches. Tentler had reported his findings of exposed ICS equipment to the ICS-CERT and “all of the appropriate parties,” according to Trend Micro in a report on the honeypot published today.

Hilt says he and his team contacted Tentler to apprise him of their operation and ask him to “stand down,” noting that his reporting of the issue is how a real ICS exposure should be handled.

“Part of the reason we see less attacks than in previous honeypots is you have people like Dan doing good work to take down critical devices exposed on the Net,” Hilt says.

Worries of possible threats to critical infrastructure rose earlier this year in the wake of US military action in Baghdad, with the US Department of Homeland Security’s Cybersecurity and Infrastructure Security Agency (CISA) warning US organizations of the potential for retaliatory attacks against US organizations. Iranian nation-state hacking groups “have also demonstrated a willingness to push the boundaries of their activities, which include destructive wiper malware and, potentially, cyber-enabled kinetic attacks,” CISA said in its advisory in early January.

What (Not) to Do
Hilt says MeTech’s network is a lesson on how not to secure an ICS environment.

Among the no-no’s: VNC left open online with no password, a lack of least privilege, shared and reused passwords across systems, and failure to deploy the trust feature in industrial routers, which specifies which hosts can go through the device.

The ICS honeypot network could be resurrected, however: Hilt plans to encourage the ICS community to join forces with Trend Micro to expand the research project.


 

Kelly Jackson Higgins is the Executive Editor of Dark Reading. She is an award-winning veteran technology and business journalist with more than two decades of experience in reporting and editing for various publications, including Network Computing, Secure Enterprise … View Full Bio

Article source: https://www.darkreading.com/threat-intelligence/elaborate-honeypot-factory-network-hit-with-ransomware-rat-and-cryptojacking/d/d-id/1336842?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Data Awareness Is Key to Data Security

Traditional data-leak prevention is not enough for businesses facing today’s dynamic threat landscape.

Data attacks reached an all-time high in 2019 as we continued to transform our lives digitally — moving our work, health, financial, and social information online. In response, businesses must meet hefty data and information protection regulatory and compliance requirements. There’s no room for error. Protections are required for everything from simple user mistakes, such as downloading a file on the corporate network and sending it to a personal account, to malicious insider behavior and nation-state attacks. The task, and the fines that accompany failure, are daunting.

Governments worldwide are also addressing these challenges by mandating new data protection regulations and privacy acts, including the General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA). These regulations introduce stricter information protection standards and unprecedented fines — up to 4% of annual revenue — that companies must plan for when handling business and customer data.

To keep up with these regulations and the global demand for security and privacy, compliance and data risk officer roles are increasing. These officers create policies and implement tools to track how data is collected, used, managed, and stored across its life cycle so businesses remain compliant and earn customers’ trust.

Security and Compliance Are Two Different Worlds
Even with a heightened focus on reducing risk, security and compliance teams have different backgrounds and responsibilities, and historically they have not worked together, which means they don’t always understand each other’s business needs.

When it comes to information protection and compliance, most companies focus on thwarting data leaks by locking down data within their perimeter, which can be a device, file server, or network boundary. Data leakage prevention (DLP) identifies sensitive content and defines policies to prevent data egress across the network, devices, and applications.

In parallel, companies’ security teams operate disconnected threat protection solutions — EPP, EDR, SEG, CASB, UEBA, NTA, etc. — designed to prevent, detect, and respond to attacks on companies’ intellectual property. But often these tools — separate from the information protection and DLP tools — don’t know where this intellectual property and sensitive content resides.

Most data protection solutions focus on prevention and ignore a key aspect of risk management and compliance: attackers’ access to sensitive data, which can reside on devices, applications, and/or in the cloud. Threat protection solutions, by contrast, identify attackers in the network but ignore the key aspect of security incidents: the sensitivity of data accessed during an attack.

So, how should we as an industry eliminate the walls between them to deliver a higher level of protection?

Create a Better Security Posture
Unifying security and compliance under a new model of data-aware threat protection will enable businesses to create trust while reducing risk to users and data. By integrating and sharing signals between the DLP and threat protection solutions, companies can determine the business context and impact of each security incident, and the actual risk to each piece of sensitive data. Security teams and data officers can then work in tandem, instead of in silos, to respond to and address incidents faster and more reliably.

This new data-aware threat-protection model has four key advantages:

Risk-based incident prioritization: Security operators typically prioritize incident response based on severity, but that neglects the overall business impact. Data classification awareness by threat protection solutions contributes to how alerts, incidents, and vulnerabilities are prioritized. It helps better determine the risk of the activity, which influences its prioritization. An alert on a corporate device that stores sensitive data is more important than an alert on a device that doesn’t. Even if the security threat on its own is lower, sensitive data in a compromised environment is a reason to act — fast.
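
As a rough illustration of that prioritization logic, here is a minimal sketch that weights a hypothetical alert’s data-sensitivity label alongside its threat severity; the fields, weights, and hosts are illustrative assumptions, not any vendor’s scoring model:

```python
# Minimal sketch: combine threat severity with DLP-style data sensitivity.
from dataclasses import dataclass

@dataclass
class Alert:
    host: str
    severity: int          # 1 (low) .. 5 (critical), from the threat-protection tool
    data_sensitivity: int  # 0 (none) .. 3 (highly sensitive), from data classification

def risk_score(alert: Alert) -> int:
    # Weight sensitivity heavily: a modest threat on a host holding sensitive
    # data outranks a louder alert on a host that holds none.
    return alert.severity + 3 * alert.data_sensitivity

alerts = [
    Alert("build-server", severity=4, data_sensitivity=0),
    Alert("hr-laptop", severity=2, data_sensitivity=3),
]

for alert in sorted(alerts, key=risk_score, reverse=True):
    print(alert.host, risk_score(alert))  # hr-laptop (11) before build-server (4)
```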

More precise threat hunting: By tracing each attacker action and intertwining it with data classification context, analysts can better understand attackers’ motivations and searches. This also arms hunters with the ability to reference data severity. For example, analysts can create a hunting query to address a request like, “Get all PowerShell processes that accessed a sensitive Word doc.” Such context also enables better hunting for data exfiltration threats by understanding whether activity is malicious or benign. For example, reading a file, copying a file to another folder, or taking a screen capture are legitimate actions most times. However, sensitive data is different. Reading such a file may indicate anomalous access to sensitive data, copying a file may be part of staging for exfiltration, and screen capturing may be a way to steal sensitive data.
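
A hunting query like the one quoted above could be approximated, in spirit, by a filter over process events; the sketch below uses made-up event records and field names rather than any real product’s schema:

```python
# Minimal sketch: "get all PowerShell processes that accessed a sensitive Word doc."
events = [
    {"process": "powershell.exe", "file": "payroll.docx", "label": "sensitive"},
    {"process": "winword.exe",    "file": "payroll.docx", "label": "sensitive"},
    {"process": "powershell.exe", "file": "readme.txt",   "label": "public"},
]

hits = [
    e for e in events
    if e["process"].lower() == "powershell.exe"
    and e["file"].lower().endswith((".doc", ".docx"))
    and e["label"] == "sensitive"
]

print(hits)  # only the PowerShell access to the sensitive Word doc remains
```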

Automatic remediation across security and compliance boundaries: Automation allows often understaffed security and compliance teams to do more and react more quickly. But missing the incident’s context makes all response playbooks the same. Data classification awareness allows defenders to become more effective by defining customized response actions based on data sensitivity. For example, automatically locking access to sensitive data on at-risk devices until the risk is mitigated or blocking a process performing anomalous access from accessing sensitive files until it’s determined whether the activity is benign or malicious.
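
Conceptually, a sensitivity-aware playbook boils down to branching on both device risk and data classification. The sketch below is a hypothetical illustration; the action names are placeholders, not a real product’s API:

```python
# Minimal sketch: pick a response action based on device risk and data sensitivity.
def choose_response(device_at_risk: bool, touches_sensitive_data: bool) -> str:
    if device_at_risk and touches_sensitive_data:
        return "lock_sensitive_shares"    # block access until the risk is mitigated
    if device_at_risk:
        return "isolate_and_investigate"  # contain the device, lower urgency
    return "monitor"                      # benign until proven otherwise

print(choose_response(device_at_risk=True, touches_sensitive_data=True))
# -> lock_sensitive_shares
```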

More effective security posture management: Security and compliance teams should not just respond to data leaks or data exfiltration incidents after they occur; they should think about being proactive to reduce leaks. Visibility is key. Do you know where your sensitive data is, where it’s stored? Knowing that and combining the compliance (data sensitivity) and security (risk) disciplines enable us to proactively reduce the chance and impact of data breaches. For example, you can prioritize patching devices with sensitive documents, or force two-factor authentication to access sensitive document folders.

Old-school data leakage prevention is not enough for businesses facing a dynamic threat landscape. Adversaries are sophisticated, and no matter how high the wall, they will find a way around. Then, it’s game over. Trust is lost. The industry should recognize that data-aware threat protection is essential to proactively protecting customers’ data and establishing trust and consistency across privacy and security.


Moti Gindi is the Corporate Vice President for Microsoft Defender Advanced Threat Protection (ATP). In his role, he manages an engineering team that is responsible for Microsoft’s endpoint security, specifically Microsoft Defender ATP (recently recognized as a leader in … View Full Bio

Article source: https://www.darkreading.com/risk/data-awareness-is-key-to-data-security/a/d-id/1336823?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Nearly 75% of SD-WAN Owners Lack Confidence Post-Digital Transformation

More businesses think SD-WAN will reduce WAN costs, but only 37% think SD-WANs will help defend against malware and other threats.

Nearly three-quarters (74%) of businesses deploying SD-WAN have “significantly” less confidence in their networks following digital transformation initiatives, a new study finds.

Researchers with Cato Networks polled 1,333 IT professionals around the world to learn about their budgets, challenges, and readiness to change their digital environments. Some 43% work for companies with more than 2,500 employees, all have some cloud presence, and 82% work for organizations with at least two physical data centers. More than half (56%) expect their networking budget to increase in the next year; 73% say the same for their security budgets.

Thirty-five percent of respondents have an SD-WAN deployment and 30% are planning to deploy SD-WAN within the next year. Researchers found most businesses adopt SD-WAN for financial reasons: Fifty-six percent say excessive WAN costs are their primary reason to switch. Other drivers include better Internet access (55%), improved last-mile availability (50%), need for more bandwidth (48%), broader WAN modernization effort (45%), and cloud migration (42%).

Respondents’ low confidence in their networks following digital transformation could be linked to the issues they find during the transformation process, researchers say. “The results suggest that only as organizations roll out digital initiatives, they uncover the weaknesses in their networks,” the study states. The high costs of MPLS drive many to adopt SD-WAN; however, once an organization requires cloud or mobile access, IT pros realize their network limitations.

Cloud connectivity was a major issue among respondents. Sixty percent say cloud applications will be the most critical to their organizations over the next 12 months, more so than applications hosted in private data centers. However, 69% of SD-WAN owners expressed lower confidence in cloud connectivity following digital transformation efforts.

Researchers also learned SD-WAN does little from a security perspective. While most (66%) respondents cite malware and ransomware as their primary security challenge for 2020, only 37% say their SD-WAN helps defend locations from these types of threats.

Read more details here.


Dark Reading’s Quick Hits delivers a brief synopsis and summary of the significance of breaking news events. For more information from the original source of the news item, please follow the link provided in this article. View Full Bio

Article source: https://www.darkreading.com/cloud/nearly-75--of-sd-wan-owners-lack-confidence-post-digital-transformation/d/d-id/1336844?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Ransomware Upgrades with Credential-Stealing Tricks

The latest version of the FTCode ransomware can steal credentials from five popular browsers and email clients.

The nightmare continues for victims of FTCode ransomware. In addition to encrypting critical information, the PowerShell malware now steals user credentials from common web browsers and email clients.

According to researchers Rajdeepsinh Dodia, Amandeep Kumar, and Atinderpal Singh from Zscaler ThreatLabZ, FTCode version 1117.1 can skim user credentials from Internet Explorer, Firefox, and Chrome as well as email clients Thunderbird and Outlook. The new version uses a different method to steal credentials in each of the targeted applications, something the researchers point to as being one of the advantages of the scripting language in which FTCode is written.

For more, read here and here.


Dark Reading’s Quick Hits delivers a brief synopsis and summary of the significance of breaking news events. For more information from the original source of the news item, please follow the link provided in this article. View Full Bio

Article source: https://www.darkreading.com/attacks-breaches/ransomware-upgrades-with-credential-stealing-tricks/d/d-id/1336846?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

What do online file sharers want with 70,000 Tinder images?

A researcher has discovered thousands of Tinder users’ images publicly available for free online.

Aaron DeVera, a cybersecurity researcher who works for security company White Ops and also for the NYC Cyber Sexual Assault Taskforce, uncovered a collection of over 70,000 photographs harvested from the dating app Tinder on several undisclosed websites. Contrary to some press reports, the images are available for free rather than for sale, DeVera said, adding that they found them via a P2P torrent site.

The number of photos doesn’t necessarily represent the number of people affected, as Tinder users may have more than one picture. The data also contained around 16,000 unique Tinder user IDs.

DeVera also took issue with online reports saying that Tinder was hacked, arguing that the service was probably scraped using an automated script:

In my own testing, I observed that I could retrieve my own profile pictures outside the context of the app. The perpetrator of the dump likely did something similar on a larger, automated scale.

What would someone want with these images? Training facial recognition for some nefarious scheme? Possibly. People have taken faces from the site before to build facial recognition data sets. In 2017, a researcher at Google subsidiary Kaggle scraped 40,000 images from Tinder using the company’s API. The researcher uploaded his script to GitHub, although it was subsequently hit by a DMCA takedown notice. He also released the image set under the most liberal Creative Commons license, effectively placing it in the public domain.

However, DeVera has other ideas:

This dump is actually very valuable for fraudsters seeking to operate a persona account on any online platform.

Hackers could create fake online accounts using the images and lure unsuspecting victims into scams.

We were sceptical about this because generative adversarial networks enable people to create convincing deepfake images at scale. The site ThisPersonDoesNotExist, launched as a research project, generates such images for free. However, DeVera pointed out that deepfakes still have notable problems.

First, the fraudster is limited to only a single picture of the unique face. They’re going to be hard pressed to find a similar face that isn’t indexed by reverse image searches like Google, Yandex, TinEye.

The online Tinder dump contains multiple candid shots for each user, and it’s a non-indexed platform, meaning that those images are unlikely to turn up in a reverse image search.

There’s another gotcha facing those considering deepfakes for fraudulent accounts, they point out:

There is a well-known detection method for any photo generated with This Person Does Not Exist. Many people who work in information security are aware of this method, and it is at the point where any fraudster looking to build a better online persona would risk detection by using it.

In some cases, people have used photos from third-party services to create fake Twitter accounts. In 2018, Canadian Facebook user Sarah Frey complained to Tinder after someone stole photos from her Facebook page, which was not open to the public, and used them to create a fake account on the dating service. Tinder told her that as the photos were from a third-party site, it couldn’t handle her complaint.

Tinder has hopefully changed its tune since then. It now features a page asking people to contact it if someone has created a fake Tinder profile using their pictures.

We asked Tinder how this happened, what measures it was taking to prevent it happening again, and how users should protect themselves. The company responded:

It is a violation of our terms to copy or use any members’ images or profile data outside of Tinder. We work hard to keep our members and their information safe. We know that this work is ever evolving for the industry as a whole and we are constantly identifying and implementing new best practices and measures to make it more difficult for anyone to commit a violation like this.

DeVera had more concrete advice for sites serious about protecting user content:

Tinder could further harden against out of context access to their static image repository. This might be accomplished by time-to-live tokens or uniquely generated session cookies generated by authorised app sessions.
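
One way to read that suggestion is as expiring, signed image URLs. The sketch below shows the general idea with a hypothetical HMAC-signed token and time-to-live; it is an illustration of the technique DeVera describes, not Tinder’s actual implementation:

```python
# Minimal sketch of expiring, HMAC-signed image URLs. The secret, paths,
# and parameter names are illustrative.
import hashlib
import hmac
import time

SECRET = b"server-side-signing-secret"  # hypothetical, never sent to the client

def sign_image_url(path: str, ttl_seconds: int = 300) -> str:
    expires = int(time.time()) + ttl_seconds
    sig = hmac.new(SECRET, f"{path}:{expires}".encode(), hashlib.sha256).hexdigest()
    return f"{path}?expires={expires}&sig={sig}"

def verify(path: str, expires: int, sig: str) -> bool:
    if time.time() > expires:
        return False  # link has expired
    expected = hmac.new(SECRET, f"{path}:{expires}".encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, sig)

print(sign_image_url("/images/user123/photo1.jpg"))
```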



Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/HtlXzKkoA5E/

Leave your admin interface’s TLS cert and private key in your router firmware in 2020? Just Netgear things

Netgear left in its router firmware key ingredients needed to intercept and tamper with secure connections to its equipment’s web-based admin interfaces.

Specifically, valid, signed TLS certificates with private keys were embedded in the software, which was available to download for free by anyone, and also shipped with Netgear devices. This data can be used to create HTTPS certs that browsers trust, and can be used in miscreant-in-the-middle attacks to eavesdrop on and alter encrypted connections to the routers’ built-in web-based control panel.

In other words, the data can be used to potentially hijack people’s routers. It’s partly an embarrassing leak, and partly indicative of manufacturers trading off security, user friendliness, cost, and effort.

Security mavens Nick Starke and Tom Pohl found the materials on January 14, and publicly disclosed their findings five days later, over the weekend.

The blunder is a result of Netgear’s approach to security and user convenience. When configuring their kit, owners of Netgear equipment are expected to visit https://routerlogin.net or https://routerlogin.com. The network’s router tries to ensure those domain names resolve to the device’s IP address on the local network. So, rather than have people enter 192.168.1.1 or similar, they can just use that memorable domain name.

To establish an HTTPS connection, and avoid complaints from browsers about using insecure HTTP and untrusted certs, the router has to produce a valid HTTPS cert for routerlogin.net or routerlogin.com that is trusted by browsers. To cryptographically prove the cert is legit when a connection is established, the router needs to use the certificate’s private key. This key is stored unsecured in the firmware, allowing anyone to extract and abuse it.
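
To see why an embedded key offers no protection, consider how little effort extraction takes: a downloaded firmware image can simply be scanned for PEM private-key markers. The sketch below is a generic illustration (the file name is a placeholder), not the researchers’ actual method:

```python
# Minimal sketch: look for PEM private-key markers inside a firmware blob.
import re

PEM_MARKERS = (b"BEGIN RSA PRIVATE KEY", b"BEGIN EC PRIVATE KEY", b"BEGIN PRIVATE KEY")

with open("firmware.bin", "rb") as f:  # placeholder for a downloaded image
    blob = f.read()

for marker in PEM_MARKERS:
    for match in re.finditer(re.escape(marker), blob):
        print(f"found '{marker.decode()}' at offset {match.start()}")
```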

Netgear doesn’t want to provide an HTTP-only admin interface, to avoid warnings from browsers of insecure connections and to thwart network eavesdroppers, we presume. But if it uses HTTPS, the built-in web server needs to prove its cert is legit, and thus needs its private key. So either Netgear switches to using per-device private-public keys, or stores the private key in a secure HSM in the router, or just uses HTTP, or it has to come up with some other solution. You can follow that debate here.

Anyone in the world could have retrieved these keys

“These certificates are trusted by browsers on all platforms, but will surely be added to revocation lists shortly,” noted Starke and Pohl.

“The firmware images that contained these certificates along with their private keys were publicly available for download through Netgear’s support website, without authentication; thus anyone in the world could have retrieved these keys.”

Netgear did not respond to a request for comment on the report.

We note that while there is a certificate and private key for the routerlogin interface, there is another set for mini-app.funjsq.com, which appears to be a method for playing games online in China.

In addition to exposing the vulnerability in Netgear equipment, the infosec bods also took issue with the way the networking giant deals with security flaws. In particular, its policy of keeping bug reports quiet.


“We are aware that Netgear has public bug bounty programs. However, at current date those programs do not allow public disclosure under any circumstances,” the duo explained.

“We as researchers felt that the public should know about these certificate leaks in order to adequately protect themselves and that the certificates in question should be revoked so that major browsers do not trust them any longer. We could not guarantee either if we had used the existing bug bounty programs.”

The decision brings up a debate that has plagued developers and security researchers alike for years: how best to handle disclosure.

On one side, there is the argument that keeping bugs under wraps minimizes the chances they will fall into the wrong hands. On the other side, there is the belief that getting issues into the open increases awareness and allows everyone to work on fixing and patching a bug.

In this case, Starke and Pohl went with the latter approach, informing the company last Tuesday and going public after hearing nothing useful back from either the router maker or the organizer of its bug bounty. ®


Article source: https://go.theregister.co.uk/feed/www.theregister.co.uk/2020/01/20/netgear_exposed_certificates/

As miscreants prey on thousands of vulnerable boxes, Citrix finally emits patches to fill in hijacking holes in Gateway and ADC

Citrix has rushed out official fixes for the well-publicised vuln in some of its server products after miscreants were seen deploying their own custom patches that left a backdoor open for later exploitation.

As previously reported, vulnerabilities in Citrix Application Delivery Controller and Citrix Gateway could allow remote attackers to carry out unauthenticated code execution.

In other words, baddies not on your network could get into it and start running all kinds of malicious software. And there are thousands upon thousands of vulnerable machines facing the public internet.

Now patches are available for some of the affected products – and sysadmins ought to be installing them pronto.

Some versions of Citrix Application Delivery Controller (ADC), formerly known as NetScaler ADC, and Citrix Gateway, formerly known as NetScaler Gateway, and “certain deployments of two older versions of our Citrix SD-WAN WANOP product versions 10.2.6 and version 11.0.3” are affected by the vulns, according to Citrix.

The vulnerability was allocated CVE-2019-19781. The patches are said to be good for virtual instances of Citrix Gateway 11.1 and 12 as well as Citrix ADC 11.1 and 12.0.

Citrix’s Fermin Serna said in a statement: “We urge customers to immediately install these fixes. There are several important points to keep in mind in doing so. These fixes are for the indicated versions only, if you have multiple ADC versions in production, you must apply the correct version fix to each system.”

Fresh patches for other Citrix ADC versions as well as SD-WAN WANOP are expected on 24 January, the company said in its statement.

As reported last week, miscreants have begun remotely patching affected devices – ironically, using the vuln itself to gain remote access to do so – but are leaving themselves a backdoor for continued illicit access later on. ®


Article source: https://go.theregister.co.uk/feed/www.theregister.co.uk/2020/01/20/citrix_patches_vulns_gateway_adc/