
iTerm2 issues emergency update after MOSS finds a fatal flaw in its terminal code

The author of the popular open source macOS terminal emulator iTerm2 has rushed out a new version (v3.3.6) because prior versions contain a security flaw that could allow an attacker to execute commands on a computer running the application.

The vulnerability (CVE-2019-9535) was identified through the Mozilla Open Source Support Program (MOSS), which arranged to audit iTerm2 under its remit to review open source projects for security problems. A third-party security biz, Radically Open Security, performed the audit.

In a post to an iTerm2 discussion group, developer George Nachman said, “As part of this audit, a problem was discovered which could cause iTerm2 to issue commands in response to receiving certain input.”

“This is a serious security issue because in some circumstances it could allow an attacker to execute commands on your machine when you view a file or otherwise receive input they have crafted in iTerm2.”

Nachman, in response to an email from The Register, said he couldn’t provide more details at the moment and that researchers are working on a technical write-up for next week. He said that the flaw affects between 100,000 and 200,000 users of the software.

Mozilla didn’t immediately respond to a request to explain the flaw in greater detail. The commit that fixes the flaw provides some insight into the problem, at least for those familiar with Objective-C code.

According to Mozilla security engineer Tom Ritter, the vulnerability arises from the tmux integration feature in iTerm2 and has been present for at least seven years. The tmux application is a terminal multiplexer that allows multiple terminals to be created and controlled from a single window.


The CERT Coordination Center’s vulnerability notice says that the flaw can be exploited using command-line utilities that print attacker-controlled content to the terminal screen.

“Potential attack vectors include connecting via ssh to a malicious server, using curl to fetch a malicious website, or using tail -f to follow a logfile containing some malicious content,” CERT/CC says.
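To make the underlying mechanics concrete: terminal emulators interpret in-band escape sequences embedded in whatever they display, which is why output arriving via ssh, curl, or tail -f can influence the emulator at all. The snippet below is a deliberately harmless Python illustration of that general behaviour – it sets the window title using a standard xterm-style sequence – and is not the CVE-2019-9535 exploit, which abuses iTerm2’s tmux integration specifically.

```python
# Benign illustration: anything a program prints to the terminal is parsed by the
# emulator, including in-band escape sequences. This only sets the window title.
import sys

def set_window_title(title: str) -> None:
    # OSC 0 (ESC ] 0 ; <title> BEL) sets the window/tab title in xterm-compatible
    # terminals, iTerm2 included.
    sys.stdout.write(f"\033]0;{title}\007")
    sys.stdout.flush()

if __name__ == "__main__":
    set_window_title("output from ssh, curl or tail -f is parsed just like this")
```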

Mozilla created a proof-of-concept video demonstrating how an ssh connection to an attacker-controlled server could launch a Calculator app as a placeholder for malicious code.

iTerm2 should eventually prompt users to upgrade, but those with v3.3.5 or earlier should download the update directly without waiting. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/10/10/iterm2_flaw_moss/

A Realistic Threat Model for the Masses

For many people, overly restrictive advice about passwords and other security practices is doing more harm than good. Here’s why.

I’m one of those people who have made fun of password books for being an example of horrible security practices. But I’m starting to realize that I was wrong. For some people, overly restrictive security advice is doing more harm than good. For those who can’t easily navigate more secure solutions, writing passwords down and keeping them locked up at home is far better than reusing passwords.

While threat modeling is something we often ask businesses to do to determine the level of risk posed by certain vulnerabilities, we seldom ask individuals to do the same thing. Instead, we just give everyone the same sort of blanket suggestions, often implying that it’s necessary for everyone to protect themselves against state-sponsored attackers.

This can lead to people discounting threats that realistically could come from within their own homes, such as in the case of domestic abuse, which these days almost invariably involves monitoring the victim, including online. Bottom line: When hurdles are so massive that using computers becomes impossibly difficult, people give up, opting instead to do as little as possible to protect their data and devices.

This is definitely a case where “perfect is the enemy of good.” By dissuading people from using more convenient and usable security methods, we’re discouraging them from taking any meaningful steps toward a safer Internet experience. Here are a few other examples of things I frequently hear security practitioners warn laypeople against:

Biometrics
Authentication will be a recurring theme here: Usernames and passwords are really not sufficient by themselves anymore, especially because people do such a poor job with them on the whole. We all have dozens (if not hundreds) of accounts that require a login, and it is simply not reasonable to expect people to remember strong, unique passwords for that many accounts.

Many people “solve” this problem by choosing crummy passwords or reusing one password for every account, both of which are horrible solutions to the problem. Many of us suggest password managers, which can be a great solution for a lot of people. But there are also security practitioners who pooh-pooh this option because it’s “imperfect” to have a single point of failure, or because password manager products sometimes have vulnerabilities, or whatever other frustrating reason.

While biometric authentication can certainly be circumvented, it beats the heck out of using a weak password, reusing a password, or using no password at all.

SMS for 2FA
We’ve all heard about the various ways that two-factor authentication (2FA) can be broken by a sufficiently determined adversary. That’s something for security practitioners to be aware of, so we don’t get complacent. But for the average person, a 99% success rate against account takeovers is really quite sufficient. We all need to be using 2FA wherever it’s available, in the most secure form we can get. If that form is “just” SMS, it’s a whole lot better than just using a username and password alone.

Legacy Security Products
Regardless of what operating system people are on, or what particular type of security product works best for their situation, everyone needs something that helps protect against malicious code. If I had a nickel for every time I heard someone say that “antivirus is dead” because it doesn’t detect 100% of new attacks, I’d be having a very fancy lunch right now. I won’t even get into the arguments over “next-gen” versus established anti-malware products, or the old trope that “Macs don’t get malware,” or instances where security products were found to have vulnerabilities.

Charging Cables
The caution against using someone else’s charging cables came up again just recently. While I would still advise people against plugging into any old charging station — particularly ones in public spaces — I wouldn’t go so far as to say you should never borrow a charging cable at all. If you trust someone well enough to travel or work with them, you should be OK borrowing their charging cable.

Frequent Password Updating
“Passwords are like underwear. They should be kept private and changed frequently.” I’m delighted to announce that we all need to stop using this underwear analogy. NIST announced two years ago that we need to move away from asking people to periodically change their passwords, or enforcing complexity requirements. While we’re at it, can we also stop preventing people from copying and pasting passwords, too? I honestly don’t know what threat this is supposed to prevent; in practical application, it only seems to prevent the use of password managers.

I suspect the instinct to tell people why they shouldn’t use “imperfect” security practices is, at least in part, to demonstrate how l33t we are. But this practice is making the Internet a much less usable place, and we need to bring it to an end. People should use whatever methods meaningfully move the needle toward a safer surfing experience, given their own particular threat model.


Lysa Myers began her tenure in malware research labs in the weeks before the Melissa virus outbreak in 1999. She has watched both the malware landscape and the security technologies used to prevent threats grow and change dramatically.

Article source: https://www.darkreading.com/endpoint/a-realistic-threat-model-for-the-masses/a/d-id/1335997?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

USB Drive Security Still Lags

While USB drives are common pieces of business hardware, a new report says that one-third of US businesses have no policy governing their use.

Though nearly nine out of 10 US businesses use USB drives in their IT operations, fewer than half use any monitoring or encryption to protect the data on those highly portable devices. And the trend is getting worse: Only 47% have a policy for lost or stolen drives, compared with 50% in 2017.

A new report, sponsored by Apricorn, says that 36% of organizations have no written policy concerning USB devices at all. Employees of the companies, however, think that securing the devices is important, with 91% saying that all USB drives should be encrypted.

In one bit of positive news from the research, there was a significant drop in the percentage of companies making regular use of unencrypted USB drives, with 58% doing so in the most recent study compared with 82% in 2017.

For more, read here.



Article source: https://www.darkreading.com/mobile/usb-drive-security-still-lags/d/d-id/1336047?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Security Tool Sprawl Reaches Tipping Point

How a new open source initiative for interoperable security tools and a wave of consolidation could finally provide some relief for overwhelmed security analysts and SOCs.

The typical security team today continues to struggle with the same frustrating and potentially dangerous problem: a sea of security tools that churn out waves of alerts and siloed data that often require manual correlation — or get dismissed altogether by overburdened security analysts.

“If it takes a SOC analyst more than three clicks to make a decision, he/she is going to move on. They have thousands of other alerts” waiting for them, says Jill Cagliostro, product strategist for security firm Anomali.

That frightening — but understandable — conundrum for security analysts, who are under so much pressure that they literally pitch alerts that take too much time to investigate, underscores the peril and real possibility of missing that one needle in a haystack in security operations centers (SOCs) today. At the root of the alert overload, of course, is a mix of multiple security tools from various vendors — most of which don’t work together, and which security analysts don’t even have time to fully master.

Organizations on average run some 25 to 49 security tools from up to 10 different vendors, according to the Enterprise Strategy Group (ESG), and 40% of organizations are so taxed, according to 451 Research, that they can’t act upon at least a quarter of their security alerts. And in many cases, that’s leading to organizations literally shutting off some alerting functions, SOC vendor CriticalStart found.

“There have been a lot of research studies that find the whole issue of interoperability and scalability is largely ignored, so as a result the technologies don’t actually work together and you have more [tools] than you need,” Larry Ponemon, president of the Ponemon Group, said in an interview with Dark Reading in July. “So many things are generating reports [and alerts] … you are in a state of information overload pretty quickly.”

But the tipping point may finally be near. A gradual wave of security-tool consolidation and aggregation — thanks in part to some strategic acquisitions — as well as a new vendor effort led by IBM and McAfee for an open source set of specifications for tool interoperability, could finally streamline and integrate tools and, ultimately, workloads for SOC analysts.

The newly formed Open Cybersecurity Alliance (OCA), part of the OASIS open source standards organization, will come up with a common way for security tools to present data and communicate with and message one another. “Essentially, the goals of the alliance are interoperability, and collaboration around various different standards, tools, procedures, and open source libraries to enable that interoperability,” says Jason Keirstead, chief architect for IBM Security Threat Management.

The alliance isn’t all about creating new standards, Keirstead says, although new ones could emerge eventually. “It’s around collaborating on how we interoperate with each other.”

OCA — which also includes members Advanced Cyber Security Corp., Corsa, CrowdStrike, CyberArk, Cybereason, DFLabs, EclecticIQ, Electric Power Research Institute, Fortinet, Indegy, New Context, ReversingLabs, SafeBreach, Syncurity, ThreatQuotient, and Tufin — initially announced its first two protocols, existing work from its co-founders IBM and McAfee. The first is IBM’s open source data library STIX-Shifter, based on the STIX2 data model standard, which grabs threat information from various data repositories and converts it to a common format for all security tools that adopt STIX-Shifter. OCA also released McAfee’s OpenDXL Standard Ontology, which supports the OpenDXL (based on the Data Exchange Layer standard) messaging standard for communicating and sharing security information among different security products.
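For readers unfamiliar with the STIX2 data model mentioned above, the sketch below is a rough, hand-rolled illustration of the kind of common representation it defines: a threat observation expressed as a STIX 2 indicator object in plain JSON, which is the sort of normalized format translation layers such as STIX-Shifter target. The field values are invented for illustration, and this is not the STIX-Shifter API itself.

```python
# Illustrative only: build a minimal STIX 2 "indicator" object by hand to show
# what a normalized, tool-neutral threat record looks like.
import json
import uuid
from datetime import datetime, timezone

def make_indicator(ioc_ip: str) -> dict:
    now = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%S.000Z")
    return {
        "type": "indicator",
        "id": f"indicator--{uuid.uuid4()}",
        "created": now,
        "modified": now,
        "labels": ["malicious-activity"],
        # STIX patterning language: match traffic to a suspect IPv4 address
        "pattern": f"[ipv4-addr:value = '{ioc_ip}']",
        "valid_from": now,
    }

if __name__ == "__main__":
    print(json.dumps(make_indicator("198.51.100.7"), indent=2))
```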

The OCA’s open source releases are available to all security vendors, even nonmembers of the consortium, as well as enterprises that want to incorporate the technologies. The goal, according to the OCA, is to easily integrate security detection, threat hunting, analytics, and other tools so they can operate together “out of the box.”

“It’s less about combining [security tool] screens and more about assuring the multiple tools a customer has all interoperate with each other and [enterprises] don’t have to spend so much time maintaining those integrations,” IBM’s Keirstead says. “A customer can swap out any one vendor and add a competitor’s and they will work seamlessly.”

Several security experts welcomed the OCA’s effort. “I think it’s a step in the right direction,” says Jon Oltsik, senior principal analyst with Enterprise Strategy Group. Security organizations for years have been collecting and storing security data in various places and trying to analyze the same data across different tools, he says. And an open source integration layer effort lowers vendors’ R&D burden, he adds.

Even so, Oltsik says he wonders why more large organizations themselves aren’t driving such an effort rather than the vendors. “One thing that concerns me is you would think the demand side would be driving this versus the supply side,” such as large financial firms, he says. “I’d like to see some big buy-side organizations” calling for vendors to support these open source standards if they want to sell to them, he says.

Current Consolidation Situation

MSSPs also face some of the same challenges as enterprise SOCs when it comes to integrating and streamlining tools. Kevin Hanes, COO at Secureworks, says the OCA effort for data “normalization” is a positive step by the industry. It’s not an easy task today, he admits: “We have solved that through a variety of ways, with us doing the hard work to bring the normalization to our platform,” Hanes says. “The more that can be solved at a higher plane … that helps everyone.”

It’s common for startups to get funding to focus on a specific “pain point” in security and then roll out these very focused tools, he notes. But these and other tools then don’t actually work together, he says. 

The OCA effort comes at a time when several security tool vendors already have been adding products and features that aggregate others’ products, as well as the consolidation of security orchestration and automation (SOAR) into bigger platforms. Splunk now owns SOAR vendor Phantom, and Palo Alto Networks owns SOAR vendor Demisto, for example, and Elastic recently acquired endpoint security firm Endgame. Experts say more technology acquisitions and integrations are on the horizon.

“There’s some pretty significant consolidation happening in the market right now,” says James Carder, CISO at LogRhythm. “The reason being, I think, is that SIEM as promised decades ago was the be-all, end-all, single pane of glass for the modern SOC. Now there’s SOAR, endpoint security, network components, and all those pieces that are in the SOC.”

Carder says vendors are trying to consolidate SOC tools, including endpoint, SIEM, and SOAR, into single platforms, and build appropriate integration among the tools. “That’s a trend we’re seeing now in the SOC itself.”

LogRhythm is doing that with its updated NextGen SIEM Platform, which combines SOAR, log management, security analytics, and network monitoring, for example, he says. “We may look at other acquisitions that could bolster [it] and give a SOC-in-a-box” offering, he says.

The OCA security-tool interoperability effort is a “sound” approach, Carder says. “Having a standard taxonomy and language and method for all different security technologies out there is a dream state of the industry where you don’t have to build these special integrations with” multiple products, he says.

Even so, the industry is a long way from achieving that reality, he notes. There also are the non-security applications that have security ties to consider, he says, such as physical security systems like cameras or badging systems in an organization, and even human resources applications. For example, if a user logs in from an atypical location and suspicious network activity ensues, an HR app can’t necessarily be queried to automatically check if he or she is on vacation, or if the user’s credentials have been compromised. “You’re still building one-off integration” with products outside security, Carder explains.

Some recently announced security tool integrations also demonstrate the pressure for vendors to unite disparate security tools. Security management platform vendor ReliaQuest, for example, acquired Threatcare earlier this month and plans to add its attack simulation technology to its GreyMatter security platform.


Kelly Jackson Higgins is the Executive Editor of Dark Reading. She is an award-winning veteran technology and business journalist with more than two decades of experience in reporting and editing for various publications, including Network Computing and Secure Enterprise.

Article source: https://www.darkreading.com/threat-intelligence/security-tool-sprawl-reaches-tipping-point/d/d-id/1336048?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Virginia a Hot Spot For Cybersecurity Jobs

State has the highest number of people in information security roles and one of the highest counts of open positions, Comparitech study finds.

Virginia currently ranks as one of the hottest states for cybersecurity professionals from a job opportunity and salary standpoint.

New analysis placed the state in the top spot for the number of people currently employed in cybersecurity roles (14,180), and close to the top for average annual salaries for cybersecurity professionals ($111,780) and for the number of current job openings (4,570).

Virginia also leads other states in employment concentration, with 3.7 out of every 1,000 jobs being cybersecurity-related. Over the next five years, the number of cybersecurity jobs in the state is expected to grow by over 32%.

UK-based Comparitech’s “2019 US Cybersecurity Salary & Employment Study” used 10 different and equally weighted criteria to identify the best and the worst states in the US for cybersecurity professionals.

The metrics that Comparitech considered for its study included state annual salaries for cybersecurity roles; the number of people currently employed in cybersecurity jobs; the number of advertised cybersecurity jobs; and 10-year projections for cybersecurity roles in each state.

The study showed that in 2018, the national average salary for an information security professional was $92,789. Comparitech found that average annual cybersecurity salaries are greater—sometimes substantially so—than average annual salaries for all other employment in every single US state. The state with the biggest difference was New Mexico, where the average annual salary of $106,360 for a cybersecurity professional was 80.34% higher than the state average for other employment.

The numbers are a reflection of the high level of concern over data breaches and cyber risk in general—and the premiums that organizations are willing to pay for professionals who can help them manage it.

For purposes of the study, Comparitech considered “information security analysts” jobs, which it defined as jobs involving the planning and implementation of cybersecurity measures, installation of security technologies, investigation of breaches and management of security vulnerabilities.

Some states, such as Arkansas, Indiana, Wisconsin, and Wyoming, don’t have “information security analysts” roles in their 10-year employment projections. In these cases, Comparitech used the closest description available for its projection scores, says Paul Bischoff, lead researcher at Comparitech. “The roles also vary somewhat between companies, so it’s perhaps best to think of this as a category of job rather than a specific job,” he says.

Top 5 States

Comparitech’s analysis showed the top five states for cybersecurity jobs are Virginia, Texas, Colorado, New York and North Carolina. The average annual salaries for cybersecurity roles in each of these states topped $100,000 with New York having the highest average at $122,000.

While organizations in Virginia currently employ more cybersecurity professionals than in any other state, it is California that currently has the most job openings for them, with over 5,000 at the time the Comparitech report was compiled.

Interestingly, some of the highest-scoring states in several of the categories that Comparitech analyzed were states not normally associated with a lot of high-tech activity. Utah, for instance, leads all other states in projected long-term cybersecurity job growth—likely because it is starting from a much smaller base. Over the next 10 years, the number of security jobs in the state is projected to increase by 50%, Comparitech found.

Similarly, it was cybersecurity professionals in Arkansas—a state not usually associated with high tech—who had the biggest increase in average annual salaries over the past five years. Even so, the $81,710 that security professionals in the state average is lower than in a majority of other states. Security professionals in Kansas saw their average annual salary increase by over 10.5% to $86,160 between 2017 and 2018—the fastest one-year growth rate among all states.

On the flipside, among the states that fared the worst in Comparitech’s study were Vermont, Indiana, Montana, and Maine. The average cybersecurity salaries in each of these states were substantially lower than the national average.

Cybersecurity professionals in Montana — who number only 120 — earned a relatively paltry $64,790 in 2018 versus the national average of more than $92,000. In Maine, people in information security jobs earned on average 25% less in 2018 than they did in 2017.

Puerto Rico placed at the bottom of the list of the states and territories that Comparitech evaluated in the study. Salaries for information security professionals in the territory averaged a dismal $42,440 in 2018 — a 9.8% decline from 2013.

Several factors account for the salary differences between states, besides just supply and demand. “Cost of living comes to mind,” Bischoff notes. “There might also be a difference based on the type of employer,” he says.  Salaries, for example, can differ significantly based on whether someone is working in government, a large corporation, or small business. “We did not examine these factors in the study,” Bischoff says.


Jai Vijayan is a seasoned technology reporter with over 20 years of experience in IT trade journalism. He was most recently a Senior Editor at Computerworld, where he covered information security and data privacy issues for the publication.

Article source: https://www.darkreading.com/careers-and-people/virginia-a-hot-spot-for-cybersecurity-jobs/d/d-id/1336050?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Deepfakes have doubled, overwhelmingly targeting women

OK, let’s pull deepfakes back from the nail-biting, perhaps hyperbolic, definitely hyperventilating, supposed threats to politicians and focus on who’s really being victimized.

Unsurprisingly enough, according to a new report, that would be women.

Some 96% of the deepfakes created in the first half of the year were pornography, mostly nonconsensual and mostly casting celebrities – without compensation for those depicted, let alone their permission.

The report, titled The State of Deepfakes, was issued last month by Deeptrace: an Amsterdam-based company that uses deep learning and computer vision for detecting and monitoring deepfakes and which says its mission is “to protect individuals and organizations from the damaging impacts of AI-generated synthetic media.”

According to Deeptrace, the number of deepfake videos almost doubled over the seven months leading up to July 2019, to 14,678. The growth is supported by the increased commodification of tools and services that enable non-experts to churn out deepfakes.

One recent example was DeepNude, an app that used a family of dueling computer programs known as generative adversarial networks (GANs): machine learning systems that pit neural networks against each other in order to generate convincing photos of people who don’t exist. DeepNude not only advanced the technology, it also put it into an app that anybody could use to strip off (mostly women’s) clothes so as to generate a deepfake nudie within 30 seconds.
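For the curious, here is a deliberately tiny sketch of the adversarial training loop that gives GANs their name, using PyTorch on toy one-dimensional data. The architecture, data, and hyperparameters are arbitrary illustrations and bear no relation to the models behind DeepNude or Zao; the point is only that two networks are trained against each other.

```python
# Minimal GAN sketch on toy 1-D data: the generator learns to mimic samples
# drawn from a Gaussian, while the discriminator learns to tell real from fake.
import torch
import torch.nn as nn

generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())
loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 3.0   # "real" data: samples from N(3, 0.5)
    fake = generator(torch.randn(64, 8))    # generated samples from random noise

    # Discriminator step: score real samples as 1, generated samples as 0
    d_loss = loss_fn(discriminator(real), torch.ones(64, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(64, 1))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator step: try to fool the discriminator into scoring fakes as real
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
```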

We saw another faceswapping app, Zao, rocket to the top of China’s app stores last month, sparking a privacy backlash and just as quickly getting itself banned from China’s top messaging app service, WeChat.

While Deeptrace says most deepfakes are coming from English-speaking countries, it says it’s not surprising to see “a significant contribution to the creation and use of synthetic media tools” from web users in China and South Korea.

Deeptrace says that non-consensual deepfake pornography accounted for 96% of the total number of deepfake videos online. Since February 2018 when the first porn deepfake site was registered, the top four deepfake porn sites received more than 134 million views on videos targeting hundreds of female celebrities worldwide, the firm said. That illustrates what will surprise approximately 0% of people: that deepfake porn has a healthy market.

History lesson

As Deeptrace tells it, the term ‘deepfake’ was first coined by the Reddit user u/deepfakes, who created a Reddit forum of the same name on 2 November 2017. This forum was dedicated to the creation and use of deep learning software for synthetically faceswapping female celebrities into pornographic videos.

Reddit banned r/deepfakes in February 2018 – Pornhub and Twitter banned deepfake porn around the same time – but the faceswap source code, having been donated to the open-source community and uploaded to GitHub, seeded multiple project forks, with programmers continually improving the quality, efficiency, and usability of new code libraries.

Since then, we’ve seen faceswapping apps as well as one app for synthetic voice cloning (and one business get scammed by a deepfake CEO voice that talked an underling into a fraudulent $243,000 transfer).

Most of the apps require programming ability, plus a powerful graphics processor to operate effectively. Even here, though, the technology is growing more accessible, with several detailed tutorials being created with step-by-step guides for using the most popular deepfake apps, and recent updates having improved the accessibility of several GUIs.

Deeptrace says there are now also service portals for generating and selling custom deepfakes. In most cases, customers have to upload photos or videos of their chosen subjects for deepfake generation. One service portal Deeptrace identified required 250 photos of the target subject and two days of processing to generate the deepfake. The prices of the services vary, depending on the quality and duration of the video requested, but can cost as little as $2.99 per deepfake video generated, Deeptrace says.

The DeepNude app got pushed offline and has actually turned into a case study when it comes to deepfake service portals. In spite of the authors saying that they’d “greatly underestimated the volume of download requests” and crying out that “the world is not ready for DeepNude,” the world showed that it was actually hot-as-a-hornet ready.

The open-source code was subsequently cracked, independently repackaged and distributed through various online channels, such as open-source repositories and torrenting websites, and has spawned the opening of two new service portals offering allegedly improved versions of the original DeepNude. Charges range from $1 per photo to $20 for a month’s unlimited access.

Oh, I guess the world is ready for DeepNudes, said the original creators, who were also ready to line their pockets, given that they put DeepNude up for sale on 19 July 2019 for $30,000 via an online business marketplace, where it sold to an anonymous buyer.

Well, yikes, Deeptrace said. That was a disaster in the making – at least for the women involved, if not for the $30K-richer DeepNude creators:

The moment DeepNude was made available to download it was out of the creators’ control, and is now highly difficult to remove from circulation. The software will likely continue to spread and mutate like a virus, making a popular tool for creating non-consensual deepfake pornography of women easily accessible and difficult to counter.

Verified deepfakes include an art project that turned Facebook CEO Mark Zuckerberg into Mark Zucker-borg: the CEO’s evil deepfake twin who implied that he’s in total control of billions of people’s stolen data and ready to control the future.

As well, we’ve seen enhanced fake digital identities used in fraud, infiltration and espionage.

Besides the voice deepfake, we’ve also seen LinkedIn deepfake personas: one such was “Katie Jones”, an extremely well-connected redhead and purportedly a Russia and Eurasia Fellow at the top think tank Center for Strategic and International Studies (CSIS), who was eager to add you to her professional network of people to spy on.

The top 10 women most exploited in deepfakes

Deeptrace didn’t publish the names of the women most often cast in nonconsensual deepfake porn, but they did list them by nationality and profession. Most are from Western countries, including a British actress who appeared in 257 nonconsensual porn videos.

But the second and third most frequently targeted women, as well as the most frequently viewed one, are South Korean K-pop singers.

The conclusions

Deepfakes are posing a range of threats, Deeptrace concludes. Just the awareness of deepfakes alone is destabilizing political processes, given that the credibility of videos featuring politicians and public figures is slipping – even in the absence of any forensic evidence that they’ve been manipulated.

The tools have been commodified, which means that we’ll likely see increased use of deepfakes by scammers looking to boost the credibility of their social engineering fraud, and by fake personas as tools to conduct espionage on platforms such as LinkedIn.

What deepfakes are really about

Political intrigue and falsified identities as a means to conduct espionage or fraud are scary prospects, but in the greater scheme of things, they’re just a drop in the bucket when it comes to the harm being done by deepfakes.

Henry Ajder, head of research analysis at Deeptrace, told the BBC that much of the discussion of deepfakes misses the mark. The real victims aren’t corporations or governments, but rather women:

The debate is all about the politics or fraud and a near-term threat, but a lot of people are forgetting that deepfake pornography is a very real, very current phenomenon that is harming a lot of women.

Now’s the time to act, the Deeptrace report said:

The speed of the developments surrounding deepfakes means this landscape is constantly shifting, with rapidly materializing threats resulting in increased scale and impact.

Some of the internet’s biggest players are already on board with that and have been for a while. Google, for example, recently produced and released thousands of deepfakes in order to aid detection efforts.

Yet another data set of deepfakes is in the works, this one from Facebook. Last month, the platform announced that it was launching a $10m deepfake detection project, and is going to make the data set available to researchers.

That’s all well and good, but at least one expert has questioned whether deepfake-detection technology is a worthwhile effort. The BBC talked to Katja Bego, principal researcher at innovation foundation Nesta, who noted that it’s not much use to flag a video as fake after it’s already gone viral:

A viral video can reach an audience of millions and make headlines within a matter of hours. A technological arbiter telling us the video was doctored after the fact might simply be too little too late.

Beyond the too-little, too-late critique of detection technology lies a simple socioeconomic fact that the BBC pointed out: namely, with hundreds or thousands of women being victimized, how likely is it that they’ll be able to afford to hire specialists who can pick apart the deepfakes being used to exploit them?

Deepfake technology is growing and flourishing thanks to the commodification and open-sourcing of code, which has led to push-button apps and service portals. Will the same forces lead to push-button apps and portals that can strip a deepfake video down and expose it as a fake?

Let’s hope the makers of fraud detection are thinking along those lines, as in: how can we use these detection technologies to undo the harm being done to the real victims? Deeptrace, I’m talking to you. What technologies will you, and others in this field, bring to the majority of deepfake victims… in a way that they can afford?

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/OKH0ZUc2ZQE/

October Patch Tuesday: Microsoft fixes critical remote desktop bug

Microsoft fixed 59 vulnerabilities in October’s Patch Tuesday, including several critical remote code execution (RCE) flaws.

One of the most significant was a flaw (CVE-2019-1333) in the company’s Remote Desktop Client that enables a malicious server to gain control of a Windows computer connecting to it. An attacker could accomplish this using social engineering, DNS poisoning, a man-in-the-middle attack, or by compromising a legitimate server, Microsoft warned. Once they compromised the client, they could execute arbitrary code on it.

Another critical RCE vulnerability affected the MS XML parser in Windows 8.1, Windows 10, Windows Server 2012 through 2019, and RT 8.1. An attacker can trigger the CVE-2019-1060 flaw through a malicious website that invokes the parser in a browser.

A memory corruption bug in Edge’s Chakra scripting engine (CVE-2019-1366) also enables a malicious website to trigger RCE, operating at the user’s account privileges, while an RCE vulnerability in Azure Stack, Microsoft’s on-premises extension of its Azure cloud service, escapes the sandbox by running arbitrary code with the NT AUTHORITY\SYSTEM account.

The company also patched a critical RCE bug in VBScript that lets an attacker corrupt memory and take control of the system, usually by sending an ActiveX control via a website or Office document. Hopefully bugs in VBScript will become less important over time now that the company has deprecated the language.

Other notable bugs ranked important that Microsoft patched this week included a spoofing vulnerability in Microsoft Edge, and an IIS Server elevation of privilege vulnerability (CVE-2019-1365) that could enable an attacker to escape the IIS sandbox with a web request.

There was also a flaw in the Windows Secure Boot feature that would let an attacker expose protected kernel memory by accessing debugging functionality that should be protected. They’d have to get physical access to the machine to take advantage of this bug, labelled CVE-2019-1368.

On-premises users of the Dynamics 365 business finance and operations system should patch the CVE-2019-1375 cross-site scripting bug that lets an attacker hijack user sessions.

Among the dozens of other bugs that Redmond patched this week was an elevation of privilege vulnerability in the Windows Update client. It could allow an attacker to take control of the function that updates the Windows operating system and install, change or delete programs at will. They’d have to be logged into the system first, though.

Also included in the patch were monthly rollups for the CVE-2019-1367 critical memory corruption bug in Internet Explorer that could execute an attacker’s arbitrary code in the current user context. It affects IE 9, 10, and 11. The same monthly rollup features an update for the CVE-2019-1255 bug. We reported both of these last month and the patches have been available since 23 September 2019. However, the initial IE zero-day patch was confusing and caused problems, according to reports.

It was a quiet month for Adobe, with no patches or advisories.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/jWWY6mMadts/

How the Software-Defined Perimeter Is Redefining Access Control

In a world where traditional network boundaries no longer exist, VPNs are showing their age.

Virtual private networks (VPNs) have been around for over two decades, providing secure, encrypted tunnels for communications and data. While there are multiple types of VPNs — including SSL-VPNs and IPSec, to name two — the basic idea is the same regardless of the implementation. With a VPN, a secure IP transport tunnel is created that is intended to provide assurance that the data is safe because access is encrypted.

The concept of the software-defined perimeter (SDP) is somewhat newer, originally coming onto the scene in 2013, under the initial direction of the Cloud Security Alliance (CSA). With the SDP model, rather than just trusting an encrypted tunnel to be safe because it uses Transport Layer Security (TLS), there is no assumption of trust — hence the use of the term “zero trust” by many vendors in connection with SDP.

In a typical SDP architecture, there are multiple points where any and every connection is validated and inspected to help prove authenticity and limit risk. Typically, in the SDP model there is a controller that defines the policies by which clients can connect and get access to different resources. The gateway component helps to direct traffic to the right data center or cloud resources. Finally, devices and services make use of an SDP client which connects and requests access from the controller to resources. Some SDP implementations are agentless.
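As a conceptual illustration only (not any vendor’s product), the sketch below models the control flow described above: a controller evaluates identity, group membership, device posture, and an explicit default-deny policy before the gateway would broker a connection to a specific resource. The resource names and policy fields are invented for the example.

```python
# Conceptual zero-trust access decision: default deny, explicit policy, posture checks.
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user: str
    group: str
    device_managed: bool   # posture: is the device enrolled/managed?
    os_patched: bool       # posture: is the OS up to date?
    resource: str

# Controller policy: which group may reach which resource, and the posture required
POLICY = {
    "payroll-api":  {"group": "finance",     "require_managed": True, "require_patched": True},
    "build-server": {"group": "engineering", "require_managed": True, "require_patched": False},
}

def controller_decision(req: AccessRequest) -> bool:
    """Grant access only if an explicit policy matches and posture checks pass."""
    rule = POLICY.get(req.resource)
    if rule is None or req.group != rule["group"]:
        return False
    if rule["require_managed"] and not req.device_managed:
        return False
    if rule["require_patched"] and not req.os_patched:
        return False
    return True  # only now would the gateway broker a connection to this one resource

print(controller_decision(AccessRequest("ana", "finance", True, True, "payroll-api")))   # True
print(controller_decision(AccessRequest("bob", "finance", True, True, "build-server")))  # False
```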

SDP vs. VPN
The basic premise under which VPNs were originally built and deployed is that there is an enterprise perimeter, protected ostensibly with perimeter security devices such as IDS/IPS and firewalls. A VPN enables a remote user or business partner to tunnel through the perimeter to get access to what’s inside of an enterprise, providing local access privileges, even when remote.

The reality of the modern IT enterprise is that the perimeter no longer exists, with staff, contractors and partners working on campus locations, remotely and in the cloud and all over the world. That’s the world that SDP was born into and is aimed to solve.

VPNs today are still widely used and remain useful for certain types of remote access and mobile worker needs, but they involve a certain amount of implicit or granted trust. The enterprise network trusts that someone that has the right VPN credentials should have those credentials and is allowed access. Now if that VPN user happens to turn out to be a malicious user or the credentials were stolen by an unauthorized person that now has access to a local network — that’s kind of a problem, and a problem that VPNs by design don’t really solve all that well, if at all.

An SDP or zero-trust model can be used within the modern perimeter-less enterprise to help secure remote, mobile, and cloud users as well as workloads. SDP isn’t just about having a secure tunnel — it’s about validation and authorization. Instead of just trusting that a tunnel is secure, there are checks to validate posture, robust policies that grant access, segmentation policies to restrict access and multiple control points.

The increasing adoption of zero-trust security technologies by organizations of all sizes is an evolving trend. As organizations look to reduce risk and minimize their potential attack surface, having more points of control is often a key goal. Security professionals also typically recommend that organizations minimize the number of privileged users and grant access based on the principle of least privilege. Rather than just simply giving a VPN user full local access, system admins should restrict access based on policy and device authorization, which is a core attribute of the zero-trust model.

A well-architected zero-trust solution can also offer the potential benefit of less overhead, without the need for physical appliance or client-side agents.

Use Cases
For business users, VPNs are a familiar concept for remote access and that is not something that is likely to change in the near term. For access to a local file share within a company, or even something as simple as accessing a corporate printer, a VPN will remain a reasonable option for the next two to three years. However, as more businesses move to SDP, even the simple access of a printer will be covered.

Within companies, internal threats in the perimeter-less enterprise are as likely as external ones, and a zero-trust model is a useful way to limit insider risk.

For developers and those involved in DevOps, zero trust is a more elegant and controlled approach to granting access as well as providing access to on-premises, cloud, and remote resources. Development is distributed and simply tunneling into a network is not as powerful as what zero trust can enable.

VPNs are no longer the be-all and end-all solution for securing access that they were once promised to be.

The reality of the modern Internet is that threats come from anywhere, with the potential for any device or compromised user credential to be used as a pivot point to breach a network. A zero-trust approach can go beyond just relying on encryption and credentials to minimize risk and improve security. SDP moves beyond pretending that the fiction of a hard perimeter still exists.


Gilad Steinberg is CTO and Co-Founder of Odo Security, a provider of remote access technology. He was previously the Security R&D Team Leader for the Israel Prime Minister’s Office.

Article source: https://www.darkreading.com/vulnerabilities---threats/how-the-software-defined-perimeter-is-redefining-access-control-/a/d-id/1335945?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Twitter Slip-Up Spills MFA Phone Numbers, Emails to Advertisers

Email addresses and phone numbers provided to secure user accounts were accidentally shared with marketers.

Twitter account holders who provided an email address or phone number to enable multifactor authentication may have had their data used for advertising purposes, Twitter reports.

The issue lies in Tailored Audiences and Partner Audiences, an advertising system offered to help marketers better connect with people on the platform. Tailored Audiences lets companies aim ads at customers based on their own marketing lists, containing email addresses and phone numbers obtained outside Twitter. Advertisers can upload marketing lists to Twitter and advertise to accounts linked to the same email address — people who already know the brand.

Twitter found that when advertisers uploaded marketing lists, they were accidentally matched with email addresses and phone numbers provided to the social media giant for account security. As a result, users’ contact data was used in targeted ads without their knowledge.
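As a toy reconstruction of the failure mode (with invented data, not Twitter’s actual pipeline), the sketch below shows how a matching step that draws on all contact details on file, rather than only those a user provided for marketing, produces exactly this kind of leak.

```python
# Toy example: matching an advertiser's uploaded list against account contact info.
accounts = {
    "@alice": {"marketing_email": "alice@example.com", "security_phone": "+15551230001"},
    "@bob":   {"marketing_email": None,                "security_phone": "+15551230002"},
}
advertiser_list = {"alice@example.com", "+15551230002"}

def match(accounts, uploaded, include_security_contacts):
    hits = []
    for handle, info in accounts.items():
        candidates = {info["marketing_email"]}
        if include_security_contacts:  # the bug: 2FA contact info leaks into ad targeting
            candidates.add(info["security_phone"])
        if candidates & uploaded:
            hits.append(handle)
    return hits

print(match(accounts, advertiser_list, include_security_contacts=True))   # ['@alice', '@bob'] (wrong)
print(match(accounts, advertiser_list, include_security_contacts=False))  # ['@alice'] (intended)
```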

It’s unknown how many people were affected by the mistake, Twitter reports, noting that no personal data was externally shared with its partners or third parties. This issue was fixed on September 17 and the company is no longer using security contact information for advertising.

Read more details here.



Article source: https://www.darkreading.com/endpoint/twitter-slip-up-spills-mfa-phone-numbers-emails-to-advertisers/d/d-id/1336040?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

TOMS hacker tells people to log off and enjoy a screenless day

TOMS seems like a really nice shoe company, and it just got hacked in a really nice way.

Vice’s Motherboard reports that on Sunday, a hacker going by the name Nathan emailed TOMS subscribers and told them to log off, go out and enjoy the day:

hey you, don’t look at a digital screen all day, theres a world out there that you’re missing out on.

just felt some people need that.

CEO Jim Alling acknowledged the hack in an email to customers, telling them that an unauthorized email was sent out to the TOMS community by “an individual who gained access to a TOMS account in a third-party system.”

The company is asking members of its mailing list to refrain from clicking on any links or replying to the pleasant but unauthorized and illegal message.

TOMS is investigating the incident, but Alling said that the company immediately took steps to deactivate the account and implement additional layers of account security. He said that TOMS had spent 24 hours doing “close examination” with the company’s partners, but so far, it doesn’t look like full payment card details were accessed or that TOMS’ marketing customer email list was downloaded.

Well, no, why would he have done that? That would have taken a lot of time. Plus it would have been rude, Nathan told Vice:

I had TOMS hacked for quite a while, but with a busy life and no malicious intent, it was pretty useless to have them hacked.

Of course, he could have just responsibly disclosed whatever security hole he exploited, but for reasons he didn’t give, Nathan didn’t consider that an option:

By this point responsible disclosure is not a option. So I thought I [may] as well send out a message I believe in just for fun. End purpose was to spread my message to a large amount of people.

Nathan didn’t disclose how he broke into the TOMS account, but he told Vice that it was easy. And for all the hackers out there with less benevolent intent, he had this message:

To the hackers who hack large organizations etc for malicious reasons, stop being a criminal. Its beyond f**ked up to sell people’s private information on the internet. How do you sleep at night knowing you had a negative impact on thousands or maybe millions of peoples lives? It’s just so wrong. Also you self proclaimed hackers with nothing to show for it, who are just cyberbullies with the biggest egos. It’s not cool.

What conflicting emotions this guy stirs. I completely agree with all that he said above, and I completely want him to stop having this kind of fun with his hacking, lest he get arrested by law enforcement who aren’t amused.

Nathan might well have been a nice-guy hacker, but a hack is still a hack, as Vice pointed out. He made work for TOMS’ IT crew, its security crew, its CEO, and its PR people, one imagines. If TOMS called in law enforcement, that also means work for police and/or the FBI.

Nathan acknowledged all that in what we can assume, with our limited exposure to the hacker, is pure Nathan style:

Dear TOMS, sorry for hacking you guys. No hard feelings pls?

Kids, don’t try this at home. There’s no such thing as the Mr. Rogers of the hacking set, unless you’re talking about responsible disclosure. Even if you abstain from using a security hole to go after the financial data of a company’s customers, and even if you refrain from phishing them, you’re still breaking the law by accessing an account you aren’t authorized to use.

There are laws against that. In the US, prosecutors like to use them.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/c6nlBxWC_hA/