
Former CIA CTO Talks Meltdown and Spectre Cost, Federal Threats

Gus Hunt, former technology leader for the CIA, explains the potential long-term cost of Meltdown and Spectre.

Federal agencies and organizations don’t fully understand the cost implications of Meltdown and Spectre, says Gus Hunt, former CTO of the CIA and current managing director for Accenture Federal Services. Resolving the issues may take more time and money than anticipated.

Addressing these flaws should be top of mind for agencies and businesses, because the breadth of impact will drive the complexity of fixing the problem, Hunt continues. If patches affect performance as much as experts report, long-term effects will be significant. This is especially true for the government, which he calls the largest buyer of IT and IT services.

“From a budgetary perspective, if my performance impact is 30%, nobody has budgeted for the cost of additional hardware and capacity so [agencies] can provide services at a level people will expect,” he explains.
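As a back-of-the-envelope illustration of that budget math (the fleet size and the 30% figure below are illustrative assumptions, not agency or Accenture numbers), a 30% slowdown means roughly 1/(1 - 0.3), or about 1.43 times the hardware, to deliver the same level of service:

    # Rough capacity math for a patch-related performance hit.
    # All inputs are illustrative assumptions, not real agency figures.

    def extra_servers_needed(servers, perf_hit):
        """Servers required on top of the current fleet to restore pre-patch throughput.

        perf_hit is the fractional slowdown, e.g. 0.30 for a 30% impact.
        """
        required = servers / (1.0 - perf_hit)  # same total work on slower nodes
        return round(required) - servers

    print(extra_servers_needed(1_000, 0.30))   # -> 429 extra servers for a 1,000-server fleet

In other words, an unplanned capacity increase of more than 40%, which is exactly the kind of line item Hunt says nobody has budgeted for.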

The tech industry has been quick to produce solutions for Meltdown and Spectre, he says. This is especially true for cloud providers, which “were most adversely affected.” However, it’s still too early to gauge the true measurable impact of these flaws.

“What worries me most is this gives an open window for the emergence of building and delivering effective exploits,” he points out. “Adversaries out there are working like mad to figure out how to take advantage of this.”

On a broader level, Hunt speaks to the quickly evolving sophistication of today’s attackers and the growing threat to federal organizations. Attackers “adopt and reuse things with remarkable speed,” he says. The moment anything is released in the wild, their knowledge is elevated. He points to the consistent, and increasingly effective, use of ransomware as an example.

“The goal for the federal government space, fundamentally it’s data and control,” Hunt says of modern cybercriminals. “Those are the two big things attackers want.”

The greatest threat to today’s government is nation-state actors intent on gaining an advantage through stealing data and information, and hiding inside systems so they can eventually leverage their power and take control of systems. For attackers targeting federal victims, the promise of system control is far more appealing than citizens’ data, he points out.

Nation-states and advanced criminals have become vertically specialized, Hunt says. If they want to spear-phish someone in the government, they’ll spend a lot of time and money figuring out exactly who to target and how to collect information on them to launch a successful attack.

Shifting cybersecurity strategies

Hunt explains what he calls the Cyber Moonshot, a strategic concept comparing cybersecurity with the once seemingly impossible goal of landing on the moon. The idea argues that achieving security will take many of the same organizational steps: leadership, a specific call to action, and sustained investment.

“If you really look at it, our approach to solving cybersecurity has really been a piecemeal, patchwork-quilt approach of slapping something in place,” he explains. “We haven’t really taken a strategic, coherent focus to drive it across the board.”

Prior to landing on the moon, humans had the tech they needed but hadn’t put it all together to solve the problem. Hunt says today’s security industry is similar. “We get how things happen, we get what goes on, and we have a variety of solutions in place … but we haven’t acted together to apply them in a way that changes the game,” he says.

However, there is a key difference between the two: the moon landing was a finite goal. Cybersecurity threats, practices, and technology, on the other hand, are constantly evolving.

“Absolute security is absolutely impossible,” Hunt admits. “There’s always going to be a vulnerability someplace; Meltdown and Spectre are classic examples of that.”


Kelly Sheridan is Associate Editor at Dark Reading. She started her career in business tech journalism at Insurance & Technology and most recently reported for InformationWeek, where she covered Microsoft and business IT. Sheridan earned her BA at Villanova University.

Article source: https://www.darkreading.com/vulnerabilities---threats/former-cia-cto-talks-meltdown-and-spectre-cost-federal-threats/d/d-id/1330923?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Electronic voting box makers want kit stripped from eBay – and out of hackers’ hands

Shmoocon Vendor intimidation, default passwords, official state seals for sale. Yes, we’re talking about computer-powered election machines.

The organizers of last year’s DEF CON Voting Village – a corner of the annual infosec conference where peeps easily hacked into electronic ballot boxes – are preparing for a similar penetration-testing session at this year’s event in August.

There are some hurdles to clear, though.

Speaking at the Shmoocon conference in the US capital last week, Finnish programmer and village organizer Harri Hursti said the team was having trouble sourcing voting machines to compromise for this year’s hackfest, in part because manufacturers weren’t keen to sell kit that could expose their failings.

In some cases, the box makers sent letters to people flogging election systems on eBay, claiming selling the hardware was illegal, which isn’t true. His team is still scouring the web for voting gear.

“One e-cycling company had bought 1,300 voting machines, which it acquired when the ceiling of the warehouse in which they were being stored collapsed,” Hursti said. “We found the company had already sold 400 of the machines, in some cases back to counties for voting duties.”

One of the machines was duly bought for the hacking competition. The seller is also touting packets of 25 official election machine seals for the state of Michigan for less than $5.

“You’d think you could only buy these if you had a government ID and were in the state of Michigan,” Hursti said. “But no, anyone can buy these.”

Hursti is pretty well known for finding ways into voting machines, and for demonstrating how to meddle with systems as an ordinary poll worker.

RTFM

Meanwhile at Shmoocon, we learned that Margaret MacAlpine, founding partner at Nordic Innovation Labs and another member of the DEF CON Voting Village team, found complete lists of the default admin passwords for electronic ballot boxes in their training manuals.

In one tome, election officials are instructed not to change the default password and, if someone already had, to reset passwords to the defaults. This manual covered machines used to count 18 per cent of the votes in US elections, we were told.

SAVE our souls

The sad levels of security in America’s voting infrastructure have worried politicians, and in October the bipartisan Securing America’s Voting Equipment (SAVE) Act was introduced. The legislation, if passed, would require election machines to be audited and officials trained to deal with the latest credible security threats.

Voting village organizer Matt Blaze, an associate professor of computer and information science at the University of Pennsylvania, said that the proposed law was “a beautiful piece of legislation,” and should be supported. Given the intransigence within Congress, however, it may be a while before it gets through.

But it is needed, he argued, as there was already evidence that Russian hackers had been busy attacking election systems – not the voting machines themselves but the computers used to house voter rolls and tabulate the results.

“We’ll find out how much hacking went on in the history books, assuming they are allowed to be written in the future,” he told Shmoocon attendees. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/01/23/electronic_voting_machine_update/

Prepare for the challenges ahead at CyberThreat18

Promo You may think you are ahead of the game when it comes to IT security, but are you ready for the threats fomented by the fertile imagination of cyber-criminals, or the unsuspected dangers lurking in areas you may not even have considered?

A new event coming to London aims to bring security practitioners up to date with the fast-changing security landscape and the latest techniques being used to ward off attacks from every direction.

Hosted by the UK government-run National Cyber Security Centre – a part of GCHQ – and the security training organisation SANS Institute, CyberThreat18 runs 27-28 February at the QEII Conference Centre in Westminster.

On offer is a packed schedule of talks by security experts and industry leaders alongside opportunities to test your skills in challenges such as hackathons against the latest products.

The programme also hints darkly at “a surprise with something very cool, fun and technically brilliant” on day two.

Among the other sessions scheduled are:

  • Hunting for Lateral Movement: Foundation, Attacker Actions, and Repeatable Methodology to Detect
  • Threat Intelligence in Practice: Operation Cloud Hopper and the After Effects
  • Hunting Memory Anomalies
  • Think Your VPN is Secure? Think Again…
  • Hunting Pastebin for Fun and for Profit

There’s a raft of great speakers lined up for the two-day event; you can find out more about them and get yourself a ticket right here.


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/01/26/prepare_for_the_challenges_ahead_at_cyberthreat18/

Selling Cloud-Based Cybersecurity to a Skeptic


When it comes to security, organizations don’t need to look at cloud as an either/or proposition. But there are misconceptions that need to be addressed.

Nearly five years ago, a study conducted by the MIT Sloan Management Review found that the vast majority of business managers surveyed believed that “achieving digital transformation” – the process of virtualizing operations and migrating toward the cloud – was critical to their organizations. Yet the same report showed that 63% of respondents believed their organization was too slow to embrace technological change, primarily due to a lack of communication about the strategic benefits of cloud adoption.

While in recent years the adoption of cloud-based communication and productivity tools has picked up among businesses (hybrid cloud adoption increased from 19% to 57% of organizations surveyed in a recent McAfee cloud trends report), many companies are still skeptical about embracing cloud-based cybersecurity solutions, even as the benefits of cloud services become more widely acknowledged. Misconceptions remain. Here are three key objections, and how to dispel them.

Objection One: My Data Will Be Safer On-Premises.
When the servers that manage company data move from an on-premises data center into a cloud environment, security teams often feel a loss of control because they are no longer physically close to sensitive corporate data. Consequently, before blindly trusting a cloud provider, companies need to vet the provider’s security posture by asking probing questions, for example:

  • What compliance certifications has the cloud earned?
  • Can the cloud provider meet industry compliance regulations?
  • What is the disaster recovery plan at the data center?
  • How is individual customer data isolated?
  • What encryption policies does the cloud employ?

Every data center and cloud provider should have clear answers to these questions before it is even considered. Even then, security teams should be mindful of their own organization’s specific requirements and make sure the cloud services they need are available to them.

Objection Two: Do I Have To Go All In On Cloud?
Organizations don’t need to look at cloud in an either/or context. The next generation of cloud security platforms decouples the physical from the cloud, enabling organizations to meet regulatory compliance requirements for data isolation while leveraging the cloud for remote sites and mobile users without increasing resource overhead.

In this context, organizations can leverage as much or as little cloud as they’d like. If certain traffic and data need to stay at headquarters, organizations can direct that information through local appliances rather than redirect it to cloud-based tools. Mixing and matching cloud-delivered and appliance-based security tools is also a boon for remote workers: traffic that doesn’t necessarily need to be backhauled to an appliance at headquarters experiences less latency when processed directly through the cloud. Flexibility is at the core of these tools; customers aren’t restricted to solutions that might be an ill fit.
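As a minimal sketch of the kind of policy decision described above (the traffic categories, names, and rules here are hypothetical illustrations, not any vendor’s actual API), the routing choice boils down to a simple function of where the user is and how sensitive the data is:

    # Hypothetical traffic-steering policy for a hybrid (appliance + cloud) security stack.
    # Categories and rules are illustrative only; real products expose their own policy engines.

    from dataclasses import dataclass

    @dataclass
    class Flow:
        user_location: str   # "hq" or "remote"
        data_class: str      # "regulated", "internal", or "public"

    def choose_gateway(flow):
        """Return which security stack should inspect this flow."""
        if flow.data_class == "regulated":
            return "hq-appliance"     # keep regulated data on premises for isolation
        if flow.user_location == "remote":
            return "cloud-gateway"    # avoid backhaul latency for remote users
        return "hq-appliance"

    print(choose_gateway(Flow("remote", "public")))     # cloud-gateway
    print(choose_gateway(Flow("remote", "regulated")))  # hq-appliance

The point of the sketch is the shape of the decision, not the specific rules: sensitive traffic stays local, and everything else takes the lowest-latency path.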

Objection Three: Migration Will Be Too Disruptive
The truth is, the foundational infrastructure of the cloud is quite mature, having been developed and improved upon since the dawn of the Internet. We simply call it the cloud now, and the benefits of adoption have taken a while to reach critical business decision makers. Teams simply need to do their research and find the least disruptive cloud security solution for their business – one that can scale to their needs and be implemented seamlessly rather than upend an entire network infrastructure.

Paul Martini is the CEO, co-founder and chief architect of iboss, where he pioneered the award-winning iboss Distributed Gateway Platform, a web gateway as a service. Paul has been recognized for his leadership and innovation, receiving the Ernst & Young Entrepreneur of The …

Article source: https://www.darkreading.com/partner-perspectives/iboss/selling-cloud-based-cybersecurity-to-a-skeptic-/a/d-id/1330895?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Hardware Security: Why Fixing Meltdown & Spectre Is So Tough

Hardware-based security is very difficult to break but, once broken, catastrophically difficult to fix. Software-based security is easier to break but also much easier to fix. Now what?

The security world has been rocked by Meltdown and Spectre, two critical hardware vulnerabilities affecting every device from smartphones to desktops to cloud servers. One lesson to learn here is that hardware security alone is not a panacea.

Memory isolation is arguably the most important security feature in modern computer architecture. For example, a malware-infested game should not be able to get access to your banking app’s login and password. These exploits demonstrate that isolation is now fundamentally broken.

Meltdown and Spectre did not come out of thin air. They are based on a long line of research on micro-architectural attacks that have been going on for over 15 years. There have been dozens of researchers doing work in this space that have led to this result. But, unless you were a part of that community, or paying very close attention, you probably had no idea that something like this was even possible.

How do these attacks work? Modern CPUs run faster by using a technique called speculative execution: the processor guesses which way upcoming branches will go and begins executing instructions along the predicted path ahead of time, loading the data those instructions need so the results are ready if the guess turns out to be correct. If the guess is wrong, the results are discarded, but the microarchitectural traces of that transient work, such as which cache lines were loaded, remain. Meltdown and Spectre abuse those traces, effectively allowing one program to read the contents of another program’s memory.
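To make that concrete, here is a deliberately crude Python toy model of the cache side channel these attacks rely on. Python cannot reach real speculative execution or CPU caches, so the "cache" below is just a set; the point is only the principle that a transient, secret-dependent access leaves a footprint an attacker can probe afterwards. Real exploits recover the footprint by timing memory accesses, not by inspecting a shared data structure.

    # Toy model of the cache-footprint trick behind Meltdown/Spectre.
    # This illustrates the principle only; it is not exploit code.

    cached = set()   # stands in for which probe-array entries are "hot" in the cache

    def victim_transient_access(secret):
        """Models the secret-dependent load made during mis-speculation."""
        cached.add(secret)   # the loaded value is architecturally discarded,
                             # but the cache footprint survives

    def attacker_probe():
        """Recover the secret by finding which probe entry is 'fast' (cached)."""
        for guess in range(256):
            if guess in cached:     # a real attack times the access instead
                return guess
        return None

    victim_transient_access(0x2A)
    print(hex(attacker_probe()))    # 0x2a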

The problem is that the only direct fix for these vulnerabilities is replacing the CPU. Not only would that be exorbitantly expensive, but almost every modern CPU has this flaw, so it’s not even a viable option. We may have to wait for completely redesigned CPU architectures before we have systems fundamentally secure against Meltdown and Spectre.

The good news is that there is a workaround to patch the operating system to compensate for this hardware vulnerability, but don’t breathe a sigh of relief quite yet. First, it’s been reported that these workarounds can result in as much as a 30% impact on performance depending on the program. Second, this affects pretty much every computer and many smartphones and tablets out there, so companies are scrambling to understand their risk exposure, which of their systems are most vulnerable, and how to feasibly deploy patches in any reasonable amount of time.

The Case for Software Security
Most of us take for granted that the safest design principle is to put the most important security functionality under the control of hardware. This includes relying on the OS kernel to be isolated from user programs, relying on hardware roots of trust to bootstrap the security of your system, or pushing cryptographic keys into hardware, where they are hidden from normal programs.

What Meltdown and Spectre have shown us is that hardware-based security is no panacea. Hardware is vulnerable. It can fail. And when it does, the results are ugly and painful.

But then, you might ask, what is the option? Do everything in software? Isn’t software fundamentally easy to break? The answer is that everything is relative.

While attacking software does not require as much effort, it is certainly not an easy task either. It’s hard enough for a hacker to understand well-documented, well-designed software, much less software with sophisticated anti-reverse-engineering and tamper-resistance shields. Even for the most talented hackers, finding exploitable vulnerabilities in such programs can still take thousands of hours.

Hardware-based security and software-based security sit at two ends of a spectrum. On one end, hardware-based security is very difficult to break but, once broken, catastrophically difficult to fix. On the other end, software-based security is easier to break but also much easier to fix.

Not a Binary Choice
This makes for an interesting choice. Is it better to deal with software patches on a regular basis, or put all your chips on hardware and hope there won’t be an expensive catastrophic incident down the road? In the case of Meltdown/Spectre, the solution was to roll out a software fix to a hardware failure. In some sense, we are lucky that that option was available.

The choice need not be binary. From a risk management perspective, software security can be an effective hedge against hardware security failures to cushion disasters like Meltdown and Spectre. This is just another way of applying the age-old security maxim of defense in depth.

Consider the issue of protecting cryptographic keys. The hardware solution is to put keys in secure elements, Trusted Platform Modules or Trusted Execution Environments. We’ve already seen that this kind of hardware can be hacked. An alternative software solution is white-box cryptography, which protects sensitive cryptographic keys by mathematically embedding them within the implementations. Even if a hacker gets to access the code during execution, security properties of white-box cryptography ensure that the keys will still be hidden from the hacker. While it is possible to attack these techniques, it is difficult to achieve. Attackers have had to resort to sophisticated side-channel attacks to be successful. But the security industry has responded with successful countermeasures to defeat these attacks.  
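As a grossly simplified sketch of that key-embedding idea (real white-box designs wrap a full cipher such as AES in large networks of encoded lookup tables; this toy only shows the principle), the key is folded into a table at build time so the raw key bytes never appear in the deployed code or its memory:

    # Toy illustration of white-box key embedding. Not a real white-box cipher:
    # production schemes protect every round of a full cipher with encoded tables.

    def sbox(x):
        """Stand-in substitution step: a fixed, public permutation of 0..255."""
        return (x * 7 + 3) % 256

    def build_table(key_byte):
        """Build time (off-device): precompute T[x] = sbox(x XOR key)."""
        return [sbox(x ^ key_byte) for x in range(256)]

    # What ships on the device is just the table; the key byte itself does not.
    T = build_table(key_byte=0x5C)

    def whitebox_step(plaintext_byte):
        """Runtime: a single table lookup, with no key material in sight."""
        return T[plaintext_byte]

    print(hex(whitebox_step(0x10)))   # same result as sbox(0x10 ^ 0x5C), key never loaded

Attacks on real white-box implementations typically target the tables themselves through side-channel-style analysis, which is the cat-and-mouse dynamic described above.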

The last thing we want in the security world is to come across a devastating bug and only have partial solutions with potentially substantial performance costs. We need to stop thinking about hardware security as completely impenetrable. Decision makers should seriously consider using software security mechanisms to hedge against catastrophic hardware failures.


Bill Horne leads Intertrust’s Secure Systems group. He has authored over 50 peer-reviewed publications in the areas of security and machine learning, and holds 33 granted patents and 44 patents pending. Horne holds a B.S. in electrical engineering from the University of …

Article source: https://www.darkreading.com/risk/hardware-security-why-fixing-meltdown-and-spectre-is-so-tough/a/d-id/1330908?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Endpoint and Mobile Top Security Spending at 57% of Businesses

Businesses say data-at-rest security tools are most effective at preventing breaches, but spend most of their budgets securing endpoint and mobile devices.

There is a disconnect between businesses’ ideal security practices and their actual strategies. Some 77% of companies cite data-at-rest security tools as the most effective at preventing breaches, yet those tools fall toward the bottom (40%) of security spending priorities, new data shows.

In its 2018 Data Threat Report, Thales teamed up with 451 Research to poll 1,200 senior security execs around the world. They discovered 94% of respondents use sensitive data in the cloud, big data, IoT, container, blockchain, and/or mobile environments. Forty-four percent say they feel “very” or “extremely” vulnerable to data security threats.

For 57% of businesses, the bulk of the security budget goes toward endpoint and mobile security technologies, followed by analysis and correlation tools (50%). The disconnect extends to encryption, which many respondents cite as important yet allocate little spending toward.

Forty-two percent of respondents use more than 50 SaaS applications, 57% use three or more IaaS vendors, and 53% use three or more PaaS environments. Nearly half (44%) cite encryption as the top tool for increased cloud usage; 35% say it’s a necessary part of big data adoption. Encryption is also cited as the top tool for securing IoT (48%) and container (41%) deployments.

Read more details here.

Dark Reading’s Quick Hits delivers a brief synopsis and summary of the significance of breaking news events. For more information from the original source of the news item, please follow the link provided in this article.

Article source: https://www.darkreading.com/cloud/endpoint-and-mobile-top-security-spending-at-57--of-businesses-/d/d-id/1330915?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

6 Tips for Building a Data Privacy Culture

Experts say it’s not enough to just post data classification guidelines and revisit the topic once a year. Companies have to build in privacy by design.


Given the expanding threat landscape, security professionals may think that the public at large doesn’t have a good grip on what counts as sensitive information.

But MediaPro’s 2018 Eye On Privacy Report shows that the industry has made some progress.

For example, 89% of US employees rank Social Security numbers as most sensitive on a scale of 1 to 5, with 5 being the most sensitive, and 76% rank credit card information as most sensitive.

Other evidence that employees are more aware than in the past: 87% chose to correctly store a project proposal for a new client and design specifications for a new product in a locked drawer. And nearly three-quarters of all respondents chose to destroy an old password hint and an ex-employee’s tax form from three decades ago in a secure shredder.

“While we’ve made progress, I have to wonder about the 11% who didn’t rate a Social Security number as most sensitive,” says Tom Pendergast, chief strategist for security, privacy and compliance at MediaPro. “It would seem to me that the Equifax case from last year would have sufficiently alarmed people.”

In honor of Data Privacy Day on January 28, here are key steps for creating a corporate culture of data privacy, based on interviews with MediaPro’s Pendergast and Russell Schrader, the new executive director of the National Cyber Security Alliance. 

 

Steve Zurier has more than 30 years of journalism and publishing experience, most of the last 24 of which were spent covering networking and security technology. Steve is based in Columbia, Md.

Article source: https://www.darkreading.com/operations/6-tips-for-building-a-data-privacy-culture-/d/d-id/1330914?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Intel CEO: New Products that Tackle Meltdown, Spectre Threats Coming this Year

In an earnings call yesterday, Intel CEO Brian Krzanich says security remains a ‘priority’ for the microprocessor company.

Intel CEO Brian Krzanich told analysts in the company’s earnings call yesterday that Intel will unveil new products “later this year” that mitigate the Meltdown and Spectre vulnerabilities.

“Our near term focus is on delivering high quality mitigations to protect our customers infrastructure from these exploits. We’re working to incorporate silicon-based changes to future products that will directly address the Spectre and Meltdown threats in hardware. And those products will begin appearing later this year,” Krzanich said. 

Intel has been under fire in the wake of recently discovered Meltdown and Spectre  hardware vulnerabilities in most of its modern processors, which allow for so-called side-channel attacks. With Meltdown, sensitive information in the kernel memory is at risk of being accessed nefariously; with Spectre, a user application could read the kernel memory as well as that of another application. The end result: an attacker could read sensitive system memory containing passwords, encryption keys, and emails — and use that information to help craft a local attack.

In a post early this week, Intel called for customers and OEMs to halt installation of patches for its Broadwell and Haswell microprocessors after widespread reports of spontaneous rebooting of systems fitted with the new patches. Intel said it plans to issue a fix for the Meltdown-Spectre vulnerabilities.

Meanwhile, Krzanich told analysts on the earnings call: “Security has always been a priority for us and these events reinforce our continuous mission to develop the world’s most secured products. This will be an ongoing journey, but we’re committed to the task and I’m confident we’re up to the challenge. To keep you informed, we’ve created a dedicated website and we’re approaching this work with customer-first urgency. I’ve assigned some of the very best minds at Intel to work through this and we’re making progress.” 

Read more here, and see an excerpt from the call transcript here.

Dark Reading’s Quick Hits delivers a brief synopsis and summary of the significance of breaking news events. For more information from the original source of the news item, please follow the link provided in this article.

Article source: https://www.darkreading.com/endpoint/intel-ceo-new-products-that-tackle-meltdown-spectre-threats-coming-this-year/d/d-id/1330920?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Amazon Twitch declares “Game Over” for bots

In June 2016, the popular Amazon-owned live-streaming service Twitch announced that it was filing suit against seven of the most active sellers of viewbots: the bots used to artificially pump up view count, follower count, and chat activity on a Twitch channel.

Puffing up the number of people who follow livestreams of Twitch users’ video gameplay was a tactic exploited by a “small minority,” Twitch said at the time, but the bot sellers have created “a very real problem that has damaging effects across our entire community.”

On Monday, a California court agreed. According to court documents filed on 22 January, Twitchbot purveyors Michael and Katherine Anjomi have been ordered to permanently shut down their software and to forfeit the domains they used to market it: shoptwitch.com, twitchshop.com, and twitchstreams.org.

According to the original complaint Twitch filed in June, those domains are where the Anjomis sold, among other things, view bots, follower bots, chat bots, and channel view services.

Twitch explained that the bots are sometimes used by broadcasters who believe that the perception of higher viewership and social activity will put them on the fast track to success or to Twitch partnership, which requires streamers to have a consistently high number of viewers and followers. Partnership also allows gameplay broadcasters to charge other users a monthly subscription fee for special perks and to control the length and frequency of the ads (from which they get a cut of revenues) that show up on their streams.

But the gamer community is a feisty lot: at other times, Twitch said, the bots are used to harass other broadcasters in order to attempt to deny them partnership or to get their channel suspended.

Twitch maintains technologies to detect and remove fake viewers, and it relies on moderation, support, and partnership teams to investigate reports of artificially inflated viewer and follower counts, as well as fake chat activity, it says.

The bot sellers’ response to all that anti-bot-ism: “Pfft!” According to court documents, the Anjomis, like other bot sellers, bragged about designing their services to slip past Twitch’s detection. For example, they claimed to use “legitimate accounts, all with avatars and bio descriptions, to follow your stream.”

The Anjomis also described how they added followers and channel views slowly, over a few days, “for the most organic appearance possible.” They boasted that their service “is proven to be undetected by stream service providers, and we stand behind it 100%.”

Well, sorry about that, Twitchbot buyers: 100% guarantee or not, you’re likely out of luck when it comes to getting your money back. The Anjomis have been ordered to pay Twitch a total of $1,371,139, according to Kotaku. That amount includes $55,000 for damages, while the remaining $1,316,139 represents the profits the couple made from illicitly feeding off of Twitch’s popularity.

Twitch, which is owned by Amazon, had complained that the Anjomis were charging big bucks to artificially swell a Twitch channel’s audience. They advertised packages ranging from $26.99 per week for 100 live viewers, 50+ chatters, 100 followers, and 500 channel views, on up to the princely sum of $759.99 per month for a package of 2,000 viewers, 1,000 chatters, 4,000 followers and 20,000 channel views.

In 2016, the Anjomis claimed that none of their 6,000+ users had ever been suspended or banned for using their bots. The bots had, in fact, been programmed to hide from detection by using a different IP address for each fake viewer. To mask bot-employing channels with a high number of viewers but low engagement, the Anjomis posted fake chat messages.

That’s not much fun for real viewers, Twitch said. And if they’re not having fun, we’re not having fun:

Instead of engaging in interesting social interactions on Twitch chat, they may encounter bots spewing lists of random words. As a result, Twitch may lose its carefully developed reputation as the premier service for quality social video game content, the ability to attract and retain users, and the goodwill of the community.

The Anjomis didn’t mount a defense of their business. The judge ruled in Twitch’s favor on the grounds of trademark infringement, unfair competition, breach of contract, and violation of the Anti-Cybersquatting Consumer Protection Act.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/wqaSxklIQyQ/

AI fake porn could cast any of us

In the case of revenge porn, people often ask: If the photos weren’t taken in the first place, how could ex-partners, or hackers who steal nude photos, post them?

Unfortunately, there’s now an answer to that rhetorical question. Forget fake news. We are now in the age of fake porn.

Fake, as in, famous people’s faces – or, for that matter, anybody’s face –  near-seamlessly stitched onto porn videos. As Motherboard reports, you can now find actress Jessica Alba’s face on porn performer Melanie Rios’ body, actress Daisy Ridley’s face on another porn performer’s body and Emma Watson’s face on an actress’s nude body, all on Celeb Jihad – a celebrity porn site that regularly posts celebrity nudes, including stolen/hacked ones.

Here’s Celeb Jihad crowing about a clip of a woman showering:

The never-before-seen video above is from my private collection, and appears to feature Emma Watson fully nude…

The word “appears” is key. It is, rather, an example of what’s being called a deepfake.

Motherboard came across the “hobby” of swapping celebrities’ faces onto porn stars’ bodies in December, when it discovered a redditor named “deepfakes” who had made multiple convincing porn videos, including one of “Wonder Woman” star Gal Gadot apparently having sex with her stepbrother.

He also created porn videos with publicly available video footage of Maisie Williams, Scarlett Johansson, and Taylor Swift, among others. Deepfakes posted the hardcore porn videos to Reddit.

He used a machine learning algorithm, his home computer and open-source code.

At that point, the results were a little crude: for example, the image of Gadot’s face didn’t track correctly, Motherboard reported. You’d have to look closely to discern that it was a fake, though – at first blush, it’s quite believable.

Since then, production of AI-assisted fake porn has “exploded,” Motherboard reports. Thousands of people are doing it, and the results are ever more difficult to spot as fakes. You don’t need expertise with sophisticated artificial intelligence (AI) technologies at this point, either, given that another redditor – deepfakeapp – created an app named FakeApp that’s designed to be used by those without computer science training.

All the tools one needs to make these videos are free, readily available, and accompanied with instructions that walk novices through the process.

We haven’t seen anything yet. Coming weeks will bring deepfakes to all with the mere press of a button, deepfakeapp promised Motherboard in an email exchange.

I think the current version of the app is a good start, but I hope to streamline it even more in the coming days and weeks. Eventually, I want to improve it to the point where prospective users can simply select a video on their computer, download a neural network correlated to a certain face from a publicly available library, and swap the video with a different face with the press of one button.

From the subreddit’s wiki:

This app is intended to allow users to move through the full deepfake creation pipeline – creating training data, training a model, and creating fakes with that model – without the need to install Python and other dependencies or parse code.
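For readers wondering what that pipeline looks like structurally, the original deepfakes approach, as publicly described, trains a single shared encoder with a separate decoder per face; swapping decoders at generation time is what produces the fake. The sketch below is a heavily simplified, linear toy version of that structure, using random vectors as stand-ins for face images; real tools use deep convolutional autoencoders trained on thousands of aligned face crops.

    # Toy, linear sketch of the shared-encoder / two-decoder structure reported to
    # underlie deepfake face swapping. Random vectors stand in for face images;
    # this illustrates the pipeline only and produces nothing visual.

    import numpy as np

    rng = np.random.default_rng(0)
    d, k, n = 64, 8, 200                  # "image" size, latent size, samples per face

    faces_a = rng.normal(size=(d, n))     # stand-in for aligned crops of face A
    faces_b = rng.normal(size=(d, n))     # stand-in for aligned crops of face B

    E  = rng.normal(scale=0.1, size=(k, d))   # shared encoder
    Da = rng.normal(scale=0.1, size=(d, k))   # decoder that renders identity A
    Db = rng.normal(scale=0.1, size=(d, k))   # decoder that renders identity B

    lr = 1e-3
    for step in range(500):
        for X, D in ((faces_a, Da), (faces_b, Db)):
            Z = E @ X                         # shared latent code (pose, expression)
            R = D @ Z - X                     # reconstruction error for this identity
            gD = (2 / n) * R @ Z.T
            gE = (2 / n) * D.T @ R @ X.T
            D -= lr * gD                      # update this identity's decoder in place
            E -= lr * gE                      # shared encoder learns from both faces

    # The swap: encode a face-B sample, then decode it with face A's decoder.
    fake = Da @ (E @ faces_b[:, :1])
    print(fake.shape)                     # (64, 1): one output rendered "as" face A

The unsettling part, and part of why these fakes are getting harder to spot, is that nothing in this structure is exotic; it is standard autoencoder machinery scaled up and pointed at faces.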

As the news about deepfakes has spread to major news outlets, so too has condemnation of the rapidly advancing technology. Motherboard spoke with Deborah Johnson, Professor Emeritus of Applied Ethics at the University of Virginia’s school of engineering, who said that neural-network generated fake videos are undoubtedly going to get so good, it will be impossible to tell the difference between the AI produced fakes and the real thing.

You could argue that what’s new is the degree to which it can be done, or the believability, we’re getting to the point where we can’t distinguish what’s real – but then, we didn’t before. What is new is the fact that it’s now available to everybody, or will be… It’s destabilizing. The whole business of trust and reliability is undermined by this stuff.

One redditor took to the subreddit to address the “distain [sic] and ire” that’s arisen. Part of his argument was an admission that yes, the practice is a moral affront:

To those who condemn the practices of this community, we sympathize with you. What we do here isn’t wholesome or honorable, it’s derogatory, vulgar, and blindsiding to the women that deepfakes works on.

(Note that the technology would of course work on any gender: As Motherboard reports, one user named Z3ROCOOL22 combined video of Hitler with that of Argentina President Mauricio Macri.)

Gravity_Horse goes on to justify deepfake technology. One of his rationales: be thankful that this technology is in the hands of the good guys, not the creepy sextortionists:

While it might seem like the LEAST moral thing to be doing with this technology, I think most of us would rather it be in the hands of benign porn creators shaping the technology to become more focused on internet culture, rather than in the hands of malicious blackhats who would use this technology exclusively to manipulate, extort, and blackmail.

Gravity_Horse, are you serious? What in the world makes you think that a freely available app that’s truly God’s gift to sextortionists will be passed over by malicious actors?

Gravity_Horse also argues that this is nothing new. Photoshop has been around for years, he says, and it hasn’t resulted in courts being flooded with faked evidence or in the creation of believable revenge porn photos. Technology can create believable fakes, but it can also detect the forgeries, he argues.

Finally, Gravity_Horse argues that, ironically, as deepfakes proliferate and become mainstream…

Revenge porn actually becomes LESS effective, not more effective. If anything can be real, nothing is real. Even legitimate homemade sex movies used as revenge porn can be waved off as fakes as this system becomes more relevant.

Lord, I hope you’re right about that, because this genie is out of the bottle.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/x5nQg8anElI/