STE WILLIAMS

Talking to the Board about Cybersecurity

A chief financial officer shares five winning strategies for an effective board-level conversation about right-sizing risk.

As enterprises have become increasingly reliant on technology for every aspect of operations, technical executives such as CIOs and CISOs have found themselves in a completely new operations center: the boardroom. This move from the security operations center can present a significant challenge. Board members are often not well-versed in technology or security best practices, let alone jargon. At the same time, CISOs often lack the business experience to speak in terms that the board can understand, defaulting to technical discussions that the board can’t parse.

This breakdown in communication can have a cascade effect. Board members might fail to fully understand the security risks posed by a certain initiative. Or, with the growing number of costly and embarrassing security breaches, they might overemphasize caution and risk mitigation at the expense of implementing important technical advancements.

As a long-time executive in the technology industry, I’ve spent my fair share of time in boardrooms. I know how boards view risk, and how to effectively communicate about it. Below are five top CISO strategies for an effective board-level conversation about right-sizing risk.

Strategy 1: Manage the “Fear Factor”
Headline-grabbing breaches can draw a lot of attention from business stakeholders and board members who want to avoid finding themselves in similar circumstances. But not all breaches are created equal. Some breaches, like those due to misconfigured cloud services or ransomware attacks, are incredibly common. Others, such as those involving service provider employee malfeasance, attract a lot of attention but are vanishingly rare.

For CISOs, managing the fear factor is the first step toward successful interactions with the board. It’s important to come prepared to address concerns around the uncommon attack vectors while also putting those risks in perspective. At the same time, keeping the board and business stakeholders focused on the highest risks and most likely attack scenarios helps ensure that security resources go toward controlling what can be controlled.

Strategy 2: Lead with Resilience
Speaking of control, telling the board that the organization is 100% secure is setting the security team up to fail, and setting up the CISO for a new acronym: Career Is Soon Over. That’s why, in addition to putting risk in perspective, CISOs need to come prepared to talk about resiliency — how the organization will recover in the event of a breach, what measures are in place to react quickly, and how the security team can effectively investigate and use that knowledge to move forward in a more secure and intelligent way.

Strategy 3: Mind the Gap
CISOs should be prepared to communicate their security posture in terms of the key gaps in enterprise security programs and their coverage model. For this purpose, leveraging security frameworks can be effective. While business stakeholders may not grasp the technical nuances of frameworks such as the CIS Controls, MITRE ATT&CK, and the NIST Cybersecurity Framework, these frameworks provide a programmatic, logical, and standardized way to evaluate the completeness of a security program against industry benchmarks.

Using these frameworks, CISOs can provide a contextual overview of the technologies they have in place (such as next-gen firewall, SIEM, and endpoint protection) as well as the technologies they plan to implement to close gaps in their architecture (such as cloud access security brokers and network detection and response).

Strategy 4: Focus on Business Risks and Rewards
One of the most critical success factors for CISOs in a board setting is mapping the priorities of the security team to core business objectives. While the security team might consider having zero expired SSL certificates a major achievement, the board likely has no understanding of the business implications of this effort. For CISOs, presenting this information in the context of business objectives can make all the difference. In the case of SSL certificates, that means talking about the implications for consumer trust, service reliability, search engine rankings, and website engagement and conversion. Framed this way, maintaining SSL certificates is a driver of important business outcomes.

While SSL certificates are just one example, if CISOs brief the board through the lens of the organization’s top objectives and how they are supporting them, the conversation will be much more productive.
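Staying with the certificate example, this kind of hygiene is easy to automate. Below is a minimal sketch in stdlib Python (the `check_site` helper and its 30-day warning threshold are illustrative assumptions, not any particular vendor's tooling) that reports how many days remain before a host's TLS certificate expires:

```python
import socket
import ssl
from datetime import datetime, timezone

def days_until_expiry(not_after, now=None):
    """Days until a certificate's notAfter timestamp expires.
    `not_after` uses the format returned by SSLSocket.getpeercert(),
    e.g. "Jan 31 00:00:00 2020 GMT"."""
    expiry = datetime.fromtimestamp(ssl.cert_time_to_seconds(not_after),
                                    tz=timezone.utc)
    return (expiry - (now or datetime.now(timezone.utc))).days

def check_site(hostname, warn_days=30):
    """Fetch a host's live certificate and flag it if it expires soon."""
    ctx = ssl.create_default_context()
    with socket.create_connection((hostname, 443), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=hostname) as tls:
            cert = tls.getpeercert()
    days = days_until_expiry(cert["notAfter"])
    return days, days < warn_days
```

Run over an inventory of hostnames on a schedule, a script like this turns "zero expired SSL certificates" from an invisible chore into a reportable metric.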

Strategy 5: Build a Road Map to “Yes”
Even as enterprise budgets get poured into security initiatives, at the board level security is often seen as a necessary evil, and sometimes as an outright impediment to business operations. Almost everyone has a story about how some “draconian” security requirement prevented them from using a technology that would have helped them perform better in their job. This pain point gave rise to an entire category known as “Shadow IT” — itself a massive security headache.

For the board, these anecdotes can make members feel like security is diametrically opposed to innovation. This is why it’s critical for CISOs to come prepared with a road map for getting to “yes.” If CISOs, together with CIOs, can demonstrate a clear understanding of business requirements and objectives, and talk about what security measures need to be in place to achieve them, it reframes the conversation around when not if.

Related Content:

Check out The Edge, Dark Reading’s new section for features, threat data, and in-depth perspectives. Today’s top story: “Security 101: What Is a Man-in-the-Middle Attack?”

Bill Ruckelshaus is an experienced public company executive with a passion for technology-driven businesses – ranging from VC-backed pre-IPO firms, to profitable companies with $500M + in annual revenue. Bill is a hands-on executive with experience in strategy, … View Full Bio

Article source: https://www.darkreading.com/risk/talking-to-the-board-about-cybersecurity/a/d-id/1336587?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Higher Degree, Higher Salary? Not for Some Security Pros

Turns out, skill beats experience and an academic degree doesn’t guarantee higher compensation for five security positions.

In the rapidly growing cybersecurity industry, some positions don’t offer a clear-cut path to a higher salary. An academic degree and years of experience, considered a promising combination in traditional industries, don’t guarantee security employees a bigger paycheck.

Cynet researchers polled 1,324 security practitioners this quarter to learn about industry salaries and the factors shaping them. Their data provided sufficient insight to profile five positions: security analyst/threat intelligence specialist, penetration tester, network security engineer, security architect/cloud security architect, and security manager/director.

Some findings validated the team’s suspicions. For example, they weren’t surprised to learn banking and finance usually lead in security compensation, says Yiftach Keshet, director of product marketing for Cynet. In the financial sector, 4% of respondents reported salaries of $111,000 to $130,000, 2% reported $131,000 to $150,000, and 2% reported $271,000 to $290,000. Healthcare salaries also trend toward the high end, with 17% of respondents earning $111,000 to $130,000.

Location also had a tremendous impact on salary. Security analysts in North America report a significantly higher salary than in EMEA and APAC: More than 80% earn between $71,000 and $110,000 compared with less than 35% in EMEA and 21% in APAC earning the same. The highest-paid position recorded was security director, with top-tier earners making $290,000 or more.

Still, some findings caught the researchers off-guard. “I was surprised to find out that an academic degree can have a relatively low impact on compensation,” Keshet says. “That was surprising, especially in geographies like the United States and Europe.”

For some security roles, demonstrable skills are more valuable than academic degrees. Consider a level-one SOC analyst tasked with triaging alerts. The standard SOC is typically flooded with alerts, driving businesses’ concern about alert fatigue. A strong SOC analyst will be someone who can address a certain capacity of alerts in a day and can write automated rules to discern between events that have to be escalated and those that can be handled locally.
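The escalate-or-handle-locally decision described above is the sort of rule a strong analyst can codify. The sketch below is purely hypothetical (the asset names, severity scale, and thresholds are invented for illustration), but it shows the shape of such an automated triage rule:

```python
# All names, severities, and thresholds here are invented for illustration.
CROWN_JEWELS = {"dc01", "payroll-db"}       # hypothetical critical assets

def should_escalate(alert):
    """Escalate high-severity alerts, anything touching a critical asset,
    and noisy repeat offenders; everything else is handled locally."""
    if alert.get("severity", 0) >= 8:
        return True
    if alert.get("host") in CROWN_JEWELS:
        return True
    return alert.get("count", 1) >= 50      # repeated low-severity hits

alerts = [
    {"severity": 3, "host": "laptop-17", "count": 2},   # handle locally
    {"severity": 9, "host": "laptop-17", "count": 1},   # escalate: severity
    {"severity": 2, "host": "payroll-db", "count": 1},  # escalate: asset
]
escalated = [a for a in alerts if should_escalate(a)]
```

Whether a candidate can write and justify rules like this is exactly the kind of measurable skill hiring managers can test directly.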

These skills are easily measurable. When a candidate applies for an entry-level SOC role, it’s easy to see what they know how to do and how they do it. The same goes for a pen tester or network security engineer, who are tasked with testing an organization’s defenses and maintaining network defenses, respectively. Sixty percent of pen testers with an academic degree made less than $50,000, as did 60% of those without a degree. A larger percentage of pen testers without a degree made between $51,000 and $70,000, or between $91,000 and $110,000, compared with their degree-earning counterparts.

The same can be said of network security engineering, where a greater percentage of employees without degrees reported salaries on the higher end of the spectrum than employees with degrees.

“Personally, I think it’s good news,” Keshet says about prioritizing skills for higher compensation. “If we eliminate degree or specification of experience, basically we’re left with skill. Companies care more about what their security personnel can do rather than their formal certification.”

Some of these skills may not solely come from security experience. Researchers found employees who pivoted from an IT role into a cybersecurity role tend to earn more than peers who started out in cybersecurity. In his personal experience, Keshet says, a solid background in IT better prepares someone to take a deep dive into security.

While a degree wasn’t necessary to increase salaries for the five positions analyzed, he notes it is required for executive positions. “For a CISO, it definitely matters,” Keshet says. Most CISOs have a security background but typically have an MBA or other advanced degree, he explains.

Related Content:

Check out The Edge, Dark Reading’s new section for features, threat data, and in-depth perspectives. Today’s top story: “Disarming Disinformation”

Kelly Sheridan is the Staff Editor at Dark Reading, where she focuses on cybersecurity news and analysis. She is a business technology journalist who previously reported for InformationWeek, where she covered Microsoft, and Insurance Technology, where she covered financial … View Full Bio

Article source: https://www.darkreading.com/cloud/higher-degree-higher-salary-not-for-some-security-pros-/d/d-id/1336644?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Facebook Fixes WhatsApp Group Chat Security Issue

Flaw allowed attackers to repeatedly crash group chat and force users to uninstall and reinstall app, Check Point says.

Facebook has fixed a bug in its WhatsApp chat platform that gave attackers a way to send a malicious group-chat message capable of repeatedly crashing the entire application for all members of a targeted chat group.

To regain access to the application, the victim would have had to uninstall and reinstall WhatsApp. Without re-installation, the user couldn’t return to the chat group because the app would repeatedly crash with each attempt.

The targeted group itself would have to be deleted and restarted, resulting in a complete loss of group chat history, Check Point said.

“The crash-loop is a killing of the app that is unstoppable,” says Ekram Ahmed, head of public relations at Check Point. “In the first cycle, the app is crashed. Then the user tries to regenerate the app. The app crashes again without any warning. It’s a consistent loop that crashes the app – on and on,” he says.

This is the second time in recent months that Check Point has identified an issue in WhatsApp. At Black Hat USA this August, researchers from the company showed how an attacker could intercept and manipulate WhatsApp messages in an individual or group setting to spread fake news and create other problems.  

Check Point researchers used a Web-debugging tool to intercept and decrypt the communication that happens between WhatsApp and WhatsApp Web when a user launches the desktop version of the app. By replacing some of the parameters in that communication, the researchers showed how they could change the content of chat messages and impersonate others.

At the time, Facebook said the issue had nothing to do with the security of the end-to-end encryption on its messaging platform; instead, the company likened it to someone altering the contents of an email message. More than 500 million people worldwide on average are active on WhatsApp daily, according to Statista.

The latest — and now patched — exploit involves the same communication between the mobile and Web version of WhatsApp. In this case, the researchers found that by examining and manipulating one specific message parameter containing a message sender’s phone number, they could cause the app to crash for all members in a chat group.

An attacker would first need to gain access to a target group and assume the identity of a group member, which in this case could be accomplished by manipulating the message parameter containing the user’s phone number, Ahmed says. WhatsApp allows for up to 256 members to be part of a single group.

The attacker could then edit other specific message parameters and create a malicious message that is sent to all members in a targeted group, causing the crash-loop.

Check Point reported the issue to WhatsApp’s bug bounty program in August and the issue was quickly resolved, the security vendor said. A fix for the flaw is available in WhatsApp version 2.19.58 and users should manually apply it as soon as possible, Check Point advised.

Erich Kron, security awareness advocate at KnowBe4, said that while the bug is destructive and inconvenient, it at least does not expose the content of conversations or personal data. Apple’s App Store does not yet have the fixed version of WhatsApp available for download, he noted, but users should keep checking and apply the patch as soon as it becomes available.

Related Content:

Check out The Edge, Dark Reading’s new section for features, threat data, and in-depth perspectives. Today’s top story: “Disarming Disinformation”

Jai Vijayan is a seasoned technology reporter with over 20 years of experience in IT trade journalism. He was most recently a Senior Editor at Computerworld, where he covered information security and data privacy issues for the publication. Over the course of his 20-year … View Full Bio

Article source: https://www.darkreading.com/attacks-breaches/facebook-fixes-whatsapp-group-chat-security-issue/d/d-id/1336645?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Mozilla mandates 2FA security for Firefox developers

Mozilla last week fired off an important memo to all Firefox extension developers telling them to turn on two-factor authentication (2FA) on their addons.mozilla.org (AMO) accounts.

This is a good move but also surprisingly late in the day.

Mozilla extensions have been around since not long after the browser appeared in 2004, and have been available to all Firefox users from 2014.

In 2018, the company added multi-factor authentication to accounts, with users able to choose from any one of a long list of Time-based One-Time Password (TOTP) authentication apps.

This, in effect, means that extension developers have been securing their accounts using only an email address and password for most of the browser’s existence.

It’s a glaring security weakness Mozilla has belatedly decided to plug. Mozilla’s Caitlin Neiman wrote:

Starting in early 2020, extension developers will be required to have 2FA enabled on AMO. This is intended to help prevent malicious actors from taking control of legitimate add-ons and their users. 2FA will not be required for submissions that use AMO’s upload API.
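The TOTP apps Mozilla supports all implement the same open standard, RFC 6238. As a rough sketch (not Mozilla's or any authenticator app's actual code), a one-time code is derived from a shared secret and the current time like this:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, t=None, step=30, digits=6):
    """RFC 6238 time-based one-time password (the SHA-1 variant, which is
    the default used by most authenticator apps)."""
    pad = "=" * (-len(secret_b32) % 8)
    key = base64.b32decode(secret_b32.upper() + pad)
    counter = int(time.time() if t is None else t) // step
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                    # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: the ASCII secret "12345678901234567890" in base32.
SECRET = "GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ"
print(totp(SECRET, t=59, digits=8))   # the RFC's expected value is 94287082
```

Because the code depends on both the secret and the clock, a phished password alone no longer gets an attacker into a developer's AMO account.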

Rogue extensions

Turning on better authentication is an inherently good idea but is there more to it than that? Extensions and add-ons can be used to target Firefox users in three ways:

  1. Criminals setting up legitimate accounts to spread rogue extensions.
  2. Criminals distributing rogue extensions from third-party sites and socially engineering Firefox users to install them.
  3. Legitimate developer accounts that get hacked to sneak malicious extensions into the official Firefox add-ons store.

The first of these has been a low-level issue since Mozilla moved from manual to a more automated review process in 2017 in an effort to speed up development. Rogues get pulled down quickly when the company detects them, but this is after the fact. The second has also been an occasional issue.

Perhaps mindful of similar incidents on Google’s Chrome store, Mozilla has finally ticked developer 2FA off its security to-do list.

So, a few weeks from now, logging into a developer account won’t be possible without 2FA – a big change for developers who perhaps don’t pay as much attention to their creations as they should.

That means they could, in theory, be locked out completely, which is why Mozilla recommends they print out recovery codes for such an eventuality.

2FA for everyone

More generally turning on 2FA for all your accounts that offer it is something everyone can do. Good security isn’t just something for developers.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/I5Jz7uFWAdM/

Researchers discover weakness in IoT digital certificates

IoT devices are using weak digital certificates that could expose them to attack, according to a study released over the weekend.

Researchers at online digital certificate management services company Keyfactor studied millions of digital certificates found online which were produced using the RSA algorithm. They found that 1 in every 172 certificates was crackable because of insecure random number generation.

RSA’s encryption algorithm is the basis for modern asymmetric encryption, which uses a pair of keys (a public and a private key) to encrypt information and prove the sender’s identity. Part of producing the public key involves multiplying two large prime numbers (known as factors). It is computationally prohibitive to recover those two primes from their product. Information encrypted with the public key can only be decrypted with the matching private key, which is known only to the owner.

If two public keys share a common prime factor, however, recovering their remaining factors becomes easy: calculating the Greatest Common Divisor (GCD) of the two moduli reveals the shared prime.
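A toy illustration of why this is so dangerous (the primes here are tiny and invented; real RSA primes are hundreds of digits long, but the GCD step is just as cheap):

```python
from math import gcd

# Two toy RSA moduli that, unknown to their owners, share the prime p.
p, q1, q2 = 1000003, 1000033, 1000037      # invented small primes
n1, n2 = p * q1, p * q2                    # the two public moduli

shared = gcd(n1, n2)                       # fast even for 2048-bit moduli
assert shared == p                         # the common factor falls out at once
q1_recovered, q2_recovered = n1 // shared, n2 // shared
assert (q1_recovered, q2_recovered) == (q1, q2)   # both keys fully factored
```

No factoring breakthrough is needed: a single GCD computation, repeated pairwise across a large corpus of harvested keys, is what let the researchers crack the vulnerable certificates.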

The best way to avoid this vulnerability is to ensure that the numbers used to create the public key are as random as possible to avoid duplication. Highly random keys with few duplicates are known as high-entropy keys, but producing them requires two things: lots of random input data, and the computing power to turn that input data into a key.

Your desktop computer or laptop has computing power in spades. Unfortunately, the devices that make up the vast Internet of Things (IoT), which far outnumber desktop computers and run everything from petrol pumps to street lights, often don’t. The sensors and other devices connected to the IoT often rely on very low power to operate, which makes it more difficult to generate high entropy. The result is a lot of devices with common factors.

The researchers built a database of over 60 million RSA keys available on the internet, and then used logs produced by Google’s Certificate Transparency project to find another 100 million. After analysing the keys for shared factors, they found that at least 435,000 of them shared factors, representing one in every 172 certificates.

The researchers didn’t just identify the certificates with shared factors; they used the GCD algorithm to calculate the second unique factor for each of these keys, effectively cracking the certificate wide open. Keyfactor researcher JD Kilgallin, who wrote the report, explained that in many cases he was also able to trace the certificates to specific devices on the internet.

Still, it must have taken lots of computing power to do all that, right? Wrong. Or, more accurately, right, but that power is a lot cheaper these days. The industry already knew about this weakness, and Kilgallin points to several other studies in his report, ranging from 2012 to 2016. But this is the first time that someone has analysed so many keys, he said.

The biggest contribution that we can have made over the previous publications on the topic is the ease with which this can be pulled off with modern resources.

The company broke the keys using Microsoft’s Azure cloud service in a day for around $3,000.

So, does this mean that the RSA algorithm is insecure? Not at all. Ron Rivest, one of the algorithm’s three inventors, told Naked Security:

It looks like an implementation issue.

RSA, the company that Rivest helped create to commercialise the RSA algorithm, no longer owns it. The patent expired in September 2000, and BSAFE, one of the most popular implementations of the original algorithm, is in the public domain. Still, RSA CTO Dr. Zulfikar Ramzan has some views on this research. He told us:

… there are a variety of techniques from increasing the number of entropy sources in a device to waiting until enough entropy is gathered, to embedding high entropy key material during manufacturing that can help tremendously. While there are potentially design constraints to consider, this problem of starting with good cryptographic keys is well understood and feasible to solve with today’s technology.

Kilgallin is sceptical about pre-loading devices with keys during manufacturing, because it opens up devices to supply chain attacks in which an untrustworthy manufacturer or logistics company tampers with the keys en route.

Certificates also expire, he points out, meaning that they’d have to be re-generated periodically on the device anyway. An alternative, he suggests, is to get better random input during an onboard key generation process. Because IoT devices are network connected, they can easily get true random data from various sources, he says. That would let them generate higher-entropy keys even with limited computing power and memory.

Nadia Heninger, an associate professor at the University of California in San Diego, conducted two of the research studies cited in the Keyfactor report. She suggests that the problem with low-entropy IoT devices is about more than just low computing power:

There was a specific problem with the Linux RNG [random number generator] failing to seed itself promptly after boot on headless devices that was patched in the Linux kernel in 2012. This was the flaw that seemed to lead to most of the vulnerable keys. It seems that a lot of device manufacturers seem to use old kernel versions so I expect the problems won’t really go away anytime soon.

The upshot of these conclusions is that these problems have been known for eight years, and that IoT device vendors could easily solve this problem if they just got a clue.

Heninger continued:

These are not “high-value” keys – most of them are self-signed, so if an attacker wanted to man-in-the-middle the HTTPS connection they could anyway.

What should IoT users do to keep their devices safe? If your device comes with a default password or key, change it to something hard to guess.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/zJlF9M3_vf4/

Ransomware-seized New Orleans declares state of emergency

On Friday, the US city of New Orleans became the latest local government to be held hostage to ransomware.

The ongoing attack caused Mayor LaToya Cantrell to declare a state of emergency. During a press conference on Friday, the mayor confirmed that it was a ransomware attack, and that its activity started around 5 a.m. that morning.

The city spotted the suspicious activity on its networks around 11 a.m., at which point it basically turned itself off.

According to NOLA Ready – the city’s emergency preparedness campaign, managed by the Office of Homeland Security Emergency Preparedness – the city powered down all of its servers, took down all NOLA.gov websites and told employees to power down their computers, unplug devices, and disconnect from Wi-Fi. Emergency communications weren’t affected, according to NOLA Ready, with the 911 emergency and the 311 city service phone lines still operational.

The city pulled local, state, and federal authorities into a (still pending) investigation of the incident. As of last night, the city was still working to recover data from the attack but planned to be open as usual.

Did NOLA get Ryuk-ed?

Cantrell has confirmed that this is a ransomware attack, but that no ransom demand has yet been made. Federal and state investigators have been called in to help with the investigation.

Bleeping Computer reported that, based on what look like memory dumps of suspicious executables that were uploaded to the VirusTotal scanning service on Saturday, the day after the attack, it looks like it was done by the unfortunately very active threat actors behind the Ryuk ransomware.

Security researcher Colin Cowie, of Red Flare Security, found that one of the sets of files contained numerous references to New Orleans and Ryuk.

Cowie shared one of the memory dumps with Bleeping Computer. It’s for an executable named yoletby.exe that contains both references to the Ryuk ransomware as well as references to the City of New Orleans, including domain names, domain controllers, internal IP addresses, user names, and file shares.

After digging around in the file names, Bleeping Computer also found an executable that it confirmed was Ryuk. Inside that executable there’s a string that refers to the New Orleans City Hall, the publication reported.

As of Monday, New Orleans hadn’t confirmed whether or not Ryuk was used in the attack. However, it wouldn’t be surprising if it were indeed Ryuk, given how active its threat actors are.

Ryuk is an especially pernicious ransomware variant. Recently, among a long list of nasty acts, it’s been used to prey on our elders: last month, a Ryuk attack froze health record access at 110 nursing homes. It was also recently used in a ransomware attack that affected hundreds of veterinary hospitals.

Since appearing in 2018, variants of Ryuk (named after a character in the manga series Death Note) have also been blamed for numerous attacks on US state and local governments, including the city of New Bedford in Massachusetts.

How to protect yourself from ransomware

  • Pick strong passwords. And don’t re-use passwords, ever.
  • Make regular backups. They could be your last line of defense against a six-figure ransom demand. Be sure to keep them offsite where attackers can’t find them.
  • Patch early, patch often. Ransomware like WannaCry and NotPetya relied on unpatched vulnerabilities to spread around the globe.
  • Lock down RDP. Criminal gangs exploit weak RDP credentials to launch targeted ransomware attacks. Turn off RDP if you don’t need it, and use rate limiting, 2FA or a VPN if you do.
  • Use anti-ransomware protection. Sophos Intercept X and XG Firewall are designed to work hand in hand to combat ransomware and its effects. Individuals can protect themselves with Sophos Home.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/_IFQ_O3lT1A/

Destroyed: A method of destroying Whatsapp group chats forever, say infosec bods of vuln patch

Security investigators say they have uncovered a vulnerability in WhatsApp that will gladden the heart of anyone who’s ever wondered how to permanently wipe that incriminating group chat.

Researchers from infosec biz Check Point say they have found a flaw that lets a helpful malicious so-and-so “deliver a destructive group chat message that causes a swift and complete crash of the entire WhatsApp application for all members of the group chat.”

Not only that, but the crash is “so severe that users are forced to uninstall and reinstall WhatsApp on their device”. Having done so, they will find that the group chat “cannot be restored after the crash occurs and would need to be deleted in order to stop the crash-loop,” thus “causing the loss of all the group’s chat history, indefinitely.”

The good news is that Whatsapp has already patched this vulnerability. Version 2.19.246 and later are not vulnerable to crashing the app and destroying your group chats through Check Point’s method.

According to Check Point’s research, a “bad actor” gains entry to the target group and then edits “specific message parameters” using their web browser’s debug tools. This triggers the unstoppable crash loop.

Using an example featuring Chrome’s built-in DevTools, Check Point provided a video to illustrate the bug:

(Check Point’s demonstration video is available on YouTube.)

Whatsapp thanked Check Point in a statement for reporting the vuln through its bug bounty programme.

“WhatsApp greatly values the work of the technology community to help us maintain strong security for our users globally,” said Whatsapp software engineer Ehren Kret. “Thanks to the responsible submission from Check Point to our bug bounty program, we quickly resolved this issue for all WhatsApp apps in mid-September. We have also recently added new controls to prevent people from being added to unwanted groups to avoid communication with untrusted parties all together.”

Giving the Facebook-owned chat app’s operators a pat on the head, Check Point’s Oded Vanunu beamed: “WhatsApp responded quickly and responsibly to deploy the mitigation against exploitation of this vulnerability.”

Back in May this year, Whatsapp was the subject of a zero-day exploit that allowed the remote injection of spyware onto a target’s phone through the use of a booby-trapped voice call that didn’t even need to be answered. A duly enraged Facebook filed a US lawsuit against noted spyware purveyor NSO Group in October.

Last year Check Point discovered that it was possible to manipulate Whatsapp messages. Today’s disclosures build on its earlier work, the company said. ®


Article source: https://go.theregister.co.uk/feed/www.theregister.co.uk/2019/12/17/whatsapp_group_chat_crash_vulnerability/

Disarming Disinformation: Why CISOs Must Fight Back Against False Info

Misinformation and disinformation campaigns are just as detrimental to businesses as they are to national elections. Here’s what’s at stake in 2020 and what infosec teams can do about them.

(Image: freshidea/Adobe Stock)

The UK company had been in business only a few months and was already receiving praise from the press, including an article in one well-known publication. But that seeming good luck didn’t last: Within a month, malicious — and false — stories started appearing that said the staffing firm had hired out a woman to work at a strip club. 

The company was the victim of a misinformation campaign. Luckily, the business was fake, part of an experiment run by intelligence firm Recorded Future.

To gauge the effectiveness of commercial disinformation campaigns, Recorded Future sought out services to bolster — or undermine — the fictitious company’s reputation. In less than a month, and for a total of $6,050, it hired two Russian services that used a surprisingly extensive online infrastructure, ranging from social media accounts to writers-for-hire, to spread the disinformation, says Roman Sannikov, director of analyst services at Recorded Future. The list of publications in which the services claimed to be able to place stories ran the gamut from fake news sites to a top international news service.

“Companies need to be hyper-aware of what is being said on social media and really try to address any kind of disinformation when they find it,” Sannikov says. “The gist of our research was really how these threat actors use these different types of resources to create an echo chamber of disinformation. And once it gets going, it is much harder to address.”

Beyond Politics
Disinformation has become a major focus in the political arena. In 2018, the US government indicted 13 Russian nationals and three organizations for their efforts — using political advertisements, social media, and e-mail — to sway the 2016 US presidential election.

Yet such campaigns are not useful only in national politics. Disinformation campaigns are enabled, and made more efficient, by the data collection and targeting capabilities of modern advertising networks. While companies like Cambridge Analytica pushed those boundaries too far, even the entirely legal capabilities of advertising networks can be used to do great harm.

“The targeting models that have allowed advertisers to reach new audiences are being abused by these hucksters that are trying to spread false narratives,” says Sean Sposito, senior analyst for cybersecurity at Javelin Strategy & Research. “The advertising industry has built a great infrastructure for targeting, but it’s also a great channel to subvert for disinformation.”

Disinformation has already harmed companies. In 2018, members of the beauty community revealed that influencers paid to promote a company’s products had also been paid extra to criticize competitors’ products. The Securities and Exchange Commission (SEC) has filed numerous charges against hedge funds and stock manipulators for taking short positions on particular firms and then spreading false information about those firms. In September 2018, for example, the SEC charged Lemelson Capital Management LLC and its principal, Gregory Lemelson, with such an attack against San Diego-based Ligand Pharmaceuticals.

At the RSA Conference in 2019, Cisco chief security and trust officer John N. Stewart warned that disinformation matters not just to elections but to businesses as well. “Disinformation is being used as a tool to influence people—and it’s working,” Stewart said.

Even true information, framed within a particular narrative, can harm companies. The portrayal of Kaspersky as a firm beholden to Russia, and of Chinese technology giant Huawei as a national security risk, has had significant impacts on both companies.

So how can companies prevent disinformation from affecting them in 2020 and beyond? Experts point to three strategies.

Article source: https://www.darkreading.com/theedge/disarming-disinformation-why-cisos-must-fight-back-against-false-info-/b/d-id/1336617?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Facebook employees’ payroll data nabbed in car smash-and-grab

Facebook has again lost data on thousands of people, but this time, it’s the old-fashioned, smash-and-grab kind of data breach, done by a thief to an employee’s car.

Bloomberg Technology reported on Friday that a thief broke into an employee’s car and made off with payroll data for 29,000 current and former US Facebook workers.

The thief took unencrypted hard drives – drives that never should have been there – from a bag in the employee’s car.

Facebook said in an email to employees on Friday morning that the drives included payroll data, including employee names, bank account numbers and the last four digits of about 29,000 taxpayer IDs of employees who worked for Facebook in the US during 2018. The drives also contained other financial information, including salaries, bonus amounts, and some equity details.

A spokesperson told Bloomberg Technology that so far, the company hasn’t seen anybody try to exploit the employees’ data through identity theft.

The thief broke into the car on 17 November. Facebook supposedly realized the hard drives were missing three days later. On 29 November, a “forensic investigation” confirmed what type of information was on the drives. Facebook gave employees a heads-up about the theft on 13 December.

The Facebook spokeswoman said the police were duly notified:

We worked with law enforcement as they investigated a recent car break-in and theft of an employee’s bag containing company equipment with employee payroll information stored on it. We have seen no evidence of abuse and believe this was a smash and grab crime rather than an attempt to steal employee information.

And as far as the payroll employee responsible for leaving unencrypted drives in their car goes, they were duly disciplined, the spokeswoman said. As it is, the payroll employee hadn’t been authorized to take the drives out of their office. The Facebook spokeswoman didn’t give details of how the employee was disciplined:

We have taken appropriate disciplinary action. We won’t be discussing individual personnel details.

Readers, how would you discipline an employee who tosses unencrypted drives into a bag and leaves it in their car?

I think I’d sit them down in front of Mark Stockley’s 2014 article about security mistakes that small companies make and how to avoid them, point out that not encrypting drives is goof numero uno, underline the word “small,” and remind them that Facebook is no pipsqueak, so all of its employees should, in theory, know a whole lot better than small companies do.

And then, finally, wag my finger at Facebook IT, which might try harder when it comes to training employees to encrypt drives and to abstain from removing them from the office.

All of which is, of course, hypothetical. We don’t know what kind of drives the thief got away with. Nor have the drives been retrieved yet.

In its email, Facebook encouraged employees to notify their banks and offered them a two-year subscription to an identity theft monitoring service, Bloomberg said.
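For anyone wondering what “encrypt it before it leaves the office” actually looks like in code, here’s a purely illustrative sketch (we have no idea what tooling Facebook uses, and the sample payroll bytes are made up) using the symmetric Fernet recipe from Python’s third-party cryptography package:

```python
# Minimal sketch: symmetric encryption with the third-party
# "cryptography" package (pip install cryptography).
from cryptography.fernet import Fernet

# Generate a key once and keep it somewhere safe -- emphatically NOT
# in the same bag as the drive holding the encrypted data.
key = Fernet.generate_key()
fernet = Fernet(key)

# Hypothetical payroll data, for illustration only.
payroll = b"name,bank_account_last4\nAlice,1234\n"

# Encrypt before the bytes ever touch a portable drive...
token = fernet.encrypt(payroll)

# ...and decrypt only back in the office, with the key in hand.
assert fernet.decrypt(token) == payroll
```

Full-disk encryption (BitLocker, FileVault, LUKS) is the more robust fix, of course; the point is simply that without the key, a stolen drive is a paperweight rather than a payroll leak.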

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/SYgU8CTDbEs/

London’s Met Police splash the cash on e-learning ‘cyber’ training for 4k staffers

The Metropolitan Police Service dispatched more than 4,000 staff to attend so-called “cyber” training courses over the past two years.

The e-learning course “Cyber Crime and Digital Policing – First Responder” was completed by 4,534 employees. Over half were student officers, although three detective chief inspectors and four detective inspectors also attended.

Over the same period – 2017/2018 and 2018/2019 – 4,444 officers and staff logged on to the “Cyber Crime and Digital Policing: Introduction” course; again, around half were new recruits.

Freedom of Information figures obtained by Nimbus Hosting also show that 5,804 officers and staff enrolled on the course “Digital Communications, Social Media, Cyber Crime and Policing.”

Campaigner Duwayne Brooks said: “Building a police force equipped with the latest digital skills is critical for improving community relations in the fight against crime. These new recruits are likely to come from more diverse backgrounds than their predecessors, possessing important insights and knowledge into local communities. By harnessing social media platforms and the latest technology, modern policing can tackle crime in close partnership with the wider public, winning the hearts and minds of young people and the disadvantaged.”

The Met employs 42,000 officers and staff and sucks up 25 per cent of the total police budget for England and Wales.

Last month a Freedom of Information request to forces across England and Wales revealed that 237 officers and staff had been disciplined for misusing police IT systems in the previous two years. The Met disciplined 18 staff, and one was sacked for misuse of the Crime Reporting Information System.

The London cops provided little insight into course content, although we’d like to have been a fly on the wall for the social media course. What do you lot reckon one of the first police forces to trial automatic facial recognition – and also, as of June this year, a new Microsoft cloudy Office 365 convert – should get to grips with first, “cyber” wise? ®

Article source: https://go.theregister.co.uk/feed/www.theregister.co.uk/2019/12/17/met_police_spending_big_on_cyber_training/