
90% of CISOs Would Cut Pay for Better Work-Life Balance

Businesses receive $30,000 of ‘free’ CISO time as security leaders report job-related stress taking a toll on their health and relationships.

CISOs are willing to sacrifice an average of $9,642, or 7.76% of their salaries, for better work-life balance – an elusive goal among those whose employers demand more of their time and effort.

In a study conducted by Vanson Bourne and commissioned by Nominet, researchers interviewed 400 CISOs and 400 C-suite executives to learn more about the toll of continued stress on the mental health and personal lives of security leaders, who have increasingly reported poor work-life balance and little board-level support. They discovered most (88%) CISOs they surveyed are moderately or tremendously stressed, slightly down from 91% in 2019.

Nearly half (48%) of CISOs say work stress has had a detrimental effect on their mental health, nearly double the 27% who said the same last year. Thirty-one percent report the stress has affected their physical health, 40% say it has affected relationships with partners and children, and almost one-third say it has affected their ability to do their jobs. Ninety percent of CISOs would take a pay cut if it meant they could have a more even work-life balance.

There is no single source of CISOs’ stress, but excessive hours are a major factor. Almost all CISO respondents (95%) work more hours than contracted, with an average of 10 extra hours per week. Eighty-seven percent say their employers expect them to work additional hours. Only 2% of CISOs say they can “switch off” when they leave the office, and 83% report they spend at least half of their evenings and weekends thinking about their jobs.

“At my level, at even more junior levels, there’s an expectation that we’re always on,” says Nominet vice president of cybersecurity Stuart Reed. “There is this notion of never really switching off for any long period of time.” All of these extra hours add up: Ten extra hours of work each week amounts to $30,319 in extra time CISOs give their organizations each year.
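
Those figures can be sanity-checked with a little arithmetic. Here is a minimal sketch in Python, assuming a 40-hour contracted week and a 52-week year — neither assumption comes from the study, so the result is illustrative rather than exact:

    # Back-of-the-envelope check of the study's figures. The 40-hour week
    # and 52-week year are assumptions; the study presumably used actual
    # reported salaries, so this only approximates its $30,319 figure.
    pay_cut = 9_642        # average pay cut CISOs would accept, in USD
    cut_fraction = 0.0776  # ...which is 7.76% of salary

    avg_salary = pay_cut / cut_fraction     # implied average salary: ~$124,253
    hourly_rate = avg_salary / (40 * 52)    # implied hourly rate: ~$59.74
    unpaid_value = hourly_rate * (10 * 52)  # 10 extra hours/week, every week

    print(f"Implied average salary: ${avg_salary:,.0f}")
    print(f"Value of unpaid hours:  ${unpaid_value:,.0f}")  # ~$31,063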

Security leaders are expected to wear many hats during those hours. “CISOs are very much expected to be experts not just from a technical perspective, but being able to translate those technical concepts into the business risk or business strategy concepts,” Reed says. “The very blended nature of their role means they are potentially taking on the responsibility of more than one person’s job.”

It’s impossible to decouple CISOs’ stress from the evolving threat landscape. Mainstream news coverage of major cyberattacks puts an ever-growing spotlight on the CISO, explains Gary Foote, CIO of the Haas Formula One racing team, who also handles security for his employer. As soon as an organization gets media attention for a data breach, it escalates to the board level.

“That gets their attention, and they’re going down to the CISO and saying, ‘You have to make sure this doesn’t happen to us,'” Foote says. “A good amount of C-suite executives will see an attack as inevitable, but there will always be a significant portion that don’t.” Nominet’s study found 24% of CISOs report their boards don’t view security breaches as inevitable.

Bonding with the Board
Researchers discovered a telling gap between CISOs and the C-suite when it comes to CISO responsibilities and expectations. The board does take cybersecurity seriously – 47% say it’s a “great” concern – and 74% say their security teams are moderately or tremendously stressed.

The C-suite may recognize the importance of cybersecurity and appreciate CISOs’ stress, but that doesn’t translate into greater CISO support. Just about all (97%) of C-suite respondents say the security team could improve on delivering value for the budget it receives. In other words, despite the additional hours CISOs already work, the C-suite thinks they should be doing more.

Demonstrating return on investment has long been a challenge for security teams. A low investment in cybersecurity could result in zero incidents; a high investment may still result in a breach. It’s difficult to prove return on investment when the measure of success is a breach that doesn’t happen. The challenge, says Foote, is trying to relay this to a corporate board.

Both CISOs (37%) and the C-suite (31%) say the CISO is ultimately responsible for responding to a data breach. Nearly 30% of CISOs say the executive team would fire the responsible party in the event of a breach; 31% of C-suite respondents confirmed this. Twenty percent of CISOs say they would be fired whether or not they were responsible for the incident.


Kelly Sheridan is the Staff Editor at Dark Reading, where she focuses on cybersecurity news and analysis. She is a business technology journalist who previously reported for InformationWeek, where she covered Microsoft, and Insurance Technology, where she covered financial …

Article source: https://www.darkreading.com/risk/90--of-cisos-would-cut-pay-for-better-work-life-balance/d/d-id/1336995?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Researchers Reveal How Smart Lightbulbs Can Be Hacked to Attack

New exploit builds on previous research involving Philips Hue Smart Bulbs.

Most people installing smart lightbulbs in their homes or offices are unlikely to see the devices as providing a potential entry point for cybercriminals into their networks. But new research from Check Point has uncovered precisely that possibility.

In a report released this week, researchers described how attackers could break into a home or office network and install malware by exploiting a security flaw in a communication protocol used by Philips Hue smart bulbs on the network.

“From our perspective, the main takeaway from this research is emphasizing that IoT devices, even the most simple and mundane ones, could be attacked and taken over by attackers,” says Eyal Itkin, security researcher at Check Point.

Check Point’s exploit builds on previous work from 2017 in which researchers showed how they could take complete control of a large number of Philips Hue smart bulbs—such as those that might be deployed in a modern city—by infecting just one of them. Philips has since addressed the vulnerability that allowed malware to propagate from one infected smart bulb to the next.

But another implementation issue, which allows attackers to take control of a Philips Hue smart bulb and install malware on it via an over-the-air firmware update, has not been fixed. Check Point researchers found that by exploiting that issue—and another security vulnerability they discovered in the Zigbee implementation of the Philips Hue smart-bulb control bridge (CVE-2020-6007)—they could launch attacks on the network to which the bridge is connected.

Zigbee is a widely used smart-home protocol. Multiple other smart-home products support it, including Amazon Echo, Samsung SmartThings, and Belkin WeMo. With Philips Hue smart bulbs, the bridge uses Zigbee to communicate with and control the bulbs. Some other smart bulbs don’t require a bridge at all: they operate over Bluetooth or Wi-Fi, or are managed directly through a Zigbee-capable digital assistant.

“The attack grants the attacker access to the computer network to which the bridge is connected,” Itkin says.

In a home scenario, an attacker could use the exploit to spread malware or to spy on home computers and other connected devices. “In an office environment, it would probably be the first step in an attempt to attack the organization, steal documents from it, or prepare a dedicated ransomware attack on sensitive servers inside the network,” he says.

In Check Point’s attack, the researchers first took control of a Philips Hue lightbulb, using the previously discovered vulnerability from 2017, and installed malicious firmware on it. They then demonstrated how an attacker could manipulate the lightbulb—by constantly changing its colors and brightness, for instance—to get users to delete the errant bulb from their app and reset it.

When the control bridge rediscovers the bulb and the user adds it back to their network, the malicious firmware exploits the Zigbee protocol vulnerability to install malware on the bridge. The malware then connects back to the attacker, who can use a known exploit—like EternalBlue—to infiltrate the target network from the bridge, Check Point said.

Complex But Exploitable Flaw

The exploit only works if a user deletes a compromised bulb and instructs the control bridge to re-discover it: “Without the user issuing a command to search for new lightbulbs, the bridge won’t be accessible to our now-owned lightbulb, and we won’t be able to launch the attack,” Itkin says.

Specifically, the vulnerability Check Point discovered is only accessible when the bridge is adding or commissioning a new lightbulb to the network, he says.

The vulnerability that Check Point discovered is rated as “complex” to exploit because of the tight constraints in the Zigbee protocol around message sizes and timing. An attacker must be relatively close to the target network in order to take initial control of a bulb.

The 2017 research showed how attackers could take control of a user’s Philips Hue smart lightbulb from over 1,300 feet (400m). If launched from a distance, the attack requires a directed antenna and sensitive receiving equipment to intercept Zigbee messages between the bulb and control bridge, Itkin says. “In a classic scenario, the attack could be performed from a van that parks down the street.”

Check Point notified Philips and Signify, which owns the Hue brand, about the threat it found in November 2019. Signify has issued a patch for the flaw, which is now available on its site. “The Philips Hue Bridge has automatic updates by default and the firmware should be downloaded and installed automatically,” Itkin notes. Users should also check the mobile app and verify that the firmware version has been updated to 1935144040, he says.
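
For readers who want to check, the bridge reports its firmware version over its local, unauthenticated config endpoint. A minimal sketch in Python — the bridge IP address is a placeholder, and treat the exact response fields as our assumption rather than vendor guidance:

    # Hedged sketch: query a Hue bridge's local config endpoint and compare
    # the reported "swversion" against the patched firmware release. The IP
    # address below is a placeholder for your own bridge.
    import requests

    BRIDGE_IP = "192.168.1.2"   # replace with your bridge's address
    PATCHED = 1935144040        # firmware version carrying Signify's fix

    config = requests.get(f"http://{BRIDGE_IP}/api/config", timeout=5).json()
    installed = int(config["swversion"])

    if installed >= PATCHED:
        print(f"Bridge firmware {installed}: patched")
    else:
        print(f"Bridge firmware {installed}: update needed (< {PATCHED})")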

Pavel Novikov, head of the telecom security research team at Positive Technologies, says security in the Zigbee protocol is implemented via mandatory encryption. But when a device is connected to the Zigbee hub for the first time, there is a moment when encryption is not used, and the device and network are vulnerable to interception.

“Unfortunately, this architectural vulnerability cannot be fixed,” he says. All users can do is be aware of it and pay attention when devices are paired. “If your device has dropped out of the network, don’t rush to bind it again, because this could be the start of a hacker attack.”

For enterprise organizations, Check Point’s research is another example of how IoT is continuing to expand the attack surface, said Mike Riemer, global chief security architect at Pulse Secure. “Many IoT devices have open default settings and require configuration and patch hygiene,” he said. Organizations need to implement a Zero Trust approach to security and ensure that all connected devices are visible, verified, properly monitored, and segregated, he said.


Jai Vijayan is a seasoned technology reporter with over 20 years of experience in IT trade journalism. He was most recently a Senior Editor at Computerworld, where he covered information security and data privacy issues for the publication. Over the course of his 20-year …

Article source: https://www.darkreading.com/iot/researchers-reveal-how-smart-lightbulbs-can-be-hacked-to-attack/d/d-id/1336993?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Twitter bans deepfakes, but only those ‘likely to cause harm’

On Tuesday, Twitter rolled out its plans to handle deepfakes and other forms of disinformation.

Namely, starting on 5 March, “synthetic or manipulated” media that could cause harm will be banned. Harmful media includes content that threatens people’s physical safety; risks mass violence or widespread civil unrest; or stifles free expression or participation in civic events by individuals or groups, including through stalking, targeted content that tries to silence someone, and voter suppression or intimidation.

Twitter also says it may label non-malicious, non-threatening disinformation in order to provide more context.

Among the criteria it will use to determine whether media have been “significantly and deceptively altered or fabricated” are these factors:

  • Whether the content has been substantially edited in a manner that fundamentally alters its composition, sequence, timing, or framing;
  • Whether any visual or auditory information (such as new video frames, overdubbed audio, or modified subtitles) has been added or removed; and
  • Whether media depicting a real person has been fabricated or simulated.

Twitter says it will also consider whether media’s context could confuse people, lead to misunderstandings, or suggest a deliberate intent to deceive people about the nature or origin of the content: for example, by falsely claiming that it depicts reality.

‘Drunken’ Pelosi video would get a label

In a call with reporters on Tuesday, Twitter’s head of site integrity, Yoel Roth, said that Twitter’s focus under the new policy is “to look at the outcome, not how it was achieved.” That’s in stark contrast to Facebook, which sparked outrage when it announced its own deepfakes policy a month ago.

For Facebook, it’s all about the techniques, not the end result. Namely, Facebook banned some doctored videos, but only the ones made with fancy-schmancy technologies, such as artificial intelligence (AI), in a way that an average person wouldn’t easily spot.

What Facebook’s new policy doesn’t cover: videos made with simple video-editing software, or what disinformation researchers call “cheapfakes” or “shallowfakes.”

Given the latitude Facebook’s new deepfakes policy gives to satire, parody, or videos altered with simple/cheapo technologies, some pretty infamous, and widely shared, cheapfakes will be given a pass and left on the platform.

That means that a video that, say, got slowed down by 75% – as was the one that made House Speaker Nancy Pelosi look drunk or ill – passes muster.

When Facebook announced its deepfake policy, it confirmed to Reuters that the shallowfake Pelosi video wasn’t going anywhere. In spite of the thrashing critics gave Facebook for refusing to delete the video – which went viral after being posted in May 2019 – Facebook said in a statement that it didn’t meet the standards of the new policy, since it wasn’t created with AI:

The doctored video of Speaker Pelosi does not meet the standards of this policy and would not be removed. Only videos generated by artificial intelligence to depict people saying fictional things will be taken down.

Facebook said it would label the video as false, but that it would not take it down.

Under its new policy, Twitter will similarly apply a “false” warning label to any photos or videos that have been “significantly and deceptively altered or fabricated,” although it won’t differentiate between the technologies used to manipulate a piece of media. Deepfake, shallowfake, cheapfake: they’re all liable to be labelled, regardless of the sophistication (or lack thereof) of the tools used to create them.

In the call with reporters on Tuesday, Roth said that Twitter would generally apply a warning label to the Pelosi video under the new approach, but added that the content could be removed if the text in the tweet or other contextual signals suggested it was likely to cause harm.

How will it sniff out boloney?

Twitter hasn’t specified what technologies it’s planning to use to ferret out manipulated media. As Reuters reports, Roth and Del Harvey, the company’s vice president of trust and safety, said during the call that Twitter will consider user reports and that it will reach out to “third party experts” for help in identifying edited content.

We don’t know if Twitter’s going to go down this route, but help in detecting fakery may not be far off: also on Tuesday, Alphabet’s Jigsaw subsidiary unveiled a tool to help journalists spot doctored images.

According to a blog post by Jigsaw CEO and founder Jared Cohen, the free tool, called Assembler, is going to pull together a number of image manipulation detectors from various academics that are already in use.

Each of those detectors is designed to spot specific types of manipulation, including copy-paste or tweaks to image brightness.

Jigsaw, a Google company that works on cutting-edge technology, also built two new detectors to test on the Assembler platform. One, the StyleGAN detector, is designed to detect deepfakes. It uses machine learning to differentiate between images of real people and deepfake images produced by StyleGAN, a type of generative adversarial network (GAN) used to create deepfakes.

Jigsaw’s second detector tool – the ensemble model – is trained using combined signals from each of the individual detectors, allowing it to analyze an image for multiple types of manipulation simultaneously. Cohen said in his post that because it can identify multiple image manipulation types, the results of the ensemble model are, on average, more accurate than what’s produced by any individual detector.
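
Jigsaw hasn’t published Assembler’s internals, so the following Python sketch is only an illustration of the general ensemble idea Cohen describes: treat each individual detector’s score as a feature and train a single classifier on top of them. All detector names and scores here are made up.

    # Illustrative ensemble over per-detector scores (hypothetical data).
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Columns: copy-paste detector, brightness detector, StyleGAN detector.
    # Rows: training images. Labels: 1 = manipulated, 0 = authentic.
    scores = np.array([
        [0.91, 0.10, 0.05],
        [0.12, 0.88, 0.07],
        [0.08, 0.09, 0.95],
        [0.05, 0.11, 0.04],
        [0.10, 0.06, 0.09],
        [0.85, 0.80, 0.12],
    ])
    labels = np.array([1, 1, 1, 0, 0, 1])

    ensemble = LogisticRegression().fit(scores, labels)

    # The ensemble weighs several manipulation signals at once, which is
    # why its output can beat any single detector on average.
    new_image = np.array([[0.70, 0.65, 0.10]])
    print(ensemble.predict_proba(new_image)[0, 1])  # P(image manipulated)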

Assembler is now being tested in newsrooms and fact-checking organizations around the globe, including Agence France-Presse, Animal Politico, Code for Africa, Les Décodeurs du Monde, and Rappler.

Jigsaw isn’t planning to release the tool to the public.

Cohen said that disinformation is a complex problem, and there’s no silver bullet to kill it.

We observed an evolution in how disinformation was being used to manipulate elections, wage war and disrupt civil society. But as the tactics of disinformation were evolving, so too were the technologies used to detect and ultimately stop disinformation.

Jigsaw plans to keep working at it, though, and to share its findings over the coming months. In fact, it’s introduced a new way to share the insights from its team of researchers, engineers, designers, policy experts, and creative thinkers: a research publication called The Current, which takes an interdisciplinary approach to complex problems such as disinformation.

Here’s The Current’s first issue: a deep dive on disinformation campaigns.



Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/6IJPjMtC2sA/

Update now – WhatsApp flaw gave attackers access to local files

Does WhatsApp have a lot of vulnerabilities or are there simply a lot of people looking for them?

Ask PerimeterX researcher Gal Weizman, who last year set about poking the world’s most popular messaging platform to see whether he could turn up any new weaknesses.

Sure enough, this week we learned that he uncovered a clutch of vulnerabilities that led him to a tasty cross-site scripting (XSS) flaw affecting WhatsApp desktop for Windows and macOS when paired with WhatsApp for iPhone.

Patched this week as CVE-2019-18426, it’s the sort of weakness iPhone WhatsApp desktop users will be glad to see the back of.

The immediate problem was caused by a gap in WhatsApp’s Content Security Policy (CSP), a security layer used to protect against common types of attack, including XSS.

Using modified JavaScript in a specially crafted message, an attacker could exploit this to feed victims phishing and malware links in weblink previews in ways that would be invisible to the victim.

According to Weizman, this is probably remotely exploitable, although users would still need to click on the link for an attack to succeed.

However, it could also be used to gain read permission to the local file system – that is, the ability to access and open files – and, potentially, to achieve remote code execution (RCE).

Game over

An underlying problem is that WhatsApp desktop is built on the cross-platform Electron framework, which bundles older versions of Google’s Chromium. This is a convenient way to develop web applications that also work on desktop computers. But, as PerimeterX’s summary of the research says, these are:

Susceptible to these code injections, although newer versions of Google Chrome have protections against such JavaScript modifications. Other browsers such as Safari are still wide open to these vulnerabilities.

Even so, better rules in the software’s CSP would have mitigated much of the XSS, as would updating Electron, said Weizman:

When Chromium is being updated, your Electron-based app must get updated as well, otherwise you leave your users vulnerable to serious exploits for no good reason!
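
For illustration only — this is not WhatsApp’s actual policy — here is the kind of restrictive CSP Weizman is talking about, attached to the responses of a minimal Flask app. A policy like this refuses inline and third-party scripts, which blunts most XSS payloads:

    # Hedged example: a strict Content-Security-Policy header. Directive
    # values are illustrative, not WhatsApp's real configuration.
    from flask import Flask

    app = Flask(__name__)

    @app.after_request
    def set_csp(response):
        response.headers["Content-Security-Policy"] = (
            "default-src 'self'; "  # nothing loads from other origins
            "script-src 'self'; "   # no inline or injected scripts run
            "object-src 'none'; "   # no plugin-style embeds
            "base-uri 'self'"       # blocks <base>-tag redirection tricks
        )
        return response

    @app.route("/")
    def index():
        return "hello"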

Affected are versions of WhatsApp Desktop prior to v0.3.9309 when paired with WhatsApp for iPhone versions prior to 2.20.10.

It’s not the first time WhatsApp’s required a patch to fix its security. Recent incidents have included an MP4 flaw that could have led to an RCE, and another involving malicious GIFs with the same effect on Android.

Last May, a severe WhatsApp zero-day was being exploited by a nation state group to attempt to install spyware on targets simply by phoning them. In 2018, Google researchers revealed a flaw that could have compromised a device, again via a simple call.

Arguably, the problem here isn’t WhatsApp itself but the complex nature of modern messaging applications, coupled with the willingness of researchers (and malicious actors) to hunt for flaws in the world’s number one communications app.

For all of WhatsApp’s much-vaunted security features, attackers have a strong incentive to look inside the app’s guts for security holes that could undermine them. If you’re a WhatsApp user, remember that this won’t change soon.



Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/z97jnHoyCbQ/

S2 Ep25: You’ve seen WHAT on public Trello boards? – Naked Security Podcast

Over the past couple of years, Sophos’ Director of Security Craig Jones has discovered a worrying amount of personal data on public Trello boards. Mark says companies shouldn’t microchip their employees and Duck discusses a bug that could have blown a hole in OpenSMTPD.

Host Anna Brading is joined by Sophos experts Paul Ducklin and Mark Stockley, and special guest Craig Jones.

Listen now!


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/BS3Xx4OqI0A/

Researchers reckon 500k PCs infested with malware after dodgy downloads install even more nasties from Bitbucket

We don’t know who needs to hear this, but don’t download cracked commercial software. Researchers claim more than 500,000 PCs have been left wriggling with malware after a cracked app went on to retrieve further nasties from Bitbucket repos.

Security company Cybereason has studied a campaign to deliver “an arsenal of malware” including credential stealers, cryptocurrency miners, ransomware and crypto-coin pinchers.

“It is also able to take pictures using the camera [and] take screenshots,” wrote researchers Lior Rochberger and Assaf Dahan.

How this stuff was managed and coordinated without bringing the user’s machine to a standstill is not specifically mentioned, but the duo added that “the combination of so many different types of malware exfiltrating so many different types of data can leave organisations unworkable”.

Users generally start their journey to hell, according to the paper, by “downloading a cracked version of commercial software like Adobe Photoshop, Microsoft Office, and others”. There is an insatiable appetite for free versions of expensive software, it seems, and search engines are happy to help. We searched Bing for “Download Adobe” and right at the top of the page were videos with guides to illegal downloads; no, we did not test these for malware but it would not be surprising if they came with some unwanted extras.

[Diagram: How malware proliferates by downloading from Bitbucket repositories]

Rochberger and Dahan reckon that some such downloads create a connection to Bitbucket repositories to install “additional payloads”. Bitbucket is a code-management platform from Atlassian. There is no suggestion that Bitbucket itself has any specific vulnerabilities, but the claim is that serving malware from legitimate sites such as this – or others like GitHub, Dropbox and Google Drive – makes it harder for security software to detect. In addition, the researchers said the repositories are “updated almost constantly by the threat actor” in order to evade antivirus signature lists.

As is common, there is a marketing element to the report, with the researchers recommending an “iterative security process” to defend against this kind of attack.

Despite the researchers’ “Hole in the bucket” headline, the real story here is the risks inherent in users trying to get commercial software for free. Atlassian was quick to remove the malicious repositories reported to them, but the scale of services like this is such that preventing further occurrences is likely to be unrealistic. ®


Article source: https://go.theregister.co.uk/feed/www.theregister.co.uk/2020/02/06/500k_pcs_infected_with_malware_delivered_via_cracked_commercial_software_and_bitbucket_repositories/

What Is a Privileged Access Workstation (PAW)?

Ask the Experts — about a technological game of keep-away that protects the most precious resources from the greatest dangers.

Question: What is a privileged access workstation? And how does a PAW work?

Tal Zamir, co-founder and CEO of Hysolate: Workstations used by privileged users can easily become an attacker’s shortcut into the heart of the enterprise. One best practice for protecting privileged user devices is providing each such user a dedicated operating system that is exclusively used for privileged access — a concept known as privileged access workstations (PAW).

Privileged access workstations are the actual devices people are using when they access those privileged accounts. Microsoft recommends that users access privileged accounts from a dedicated device or operating system that is only used for privileged activities.

Privileged access management refers to tools that manage privileged access (password vaults, access controls, privileged access monitoring, etc.). These solutions lock down who has access to privileged accounts, how long they have access, what they can do with that access, etc. 

So to bring them together, the best practice is for a user to have a dedicated workstation (privileged access workstation) for privileged use. Upon logging into that workstation, the user would access privileged accounts through a privileged access management platform that would manage all of the access rights.

This dedicated workstation or OS mustn’t be used for Web browsing, email, or other risky apps, and it should have strict app whitelisting. It shouldn’t connect to risky external Wi-Fi networks or to external USB devices. Privileged servers must not accept connections from a non-privileged OS.

You must also keep the user’s experience in mind. To avoid forcing users to use two separate laptops, consider leveraging virtualization technologies (e.g., VirtualBox/Hyper-V) that allow a single laptop to run two isolated operating systems side-by-side, one for productivity and one for privileged access. Also consider solutions dedicated to the concept of PAW.


The Edge is Dark Reading’s home for features, threat data and in-depth perspectives on cybersecurity.

Article source: https://www.darkreading.com/edge/theedge/what-is-a-privileged-access-workstation-(paw)/b/d-id/1336944?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

How Can We Make Election Technology Secure?

In Iowa this week, a smartphone app for reporting presidential caucus results debuted. It did not go well.

November 5, 2019: In Northampton County, Pennsylvania, a candidate for judge, Abe Kassis, came up with just 164 votes out of 55,000 cast — a statistical absurdity. After hand scanning the ballots from the county’s new ES&S ExpressVote XL machines, Kassis emerged as the winner. This was no case of “disinformation warfare” — so what happened?

February 3, 2020: In Iowa, a smartphone app for reporting caucus results debuted. It did not go well.

Our elections have been menaced by social media deception, voter registration scandals, conspiracy theories, and polarization of the electorate. While these issues must be confronted, we can’t ignore the growing threat posed by security gaps in the election equipment that records, counts, and transmits votes.

Even if we could solve the complex social engineering problems, we should all ask, “How secure are the physical machines being used in the 2020 American elections?” 

Vulnerability of Election Technology
Let’s start with some common problems presented by modern-day election machines.

  • Single point of failure. A compromise or malfunction of election technology could decide a presidential election.
  • Between elections. Election devices might be compromised while they are stored between elections.
  • Corrupt updates. Any pathway for installing new software in voting machines before each election, including USB ports, may allow corrupt updates to render the system untrustworthy.
  • Weak system design. Without clear guidelines and thorough, expert evaluation, the election system is likely susceptible to many expected and unexpected attacks.
  • Misplaced trust. Technology is not a magic bullet. Even voting equipment from leading brands has delivered wildly wrong results in real elections. Election administrators need to safeguard the election without relying too heavily on third parties or technologies they don’t control.

It takes a lot of work to lock down a complex voting system to the point where you’d bet the children’s college fund — or the future of society — on its safety. Has that work been done? Not entirely, as shown by these not-so-fun facts about election devices in the US.

  • Many voting machines are 10 to 20 years old.
  • Voting machine manufacturers are not subject to any federally mandated security standards.
  • Federal testing standards have not been updated since 2005, when few machines were digital.
  • Many voting systems connect to the Internet or have open USB ports.
  • Some newer voting machines have failed to record voter choices correctly and have features that actually defeat accuracy tests.

A Quick Look at Modern Voting Systems

[Figure: Simplified view of the chain of voting devices. Graphic by Ives Brant, TrustiPhi]

There are — in the typical case — four classes of election machines. The chain usually begins with the device where each election’s new ballot is designed, typically as a set of instructions to the legions of voting machines, which print out each voter’s ballot.

After voters make their choices, each ballot is printed and then placed in a scanner, which reads the bar code or QR code on the ballot and sends the results upstream. Finally, the scanners send their results to a tabulator, which is usually situated outside the polling place at a central location.

Critical Security Needs
A few years ago, there was much excitement about ditching paper ballots in favor of new, digital-only voting machines. The deficiencies of paperless voting became evident, and in 2019 Congress passed the SAFE Act, which mandates the use of paper as a backup. Many counties and states have recently purchased new voting machines, and some of these products have security gaps.

In evaluating election technology, officials need to consider these critical security needs:

Every ballot on paper. Digital-only election systems make it hard to detect when counted votes don’t match what the voters selected.

Paper ballots must match digital results. Many voters do not scrutinize their paper ballot to be certain all their choices show up correctly. Some voting machines print the paper ballot too faintly to verify easily. Incorrect ballots (whether intentional or accidental) could go unnoticed if the distortion of results is subtle. If the paper ballot appears correct but the scannable code does not match, a hand recount of all ballots would reveal the hack.

Stop unauthorized device modification. Many election committees use tamper-proof seals to protect hardware in storage. That’s not foolproof, according to Professor Steve Bellovin of Columbia University, who told us, “Even the seals used on nuclear devices can be non-destructively removed and replaced.”

Check every election device. We asked J. Alex Halderman, a noted election cybersecurity expert, if election machines are checked when they are brought out of mothballs. He answered, “To my knowledge, no state has done rigorous forensics on their voting machines to see if they have been compromised.”

What About Judge Kassis and His 164 Votes?
In that 2019 Pennsylvania election with preposterous results, 30% of the touchscreens were deemed “misconfigured,” but there’s no explanation of how the eventual winner was credited with just one-third of 1% of the votes cast in over 100 precincts. Possible causes include defective ballot design, scanning bugs, and/or errors in final tabulation. The county’s election board issued a no-confidence vote in the machines, but time is too short to replace them. The same machines will be used in other cities and key swing districts that will affect the outcome of the 2020 presidential election.

Ives Brant, former editor in chief of Tornado Insider and Oracle Integrator magazines, and head of marketing at TrustiPhi, also contributed to this article.

Part 2 in this series: 5 Measures to Harden Election Technology


Ari Singer, CTO at TrustiPhi and long-time security architect with over 20 years in the trusted computing space, is former chair of the IEEE P1363 working group and acted as security editor/author of IEEE 802.15.3, IEEE 802.15.4, and EESS #1. He chaired the Trusted Computing …

Article source: https://www.darkreading.com/risk/how-can-we-make-election-technology-secure/a/d-id/1336975?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Invisible Pixel Patterns Can Communicate Data Covertly

University researchers show that changing the brightness of monitor pixels can communicate data from air-gapped systems in a way not visible to human eyes.

Computers disconnected from the Internet can still be used to transmit information by using slight changes to pixels on the screen that are otherwise not visible to humans, a team of researchers from Ben-Gurion University of the Negev (BGU) and Shamoon College of Engineering stated in a paper published on February 4.

The research project, called BRIGHTNESS, assumes that an attacker wants to exfiltrate data from a compromised machine not connected to any network and uses changes in the red values of a collection of pixels to communicate information to any video camera in the vicinity. Such display-to-camera (D2C) communication is a subject of study among academic cybersecurity researchers, but creating a system that is not perceptible to humans is novel.

The groups that have to worry about such threats are not just limited to government facilities, says Mordechai Guri, the head of research and development at BGU’s Cyber-Security Research Center and one of the authors of the paper.

“The attack is practical in certain scenarios,” he says. “In the finance sector, for example, exfiltrating cryptocurrencies’ private keys — which is equal to own[ing] the wallet — from a secure, isolated computer that signs the transactions” is one possible scenario.

Attacks against highly secure systems not connected to a network — known as air-gapped systems — have been a topic of both study and practical attacks for more than two decades. Attacks using information gleaned from electromagnetic emanations, often referred to as TEMPEST attacks, date back to the 1990s and even, by some accounts, to precomputer times.

Monitor screens, hard-drive activity LEDs, network-activity LEDs, and keyboard clicks have all been used to steal information, and in some cases, create a covert communications channel. In 2016, for example, researchers from Tel Aviv University were able to extract the decryption key from a laptop using its emanations. Other attackers have used heat from one system to communicate with another.

In the latest project, the BGU researchers found that, by adjusting the red component of a set of pixels by 3%, they could achieve bit rates of between 5 and 10 bits per second, depending on the distance the camera was from the monitor. In addition, two cameras — a security camera and a webcam — had similar performance, but a smartphone camera could only extract an average of 1 bit per second, according to the report.

Theoretically, the techniques could extract tens of bits per second, Guri says.

“The maximal bit-rate may reach 30 bits/sec [or] more, if more advanced modulation methods are used,” he says. For example, an attacker could “use more than 2 brightness levels and more than 1 color.”
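
The paper’s authors haven’t released code, but the modulation scheme is simple enough to sketch. A hedged reconstruction in Python — the 3% red-channel delta and frame-per-bit pacing follow the description above; everything else is our assumption:

    # Conceptual sketch of a BRIGHTNESS-style covert channel: encode each
    # bit by nudging the red channel of a frame by ~3%, invisible to the
    # eye but recoverable by a camera watching the screen.
    import numpy as np

    def modulate(frame: np.ndarray, bit: int, delta: float = 0.03) -> np.ndarray:
        """Return a copy of an RGB frame; red channel scaled up when bit=1."""
        out = frame.astype(np.float32)
        if bit:
            out[..., 0] *= 1.0 + delta  # +3% red encodes a 1
        return np.clip(out, 0, 255).astype(np.uint8)

    # One modulated frame per bit keeps the rate in the single digits of
    # bits per second, consistent with the 5-10 bps the researchers report.
    base = np.full((1080, 1920, 3), 128, dtype=np.uint8)
    frames = [modulate(base, b) for b in (1, 0, 1, 1, 0)]
    print(frames[0][0, 0], frames[1][0, 0])  # [131 128 128] vs [128 128 128]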

Are the changes truly invisible to the human eye? The researchers conducted the experiment in a controlled level of ambient lighting and waited until the subjects adapted to the light level. In addition, the frequency at which a blinking image appears to be a steady-state image — a threshold known as the critical fusion frequency (CFF) — varies depending on the ambient lighting, the researchers said.

“The sensitivity of the visual system gradually adapts as one moves from a darker or brighter environment,” the researchers wrote, adding that “particularly with low levels of illumination, increasing the duration can increase the likelihood that the stimulus [blinking image] will be detected.”

The prerequisite that an air-gapped computer be already compromised is not that rare, Tal Zamir, founder and chief technology officer of Hysolate, a maker of endpoint-security solutions, said in a statement.

“This is not uncommon, as one of the challenges with physically air-gapped solutions is the inability for the user to be productive, and many times, they look for workarounds in order to get their tasks completed — and there lies the introduction of risk into the environment,” he said. “Security and productivity have always been seen as a constant balancing act, where the traditional mindset believes that in order for one to thrive the other must suffer.”

Moreover, while the attack is mainly a worry for super-secure facilities that have sensitive or top-secret data on air-gapped systems, the attack could also be used to avoid communicating data over, for example, a heavily monitored network.

Yet, for most companies, hiding covert data in network packets is a far more likely way to secretly communicate, Guri says.

“The traditional network-based covert channels are the issue to watch today,” he says. “Finding hidden information within Internet protocols, SSL, HTTPS, emails, and so on, is a challenge by itself.”


Veteran technology journalist of more than 20 years. Former research engineer. Written for more than two dozen publications, including CNET News.com, Dark Reading, MIT’s Technology Review, Popular Science, and Wired News. Five awards for journalism, including Best Deadline …

Article source: https://www.darkreading.com/vulnerabilities---threats/invisible-pixel-patterns-can-communicate-data-covertly/d/d-id/1336987?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

A Matter of Trust

Has working in the cybersecurity industry affected your ability to trust? Take the poll now.

The Edge is Dark Reading’s home for features, threat data and in-depth perspectives on cybersecurity.

Article source: https://www.darkreading.com/edge/theedge/a-matter-of-trust/b/d-id/1336989?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple