
5 Measures to Harden Election Technology

Voting machinery needs hardware-level security. The stakes could not be higher, and the attackers are among the world’s most capable.

Part 2 of a two-part series.

The Iowa caucus isn’t the first time that election technology failed spectacularly. As the New York Times reported, a November 2019 election in Northampton County, Pennsylvania, made history by being so lopsided that nobody believed the results. The actual winner (after a count of the paper ballots) was initially credited with just 164 out of 55,000-odd votes in the electronic tally. It’s still unclear whether the cause was a defect in voting hardware or software, or the result of a hack.

In Part 1 of this series, we looked at common vulnerabilities of voting machines, scanners, and the overall voting system. In Part 2, we examine five concrete measures to make our election technology a harder target.

Measure 1: Use single-purpose systems. Less complexity means better security. Voting machines should be purpose-built, capable of filling out ballots but nothing else. They should support two key functions: voting and secure device management. They should employ a secure boot process that loads either an OS and voting application or an environment that allows secure, verified updates. All commercial off-the-shelf operating systems and software should be locked down to prevent access to physical interfaces (e.g., USB), network connections, and other interfaces.
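As an illustration of the verified-boot idea, here is a minimal sketch (not any vendor's actual implementation) in which the bootloader refuses to run any image whose signature fails to verify against a vendor key provisioned in hardware. It uses the pyca/cryptography library; all names and the image contents are hypothetical:

```python
# Minimal sketch of a secure-boot style check: only hand control to an
# OS/voting-app image whose signature verifies against a vendor public key.
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey, Ed25519PublicKey,
)
from cryptography.exceptions import InvalidSignature

# In a real device the public key would be burned into ROM/fuses at the
# factory; here we generate a stand-in key pair for the demonstration.
vendor_key = Ed25519PrivateKey.generate()
VENDOR_PUBLIC_KEY = vendor_key.public_key()

firmware_image = b"voting-app v1.2 ..."      # hypothetical image to be booted
signature = vendor_key.sign(firmware_image)  # produced at build time, shipped with image

def secure_boot(image: bytes, sig: bytes, pubkey: Ed25519PublicKey) -> bool:
    """Return True only if the image is exactly what the vendor signed."""
    try:
        pubkey.verify(sig, image)
        return True
    except InvalidSignature:
        return False

if secure_boot(firmware_image, signature, VENDOR_PUBLIC_KEY):
    print("image verified: booting voting application")
else:
    print("verification failed: refusing to boot")
```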

Measure 2: Build in defense-in-depth. Manufacturers — of all endpoints, not just voting devices — now recognize that redundancy and multiple layers of security are needed. So-called defense in depth helps make security infrastructure much more difficult to attack because it removes single points of failure.

Measure 3: Limit privileges. A critical, often overlooked security tool is to minimize privileges. This includes system users, software developers, and hardware vendors. Election officials should be able to verify the entire system and ensure that no vendors, employees, or contractors can subvert elections.

Measure 4: Use multiple counting systems and cross-checks. Election officials and voters need multiple ways to verify the election. Election equipment should provide both digital audit trails and a physical, human-verifiable paper ballot. If, for instance, the voting machine reports its own total vote tally, the voter is given a paper ballot to check before submitting it to the tallying system, and the tallying system reports the totals from each voting machine, then the election administrator can compare three independent pieces of data (the physical ballots, the voting machine totals, and the tally system totals). Having the voter double-check the physical ballot helps ensure the votes are counted as the voters intended.
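A minimal sketch of that three-way cross-check, with hypothetical names and made-up numbers; any disagreement flags the race for a hand audit:

```python
# Illustrative cross-check of three independent tallies: paper ballots,
# voting-machine totals, and the central tally system.
def cross_check(paper: dict, machine: dict, tally: dict) -> list:
    """Return the candidates whose counts disagree across the three sources."""
    mismatches = []
    for candidate in paper:
        counts = {paper[candidate], machine.get(candidate), tally.get(candidate)}
        if len(counts) != 1:  # all three sources must report the same number
            mismatches.append(candidate)
    return mismatches

paper_count   = {"A": 26_142, "B": 28_387}
machine_count = {"A": 26_142, "B": 28_387}
tally_count   = {"A": 26_142, "B": 28_389}  # transmission/transcription error

print(cross_check(paper_count, machine_count, tally_count))  # ['B'] -> audit
```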


Measure 5: Layer security measures on election devices. To achieve secure voting, layer in protections against tampering, rogue software, and devices that could insert fake voting results. Require clearly printed paper ballots, and ask every voter to check their ballot carefully before scanning its code. These measures are not foolproof, but together they are difficult to hack through.

Election Hardware Security Basics
Strong hardware-based security in election machines should underpin the solutions described above, built on four foundational capabilities:

  • Authentication
  • Authorization
  • Attestation
  • Resiliency

The good news: These requirements for secure interdevice communication apply generally to all connected devices — and technologies exist now to provide these capabilities.

Authentication: Are you the device you claim to be?
Each election device should provide strong (cryptographic) evidence to confirm its identity as the correct source of its data. Any machine providing critical data such as ballot designs, completed ballots, or tabulated results should be authenticated to verify it is not an imposter.
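One common way to do this is a challenge-response protocol: the verifier sends a fresh random nonce, and the device proves its identity by signing it with a private key held in its secure hardware. A minimal sketch using the pyca/cryptography library (illustrative only, not any vendor's actual protocol):

```python
# Challenge-response device authentication, assuming each election device
# holds a unique key pair whose public half was enrolled at commissioning.
import os
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

device_key = Ed25519PrivateKey.generate()  # lives in the device's secure element
enrolled_pubkey = device_key.public_key()  # recorded when the device is enrolled

# The verifier (e.g., the tabulator) issues a fresh random challenge...
challenge = os.urandom(32)
# ...the device signs it with its private key...
response = device_key.sign(challenge)
# ...and the verifier checks the response against the enrolled public key.
try:
    enrolled_pubkey.verify(response, challenge)
    print("device authenticated")
except InvalidSignature:
    print("imposter: reject this device's data")
```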

Authorization: Does your device have privileges to talk to me?
Only authorized users should be permitted to manage election equipment — that’s a given. In addition, each device should have a defined role in the overall system. A currently authorized voting machine generally is allowed to provide data to a scanner, but only at the same physical polling place. The central tabulator should accept data from scanners but not directly from voting machines.
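That kind of policy can be expressed as a small table of permitted data flows. A hypothetical sketch of the rules just described:

```python
# Role-based data-flow policy matching the example above: voting machines may
# feed scanners at the same polling place; the central tabulator accepts data
# from scanners only. Entirely illustrative.
ALLOWED_FLOWS = {
    ("voting_machine", "scanner"),
    ("scanner", "tabulator"),
}

def may_send(sender_role, receiver_role, sender_site=None, receiver_site=None):
    if (sender_role, receiver_role) not in ALLOWED_FLOWS:
        return False
    # Voting machines may only talk to a scanner at the same polling place.
    if sender_role == "voting_machine" and sender_site != receiver_site:
        return False
    return True

print(may_send("voting_machine", "scanner", "precinct-7", "precinct-7"))  # True
print(may_send("voting_machine", "tabulator"))                            # False
```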

Attestation: How do I know you are not compromised?
Attestation of device integrity is a verification that the sending device has not been compromised. If an election machine has been hacked at the hardware level, or targeted with malware, it should “turn itself in” — or be unable to attest that it is still safe to use.
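At its simplest, attestation compares a measurement (a hash) of the running firmware against a known-good value recorded at certification time. Real schemes, such as TPM quotes, also sign the measurement; this stripped-down sketch shows just the comparison, with hypothetical data:

```python
# Stripped-down sketch of attestation: compare a hash ("measurement") of the
# running firmware against a golden value recorded when it was certified.
import hashlib

certified_firmware = b"voting-app v1.2 ..."  # hypothetical certified image
GOLDEN_MEASUREMENT = hashlib.sha256(certified_firmware).hexdigest()

def attest(running_firmware: bytes) -> bool:
    """True only if the running code matches the certified measurement."""
    return hashlib.sha256(running_firmware).hexdigest() == GOLDEN_MEASUREMENT

print(attest(certified_firmware))                # True: device can prove integrity
print(attest(certified_firmware + b"\x90\x90"))  # False: tampered, pull from service
```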

Hardware resiliency: How quickly can a device recover from attacks?
Resiliency is an increasingly important approach to securing election equipment and the wider Internet of Things. If an infiltrator compromises a device such as a voting machine or scanner, the machine must rebound quickly — or continue to operate in a “safe mode” despite the breach.

An election outcome could be changed merely by knocking a scanner or a few voting machines out of service on the big day. When voters are kept waiting, they might just give up and go home. The election device must return to its functional state quickly. There is no perfect security, so resilience is essential.

Where Hostile Nations Would Attack
Election administrators need to take this seriously. An attack against election hardware such as voter registration systems, or anywhere along the vote and tally chain, could upend an election. Professor Steve Bellovin of Columbia University, an authority on election security, has emphasized the threat of supply chain attacks, noting that “nation-state attackers have the resources to infiltrate manufacturers of election technology and compromise the tabulating machines. Such attacks would scale the best.”

Bellovin is specifically concerned about critical vote-tallying software, which transmits results from each precinct to the county’s election board, and may have links to the news media. “This software is networked and hence subject to attack,” he says. He also worries about the ballot design software, which “sits on the election supervisors’ PCs.” Counterfeit software can create ballots that favor one candidate, confuse voters, and make the printed ballot difficult to read and verify.

Voting machinery needs hardware-level security. The stakes could not be higher, and the attackers are among the world’s most capable. Authorization, authentication, and attestation at the hardware level, along with built-in cyber resilience, will make most attacks too difficult to pull off successfully. Independent cross-checks, solid procedures, and third-party software and ballot verification enable even higher confidence — and it’s urgently needed. The Pennsylvania election and the Iowa caucus showed the need to mitigate election technology’s shortcomings before a catastrophic compromise occurs.

Read Part 1: “How Can We Make Election Technology Secure?”


Ari Singer, CTO at TrustiPhi and long-time security architect with over 20 years in the trusted computing space, is former chair of the IEEE P1363 working group and acted as security editor/author of IEEE 802.15.3, IEEE 802.15.4, and EESS #1. He chaired the Trusted Computing …

Article source: https://www.darkreading.com/risk/5-measures-to-harden-election-technology-/a/d-id/1336978?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

From 1s & 0s to Wobbly Lines: The Radio Frequency (RF) Security Starter Guide

Although radio frequency (RF) communications are increasingly essential to modern wireless networking and IoT, the security of RF is notoriously lax.


It’s almost impossible to think about modern IT and networking without bringing radio frequency (RF) energy into the picture. That means it’s equally impossible to fully consider IT security without thinking about radio as both a Layer 1 component and a critical attack vector.

The problem for most IT and security professionals is that RF is all wibbly-wobbly and squishy. Rather than the neat, clean, on/off, one/zero of the digital domain, radio tends to be described in terms of frequencies and amplitudes, reflection and refraction, all of which are measured and described in the analog domain.

So for security professionals, the question becomes: Why should they take the time to learn about this mysterious transmission layer, and where do they begin?

The why

“Radio has changed how corporate networks interact with the internet, meaning that almost all devices that employees bring into the office are communicating through the airwaves,” says Joseph Carson, chief security scientist at Thycotic. And in addition to the IT uses of RF, there are IoT and OT uses as well as application uses in areas like public service and communications between locations and employees.

It’s that variety of different ways in which RF can be used that makes it important for security professionals to understand something of the basics of radio. “In the past, it was all about how to get an RJ45 connection to a network. Today, it is all about intercepting radio signals such as Bluetooth, WiFi, 4G and now 5G,” says Carson.

The danger

Once transmitted into space, a radio signal can be intercepted by anyone with a receiver tuned to the proper frequency. Building or buying a receiver for just about any frequency is easy, and new technology is making it even easier. As Carson says, “The biggest challenge is that most radio signals are not encrypted, and with a good software-defined radio, you can easily intercept most RFs — such as airport communications, device broadcasts, weather stations, satellites, and even emergency communication.”

Researchers have already demonstrated how RF exploits could be used to manipulate cardiac implants, heavy construction machinery, emergency alert sirens, in-flight aircraft, and much more.

Dangers are amplified when users expect radio communications to be private. Fausto Oliveira, principal security architect at Acceptto, says, “The attackers are exploiting a social expectation. People nowadays expect that public places provide wireless connectivity and the attackers take advantage of that expectation.”

There’s no question that communications over Wi-Fi radio can be hazardous. “The best ways to stay protected against this type of threat are to use a trusted VPN software to ensure that all your connectivity is encrypted; do not connect to WiFi access points that you do not recognize; look at the content that is being presented when an access point requests for your personal data and if you spot inconsistencies or the level of detail being requested makes you feel uncomfortable, disconnect from that network,” says Oliveira.

The real danger is that similar risks can exist on other RF networks that may not have the same defensive possibilities that have been built into and bolted onto Wi-Fi. In these application-specific, IoT, OT, or cellular data network instances, knowing what the radio signals themselves bring to the infrastructure can be the key to understanding which security steps will be most effective.

So what should an infosec professional know about RF? Before launching into a brief explanation, some caution is in order.

“Radio frequency analysis and security is a complex topic that intersects several fields of information security, information theory, physics, and electrical engineering,” says Charles Ragland, security engineer at Digital Shadows.

The combination of complexity and analog nature makes certain measurements and descriptions far more intricate operations than they are in the more straightforward digital realm. What follows are basics, with places to go to find richer explanations of the details.

There are two fundamental measurements of RF and a handful of very important ones. The two fundamentals are frequency and amplitude, and they tell us a lot about what’s going on.

Frequency is the number of times the signal oscillates (completes one full cycle, peak to peak) in one second. Measured in hertz, frequencies in radio applications range from very low (3 kHz, or three thousand oscillations per second) to very high (30 GHz, or thirty billion oscillations per second). 30 GHz is about the highest frequency seen in most cases, though the radio spectrum extends up to 300 GHz.

Frequency is important because signals of different frequency react with their environment in different ways (on the whole, lower frequency signals go through solid walls more easily) and because more information can be sent in a second of higher frequency signal than of lower frequency signal.

Amplitude tells us how powerful the signal is — basically, how high the peaks are. Amplitude is important because it can have a profound impact on how far from its source a signal can be received, which environments it can survive, and the impact the signal has on objects in its environment.

There are other terms that are frequently used in RF descriptions. Wavelength is related to frequency: The lower the frequency, the farther apart the peaks are in space. For example, the wavelength of a 60 Hz signal is around 3,000 miles, while the wavelength of 2.4 GHz (the frequency of 802.11b WiFi and microwave ovens) is a bit less than 5 inches. This, as you might expect, has a profound impact on antennae for each.
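The relationship is simple to check: wavelength is the speed of light divided by frequency (λ = c / f). A quick sketch confirming the figures above:

```python
# Wavelength from frequency: lambda = c / f
C = 299_792_458  # speed of light, m/s

def wavelength_m(freq_hz: float) -> float:
    return C / freq_hz

print(wavelength_m(60) / 1609.344)   # 60 Hz  -> ~3,100 miles
print(wavelength_m(2.4e9) / 0.0254)  # 2.4 GHz -> ~4.9 inches
```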

Radio signals are polarized. They can be vertical, horizontal, or circular, and each is useful for different circumstances. Put in simplest terms, if the receiving antenna is in the same orientation as the transmitting antenna, the signal will tend to be received more clearly.

And then there are terms around the fact that radio signals bounce, bend, and refract through different materials and environments. These characteristics can explain why a radio signal is not being received where you hope, is being received where it shouldn’t be, and can be received by those who shouldn’t receive it.

The more

Ragland has a list of online resources he uses to help people learn about different aspects of RF communications. “Airheads forums are a great place to find tidbits of knowledge, including presentations covering the fundamentals of wireless networking,” he says, while noting that the forum is run by, and tends to focus on, Aruba networking products.

To figure out which devices use which frequencies, he recommends the Signal Identification Wiki. In addition to basic data, he says, “Information found here, along with some easy-to-purchase USB adapters can lead to all kinds of fun, like using your computer to open and close your garage door.”

And for those who want to build or buy low-cost receivers to sniff RF in different circumstances, he recommends three sites:

“The future of hacking is without a doubt going to be about listening to the airwaves and capturing them,” Carson says. The time to learn about them is now.


Curtis Franklin Jr. is Senior Editor at Dark Reading. In this role he focuses on product and technology coverage for the publication. In addition he works on audio and video programming for Dark Reading and contributes to activities at Interop ITX, Black Hat, INsecurity, and …

Article source: https://www.darkreading.com/edge/theedge/from-1s-and-0s-to-wobbly-lines-the-radio-frequency-(rf)-security-starter-guide/b/d-id/1336999?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Cybercrooks busted for multimillion-dollar identity fraud

A trio of Australians has been charged with identity theft that netted AUD $11 million (USD $7.41m, £5.73m) – ill-gotten loot they allegedly ripped off by hacking into businesses and modifying their payrolls, pension payments (known as superannuation in Australia) and credit card details.

According to ABC News, police arrested the alleged cyber-robber – an unidentified 31-year-old man, formerly of Adelaide – at a library in Sydney’s Green Square earlier this week.

His alleged cyber accomplices were 32-year-old Jason Lees and 28-year-old Emily Walker, both arrested in the Adelaide suburb of Seaton. According to Walker’s Facebook profile, they’re a couple.

Jason Lees and Emily Walker, accused of money laundering and deception offenses. IMAGE: Facebook

New South Wales police reportedly said that the unidentified 31-year-old man allegedly stole more than 80 personal and financial profiles to use in identity fraud, first in South Australia from early 2019 and then in NSW from August 2019. He faces 24 fraud-related charges in Newtown Local Court. Walker and Lees have been charged with money laundering and deception.

(What’s the difference between lies, deception and fraud, you may well ask if you’re not Australian? Under Australian criminal law, not all lies are deception, and not all deceptions amount to fraud, according to the law firm Sydney Criminal Lawyers.)

According to ABC News, the police prosecutor, Senior Sergeant Mike Tolson, told the court that the prosecution anticipates bringing hundreds of additional charges.

The stolen data came from businesses and organizations targeted for their employees’ data, including staff names, addresses and birthdates. The defendants allegedly used the details to set up hundreds of bank accounts into which they then allegedly deposited money.

Tolson:

All of the stolen identity has come from intruding upon businesses.

The defendants allegedly used multiple cryptocurrency accounts to launder more than $18 million, Tolson told the court:

However, one of the wallets that has been identified alone contains more than $18 million in transactions […] and multiple withdrawal accounts.

The prosecutor said that last month, police seized nine computers, their hard drives, and six mobile phones during a raid on the couple’s home. Next week, the court will consider an application for bail.

Investigators called the crimes “sophisticated and complex.” NSW Police Force Cybercrime Squad commander Detective Superintendent Matthew Craft said that it’s a timely reminder to beef up cybersecurity defenses:

Identity information is a valuable commodity on the black market and dark web, and anyone who stores this data needs to ensure it is protected.

Ripped-off payment card details do indeed sell like hot cakes on the dark web, where carders snap them up, slap them onto new cards, and go on mad spending sprees on somebody else’s dime.

In December 2019, we also found out exactly how fast those hot cakes get sold: two hours, it turns out. That’s how long it took somebody – or something, if it turns out to have been an automated bot – to find, and use, a credit card posted by a security researcher.

Check your statements

Regularly checking your credit card and other financial statements means you’ll spot fishy charges before they cling to you.

We the consumers aren’t typically held responsible for fraudulent activity – but only when we report bad charges in a timely fashion. Don’t delay, if you don’t want to get stuck paying for somebody else’s baby lions and/or Lamborghinis.



Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/fdxyW-Y19KM/

Wacom driver caught monitoring third-party software use

An engineer has detailed how graphics tablet company Wacom’s privacy policy allows it to collect data unconnected to its products, such as which applications users open on their computers.

In a blog, software developer Robert Heaton said he was first alerted to the behaviour when he read the company’s Experience Program Privacy Policy while installing some Wacom drivers on his computer. Wrote Heaton:

In section 3.1 of their privacy policy, Wacom wondered if it would be OK if they sent a few bits and bobs of data from my computer to Google Analytics, [including] aggregate usage data, technical session information and information about [my] hardware device.

This struck him as intrusive for a drawing tablet which is “essentially a mouse.” Why would such a thing need a privacy policy anyway?

The official answer is for the same reason many other companies’ applications do the same thing – to analyse how customers are using a product to see whether it can be improved.

The Privacy Notice posted to GitHub by Heaton relates to users in the EU and is upfront about this when it explains in a succinct 770 words what data Google Analytics collects, including things like when during the day tablets are used, and which functions are popular.

This data should not reveal real identities:

As the IP anonymize function is activated in the Tablet Driver, your IP address will, within Member States of the European Union or other contracting states of the Agreement on the European Economic Area, first be shortened by Google […]

The privacy policy for US-based users is a lot more permissive, although not all sections of this would apply when simply installing a driver.

The earliest mentions of Wacom integrating Google Analytics into its Intuos tablet drivers appear to date back to version 6.3.27, released for Windows and macOS in late 2017.

Digging deeper

With perseverance and a lot of fiddling, Heaton was eventually able to proxy the driver’s traffic to Google Analytics to take a more detailed look at the data being collected.

Some of this was as expected – when the Wacom driver was started and stopped – which he decided was justifiable. However:

What requires more explanation is why Wacom think it’s acceptable to record every time I open a new application, including the time, a string that presumably uniquely identifies me, and the application’s name.

The latter behaviour isn’t referred to in the privacy policy, or at least it’s not mentioned explicitly.

Heaton even uncovered a killswitch function that Wacom could use to remotely turn Google Analytics collection off and on.

Justified?

In Wacom’s defence, using systems such as Google Analytics to gather data on how customers are using a product might be justified on two counts:

  1. Understanding how a product is used is helpful, in the long run, to end-users.
  2. It is possible to use the product without enabling the data collection by looking for the Wacom Experience Program setting in the Desktop Center software and unticking it – customers can opt out if they want to.

A more general defence is that almost every software and hardware product in existence these days will have a similar feedback system, so it’s not as if Wacom is unusual. This includes many products that users trust (for example, Mozilla and Microsoft), although the latter do at least ask rather than enable through a driver installation.

What Heaton discovered is highly unlikely to be a sinister plot to carry out surveillance on customers for commercial purposes, although monitoring third-party software use is stretching things a bit.

The application nosiness could also be something to do with understanding the sort of software Wacom customers use more generally, in order to draw anonymised marketing inferences about their expertise and interests. Or perhaps someone in the Experience Program just got carried away.

What remains striking is the way tech companies seem to regard the gathering of this kind of data as something users don’t need to know about.

This can quickly become self-serving. Privacy is already tough enough without every product creating its own silo of potential problems and expecting users to keep up.

Wacom should not be singled out in this regard. Suspicion is a valid response to all privacy policies. Trust should always be earned.



Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/x2JzRp08MfM/

Facebook, Google, YouTube order Clearview to stop scraping faceprints

Clearview AI, the facial recognition company that’s scraped the web for three billion faceprints and sold them all (or given them away) to 600 police departments so they could identify people within seconds, has received yet more cease-and-desist letters from social media giants.

The first came from Twitter. A few weeks ago, Twitter told Clearview to stop collecting its data and to delete whatever it’s got.

Facebook has also demanded that Clearview stop scraping photos because the action violates its policies, and now Google and YouTube are likewise telling the audacious startup to stop violating their policies against data scraping.

Clearview’s take on all this? Defiance. It’s got a legal right to data scraping, it says.

In an interview on Wednesday with CBS This Morning, Clearview AI founder and CEO Hoan Ton-That told listeners to trust him. The technology is only to be used by law enforcement, and only to identify potential criminals, he said.

The artificial intelligence (AI) program can identify someone by matching photos of unknown people to their online photos and the sites where they were posted. Ton-That claims that the results are 99.6% accurate.

Besides, he said, it’s his right to collect public photos to feed his facial recognition app:

There is also a First Amendment right to public information. So the way we have built our system is to only take publicly available information and index it that way.

Not everybody agrees. Some people think that their facial images shouldn’t be gobbled up without their consent. In fact, the nation’s strictest biometrics privacy law – the Biometric Information Privacy Act (BIPA) – says doing so is illegal. Clearview is already facing a potential class action lawsuit, filed last month, for allegedly violating that law.

YouTube’s statement:

YouTube’s Terms of Service explicitly forbid collecting data that can be used to identify a person. Clearview has publicly admitted to doing exactly that, and in response we sent them a cease and desist letter.

As for Facebook, the company said on Tuesday that it has demanded Clearview stop scraping photos because the practice violates its policies. How Clearview responds to Facebook’s review of its practices will determine what action the social media behemoth takes next. Its statement:

We have serious concerns with Clearview’s practices, which is why we’ve requested information as part of our ongoing review. How they respond will determine the next steps we take.

Clearview: It’s just like Google Search – for faces

Besides claiming First-Amendment protection for access to publicly available data, Ton-That also defended Clearview as being a Google-like search engine:

Google can pull in information from all different websites. If it’s public […] and it can be inside Google’s search engine, it can be in ours as well.

Um, no, Google said, your app isn’t like our search engine at all. There’s a big difference between what we do and the way you’re shanghaiing everybody’s face images without their consent. Its statement:

Most websites want to be included in Google Search, and we give webmasters control over what information from their site is included in our search results, including the option to opt-out entirely. Clearview secretly collected image data of individuals without their consent, and in violation of rules explicitly forbidding them from doing so.

When is public information not public?

Clearview isn’t the first company to make money off of scraping sites. It’s not the first to wind up in court over it, either.

Back in 2016, hiQ, a San Francisco startup, was marketing two products, both of which depend on whatever data LinkedIn’s 500 million members have made public: Keeper, which identifies employees who might be ripe for being recruited away, and Skills Mapper, which summarizes an employee’s skills.

It, too, was going after public information, grabbing the kind of stuff you or I could get on LinkedIn without having to log in. All you need is a browser and a search engine to find the data hiQ sucks up, digests, analyzes, and sells to companies that want a heads-up when their pivotal employees might have one foot out the door, or that are trying to figure out how their workforce needs to be bolstered or trained.

When is public information not public? When the social media firms that collect it insist that it’s not public.

LinkedIn sent a cease-and-desist letter to hiQ, alleging that it was violating serious anti-hacking and anti-copyright violation laws: the Computer Fraud and Abuse Act (CFAA), the Digital Millennium Copyright Act (DMCA), and California Penal Code § 502(c). LinkedIn (which had been exploring how to do the same thing that hiQ had achieved) also noted that it had used technology to block hiQ from accessing its data.

A done deal? Not in the eyes of the courts. In September 2019, an appeals court told LinkedIn to back off: no more interfering with hiQ’s profiting from its users’ publicly available data. The court protected the scraping of public data, which sounds like a major legal precedent but is in fact a lot muddier. From the Electronic Frontier Foundation (EFF):

While this decision represents an important step to putting limits on using the CFAA to intimidate researchers with the legalese of cease and desist letters, the Ninth Circuit sadly left the door open to other claims, such as trespass to chattels or even copyright infringement, that might allow actors like LinkedIn to limit competition with its products.

And even with this ruling, the CFAA is subject to multiple conflicting interpretations across the federal circuits, making it likely that the Supreme Court will eventually be forced to resolve the meaning of key terms like ‘without authorization.’

Those cases of data scraping pitted the lovers of an open internet against the companies trying to control (and make money from) their own data. During the fight with hiQ, LinkedIn was accused of chilling access to information online. Some said that LinkedIn’s position would impact journalists, researchers, and watchdog organizations who rely on automated tools – including scrapers – to support their work, much of which is protected First Amendment activity.

Muddy as it was, the EFF hailed the September verdict as a win for the right to scrape public data.

But while groups such as the EFF were all for data scraping to get at publicly available data in the case of hiQ, they’re not on Clearview’s side. On Thursday, the EFF said that when it comes to biometrics, companies should be getting informed opt-in consent before collecting our faceprints.

In fact, Clearview is the latest example of why we need laws that ban, or at least pause, law enforcement’s secretive use of facial recognition, according to the EFF’s surveillance litigation director, Jennifer Lynch. She cited numerous cases of what she called law enforcement’s – and Clearview’s own – abuse of facial recognition, stating:

Police abuse of facial recognition technology is not theoretical: it’s happening today. Law enforcement has already used ‘live’ face recognition on public streets and at political protests.



Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/X7rOITrf5_c/

Researchers transmit data covertly by altering screen brightness

The normal way to steal data from a compromised computer is to retrieve it over a network. If that computer isn’t connected to one, it gets a little trickier.

Researchers at Ben-Gurion University of the Negev have made a name for themselves figuring out how to get data out of air-gapped computers. They’ve dreamed up ways to communicate using speakers, blinking LEDs in PCs, infrared lights in surveillance cameras, and even computer fans.

Now, they’ve figured out a way to retrieve data from a disconnected computer by altering its LCD display’s pixel brightness just enough for a nearby camera to pick it up.

In a paper published this month, the researchers describe what they call an “optical covert channel” which cameras can detect, but which users cannot. They use one of the three colours in LCD pixels which normally combine to give the pixel a range of hues.

Their technique adjusts the red colour component in pixels on the screen by 3%, which is apparently not enough for users to notice. A camera located six metres from the 19-inch screen was nevertheless able to detect the difference.
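A toy illustration of the encoding idea (not the researchers’ actual code): a “1” bit nudges each pixel’s red channel up by about 3%, while a “0” bit leaves the frame alone; a camera averaging frames can spot the shift, but a human can’t:

```python
# Toy sketch of the covert channel's encoding step: a "1" bit slightly
# raises every pixel's red component; a "0" bit leaves the frame untouched.
def modulate_red(frame, bit, delta=0.03):
    """frame: list of (r, g, b) tuples with 0-255 channels."""
    if not bit:
        return frame
    return [(min(255, round(r * (1 + delta))), g, b) for (r, g, b) in frame]

frame = [(200, 120, 80)] * 4
print(modulate_red(frame, 1))  # red shifts from 200 to 206 -- imperceptible
```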

Optical exfiltration techniques have cropped up before, they explain, but most of them have been easily detectable by users. Conversely, an attacker could theoretically use this one even while a user was working at the compromised machine.

We say “theoretically” because in practice there are a lot of challenges involved in this attack. The first is that the computer has to be compromised in the first place, which means getting to its physical location. Then, you could infect it with a USB stick, but if you’ve reached that point, presumably you could just copy the data to the stick.

The other issue is bit rate. If you’re old enough to remember dial-up internet connections, spare a thought for anyone trying to use this technique, which makes dial-up look like broadband. The researchers achieved transmission speeds of five bits per second with their brightness-tweaking tricks. Switching LCD colours is enough to send a single bit, but it takes time to do that, and for the camera to pick it up.

Abraham Lincoln’s Gettysburg Address was famously brief, but by our calculations it would still take over 38 minutes to beam it from screen to camera as ASCII text. Maybe you could use a denser character format, or compress it.
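The back-of-the-envelope arithmetic behind that figure, assuming the address runs roughly 1,450 ASCII characters:

```python
# Timing for exfiltrating the Gettysburg Address at the measured rate.
chars = 1_450       # approximate length of the address in ASCII characters
bits = chars * 8    # 8 bits per ASCII character
seconds = bits / 5  # covert channel runs at ~5 bits per second
print(seconds / 60) # ~38.7 minutes
```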

Someone could exploit this vulnerability using an ‘evil maid’ attack in which someone with access to the computer’s room pointed a camera at it. To counter that, the researchers suggest restricting physical access to air-gapped machines. The bit rate is also subject to the camera’s view of and distance from the screen, along with the display’s brightness, so if things weren’t positioned just right, you’d be waiting four score and seven years to retrieve any significant data.

Short pieces of information like passwords and private keys would be more tractable for a temporary camera. On the other hand, a covert optical channel could continue beaming information as long as a static camera could see the screen and the computer was turned on.

Ultimately, this is interesting academic research, with the emphasis on ‘academic’. Perhaps some three-letter agency might use this in a Mission Impossible-style scenario, but, just like the flickering pixel in the display of the researchers’ hypothetical compromised computer, we can’t really see it.



Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/057xpJvNRZ0/

Android owners – you’ll want to get these latest security patches, especially for this nasty Bluetooth hijack flaw

Google has posted the February security updates for Android, including for a potentially serious remote code execution flaw in Bluetooth.

Designated CVE-2020-0022, the flaw was discovered and reported by researchers with German company ERNW who say a fix has been in the works since November.

“On Android 8.0 to 9.0, a remote attacker within proximity can silently execute arbitrary code with the privileges of the Bluetooth daemon as long as Bluetooth is enabled,” the team explained.

“No user interaction is required and only the Bluetooth MAC address of the target devices has to be known. For some devices, the Bluetooth MAC address can be deduced from the WiFi MAC address.”

While they have yet to post technical details on the flaw, they report the vulnerability allows full remote code execution on older versions of Android (8, 8.1, and 9) but is slightly less dire for Android 10, as those devices merely crash. It should be pointed out that the bug is only exposed when the device has Bluetooth in discovery mode, i.e. it’s trying to find a device to pair with.

In the meantime, ERNW advises those worried about the flaw to switch to wired headphones and make sure their devices are not in discovery mode in public.

If Bluetooth pwnage isn’t enough reason to patch your device, there are two dozen other bugs addressed this month for issues ranging from information disclosure to elevation of privilege. CVE-2020-0022 is the only flaw this month to allow remote code execution.


Six of the CVE-listed vulnerabilities (including the Bluetooth bug) are said to exist in System components. They include two information disclosure flaws and two elevation of privilege issues. Versions 8-10 of Android are affected.

The Android Framework was host to seven flaws: three allowing information disclosure, three elevation of privilege, and one denial of service bug. The Kernel component was patched for two flaws, both allowing elevation of privilege attacks.

Qualcomm components were listed as the targets for the remaining 10 CVE-listed errors. These included four flaws that were listed as ‘high’ severity risks but not detailed as they involved closed-source components.

As ever, those running Google-branded devices can get the updates immediately, while those on other vendors and carriers will need to wait for those groups to get their updates out. ®


Article source: https://go.theregister.co.uk/feed/www.theregister.co.uk/2020/02/07/android_bluetooth_flaw/


Good: IT admins scrambled to patch 80 per cent of public-facing Citrix boxes to close nightmare hijack hole

Roughly a fifth of the public-facing Citrix devices vulnerable to the CVE-2019-19781 remote-hijacking flaw, aka Shitrix, remain unpatched and open to remote attack.

Positive Technologies today estimated that thousands of companies remain open to the takeover vulnerability in Citrix ADC and Gateway. A successful exploit would give hackers a foothold in a compromised network.

The infosec biz, whose researchers discovered and disclosed the vulnerability in December of last year, has been heading up an awareness campaign to get as many as possible of the estimated 80,000 Citrix customers worldwide patched and protected from the flaw.

Despite a massive push by Citrix, and others, to get vulnerable machines shored up, it is believed that thousands of machines worldwide, many in the US and UK, have not yet been fixed.

“Overall, the vulnerability is being fixed quickly, but 19 per cent of companies are still at risk. The countries with the greatest numbers of vulnerable companies currently include Brazil (43 per cent of all companies where the vulnerability was originally detected), China (39 per cent), Russia (35 per cent), France (34 per cent), Italy (33 per cent), and Spain (25 per cent),” Positive reports.


“The USA, Great Britain, and Australia are protecting themselves quicker, but they each have 21 percent of companies still using vulnerable devices without any protection measures.”

In terms of sheer numbers of exposed customers, the US remains the worst offender (not surprising, as 38 per cent of all the vulnerable boxes worldwide sat in the US) as more than 6,500 machines remain unpatched. The UK, meanwhile, houses 1,150 or more of the yet-to-be-patched systems, and Australia is home to 750. Surprisingly low on the list were China (550 vulnerable machines) and Russia (just 100 unpatched boxes), but that may reflect poor Citrix sales in those areas.

In the grand scheme of things, the effort to get vulnerable boxes patched has moved above and beyond the normal speed at which bugs get addressed. Considering how many machines remain exposed to months- or even years-old vulnerabilities, having 80 per cent of all boxes in the wild patched in under two months is to be commended.

That said, the remaining 20 per cent of internet-facing machines should get patched ASAP, especially as there are now plug-and-play exploits being used in the wild. ®


Article source: https://go.theregister.co.uk/feed/www.theregister.co.uk/2020/02/06/citrix_boxes_patched/
