
‘Cloud Snooper’ Attack Circumvents AWS Firewall Controls

Possible nation-state supply chain attack acts like a “wolf in sheep’s clothing,” Sophos says.

RSA CONFERENCE 2020 – San Francisco – A recently spotted targeted attack employed a rootkit to sneak malicious traffic through the victim organization’s AWS firewall and drop a remote access Trojan onto its cloud-based servers.

Researchers at Sophos discovered the attack while inspecting infected Linux and Windows EC2-based cloud infrastructure servers running in Amazon Web Services (AWS). The attack, which Sophos says is likely the handiwork of a nation-state, uses a rootkit that not only gave the attackers remote control of the servers but also provided a conduit for the malware to communicate with their command-and-control servers. According to Sophos, the rootkit could also allow the C2 servers to remotely control servers physically located on the organization’s premises.

“The firewall policy was not negligent, but it could have been better,” said Chet Wisniewski, principal research scientist at Sophos. The attackers masked their activity by hiding it in HTTP and HTTPS traffic. “The malware was sophisticated enough that it would be hard to detect even with a tight security policy” in the AWS firewall, he said. “It was a wolf in sheep’s clothing … blending in with existing traffic.”

Sophos declined to reveal the victim organization, but said the attack appears to be a campaign to reach ultimate targets via the supply chain – with this as just one of the victims. Just who is behind the attack is unclear, but the RAT is based on source code of the Gh0st RAT, a tool associated with Chinese nation-state attackers. Sophos also found some debug messages in Chinese.

The attackers appear to reuse the same RAT for both the Linux and Windows servers. “We only observed the Linux RAT talking to one server and the Windows talking to a different control server, so we’re not sure if it’s even the same infrastructure,” Wisniewski said. The C2 has been taken down, he noted.

Just how the attackers initially hacked into the victim’s network is unclear, but Sophos suggests one possibility is that the attackers infiltrated a server via SSH. They also don’t have a lot of intel on the rootkit, such as which port it abused, nor do they know for sure what they were after. “It’s likely a supply chain attack, targeting this organization to get all of their downstream” clients or customers.

One of the rarer aspects of the attack: it targeted Linux with a rootkit. “They dropped the driver part of the rootkit, and called it Snoopy. Had it been given a legitimate file name on the Linux box, we probably wouldn’t have noticed it,” he said. Malware for Linux has been relatively rare to date, too, consisting mainly of cryptojackers, he added.

Cloud Snooper’s techniques appear to be rare for now, but as with many novel attacks, it’s only a matter of time before they are imitated. “Every time we see something done in a targeted attack, usually by a nation-state, a couple of years later cybercriminals” adopt similar tactics, Wisniewski said.

“This case is extremely interesting as it demonstrates the true multi-platform nature of a modern attack,” wrote Sophos researcher Sergei Shevchenko in the company’s technical report on Cloud Snooper.

Sophos recommends deploying AWS’ boundary firewall function, keeping Internet-facing servers fully patched, and hardening SSH servers, to protect against Cloud Snooper.

Related Content:

Check out The Edge, Dark Reading’s new section for features, threat data, and in-depth perspectives. Today’s top story: “How to Prevent an AWS Cloud Bucket Data Leak.”


Kelly Jackson Higgins is the Executive Editor of Dark Reading. She is an award-winning veteran technology and business journalist with more than two decades of experience in reporting and editing for various publications, including Network Computing, Secure Enterprise … View Full Bio

Article source: https://www.darkreading.com/cloud/cloud-snooper-attack-circumvents-aws-firewall-controls/d/d-id/1337171?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Zyxel storage, firewall, VPN, security boxes have a give-anyone-on-the-internet-root hole: Patch right now

Zyxel’s network storage boxes, business VPN gateways, firewalls, and, er, security scanners can be remotely hijacked by any miscreant, due to a devastating security hole in the firmware.

The devices’ weblogin.cgi program fails to sanitize user input, allowing anyone who can reach one of these vulnerable machines, over the network or across the internet, to silently inject and execute arbitrary commands as the root superuser, with no authentication required. That would be a total compromise. It’s a 10 out of 10 in terms of severity.

As its name suggests, weblogin.cgi is part of the built-in web-based user interface provided by the firmware, and the commands can be injected via GET or POST HTTP requests.
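The firmware is closed source, so the exact flaw hasn’t been published, but the bug class – user input spliced straight into a shell command – can be sketched in a few lines. The function names and commands below are invented stand-ins, not weblogin.cgi’s real code:

```python
import shlex
import subprocess

def login_unsafe(username):
    # Vulnerable pattern: the attacker-controlled string is interpolated
    # directly into a shell command line, so shell metacharacters such as
    # ";" terminate the intended command and start a new one.
    result = subprocess.run("echo checking " + username,
                            shell=True, capture_output=True, text=True)
    return result.stdout

def login_safe(username):
    # Quoting the input neutralizes shell metacharacters; the whole value
    # is passed to the command as a single literal argument.
    result = subprocess.run("echo checking " + shlex.quote(username),
                            shell=True, capture_output=True, text=True)
    return result.stdout

payload = "admin; echo INJECTED"
print(login_unsafe(payload))  # the injected second command actually runs
print(login_safe(payload))    # the payload is treated as a literal string
```

In the real attack the injected string arrives as a GET or POST parameter rather than a function argument, and the CGI runs as root, which is why one unsanitized field hands over the whole box.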

If a miscreant can’t directly connect to a vulnerable Zyxel device, “there are ways to trigger such crafted requests even if an attacker does not have direct connectivity to a vulnerable device,” noted Carnegie Mellon’s CERT Coordination Center in its advisory on the matter.

“For example, simply visiting a website can result in the compromise of any Zyxel device that is reachable from the client system.”

Here’s the affected equipment, which will need patching:

  • Network-connected storage devices: NAS326, NAS520, NAS540, NAS542
  • “Advanced” security firewalls: ATP100, ATP200, ATP500, ATP800
  • Security firewalls and gateways: USG20-VPN, USG20W-VPN, USG40, USG40W, USG60, USG60W, USG110, USG210, USG310, USG1100, USG1900, USG2200, VPN50, VPN100, VPN300, VPN1000, ZyWALL110, ZyWALL310, and ZyWALL1100

Fixes can be fetched and installed from Zyxel’s website. Meanwhile, the NSA210, NSA220, NSA220+, NSA221, NSA310, NSA310S, NSA320, NSA320S, NSA325 and NSA325v2 models are no longer supported, so no patches are available for them, though they remain vulnerable. The security bug (CVE-2020-9054) is trivial to exploit, unfortunately.

“Command injection within a login page is about as bad as it gets and the lack of any cross-site request forgery token makes this vulnerability particularly dangerous,” Craig Young, a researcher with security house Tripwire, told The Register earlier today. “JavaScript running in the browser is enough to identify and exploit vulnerable devices on the network.”

Speaking of bad, exploit code is already on sale for $20,000 in underground forums, and the patched firmware is delivered via unencrypted FTP, which can be meddled with by network eavesdroppers.

“Be cautious when updating firmware on affected devices, as the Zyxel firmware upgrade process both uses an insecure channel (FTP) for retrieving updates, and the firmware files are only verified by checksum rather than cryptographic signature,” CERT-CC warned.

“For these reasons, any attacker that has control of DNS or IP routing may be able to cause a malicious firmware to be installed on a Zyxel device.”

If you can’t patch your Zyxel device, bin it – especially if it’s facing the internet. ®

Sponsored:
Quit your addiction to storage

Article source: https://go.theregister.co.uk/feed/www.theregister.co.uk/2020/02/26/zyxel_security_hole/

After blowing $100m to snoop on Americans’ phone call logs for four years, what did the NSA get? Just one lead

The controversial surveillance program that gave the NSA access to the phone call records of millions of Americans has cost US taxpayers $100m – and resulted in just one useful lead over four years.

That’s the upshot of a report [PDF] from the US government’s freshly revived Privacy and Civil Liberties Oversight Board (PCLOB). The panel dug into the super-snoops’ so-called Section 215 program, which is due to be renewed next month.

Those findings reflect concerns expressed by lawmakers back in November when at a Congressional hearing, the NSA was unable to give a single example of how the spy program had been useful in the fight against terrorism. At the time, Senator Dianne Feinstein (D-CA) stated bluntly: “If you can’t give us any indication of specific value, there is no reason for us to reauthorize it.”

That value appears to have been, in total, 15 intelligence reports at an overall cost of $100m between 2015 and 2019. Of the 15 reports that mentioned what the PCLOB now calls the “call detail records (CDR) program,” just two of them provided “unique information.” In other words, for the other 13 reports, use of the program reinforced what Uncle Sam’s g-men already knew. In 2018 alone, the government collected more than 434 million records covering 19 million different phone numbers.

What of those two reports? According to the PCLOB overview: “Based on one report, FBI vetted an individual, but, after vetting, determined that no further action was warranted. The second report provided unique information about a telephone number, previously known to US authorities, which led to the opening of a foreign intelligence investigation.”

A short explanation of that sole useful investigation is redacted, so it is unknown what it covered or whether it proved useful or led to a prosecution.

Stalling

So, overall, millions of Americans’ phone logs were given to the NSA at a cost of $100m, and the result was the opening of one lone probe. It is perhaps no wonder that the NSA and the FBI have spent years stalling and refusing to hand over any information about the program.


It’s also worth noting that the NSA has not once but twice shuttered the program because it ended up with millions of records it did not have a right to see – ie: the program was twice found to have gone out of bounds and used illegally. And yet the intelligence services still want to keep the program even if the legislation supporting it, the USA Freedom Act of 2015, expires on March 15.

The Trump Administration has asked that Congress extend the law so the NSA can, if it wishes, turn the program back on at some future date.

The lengthy report is a welcome return for the PCLOB, which was turned into a zombie organization unable to do any work for several years after its previous report on NSA spying programs concluded that they were illegal and Congress was obliged to scale them back.

Snowden

Those reports themselves stemmed from the fact that the full depth of the programs was exposed by NSA IT-admin-turned-whistleblower Edward Snowden in a vast leak of information.

And yet, despite it being made clear that neither Congress nor the PCLOB were able to adequately track or oversee what was really happening with America’s spying program, little or nothing has changed and repeat efforts by some in Congress to reform those programs have been repeatedly stymied.

Not to be beaten, several senators are again trying to scale back the various NSA surveillance programs, announcing a new bill last month aimed at ending NSA blanket snooping, protecting against abuse of the FISA oversight process, closing various loopholes in the secret law the spying agency relies on, and expanding scrutiny of the programs.

The PCLOB notes in its report it was only able to gain access to the information it has now shared because the intelligence services agreed to declassify at least some of the details. And the only reason the snoops did that is because they concluded the program serves no real useful purpose anymore, thanks to the widespread use of encrypted messaging apps over telephone calls. Such apps are more secure.

As to what is happening with the NSA’s other spy programs, neither the PCLOB, nor Congress, nor even the highly classified secret FISA oversight court knows. And the only way it seems likely we will find out is if there is another Edward Snowden. ®


Article source: https://go.theregister.co.uk/feed/www.theregister.co.uk/2020/02/26/nsa_calllogging_program/

Wi-Fi of more than a billion PCs, phones, gadgets can be snooped on. But you’re using HTTPS, SSH, VPNs… right?

A billion-plus computers, phones, and other devices are said to suffer a chip-level security vulnerability that can be exploited by nearby miscreants to snoop on victims’ encrypted Wi-Fi traffic.

The flaw [PDF] was branded KrØØk by the bods at Euro infosec outfit ESET who discovered it. The design blunder is otherwise known as CVE-2019-15126, and is related to 2017’s KRACK technique for spying on Wi-Fi networks.

An eavesdropper doesn’t have to be logged into the target device’s wireless network to exploit KrØØk. If successful, the miscreant can take repeated snapshots of the device’s wireless traffic as if it were on an open and insecure Wi-Fi. These snapshots may contain things like URLs of requested websites, personal information in transit, and so on.

It’s not something to be totally freaking out over: someone exploiting this has to be physically near you, and you may notice your Wi-Fi being disrupted. But it’s worth knowing about.

Technical details

You can read the above report for the full briefing, though here’s a gentle overview. When connected to a protected Wi-Fi network, a device and its access point will decide upon and use a shared encryption key to secure their over-the-air communications. When the device wants to send data over the network, it queues up packets in a transmission buffer in its Wi-Fi controller chip. This chip, when ready, encrypts the buffer’s contents with the key and transmits it to the access point.

It is possible to force a device off its Wi-Fi network by sending it special disassociation packets. Anyone can send these special packets over the air to a device; you don’t need to be on the same network. When these disassociation packets are received, vulnerable Wi-Fi controllers – made by Broadcom and Cypress, and used in countless computers and gadgets – will overwrite the shared encryption key with the value zero.

Crucially, the chip will continue to empty its transmission buffer, transmitting any outstanding packets with the zeroed encryption key. Anyone within range can receive those radio transmissions and decrypt the data because the key is now known – it’s zero. Said data can include things like DNS look-ups, HTTP requests, and so on, allowing eavesdroppers to figure out what the device is up to. Repeat this process over and over to snatch more and more glimpses of a victim’s network traffic.

Network traffic already wrapped up in encryption prior to transmission – such as HTTPS requests, or data traveling via SSH and secure VPNs – remains encrypted. It’s just the Wi-Fi-layer encryption that’s broken.
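The zero-key failure can be illustrated with a toy stream cipher. This is emphatically not WPA2’s real AES-CCMP – the cipher construction, variable names, and sample payload below are invented for illustration – but it shows why a key of all zeros destroys confidentiality: once the key is public knowledge, “decryption” needs no secret at all.

```python
import hashlib

def keystream(key, nonce, n):
    # Toy keystream generator standing in for a real Wi-Fi cipher:
    # hash(key || nonce || counter), repeated until n bytes are produced.
    out = b""
    ctr = 0
    while len(out) < n:
        out += hashlib.sha256(key + nonce.to_bytes(8, "big")
                              + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:n]

def encrypt(key, nonce, data):
    # XOR with the keystream; the same operation also decrypts.
    return bytes(a ^ b for a, b in zip(data, keystream(key, nonce, len(data))))

# Packets still queued in the chip's Tx buffer when the forged
# disassociation arrives (sample payload, made up for the demo):
buffered = b"GET /mail/inbox HTTP/1.1"

# The vulnerable chip has overwritten its temporary key with zeros,
# then flushes the buffer anyway:
zero_tk = bytes(16)
frame = encrypt(zero_tk, 1, buffered)

# Any eavesdropper in radio range knows the key is now zero:
recovered = encrypt(zero_tk, 1, frame)
print(recovered)  # the plaintext packet, no secrets required
```

Repeating the disassociation trick over and over is what lets an attacker collect snapshot after snapshot of the victim’s traffic.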

Here’s how ESET put it on Wednesday:

After a disassociation occurs, data from the chip’s Tx [transmission] buffer will be transmitted encrypted with the all-zero TK [temporary key]. These data frames can be captured by an adversary and subsequently decrypted. This data can contain several kilobytes of potentially sensitive information.

By repeatedly triggering disassociations (effectively causing reassociations, as the session will usually reconnect), the attacker can capture more data frames.

As a result, the adversary can capture more network packets containing potentially sensitive data … similar to what they would see on an open WLAN network without WPA2.

This silicon-level screw-up is present in a ton of stuff because they all use the same families of Wi-Fi controllers. “KrØØk affects devices with Wi-Fi chips by Broadcom and Cypress that haven’t yet been patched,” ESET said. “These are the most common Wi-Fi chips used in contemporary Wi-Fi capable devices such as smartphones, tablets, laptops, and IoT gadgets.”

Among equipment confirmed to be using the vulnerable chips are Apple’s iPhone 6 or later, the 2018 MacBook Air, Google’s Nexus 5 and 6, Amazon’s Kindle and Echo gear, and the Raspberry Pi model 3. For wireless access points, the Asus RT-N12, Huawei B612S-25d, Huawei EchoLife HG8245H, and Huawei E5577Cs-321 all have the flaw. Cisco also acknowledged its wireless gear is at risk.

“We have also tested some devices with Wi-Fi chips from other manufacturers, including Qualcomm, Realtek, Ralink, Mediatek and did not see the vulnerability manifest itself,” said ESET.

Even though the security blunder lies within the Wi-Fi chips themselves, the researchers say it can be fixed at the software level. We can imagine such fixes ensure the transmit buffer is discarded, rather than emptied over the air, after a disassociation or a key change. These controllers feature embedded CPU cores directing their operation, and presumably these can be reprogrammed not to flush transmission queues over the air under zeroed encryption keys.

To address KrØØk, therefore, users and admins should, says ESET, look out for driver or firmware updates for affected devices. ESET seems confident fixes are available, though your mileage may vary. The supply chain from the likes of Broadcom and Cypress to manufacturers of Internet-of-Things devices and other wireless-enabled equipment through to end users can be rather long and winding, and there are plenty of places for code updates to snag and never see the light of day.

In the meantime, encrypt as much network traffic as possible, especially over Wi-Fi, using HTTPS, SSH, VPNs, and so on, so that if your network-level encryption is smashed, you’re still protected from snoopers at the application layer or thereabouts. ®


Article source: https://go.theregister.co.uk/feed/www.theregister.co.uk/2020/02/27/wifi_chip_bug_eset/

Sophos Boosts Threat Hunting, Managed Detection and Response Capabilities

JJ Thompson, senior director of managed threat response for Sophos, digs deep into how organizations can start to make sense of the seemingly unlimited data that’s available from endpoints, cloud, and on-premises networks. And that’s a critical capability as attacker behaviors start to change.

Dark Reading’s Quick Hits delivers a brief synopsis and summary of the significance of breaking news events. For more information from the original source of the news item, please follow the link provided in this article. View Full Bio

Article source: https://www.darkreading.com/sophos-boosts-threat-hunting-managed-detection-and-response-capabilities/d/d-id/1337162?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

US State Dept. Shares Insider Tips to Fight Insider Threats

The insider threat is a technology, security, and personnel issue, officials said in explaining an approach that addresses all three factors.

RSA Conference 2020 – San Francisco – Every employee has the potential to become an insider threat, whether through accidental or malicious means. Organizations with the right steps in place can both prevent a person from going rogue and detect these threats before it’s too late.

At the US Department of State, everyone who has virtual or physical access to its network, facilities, or information is considered an insider, said Greg Collins, a contractor policy adviser, during an RSA Conference session this week on insider threats. “Anything that they can access and attempt to misuse is an insider threat,” Collins explained.

“It is not just a tech problem, it’s not just a security issue, and it’s not just a personnel issue,” added Jackie Atiles, insider threat program director at the State Department. When an insider threat takes place, businesses can’t go back and change what happened, but they can look back and see the indicators that were available to them in order to prevent future threats.

These markers can be spotted at all stages of the employee cycle, Collins said, a process that typically looks the same for organizations across industries and includes the following steps: hiring, vetting, training, inclusion, support, and security. He and Atiles took an insider threat scenario and viewed it through each step to pinpoint red flags indicating malicious activity.

In their example scenario – which was made up for this presentation but will likely sound familiar to many organizations – they used an employee who sends an email containing sensitive internal data to someone outside the organization. “This keeps me up at night,” Collins said. “This is something you absolutely don’t want to happen.”

But it does happen, and when it does, it’s important to first substitute the individual’s name with a unique identifier. “One thing we really stand behind is trying to prevent reputational harm,” Collins said. If insider activity has occurred but you don’t know if there was malintent, it’s best to keep the individual anonymous so as to not muddy the person’s name. Once the case has been established, you can start to backtrack and determine where, exactly, they went wrong.

In this scenario, the threat has already happened. Instead of starting the investigation process from the hiring phase, Atiles advised starting with security mechanisms in place. “IT is the last line of defense when it comes to information leaving the network,” she explained, and there are several indicators someone might do this before they hit Send. Look for trigger words: an external company name, “attachment,” or “secret.” Ask questions: What was the attachment? Is this something that has regularly occurred? Is there a reason they’re using the word “secret”?
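A minimal sketch of that trigger-word idea follows. The term list, domain names, and flagging logic here are invented for illustration; they are not the State Department’s actual rules, which would feed into a full data-loss-prevention pipeline rather than a single function:

```python
# Hypothetical trigger terms: the word "secret", the word "attachment",
# and the name of an external company of concern (all made up here).
TRIGGER_TERMS = {"secret", "attachment", "acme corp"}

def flag_outbound(subject, body, recipient_domain, internal_domain="example.gov"):
    """Return the trigger terms found in a message leaving the organization.

    Mail staying inside internal_domain is not scanned; anything bound for
    an external domain is checked case-insensitively against the term list.
    """
    if recipient_domain == internal_domain:
        return []
    text = (subject + " " + body).lower()
    return sorted(term for term in TRIGGER_TERMS if term in text)

# External recipient, sensitive-sounding content: all three terms flagged.
print(flag_outbound("Q3 files", "Secret attachment for Acme Corp", "gmail.com"))

# Same content to an internal address: nothing flagged.
print(flag_outbound("Q3 files", "Secret attachment", "example.gov"))
```

The flags alone don’t prove malice – that’s where the follow-up questions in the text (Was this a regular occurrence? Why the word “secret”?) and the human element come in.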

“While security can identify the anomalies through ones and zeroes, the human element can be used to identify what the potential threats are,” Atiles explained.

Taking another step back in the cycle takes you to support, or policies and resources that are in place to ensure employees have support for professional, personal, or financial stress. If an insider accidentally breaches security rules or takes files outside the organization, it could be due to external circumstances causing them to behave differently than usual, Collins noted. By providing support to their employees, company leaders may be able to prevent this activity.

“Managers need to manage; managers need to engage,” Atiles said. “Supervisors are the best defense against insider threat behavior. There is a difference between an introverted employee who wants to be alone sometimes and an isolationist who exclusively keeps to themselves all day.”

She emphasized the importance of making people feel included. “As people move positions … make sure you’re building an environment that includes people and doesn’t create an insider risk from the start.” Educating managers on team building isn’t just a “feel-good” activity, Atiles noted. Employees who feel included are less likely to become a future security risk.

Employee Vetting and Training  
Properly vetting and training employees can help organizations spot threats before it’s too late.

Training can cover a range of different topics, said Collins, listing security awareness, data handling, diversity and equal employment opportunity, performance, and development as examples. You want to make sure employees regularly complete training, especially if they handle information like human resources data, medical records, financial records, and Social Security numbers.

If an employee sends an email with company data outside the organization, consider whether they completed their assigned training. Did they take it? Were they compliant?

Prior to the training stage are the hiring and vetting stages of the employee cycle. “You need to vet your employees from the beginning,” Atiles said. “It’s a disservice to your own organization if you don’t know who you have working for you.”

The vetting process should be uniform, consistent with policies, and approved by general counsel, she said. It may include criminal records, financial reports, background verification, outside associations, open source information, and foreign travel and contacts. Bringing a new person onboard is your initial opportunity to make sure you’re not hiring an insider threat.

A candidate’s resume, interview, and references can be instrumental in gauging their risk. “These are huge chunks of the professional profile that makes up this individual,” she added.

The insider threat can appear in any part of the employee cycle, but by the time the threat takes place, it’s too late to detect it. Taking this structure and putting it around your organization will lower that potential risk, Collins said.

Related Content:

Check out The Edge, Dark Reading’s new section for features, threat data, and in-depth perspectives. Today’s featured story: “Wendy Nather on How to Make Security ‘Democratization’ a Reality.”

Kelly Sheridan is the Staff Editor at Dark Reading, where she focuses on cybersecurity news and analysis. She is a business technology journalist who previously reported for InformationWeek, where she covered Microsoft, and Insurance Technology, where she covered financial … View Full Bio

Article source: https://www.darkreading.com/vulnerabilities---threats/us-state-dept-shares-insider-tips-to-fight-insider-threats/d/d-id/1337163?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

How Should I Answer a Nontech Exec Who Asks, ‘How Secure Are We?’

Consider this your opportunity to educate.

Question: How should I answer a nontech exec who asks, “How secure are we?”

Kurtis Minder, CEO of GroupSense: Depending on your relationship with your executive team, it might help to qualify the question first. Secure compared to what? Compared to similar companies of focus and size in the industry? Compared to NIST 171? Compared to PCI DSS? In order to measure something like this, it helps to have a reference baseline. Otherwise the answer is opaque and virtually meaningless. Regardless of the answer, it is important to convey that the threat landscape is fluid and security programs need to be also.

You should also use this type of question as an opportunity to educate. Say to the exec: “Before I answer that question, what’s your nightmare? Which systems are you most concerned about being compromised?” Depending on the answer, you can educate the executive on your company’s risk profile – what systems are most likely to be attacked, who is most likely to attack them, and what techniques are most likely to be used.

From there, you can then tell the executive everything you’ve done to mitigate that risk – but that you’re never 100% secure because all it takes is for one employee to click on the wrong link in the wrong email, and all your security measures go downhill. Next, you can emphasize how everyone in the company has a responsibility to be cybersafe and keep the company secure – including the executive questioning you.

Kurtis Minder is a driven entrepreneur developing new technologies to make the world a better place. He is currently the CEO of GroupSense, an enterprise digital risk management company.  Minder is also a frequent contributor to the startup community and serves as an … View Full Bio

Article source: https://www.darkreading.com/edge/theedge/how-should-i-answer-a-nontech-exec-who-asks-how-secure-are-we/b/d-id/1337167?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Tufin: How to Make Better Sense of the Cloud Security Equation

CEO Reuven Harrison examines how cloud services have changed how enterprises manage their apps and data, and also offers some tips for security pros tasked with managing either hybrid- or multi-cloud implementations. Harrison also takes on Kubernetes and container security in this News Desk interview.

Dark Reading’s Quick Hits delivers a brief synopsis and summary of the significance of breaking news events. For more information from the original source of the news item, please follow the link provided in this article. View Full Bio

Article source: https://www.darkreading.com/tufin-how-to-make-better-sense-of-the-cloud-security-equation/d/d-id/1337165?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

LTE vulnerability allows impersonation of other mobile devices

Researchers have found a way to impersonate mobile devices on 4G and 5G mobile networks, and are calling on operators and standards bodies to fix the flaw that caused it.

The research, conducted by academics at Ruhr-Universität Bochum and New York University Abu Dhabi, dubs the attack Impersonation Attacks in 4G Networks (IMP4GT); although it targets 4G, the way carriers are deploying 5G means it could work on those newer systems too.

The attack targets LTE networks, exploiting a vulnerability in the way that they authenticate and communicate with mobile devices. The researchers claim that they can impersonate a mobile device, enabling them to register for services in someone else’s name. Not only could an attacker use this to get free services such as data passes in someone else’s name, but they could also impersonate someone else when carrying out illegal activities on the network, they point out:

The results of our work imply that providers can no longer rely on mutual authentication for billing, access control, and legal prosecution.

It wouldn’t necessarily let you into someone’s Gmail, because you might still have strong password protection or, more sensibly, be using MFA. Neither would it let you access someone’s SMS-based 2FA messages, David Rupprecht, one of the report’s authors, told us:

Under the assumption the authentication app is correctly implemented, e.g. uses TLS for the transmission, the attacker can not access that information. Text messages are part of the control plane and are therefore not attackable.

LTE networks use a mechanism called integrity protection, which mutually authenticates a device with the nearby cellular base station using digital keys. The problem is that this integrity protection only applies to control data, which is the data used to set up telephone communications. It doesn’t always apply to the user data, which is the actual content sent between the phone and the base station.

Rupprecht and his colleagues have already proven that they can use this weakness to change data sent between the phone and the base station, redirecting communication to another destination by DNS spoofing. This all happens at layer two of the network stack (the data link layer, which transports data across the physical radio link).
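The reason encryption without integrity protection is malleable can be sketched with a toy counter-mode cipher. This is a stand-in, not LTE’s actual EEA algorithms, and the key, packet contents, and redirect target below are all invented; the point is that flipping ciphertext bits flips the matching plaintext bits, so an attacker who can guess the plaintext layout can rewrite a packet without ever knowing the key:

```python
import hashlib

def ctr_xor(key, nonce, data):
    # Toy CTR-style cipher: like the real LTE user plane it encrypts,
    # but (crucially) it does not authenticate the ciphertext.
    ks = b""
    ctr = 0
    while len(ks) < len(data):
        ks += hashlib.sha256(key + bytes([nonce])
                             + ctr.to_bytes(4, "big")).digest()
        ctr += 1
    return bytes(a ^ b for a, b in zip(data, ks))

key = b"shared-session-key"          # known only to phone and network
packet = b"DNS? www.example.com"     # attacker can guess this layout
wire = ctr_xor(key, 7, packet)       # what actually crosses the air

# Man-in-the-middle rewrite: XOR in (guessed plaintext ^ desired plaintext).
# No key needed -- the keystream cancels out.
target = b"DNS? www.evil-dns.io"     # hypothetical rogue resolver, same length
tampered = bytes(c ^ p ^ t for c, p, t in zip(wire, packet, target))

# The legitimate endpoint decrypts the attacker's chosen text:
print(ctr_xor(key, 7, tampered))
```

Adding a message authentication code (integrity protection) over the user data is exactly what defeats this: the tampered frame would fail verification and be dropped.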

The IMP4GT attack takes this vulnerability a step further by using it to manipulate data at layer three of the LTE network. This is the network layer, handling things like user authentication, tracking devices around the network, and IP traffic.

Rather than just redirecting IP packets, the new attack accesses their payload and also injects arbitrary new packets, giving the researchers control over the mobile session. They do this by mounting a man-in-the-middle attack, impersonating the base station when dealing with the mobile device, and impersonating the mobile device when talking to the base station.

The vulnerability not only affects existing 4G networks but also has implications for the 5G systems that carriers are rolling out. Companies are implementing these systems in two phases, the researchers explain. The first uses dual connectivity, where the phone uses 4G for the control plane and the 5G network for user data. The user channel doesn’t use integrity protection in this case.

The second phase of the rollout, known as the standalone phase, sees 5G networks used for both control and user data. However, this implementation only mandates user data integrity protection on user channel connections up to 64kbit/sec, which is tiny by 5G standards. Integrity protection on user channels with higher speeds than that is optional. The researchers called for mandatory full-rate integrity protection.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/FZepiqGAIX0/

Apple’s iOS pasteboard leaks location data to spy apps

To most iOS users, pasteboard is simply part of the way to copy and paste data from one place to another.

You take a picture, fancy sharing it with friends, and your phone uses the pasteboard to move the image to the desired app.

Now an app developer called Mysk has discovered pasteboard’s dark side – malicious apps could exploit it to work out a user’s location even when that user has locked down app location sharing.

The weakness here is caused by the fact that, unless GPS permissions were refused, images taken with the embedded camera app on iPhones and iPads are saved with embedded GPS metadata recording where each was taken.

In the simplest scenario, an iPhone user would take a photo and copy it between apps using the pasteboard, from which a malicious app could extract the location metadata and compare its timestamp with the current time to determine whether the photo was just taken or is from the past.

Images taken from third-party web sources could be filtered out by comparing aspects of an image’s metadata with the device’s hardware and software properties to detect differences.
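That current-versus-past filtering can be sketched as follows. The metadata field names, device model, and freshness window here are invented stand-ins; a real app would parse the actual EXIF tags (GPSLatitude/GPSLongitude, DateTimeOriginal, Model) out of the pasted image bytes:

```python
from datetime import datetime, timedelta

def looks_like_fresh_own_photo(meta, device_model, now,
                               max_age=timedelta(minutes=10)):
    """Heuristic a snooping app might use on a pasted image's metadata.

    Keep only images that carry GPS data, were taken on this device
    (filtering out third-party web images), and are recent enough that
    the coordinates likely reflect the user's current location.
    """
    if "gps" not in meta:
        return False
    if meta.get("camera_model") != device_model:
        return False
    return now - meta["taken_at"] <= max_age

now = datetime(2020, 2, 27, 12, 0)

# A photo copied moments after being taken: a live location leak.
pasted = {"gps": (37.77, -122.42),
          "camera_model": "iPhone 11",
          "taken_at": datetime(2020, 2, 27, 11, 58)}
print(looks_like_fresh_own_photo(pasted, "iPhone 11", now))  # True

# An old photo from the library: location is stale, so it's filtered out.
old = dict(pasted, taken_at=datetime(2019, 6, 1))
print(looks_like_fresh_own_photo(old, "iPhone 11", now))     # False
```

The same sifting logic generalizes to anything else on the pasteboard, which is why the concern extends beyond location data.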

Although a malicious app should only be able to access pasteboard data while active, Mysk’s bypass was to write a demo app, KlipboardSpy, paired with a foreground widget visible in Today View, to prove the hack worked under real-world conditions. Moreover:

As the pasteboard is designed to store all types of data, the exploit is not only restricted to leaking location information. By gathering all these types of content, a malicious app can covertly build a rich profile for each user, and what it can do with this content is limitless.

That’s not only location data, then, but potentially anything the user has copied into pasteboard, including passwords and bank details.

Is this a bug or a feature?

There was a time when the ability to siphon GPS location history from smartphone images would have sounded of marginal use to a surveillance app. These days, however, image- and data-sharing has exploded, as any visit to social media will attest.

And yet when Mysk reported the issue to Apple, the response was muted:

We submitted this article and source code to Apple on January 2, 2020. After analyzing the submission, Apple informed us that they don’t see an issue with this vulnerability.

Arguably, Apple is correct because the pasteboard is working exactly as intended – it allows users to exchange data within and between applications while the latter are in the foreground.

That is, while it’s true that data can be slurped from the pasteboard in theory, that hypothetical downside is outweighed by the certainty that people need to access copy-and-paste on a daily basis.

Mysk’s view is that Apple could protect the iOS pasteboard by integrating it inside its permissions system, allowing users to grant access one app at a time, or by limiting the time apps can access it to the copy-and-paste action.

Currently, this is a theoretical weakness that, as far as anyone knows, has never been exploited. It’s likely that Apple will patch up this risk at some point as the permissions system inside iOS evolves.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/C0t83agCkD4/