
Peer-to-Peer Vulnerability Exposes Millions of IoT Devices

A flaw in the software used to remotely access cameras and monitoring devices could allow hackers to easily take control of millions of pieces of the IoT.

Software intended to help homeowners be more secure may deliver their security devices into the hands of hackers. That’s the conclusion of research conducted into a variety of IoT devices. 

In a blog post, researcher Paul Marrapese describes the flaw in the peer-to-peer (P2P) functionality of software named iLnkP2P, software developed by Shenzhen Yunni Technology, a Chinese vendor of security cameras, webcams, and other internet-of-things (IoT) monitoring devices.

The software is intended to let device owners view footage and monitor activity from their smart devices over the Internet. However, Marrapese found that it requires no authentication and uses no encryption, a flaw designated CVE-2019-11220.

A blog post at Krebs on Security includes a map showing that the largest share (39%) of affected devices is in China, with 19% in Europe, 7% in the U.S., and the rest scattered around the globe. And while the software was developed by Shenzhen Yunni Technology, scores of different vendors and product lines use it.

According to Marrapese, the vulnerability stems from the “heartbeat” that many P2P apps use to establish communications with their control servers. The heartbeat opens a link to the server from inside the local network, bypassing most firewall restrictions on connections initiated from outside. If attackers can enumerate a device’s UID, which consists of a known alphabetic prefix and a six-digit number, they can use it to establish a direct connection to the device and then own it for any number of malicious purposes. The ease with which this enumeration can be performed is described in CVE-2019-11219.
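To see why such UIDs are guessable, consider the size of the search space. This is a rough back-of-the-envelope sketch in Python; the prefix and the probe rate are illustrative assumptions, not iLnkP2P specifics:

    # A UID of the form <known prefix>-<six digits> has only 10^6
    # possibilities per prefix -- a tiny search space.
    PREFIX = "FFFF"  # hypothetical vendor prefix, for illustration only

    candidates = (f"{PREFIX}-{n:06d}" for n in range(1_000_000))
    print(next(candidates))  # FFFF-000000

    # At an assumed 1,000 probes per second, one prefix falls in ~17 minutes:
    print(1_000_000 / 1_000 / 60, "minutes to sweep one prefix")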

Those malicious purposes may extend beyond simple botnet recruitment or criminal purposes. Adam Meyers, vice president of intelligence at CrowdStrike, points out larger possibilities: “Given the aggressive use by the government of the People’s Republic of China of facial recognition and AI to aid in law enforcement and against political dissidents, anyone using low cost IOT devices communicating back to China should be cautious about how and where they implement this technology.”

“Most IoT devices don’t allow consumers access to modify security settings as they are set by the manufacturer,” says Terence Jackson, CISO at Thycotic. “With the proliferation of IoT, consumers need to demand better security from manufacturers and should exercise due diligence before purchasing and connecting these devices to their networks.”

That “better security” is a feature that most manufacturers in the IoT have yet to adopt, according to Colin Bastable, CEO of Lucy Security. “Security is rarely built into the plan, because convenience, easy deployment, and rapid adoption are essentials, whereas secure code is not even regarded as a ‘nice to have.’”

And the focus on convenience actually works against security, Bastable maintains. “Convenience and insecurity have a symbiotic relationship: there is a strong case for teaching consumers of all ages the basics of security and the risks of fast deployment.”

While it’s the manufacturer’s responsibility to build better security into devices, they are unlikely to do so without a push from consumers. “Until consumers demand better security around their IoT devices, manufacturers won’t have as much of an incentive to build more secure products,” says Nathan Wenzler, senior director of cybersecurity at Moss Adams. And the lack of incentive will have consequences.

“While some companies are getting the message and even building entire messaging strategies around having more secure offerings, the market dictates speed over security, and so we should expect to see more of these kinds of issues in the future, exposing customers and their families to whoever uses these vulnerabilities.”

As for this vulnerability, Marrapese writes that it’s impossible for consumers to disable the vulnerable software on most of these devices. The only real option for affected devices, he writes, is to block outbound traffic on UDP port 32100, which prevents the P2P software from reaching its servers. Better still, he recommends not purchasing any device that features P2P communications as part of its application suite.
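On a Linux-based gateway, that block might look something like this (an illustrative iptables rule; the exact syntax depends on your own firewall):

    # Drop outbound and forwarded UDP traffic to port 32100,
    # the port the iLnkP2P software uses to reach its servers.
    iptables -A OUTPUT  -p udp --dport 32100 -j DROP
    iptables -A FORWARD -p udp --dport 32100 -j DROP

The FORWARD rule matters on a router, since the camera itself, not the firewall host, originates the heartbeat traffic.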



Article source: https://www.darkreading.com/vulnerabilities---threats/peer-to-peer-vulnerability-exposes-millions-of-iot-devices/d/d-id/1334564?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Docker Forces Password Reset for 190,000 Accounts After Breach

Organizations impacted by the breach, which gave attackers illegal access to a database containing sensitive account information, need to check their container images.

The owners of some 190,000 Docker accounts will need to change their passwords and verify their container images haven’t been tampered with as the result of a recent intrusion into a Docker Hub database.

Docker discovered the unauthorized access on April 25. It said it had already notified impacted users about the incident and sent them a password-reset link.

The company said it had also unlinked Docker Hub from GitHub and Bitbucket for those using these external repositories to automatically build — or autobuild — container images. Such users will need to relink their Docker Hub accounts to these repositories in order for autobuild to work properly.

Docker described the intrusion as something that gave attackers a “brief period” of illegal access to a database containing sensitive account information, including usernames and hashed passwords. Also exposed in the breach were tokens that some Docker Hub account owners used to access their repositories on GitHub and Bitbucket. It offered no details on when the breach occurred or how it was discovered.

Docker said the 190,000 accounts that had been impacted in the breach represented less than 5% of the overall number of users of its Hub cloud-based container image repository. “No Official Images have been compromised,” the company said in a FAQ. Docker pointed to several additional security measures it has in place for protecting its Official Images, including GPG (GNU Privacy Guard) signatures and Notary signing for each image.
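Docker Content Trust is the client-side mechanism for checking those Notary signatures. A minimal sketch, using a placeholder image name:

    # With content trust enabled, docker pull verifies the image's
    # Notary signature and refuses unsigned tags.
    export DOCKER_CONTENT_TRUST=1
    docker pull myorg/myimage:latest   # placeholder image name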

Docker Hub is a container image library that developers, software vendors, open source projects, and enterprise software teams use to store and share container images. Many organizations, including large enterprises, use images from the repository to build their containers.

Attacks on the developer pipeline can have a serious impact on application security, says Wei Lien Dang, vice president of product at container security vendor StackRox. “Tainted images can be difficult to detect, and the containers launched from them may even run as expected, except with a malicious process in the background,” Dang says.

Since Docker has so far not provided a timeline for the incident, it is unclear how long the attackers might have had access to the compromised accounts, Dang says. To be safe, users need to go back and verify the integrity of any images they might have pushed out over the past several weeks.

Chris Wysopal, CTO of Veracode, says organizations that have been notified should be reviewing their logs for signs of unauthorized access, especially any write activity. “In this instance, it is critical to review your GitHub logs if you integrated with DockerHub because you will have given DockerHub write access to your repos,” he says.

In addition, importing production images to a private registry instead of pulling directly from a public registry can give enterprises more control and separation from events, such as the one involving Docker Hub, Wysopal says. “You can easily go and look at the timeline of the breach and see if you pulled an image during the period the public registry was compromised,” he says.
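A sketch of that mirroring pattern, with a hypothetical internal registry host:

    # Mirror a vetted public image into an internal registry,
    # then build only from the internal copy.
    docker pull alpine:3.9
    docker tag alpine:3.9 registry.internal.example:5000/base/alpine:3.9
    docker push registry.internal.example:5000/base/alpine:3.9

Pinning builds to image digests rather than tags (the alpine@sha256:… form) adds a further integrity check, since a digest changes whenever the image contents do.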

Without any evidence of confirmed malicious tampering, the main takeaway from this breach for organizations is the need to include supply chain attacks in threat modeling exercises, says Tim Erlin, vice president of product management and strategy at Tripwire.

“Organizations that have considered this type of event in their response plan won’t be panicking when an incident occurs,” Erlin notes. “The key to incident response is to be prepared, and threat modeling allows an organization to identify and prepare for the most relevant threats.”



Article source: https://www.darkreading.com/attacks-breaches/docker-forces-password-reset-for-190000-accounts-after-breach/d/d-id/1334566?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Brit events and info biz Incisive Media admits open server port may have left readers’ deets exposed

Updated UK events and publishing outfit Incisive Media today urged subscribers to change their account passwords after it found an open port on a server had left it exposed to a buffer overflow or another remotely exploitable vuln.

“We are sorry to inform you of a potential breach of security that may have resulted in the unauthorised disclosure of your log-in details to CRN,” the company stated in an email seen by The Reg.

Incisive owns a bunch of mags across the pensions and benefits, financial services, and enterprise tech landscapes, and other titles – including Computing – are also believed to be caught up in the security snafu.

As a background to the breach, the mail revealed:

“One of our service providers stored your login details for your CRN account on their server. While this server was not publicly listed, there is an open server port which made your information vulnerable for a short period this year.”

The login details stored included the customer’s name, email address, and password in encrypted form; “no one has access to any other personal data from this breach”, the mail reassured.

“Our partners have removed the information from that server and have undertaken a full audit and introduced additional steps to ensure your data is not accessible. We have reset all passwords and you will be asked to enter new login details when you next login,” it added.

Though Incisive said it did not believe the “data breach” met the threshold to be reported to the Information Commissioner’s Office, it had informed the UK’s data watchdog anyway.

The publisher signed off with the obligatory paragraph about how it takes customers’ “data security and protection very seriously”. It also passed on the mail address of an in-house General Data Protection Regulation project manager.

We have contacted the ICO and Incisive for comment.

Updated

An ICO spokesperson said: “Incisive Media has made us aware of an incident and we are making enquiries.” ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/04/29/incisive_media_admits_that_open_server_port_may_have_left_readers_deets_exposed/

Learn to Defend Against HTTP Desync Attacks at Black Hat USA

Save the Date: Black Hat USA returns to the Mandalay Bay in Las Vegas August 3-8.

Black Hat USA is returning to the Mandalay Bay Convention Center in Las Vegas this August, and it’s already shaping up to be one of our best events yet!

This is a premier opportunity to learn about the latest cybersecurity threats, research, and trends firsthand. In a newly confirmed Black Hat USA Briefing, HTTP Desync Attacks: Smashing into the Cell Next Door, security researcher James Kettle will introduce techniques that remote, unauthenticated attackers can use to splice their HTTP requests into those of other users.

Using examples from his own case studies, Kettle will show you how attackers delicately amend victims’ requests to route them into malicious territory, invoke harmful responses, and steal credentials. Although the underlying technique was documented over a decade ago, Kettle believes it is an attack for which the Internet is unprepared. If you come to this Briefing, he’ll help you tackle this legacy threat by sharing a refined methodology and open source tooling for black-box detection, assessment, and exploitation with minimal risk of collateral damage. These will be developed from core concepts, ensuring you leave equipped to devise your own desync techniques and tailor (or thwart) attacks against your target of choice.
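For a flavor of what that splicing looks like: request smuggling classically exploits disagreement between front-end and back-end servers about where a request ends. A minimal illustration of one well-documented variant (CL.TE), with a placeholder host:

    POST / HTTP/1.1
    Host: example.com
    Content-Length: 13
    Transfer-Encoding: chunked

    0

    SMUGGLED

A front end that honours Content-Length forwards all 13 body bytes; a back end that honours Transfer-Encoding sees the chunked body end at the “0” and treats “SMUGGLED” as the start of the next request on the connection, letting an attacker prefix another user’s request.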

You can find more details about this Briefing and many others over on the Black Hat USA Briefings page, which is regularly updated with new content.

Black Hat USA will return to the Mandalay Bay in Las Vegas August 3-8, 2019. For more information on what’s happening at the event and how to register, check out the Black Hat website.

Article source: https://www.darkreading.com/black-hat/learn-to-defend-against-http-desync-attacks-at-black-hat-usa/d/d-id/1334551?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

A Rear-View Look at GDPR: Compliance Has No Brakes

With a year of Europe’s General Data Protection Regulation under our belt, what have we learned?

There is no denying the impact of the European Union General Data Protection Regulation (GDPR), which went into effect on May 25, 2018. We were all witness — or victim — to the flurry of updated privacy policy emails and cookie consent banners that descended upon us. It was such a zeitgeist moment that “we’ve updated our privacy policy” became a punchline.

Pragmatically, the GDPR will serve as a catalyst for a new wave of privacy regulations worldwide — as we have already seen with the California Consumer Privacy Act (CCPA) and an approaching wave of state-level regulation from Washington, Hawaii, Massachusetts, New Mexico, Rhode Island, and Maryland.

GDPR has been a boon for technology vendors and legal counsel: A PricewaterhouseCoopers survey indicates that GDPR budgets have topped $10 million for 40% of respondents. A majority of businesses are realizing that there are benefits to remediation beyond compliance, according to a survey by Deloitte. CSOs are happy to use privacy regulations as evidence in support of stronger data protection, CIOs can rethink the way they architect their data, and CMOs can build stronger bonds of trust with their customers.

But it is not a rose-tinted vision for everyone. GDPR fines are no paper tiger. France levied a stunning $57 million fine against Google for its GDPR violations. Even Ireland, long viewed as a technology safe haven, has experienced a 100% increase in privacy complaints since May 25, 2018.

The complexity of GDPR has caused some unintended side effects. According to Jeff South, a journalism professor at Virginia Commonwealth University, writing for Nieman Lab, nearly a third of the largest US news sites chose to block visitors from the EU rather than grapple with the GDPR, as they struggled to implement compliance solutions. Many companies struggled with GDPR compliance over the past year, and many continue to do so; I speak with them regularly. Below, I share a few of the lessons I’ve learned from them.

Compliance Is a Journey, Not a Destination
One frequent complaint is the unexpected ongoing costs for sustained compliance, even after the initial stand-up costs. Anecdotally, we all recognize the effort that companies put into updating their privacy policies and consent management banners before May 25, 2018. But this sort of compliance is only step one: readiness.

Sustained compliance is much more difficult to achieve. Dynamic business systems require new processes that evolve with the changing legal landscape; the volume of manual work involved is often overlooked.

The source of this challenge is often marketing. The depth and breadth of modern marketing solutions, as illustrated by the Luma Partners marketing map, is only the tip of the iceberg. It is not uncommon for a Fortune 500 company to run more than 100 of these solutions, each storing personal data and operating independently of the others. What happens when a data subject exercises his or her right to be deleted from these systems?

Privacy policies and cookie banners are incapable of processing data subject access requests. It takes an entire team of professionals, each assigned as owners of specific systems, to ensure a requester’s data is deleted. And it isn’t enough to simply delete the data from the service (a soft delete); these teams often need to email their processors to ensure this data is deleted from their subprocessors as well (a hard delete). Not only is this a tedious manual process (and expensive if your privacy professionals are lawyers), but like any manual process it is also error prone. If Amazon, which last year failed to disclose when a customer’s Alexa recordings were accidentally sent to a complete stranger, is not safe from these errors, who is?
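That fan-out across system owners is exactly the kind of work privacy teams end up automating. A minimal sketch of the idea in Python, where the system list, connector functions, and audit trail are all hypothetical:

    # Hypothetical deletion-request fan-out across registered systems.
    # Each connector is assumed to expose a delete(email) call; real systems
    # (CRMs, analytics, email platforms) each need their own integration.
    from datetime import datetime, timezone

    def delete_from_crm(email): ...            # placeholder connectors
    def delete_from_analytics(email): ...
    def delete_from_email_platform(email): ...

    SYSTEMS = {
        "crm": delete_from_crm,
        "analytics": delete_from_analytics,
        "email_platform": delete_from_email_platform,
    }

    def process_deletion_request(email):
        audit = []
        for name, delete in SYSTEMS.items():
            delete(email)  # soft delete; subprocessors still need hard-delete follow-up
            audit.append((name, email, datetime.now(timezone.utc).isoformat()))
        return audit  # per-system evidence the request was honored

A real pipeline would also record the hard-delete confirmations that come back from each processor’s subprocessors.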

The Map Is Not the Territory
Data inventories and data maps serve as the underlying foundation to process privacy requests, informing privacy teams of which systems contain personal data and where. But again, it is a tedious manual process to develop these data inventories. Many privacy management solutions still rely on manual surveys to determine who owns the data, the purpose of its collection, what type of data it is, and so forth.

And the reality is that these static data maps are just a snapshot. As quickly as they are created, they can become outdated. To sustain compliance, companies need a process to update these data inventories as new systems are purchased.

You Can Run from GDPR, but You Can’t Hide from CCPA
There were a lot of companies that were able to ignore GDPR compliance. Domestic businesses or chain stores often had no need to comply. Others changed their business model, such as those news sites that blocked access to the EU. And still others took a wait-and-see approach. But the reality is that GDPR is just the beginning — the deadline for the California Consumer Privacy Act is less than nine months away, January 1, 2020 — and there are many other states considering similar privacy laws. If there is a lesson we have learned from one year of GDPR, it is that companies need to start planning for privacy regulations today because it can take up to a year to fully prepare. In the words of Ruby Zefo, Uber’s chief privacy officer, GDPR compliance is like raising a baby: “Whether you think it is attractive or not is up to you, but you still need to take care of it.”



Article source: https://www.darkreading.com/risk/a-rear-view-look-at-gdpr-compliance-has-no-brakes/a/d-id/1334491?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Piracy streaming apps are stuffed with malware

Does the offer to “Never pay for cable again” sound tantalizing?

It shouldn’t. It should sound abhorrent, not only because piracy is illegal and unfair to content creators, but also because researchers have found that pirated streaming devices are stuffed with malware and/or open the door for it to come streaming in.

According to a report published on Thursday, researchers have found that many of the devices are rigged with malware, be it in preinstalled apps or in apps added later.

In order to assess the streaming piracy ecosystem, researchers from cybersecurity firm Dark Wolfe Consulting and the Digital Citizens Alliance (DCA) – a consumer-focused group devoted to making the internet safer –  picked up six streaming devices that use the Kodi platform.

Kodi’s a free, open-source media player… one that comes in handy to tweak and add to piracy streaming devices. Of the Kodi devices the researchers checked out, they found that 70% were repurposed or loaded with apps that access unlicensed content.

These devices are bought by people who’d rather not pay for content and who might not be aware of the risks they’re taking when they plug them into their home or work networks. That’s a lot of people: the researchers noted that as of December, there were about 12 million active users of the app repository “TV Addons,” which runs on Kodi.

The devices are dirt cheap compared with a legit Apple TV or Roku streaming device plus subscriptions to shows from the likes of Netflix, Hulu, or HBO. The Kodi devices – sometimes called “Kodi boxes” or “jailbroken Fire TV Sticks” – look and act like the bona fide streaming devices. You can pick them up both on underground Dark Web markets and on the sunny side of the street in places like Facebook Marketplace, Craigslist, or eBay, for a one-time fee of $75 to $100.

That will get you access to what the researchers say is a burgeoning range of pirated content, including the latest movies – even while they’re still in theaters – or live events such as pay-per-view boxing matches or elite soccer games. The report includes a screenshot of one piracy app, Exodus Redux, that was offering movies such as Aquaman a full week before it was released in December.

Into the Spider-Verse, or into a world of e-hurt?

The researchers said that what most users don’t realize is that plugging in one of these devices into their home network is like pulling a Trojan horse in through the front door: the devices enable hackers to bypass the security of the home network’s router firewall, for example. Any apps already on the box or later downloaded can unleash malware, all under the guise of “free” content.

The devices are easy for hackers to exploit for a few reasons: first, they’re hooked into the home network and bypass the router’s security. Second, normal security protections are typically not installed or are disabled to accommodate piracy-streaming apps. On Android devices, for example, disabling security features opens a specific port to the internet that botnets routinely scan for, leaving the devices open for hackers to target and infect.

As well, users often have to grant the apps full admin access, including permission to read the device’s entire memory and its location, overriding other security protections. In other words, users hand over the keys to the kingdom.

Home very much not Alone

Over the course of 500 hours of lab testing, the researchers experienced these and other security risks, they said:

  • As soon as a researcher downloaded the ad-supported illicit movie and live sports streaming app Mobdro, malware within the app forwarded the researcher’s Wi-Fi network name and password to a server that appeared to be in Indonesia.
  • Malware probed the researchers’ network, searching for vulnerabilities that would enable it to access files and other devices. The malware uploaded, without permission, 1.5 terabytes of data from the researcher’s device.
  • Mobdro sought access to media content and other legitimate apps on the researcher’s network.
  • In one scheme, crooks posed as well-known streaming sites, such as Netflix, to illegally use an actual, paying Netflix customer’s legitimate subscription.

The cybersecurity firm GroupSense assisted by infiltrating Dark Web chatrooms where they found hackers sussing out how to exploit vulnerabilities inherent in the pirate apps, as well as how to use malware to snare the devices into a botnet to use in cyber attacks or for cryptomining. Other chats were about how to get at information stored on the devices, such as photographs, passwords, and credit cards.

The possibilities for mischief and mayhem are manifold, states the report:

Given that users rarely install anti-virus tools on such devices, the opportunities for exploitation are numerous.

Arrrrr, ouch!

The takeaway: Digital pirates might think that ripped-off media is free, but it’s no bargain at all when you consider these serious risks.

The researchers want to see these steps taken to reduce those security risks:

  • Law enforcement should prioritize the investigation and prosecution of these criminal networks.
  • Consumer protection agencies, both at the federal and state level, should warn consumers about the risks that illicit devices and piracy apps pose to their security and to their home devices.
  • Government agencies and corporations should warn employees of the potential risks of using these devices over their networks, so they don’t become a pathway to gain access to networks or steal sensitive information.
  • Digital marketplaces such as eBay, Craigslist, and Facebook Marketplace should ban the sale of piracy devices.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/M5MCGSPBT8Y/

Cops need warrant for both location history and phone pinging, says judge

As we all should know full well by now, location data from our phones can reveal our every move – where and when and with whom we live, socialize, visit, vacation, worship; our trips to an emergency room or family planning clinic; and much more.

Whether law enforcement gets that intimate portrait of our lives from real-time cell phone location data handed over by a phone company or from a cell-site simulator like a stingray makes no difference. Either way, police need a warrant, the Massachusetts Supreme Judicial Court ruled on Tuesday.

The Electronic Frontier Foundation (EFF) is calling this an important win in the ongoing debate about location privacy and the wealth of records stored by third parties – one that could play a role beyond Massachusetts.

This is one of the first decisions to grapple with the scope of the Carpenter ruling – which held that law enforcement needs a warrant for location data – but it won’t be the last.

As it is, the EFF said, momentum is growing at the state level, with legislation pending in both Maryland and Wisconsin that would require police to get a warrant for location data. From the EFF:

[The Massachusetts decision will] hopefully turn the tide on pending court cases looking at the issue.

Commonwealth of Massachusetts v. Almonor

The case in question is Commonwealth of Massachusetts v. Almonor, and it concerns police having ordered cellphone service provider Sprint to ping the phone of a murder suspect without a warrant. In fact, police got two weeks’ worth of the suspect’s historical cell records.

They didn’t get a search warrant that was supported by probable cause. Rather, they relied on federal law. But now, years after the conviction, a different judge has found that the warrantless data grab violated the Massachusetts state constitution. The judge’s reasoning: the state’s constitution says that people have an expectation of privacy in their movements, regardless of the fact that the records were owned by Sprint.

The EFF filed an amicus brief to back up that decision when the state appealed to the Massachusetts Supreme Judicial Court. Along with Kit Walsh of the Berkman Center for Internet and Society at Harvard Law School, the EFF argued that a search warrant is needed before police can track people’s location for an extended period of time.

Details of the Almonor case

In 2012, police had learned the phone number of the suspect, then 24-year-old Jerome Almonor, within about four hours of a murder being committed with a sawed-off shotgun. With the phone number in hand, police next contacted Sprint to ask for Almonor’s phone’s real-time location pursuant to a “mandatory information for exigent circumstances requests” form – i.e., when there’s a threat of imminent harm or the need to protect evidence or property from being destroyed that justifies police acting without a warrant.

Pinging the mobile phone meant surreptitiously accessing its GPS functions so that it sent its coordinates back to the phone carrier and the police. With that real-time location data in hand, police pinpointed Almonor and tracked him down to a bedroom in a private home in Brockton, Massachusetts.

The state’s argument: it could warrantlessly get cell phone location data to find anyone, anytime, at any place, as long as the data was less than six hours old. That six-hour window harks back to an earlier case, Commonwealth v. Augustine, in which the court noted that individuals have an expectation of privacy… at least when the location data covers more than six hours, as it said in a footnote:

It would be reasonable to assume that a request for historical [cell site location information, or CSLI] of the type at issue in this case for a period of six hours or less would not require the police to obtain a search warrant.

Last week’s decision to reject the “it’s OK if it’s less than six hours” rationale is “heartening news” for location privacy, the EFF said. Absent exigent circumstances, the court held, the police must get a warrant.

This goes beyond Carpenter v. United States

The decision referenced another important, recent decision in the case of phone location data: that of Carpenter v. United States.

In June 2017, the US Supreme Court agreed to take up the case of Timothy Ivory Carpenter: a man who’d been sentenced to 116 years in jail for robbing six cellular telephone stores at gunpoint. The case against him was made at least partly on the basis of months of cellphone location records, turned over without a warrant, which prosecutors said placed Carpenter’s phone within a half mile to two miles of the scenes of the crimes.

In taking the case, the Supreme Court confronted a slew of questions arising from the modern era of ubiquitous cellphone usage. Carpenter v. United States is one of many cases in which police have used cellphone-derived location data to pinpoint suspects’ whereabouts and whenabouts.

Did the warrantless search violate Carpenter’s Fourth Amendment protection against unreasonable search?

The answer: Yup. That was the decision handed down in June 2018, when the Supreme Court, in a 5-4 decision, ruled that US law enforcement needs a warrant before accessing cellphone location data.

Cellular carriers track our phones’ locations using built-in GPS data or by triangulating between nearby cellphone towers; they log that data and can use it to paint a detailed picture of where you were, when, and with whom you talked. The court compared the records to an ankle monitor. Before the June 2018 Carpenter decision, law enforcement officials could check up on anyone they wanted.

Prior to that ruling, the FBI was legally allowed to access this information under a 1994 amendment to the 1986 Stored Communications Act, which enables a judge to grant a court order for access to records if prosecutors can prove that they are “relevant and material to an ongoing investigation”.

So yes, Carpenter is considered a win for location data privacy rights. But the Massachusetts decision from last week addressed an even bigger issue: it goes beyond just getting at our location data, and instead concerns tinkering with our phones in order to track us. Here’s from the Almonor decision:

Manipulating our phones for the purpose of identifying and tracking our personal location presents an even greater intrusion [than that of accessing the historical location data that was at issue in Carpenter]. … by causing the defendant’s cell phone to reveal its real-time location, the Commonwealth intruded on the defendant’s reasonable expectation of privacy in the real-time location of his cell phone.

As it is, the court noted, the state’s approach would produce “perverse results” and “unduly burden cell phone users,” given that phones are an “indispensable part of modern life.”

Under the Commonwealth’s approach, individuals would face an impossible choice: either reject owning a cell phone, keep it turned off virtually all the time, or subject yourself to warrantless government monitoring at six-hour intervals of the government’s choosing. But this is no choice at all.

Article 14 [of Massachusetts’s state constitution] and the Fourth Amendment exist precisely to protect the privacy interests of people when they are within society, not to require them to opt out of participating in society.

The EFF says that it’s “heartened” at how the Massachusetts court handled these issues, but the fight to protect location data privacy is far from over. As it is, there are multiple cases pending, all of which will also grapple with the scope of the Carpenter ruling, such as State of Maine v. O’Donnell, which concerns how police track and locate people in real time.

The EFF says it continues to actively track O’Donnell as well as other cases playing out across the country.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/JXBJG_NoD3o/

Cryptocurrency giants in $850m fraud allegations

The New York Attorney General has accused cryptocurrency exchange Bitfinex and cryptocurrency Tether of an $850m fraud.

The state’s Attorney General, Letitia James, obtained a court order last week directing iFinex, which operates Bitfinex and Tether, to turn over financial documents within 30 days. In a separate legal filing, she said Bitfinex’s operators also control the cryptocurrency, and accused the exchange of covering up the loss of $850m to a company in Panama.

Tether has called itself a stablecoin, which is a cryptocurrency pegged to a stable asset to minimize price volatility. Stablecoins are supposed to be stable enough to use as currencies, as opposed to wildly volatile cryptocurrencies like Bitcoin, which have become speculative assets. In Tether’s case, one Tether is supposed to be worth one US dollar, and it originally claimed to hold enough US dollars to cover all the Tether cryptocurrency that it has issued.

According to James, Bitfinex handed over $850m to Panamanian company Crypto Capital Corp. There was no written contract between the two companies, and Bitfinex lost access to the money, which commingled corporate and client funds. She said:

In order to fill the gap, executives of Bitfinex and Tether engaged in a series of conflicted corporate transactions whereby Bitfinex gave itself access to up to $900 million of Tether’s cash reserves.

James said that Bitfinex has taken at least $700m from Tether’s reserves already.

Bitfinex had facilitated nearly $6.8bn in cryptocurrency trades over the previous 30 days, according to CoinMarketCap.

The New York Attorney General alleges that Bitfinex was having trouble honouring customers’ withdrawal requests because Crypto Capital refused to process them or return the company’s funds. The filing says the exchange was desperately trying to retrieve funds from the Panamanian company while simultaneously posting statements assuring the market that everything was fine.

A defiant Bitfinex released a statement challenging the New York Attorney General’s accusations, arguing that the court filings were an overreach, written “in bad faith”:

We have been informed that these Crypto Capital amounts are not lost but have been, in fact, seized and safeguarded. We are and have been actively working to exercise our rights and remedies and get those funds released.

According to the New York Attorney General’s filing, documents and statements provided by lawyers for Tether and Bitfinex showed that the companies did not believe the Crypto Capital funds had been seized, and that they were worried the company was committing fraud.

Past controversies

Bitfinex has a long-standing relationship with Tether, sharing the same CEO, JL van der Velde. Both companies have weathered several controversies.

April 2017 saw Bitfinex lose its ability to transfer fiat currency using Wells Fargo, after the bank moved to stop wire transfers in its name. It then reportedly used Crypto Capital Corp as a stopgap until Puerto Rico-based bank Noble, a subsidiary of New York-based Noble Markets, stepped in. The New York Attorney General’s filing says that iFinex ended its relationship with Noble in October 2018.

In December 2017, the US Commodity Futures Trading Commission subpoenaed Bitfinex and Tether to find out more about their operations, concerned that Tether didn’t have enough reserve fiat funds to cover the cryptocurrency that it had issued, which exceeded $2bn.

In June 2018, University of Texas professor John Griffin published a paper with co-author Amin Shams, alleging that traders were using Tether to influence bitcoin prices. Its pegged status allowed people to substitute it for dollars, the paper said, with the added benefit that it didn’t need a banking relationship to trade. It said:

Entities associated with the Bitfinex exchange use Tether to purchase Bitcoin when prices are falling. Such price supporting activities are successful, as Bitcoin prices rise following the periods of intervention.

Van der Velde replied at the time that Tether issuances could not be used to prop up Bitcoin prices on Bitfinex.

 

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/T0Y6oajiB7E/

NIST tool boosts chances of finding dangerous software flaws

After more than 20 years of steady improvement, the US National Institute of Standards and Technology (NIST) thinks it has reached an important milestone with something called Combinatorial Coverage Measurement (CCM).

Part of a research toolkit called Automated Combinatorial Testing for Software (ACTS), CCM is an algorithmic approach used to test software for interactions between input variables that might cause unexpected failures.

It sounds like a technical mouthful, but this is good news for software, especially when it’s inside complex systems such as aircraft, cars and power plants where these sorts of problems could be life-threatening.

Typically, this means software that takes inputs from arrays of sensors, where particular combinations of readings, for instance temperature, pressure, and altitude, generate unexpected conflicts the software can’t resolve.

Designers try to counteract these problems by modelling as many interactions as they can before the software is used in the real world, which is where ACTS and CCM come in.

But there’s always been a problem – modelling enough interactions from enough variables to spot all the possible combinations that might lead to an issue.

This has been improving since the late 1990s when the idea got off the ground, most recently during a revision to the ACTS toolkit in 2015.

Now, in collaboration with the University of Texas, Austria’s SBA Research, and Adobe (one of several big companies using the toolkit), NIST thinks the 2019 revision of CCM represents a leap forward.

NIST mathematician Raghu Kacker said of the difficulties of testing complex software:

Before we revised CCM, it was difficult to test software that handled thousands of variables thoroughly. That limitation is a problem for complex modern software of the sort that is used in passenger airliners and nuclear power plants, because it’s not just highly configurable, it’s also life critical. People’s lives and health are depending on it.

With the help of a new algorithm developed by SBA Research, NIST’s tool has gone from modelling a few hundred variables to as many as 2,000, using five-way combinations of inputs.
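To see why thoroughness is so hard here, count what that coverage implies. A short Python illustration (the variable names and binary values are assumptions for the example):

    # Number of distinct 5-variable subsets among 2,000 input variables:
    import math
    from itertools import combinations, product

    print(math.comb(2000, 5))  # about 2.65e14 subsets, before value choices

    # What 2-way ("pairwise") coverage must exercise for a few binary inputs:
    variables = ["temp_high", "pressure_low", "altitude_ok"]
    for a, b in combinations(variables, 2):       # every pair of variables
        for va, vb in product([0, 1], repeat=2):  # every value combination
            print(f"tests must cover: {a}={va}, {b}={vb}")

Each added variable and each higher interaction strength multiplies the combinations a test suite must cover, which is what the CCM algorithms are designed to tame.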

Although the new algorithm is not yet an official part of the tool, developers can request it. NIST computer scientist Richard Kuhn said:

The collaboration has shown that we can handle larger classes of problems now. We can apply this method to more applications and systems that previously were too hard to handle.

Not far from the surface of this development is the problem of cost – how much time and effort should developers spend removing bugs from their software?

NIST’s hope must be that anything that can remove more bugs for the same effort is going to have a positive effect on security and reliability.

Unfortunately, as helpful as CCM might be, its effectiveness must now be measured against the rising complexity of software systems that are acquiring once unimagined capabilities, such as automation.

There is an expanding range of commercial products that want to help solve this problem. The investment NIST is making in ACTS and CCM suggests there is still plenty of room for a toolset that everyone can use.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/z8PSOg0wNKM/

Powershell, the Gandcrab infection and the long-forgotten server

CyberUK 2019 If your hair isn’t already grey enough, GCHQ staff have revealed a handful of infosec incidents that, in their words, “surprised us”.

During a talk at CyberUK 2019, the annual shindig of the spy agency’s public-facing offshoot, the National Cyber Security Centre (NCSC), a bespectacled and bearded chap who was introduced only as “Toby L” told an enthralled audience one of his “favourite war stories”.

The NCSC is part of GCHQ’s drive since 2013 to rebuild public trust and convince industry that the government is also interested in its economic wellbeing. As part of that, the NCSC occasionally gets called in to help with particularly pernickety problems involving malware infections on corporate networks.

“This specific instance of Gandcrab was not the most exciting,” said “Toby”. “It’s ransomware that’s relatively well-understood in the community. It’s relatively easy to recover from one of those compromises. It’s not the ransomware that’s interesting, though, but how it got where it was.”

A look over the company’s logs revealed that Gandcrab had been introduced via a download from Pastebin – a Base64-encoded binary summoned through a Powershell command, no less.

“Base64 on Powershell is a perfectly legit function. It provides a mechanism to run your Powershell scripts without a dedicated commandlet or a dedicated script file,” commented Toby. “It does look pretty weird and dodgy when you’ve got encoded commands being sent to Powershell but it’s a legitimate use of it.”
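For context, PowerShell’s -EncodedCommand flag takes a Base64 string of a UTF-16LE-encoded script, which is why defenders often see long Base64 blobs in process logs. A minimal sketch in Python of how such a payload is constructed (the one-liner here is a harmless placeholder):

    # Build a PowerShell -EncodedCommand argument from a script string.
    # PowerShell expects Base64 over UTF-16LE ("Unicode") bytes.
    import base64

    script = "Write-Output 'hello'"   # harmless placeholder one-liner
    encoded = base64.b64encode(script.encode("utf-16-le")).decode("ascii")
    print(f"powershell.exe -EncodedCommand {encoded}")

The same encoding serves legitimate automation and malware alike, which is why the encoding alone isn’t a reliable indicator of compromise.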

What ran Powershell, then? NCSC traced that back to a file called agentmon.exe – and this was where it got interesting.

“It’s a legit file from Kaseya, an IT vendor. They sell products that allow remote management monitoring. If you have an outsourced IT vendor managing your network remotely from their office somewhere in the world, chances are they use a tool like Kaseya. They log into your network, access controls on RDP [remote desktop], SSH, whatever you might be using. Coming in through a backdoor if you like, a custom protocol.”

Agentmon.exe was running on that particular network with system-level privs, deploying Powershell commands across it daily – “adding to the noise ratio a bit”, as Toby put it. Digging into Kaseya’s logs revealed something even more intriguing: the server issuing the commands had been run by an outsourced IT provider, and that provider’s contract with the infected company had ended quite a while ago.

CVE-2017-18362 explained half the story. The critical vuln allows anyone with access to the Kaseya server’s ManagedIT.asmx page through its web interface to execute arbitrary SQL queries. As Toby put it: “No whitelisting, no blacklisting, no password entry… send SQL commands and HTTP POST and it’ll just run it.”

But Powershell? Easy if you know about CVE-2018-20753, which allows (yup, you guessed it) unprivileged remote attackers to execute Powershell payloads on all managed devices.

When the external MSP’s contract ended, post-contract cleanup by both parties hadn’t included removing the Kaseya deployment from devices across the network. Left unpatched, unoperated and unremembered, the Kaseya server had been found by a baddie who’d made use of two relatively old CVEs to compromise the entire network and distribute ransomware across it.

“One of those weird tech issues,” shrugged Toby.

Malwareless malignancy

Another NCSC bod, “Harry W”, just as descript as his colleague Toby, took to the podium.

An executive at a company got a WhatsApp invite to a conference call from a personal assistant. After establishing comms, the PA said: “We use Viber [another cross-platform VoIP app] now.”

“It’s a legitimate product,” said Harry. “A perfectly fine messaging platform.”

The PA sent the exec a crafted link. But this was no URL made up of dodgy ASCII lookalike characters, or substituting 1 for I, or even to a suspect domain. This was a URL to viber.com/activate-secondary/[string of random characters].

Activate secondary is a very specific feature of Viber,” said Harry. “It allows you to sync your Viber accounts on one device with another; for example, a desktop. You can sync messages between different systems. Make phone calls off one and send messages off the other. Enables users to use it a bit more flexibly.”

A useful feature, then. For both good and bad users.

“What we saw was a number of individuals in this organisation who did click this link,” said Harry. “They would activate secondary, paired their device with another – I think it was on the African continent somewhere. What happens? Full address book popped over. Not just the Viber address book; the entire address book on the phone. You might have personal info on there about job titles. Makes the next pivot for social engineering that bit easier.”

It gets worse.

“If you’ve paired your device, it also allows you to spoof phone calls as the person you’ve just synced with,” revealed Harry. “The attacker can now use your caller ID. If you’ve got a credit card synced, you can make Viber-to-landline calls – much more lucrative! You can also, rather than just ringing friends, ring your own 091 hotline number to bring in extra money.”

“Malwareless,” he continued. “It wasn’t using anything particularly sophisticated, but still, a really interesting channel in which they were trying to get that extra information from targets.”

As he pointed out, no mobile device management platform would have picked up this attack; Viber is a legitimate messaging app present in the various app stores. “Again, when you’ve got legit functions from an app, how can you monitor and detect those?”

Security locking out your attack? Never mind – just enrol for a company VPN

Another cautionary tale came from an attack by Iran-based hacking crew APT35, aka “Newscaster”.

The targeted company had been hit a couple of months previously. Diligently, the firm did all the right things: beefed up security, briefed non-techie staff on things to do and not to do, and the rest of it. All exactly by the book.

Except for their VPN implementation.

Users across the company were told to register for access to the new company-wide VPN. No VPN, no access to business-critical stuff. The VPN itself was protected by 2FA.

“What they had missed out was the enrolment to the 2FA VPN authentication was initiated by the users,” said the NCSC bod. “Not all users, particularly those based in offices, need 2FA. So they didn’t enrol, right? All the [threat] actor did was target a bunch of accounts, figure out a user who wasn’t enrolled and then enrolled themselves instead.”

The hackers were literally signing themselves up to the new access-all-areas corporate VPN. They were eventually discovered and shut out again, but not before they had gained access to a critical proof-of-concept deployment where access had been locked down to just four specific accounts.


As well as telling some fascinating stories, the point of this session was a bit wider: NCSC, even as an arm of GCHQ, is getting out there and helping industry with its infosec headaches. While some chunks of industry will doubtless need a lot more convincing than this to give people from a state spy agency access to their sensitive internal networks, it’s a real shift from the days of 2013 and the Snowden revelations. ®

Have you seen any infosec incidents that made the hair on the back of your neck stand up? Tell us in the comments.


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/04/29/surprising_infosec_stories_from_ncsc/