STE WILLIAMS

BlueKeep Exploits Appear as Security Firms Continue to Worry About Cyberattack

The lack of an attack has puzzled some security experts, but the general advice remains that companies should patch their vulnerable systems more quickly.

When Microsoft originally issued an alert for a remotely exploitable software flaw in mid-May, security firms immediately drew analogies between the danger posed by the so-called “BlueKeep” vulnerability and the destruction caused by the EternalBlue exploit, reportedly stolen from the National Security Agency and used to enable the pernicious WannaCry worm to spread in 2017.

Within a week, companies reported they had created working exploits for the flaw. Researchers from McAfee, for example, analyzed the patch and created a proof-of-concept exploit that could launch an application on a targeted computer. In early July, Sophos showed off an exploit that compromises systems using a fileless attack.

However, the massive cyberattack forecast by security firms — and worried over by Microsoft, the US Department of Homeland Security, and others — has failed to materialize. The lack of a public exploit is a major reason, as is the difficulty of writing one from scratch, says David Aitel, chief security technology officer at Cyxtera, which last week announced it had incorporated a complete exploit for the BlueKeep vulnerability into its penetration-testing product, Canvas.

“It is not trivial,” he says.

Eleven weeks after Microsoft announced it had patched the critical software issue, the lack of a public exploit for BlueKeep — and of any resulting attack — continues to puzzle some security professionals. BlueKeep (CVE-2019-0708), a vulnerability in the way older versions of Windows handle Remote Desktop Protocol (RDP) messages, can allow an attacker to run code on systems with the service accessible from the Internet.
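For defenders, the first question is simply whether RDP is reachable from outside at all. A minimal sketch of that exposure check follows; the `rdp_port_open` helper is invented for illustration and tests only TCP reachability of the RDP port, not the CVE-2019-0708 flaw itself.

```python
import socket

def rdp_port_open(host, port=3389, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds.

    This only tests network exposure of the RDP port; it does not
    probe for the BlueKeep (CVE-2019-0708) vulnerability itself.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

A host that answers here is not necessarily vulnerable, but a host that does not answer cannot be attacked over the Internet via this flaw, which is why firewalling RDP is the standard stopgap alongside patching.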

Yet, while a catastrophic worm is the obvious threat, other, more subtle dangers exist as well, says Dan Dahlberg, director of security research at BitSight.

“You think of the activities of the sorts of people trying to take advantage of this vulnerability for nefarious purposes — there are people who are less experienced, who would likely turn it into a worm,” he says. “But there are other actors who might utilize this vulnerability in a much more stealthy manner, and that is going to be much harder to detect.”

In early July, BitSight found that some 800,000 computers still exhibited external signs of vulnerability to BlueKeep. About 5,000 systems are patched daily, Dahlberg says. 
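Taken at face value, BitSight's two figures imply a long tail of exposure. A back-of-the-envelope calculation, assuming the daily patch rate holds constant and no newly exposed hosts appear:

```python
exposed_hosts = 800_000   # BitSight's early-July estimate
patched_per_day = 5_000   # reported daily patching rate

# At a constant rate, clearing the backlog would take 160 days,
# leaving a window of several more months for any future exploit.
days_to_clear = exposed_hosts // patched_per_day
print(days_to_clear)  # 160
```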

Dahlberg and other security experts have urged companies to continue patching. Microsoft issued updates for a variety of its platforms — not only for Windows 7 and Windows Server 2008, the core systems affected by the issue, but also for Windows XP and Windows 2000, both of which the company has otherwise stopped supporting.

Microsoft also has published two blog posts recommending that customers apply the updates as soon as possible. 

“It is possible that we won’t see this vulnerability incorporated into malware,” the company said. “But that’s not the way to bet.”

Typically, attacks skyrocket after a public exploit. In 2012, Symantec researchers analyzed malware for the use of previously unknown exploits, so-called “zero days.” The company found that, of 18 exploits used in malware, 11 had not been known at the time the malware initially infected systems. Yet once the exploits became public, use of the attacks jumped by a factor of 100,000 in some cases.

The lack of a public exploit may explain why there has been no catastrophic attack, because those groups that have exploits — security companies and government intelligence organizations — will use them only for a focused purpose. The WannaCry worm, attributed to North Korea, occurred only after the exploit had been publicly released.

“The reason that we have not seen a big malicious worm like WannaCry, that may have more to do with geopolitics and the state of US-Russia relations than anything else,” Cyxtera’s Aitel says.

Security firms that created exploits once faced criticism, but the security community has since recognized the legitimacy of researching potential attacks by building them. The addition of exploit code to Cyxtera’s Canvas has accordingly caused far less consternation than earlier releases did.

“Our objective is to help customers solve their risk problems,” Cyxtera said in a statement. “It’s not just about BlueKeep — there will always be another vulnerability that comes along and puts you at risk.”

Related Content:

 

Black Hat USA returns to Las Vegas with hands-on technical Trainings, cutting-edge Briefings, Arsenal open-source tool demonstrations, top-tier security solutions, and service providers in the Business Hall. Click for information on the conference and to register.

Veteran technology journalist of more than 20 years. Former research engineer. Written for more than two dozen publications, including CNET News.com, Dark Reading, MIT’s Technology Review, Popular Science, and Wired News. Five awards for journalism, including Best Deadline … View Full Bio

Article source: https://www.darkreading.com/bluekeep-exploits-appear-as-security-firms-continue-to-worry-about-cyberattack/d/d-id/1335380?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

DHS Warns About Security Flaws in Small Airplanes

Rapid7 researchers found holes in CAN bus networks that an attacker could exploit to sabotage a plane’s operation.

The US Department of Homeland Security’s Cybersecurity and Infrastructure Security Agency (CISA) has issued an alert on newly found vulnerabilities in the controller area network (CAN) bus networks used on small aircraft that could be abused by an attacker with physical access to a plane.

“An attacker with physical access to the aircraft could attach a device to an avionics CAN bus that could be used to inject false data, resulting in incorrect readings in avionic equipment. The researchers have outlined that engine telemetry readings, compass and attitude data, altitude, airspeeds, and angle of attack could all be manipulated to provide false measurements to the pilot,” the alert said. 
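The reason such spoofing works is visible in the frame format itself: a classic CAN frame carries only an arbitration ID and up to eight data bytes, with no field identifying or authenticating the sender. A sketch using Linux SocketCAN's wire layout (the ID and payload below are invented; real avionics ID assignments are manufacturer-specific, and this constructs bytes only):

```python
import struct

def pack_can_frame(arbitration_id, payload):
    """Pack a classic CAN frame in Linux SocketCAN's wire layout.

    Layout: 32-bit CAN ID, 1-byte data length code, 3 padding bytes,
    8 data bytes. Note what is absent: no sender-identity or
    authentication field, which is why physical bus access suffices
    to inject believable readings.
    """
    if len(payload) > 8:
        raise ValueError("classic CAN payload is at most 8 bytes")
    return struct.pack("<IB3x8s", arbitration_id, len(payload),
                       payload.ljust(8, b"\x00"))

# A spoofed "sensor" frame; any node on the bus could emit this.
frame = pack_can_frame(0x123, b"\x01\xf4")
```

Because every receiver trusts whatever appears on the bus, the only available mitigations are the ones DHS names: restrict physical access, or redesign the bus.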

Researchers at Rapid7, who discovered the vulnerabilities and reported them to the DHS CISA, noted in their findings that such an attack with phony readings would be undetectable by a pilot.

DHS recommends that aircraft manufacturers study their products’ CAN bus networks for possible mitigations of the attack, and that owners of small aircraft restrict physical access to their planes.

Read more here and here

 


Dark Reading’s Quick Hits delivers a brief synopsis and summary of the significance of breaking news events. For more information from the original source of the news item, please follow the link provided in this article. View Full Bio

Article source: https://www.darkreading.com/threat-intelligence/dhs-warns-about-security-flaws-in-small-airplanes/d/d-id/1335382?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Transforming ‘Tangible Security’ into a Competitive Advantage

Today’s consumers want to see and touch security. Meeting this demand will be a win-win for everyone, from users to vendors to security teams.

I have no qualms about postulating that the average technology consumer is well aware of security and privacy issues littering the Internet. That’s not to say everyone is motivated to act on these problems, but even many less-than-tech-savvy consumers are continuously exposed to news of vulnerabilities, reports of breaches, and even politics and legislation that attempt to regulate all this chaos. Consumers surely know.

Things were different in the early 2000s. I remember sitting in a security class in college, where the professor disappointedly opined that security isn’t a product feature and was therefore doomed to take a backseat to other business priorities. Security, he said, was a mere attribute of quality, a nonfunctional requirement in engineering parlance. A technical detail under the hood, completely invisible to the consumer.

This is in stark contrast with the capabilities and presentation of a product, which all factor into functional requirements: features that consumers can directly observe, interact with, and therefore appreciate as added value. If you can’t show your customers how secure your product is, why try to tell them about it? Why invest in security in the first place?

I should clarify that I’m not talking about security products, such as web application firewalls or malware scanners. In those cases, it’s natural that security capabilities of the product would take center stage. The real question is, what about everything else?

How to Mold Security into a Tangible Feature
The heightened security awareness that permeates the IT landscape is an untapped opportunity for vendors to commit to tangible security features, and transform that investment into a competitive edge. Perhaps a sign of the times to come, big players like Apple have for some time flaunted how they build security and privacy into their products. However, let me point you to a less ubiquitous product: Signal, a cross-platform encrypted messaging service developed by the Signal Foundation and Signal Messenger LLC.

Signal isn’t the most refined messaging application out there. It does get cryptography right, but so do some competitors. What sets Signal apart is how it positions itself as the messaging application for the security-conscious crowd and drives that home by empowering users with meaningful privacy features.

Security often carries a usability overhead. Authenticating communication endpoints, for example, may require the communicating parties to verify each other’s identities out of band. Technologies like GPG-based email encryption front ends often hide these details by default, leaving that cumbersome task to power users who seek the added assurance.

In contrast, Signal includes user verification as an explicit step when adding new contacts, integrates the process into its user interface, and eases the usability burden by utilizing QR codes. In the end, Signal doesn’t even come close to solving this age-old problem but still transforms endpoint authentication into a palatable feature rather than hiding it from sight.
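The underlying idea — comparing key fingerprints out of band — can be sketched as follows. This is not Signal's actual safety-number construction; it is a simplified illustration using a plain SHA-256 digest, with invented key material.

```python
import hashlib

def fingerprint(public_key_bytes, group_size=5):
    """Derive a short, human-comparable code from key material.

    Illustrative only: Signal's real safety numbers use a different
    construction. Two users read these digit groups to each other
    (or scan them as a QR code); a mismatch indicates a
    man-in-the-middle substituting keys.
    """
    digest = hashlib.sha256(public_key_bytes).hexdigest()
    digits = str(int(digest, 16))[:30]
    return " ".join(digits[i:i + group_size]
                    for i in range(0, 30, group_size))

# If both sides hold the same key, the codes match.
alice_view_of_bob = fingerprint(b"bob-public-key")
bob_own_code      = fingerprint(b"bob-public-key")
assert alice_view_of_bob == bob_own_code
```

Rendering the comparison as short digit groups or a QR code is exactly the usability move the article credits Signal with: the check is still manual, but it is no longer buried.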

Signal also could have implemented a robust end-to-end encryption protocol and called it a day. Instead, it has positioned privacy as a pivotal product feature from the get-go, and carved out a market among the stiff competition thanks to that.

Stellar Security vs. Mediocre Performance?
Do consumers value security enough to give ground on other, more conventional feature expectations? Put another way, can vendors successfully position their stellar security to offset mediocre performance, inferior usability, or difficult integration? We’re not quite there yet, and I do realize that Signal is more an aberration than a bellwether.

That said, today security can be designed as a feature that stands on its own. And it should be. As we rapidly approach a day when security may trump other priorities, it’s time to start thinking about how security applies to core feature design principles. That won’t be an easy task. Engineering needs to transform security into functional requirements deeply embedded into user experience. Marketing needs to advocate security and bolster consumer relationships aligned with the ever-changing threat landscape. Management needs to support this entire strategic change.

The time is right to make security a tangible product feature. Consumers want to see and touch security, and meeting this demand will help vendors stay ahead of the game. It’s a win-win situation.

Related Content:

 


Kaan Onarlioglu is a researcher and engineer at Akamai who is interested in a wide array of systems security problems, with an emphasis on designing practical technologies with real-life impact. He works to make computers and the Internet secure — but occasionally … View Full Bio

Article source: https://www.darkreading.com/endpoint/transforming-tangible-security-into-a-competitive-advantage/a/d-id/1335340?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Capital One Breach Affects 100M US Citizens, 6M Canadians

The breach exposed credit card application data, Social Security numbers, and linked bank accounts, among other information.

Another massive data breach has struck the US financial sector: This time it’s Capital One, which has officially confirmed a breach affecting about 100 million Americans and 6 million Canadians.

On July 29, 2019, the bank and credit card issuer reported an unauthorized intruder had gained access to several types of personal information belonging to Capital One credit card customers and people who had applied for credit cards between 2005 and early 2019. The FBI has arrested and charged one suspect, who is now in custody.

Most of the compromised information belonged to small businesses and consumers who had applied for credit cards. This included applicants’ names, addresses, ZIP codes and postal codes, phone numbers, email addresses, birth dates, and self-reported income. Beyond application data, the intruder obtained portions of credit card customer information, including “status data” such as credit scores and limits, balances, payment history, and contact info. The breach also exposed pieces of transaction data from 23 days during 2016, 2017, and 2018, Capital One said in a statement.

About 140,000 Social Security numbers (SSNs) belonging to Capital One credit card customers were accessed, as well as 80,000 linked bank accounts of secured credit card customers. The attacker was able to obtain approximately 1 million Social Insurance numbers from Canadian users. Credit card numbers and login credentials were not exposed in the breach, officials report.

The unauthorized access took place on March 22-23, 2019, when Capital One says “a highly sophisticated individual was able to exploit a specific configuration vulnerability in our infrastructure.” An external security researcher reported the bug to Capital One via its Responsible Disclosure Program on July 17, 2019. The bank launched an internal investigation, which led to the discovery of this breach on July 19 and the public announcement on July 29.

Capital One stores its data in the cloud; reports indicate the attacker was able to exploit a weakness in a misconfigured web application firewall to gain access to the files stored in an Amazon Web Services (AWS) database. The bank “immediately addressed” the bug and verified there are no other instances in its environment. It altered its automated scanning to regularly look for this issue.

“This incident underscores that every component added to an organization’s IT environment — even security components — can add to the attack surface and become an entry point for attackers,” says Bob Rudis, chief data scientist at Rapid7. While banks have improved their ability to scan for bugs, implement access controls, and improve their overall security posture, it only takes one mistake to leave them exposed to a breach like this one.

The bank encrypts its data as standard practice; however, because of the circumstances of this breach, the unauthorized access also enabled decryption of that data. It’s also Capital One’s practice to tokenize certain data fields, particularly SSNs and account numbers. Tokenized data remained protected.
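Why tokenized fields stayed protected can be illustrated with a toy vault-style tokenizer (a sketch, not Capital One's actual system): the token is random, so a stolen tokenized record reveals nothing unless the vault's mapping is also compromised.

```python
import secrets

class TokenVault:
    """Toy tokenization vault, for illustration only.

    Each sensitive value is replaced by a random token; the mapping
    lives only inside the vault, so exfiltrated tokenized records
    carry no relationship to the original values.
    """
    def __init__(self):
        self._token_to_value = {}

    def tokenize(self, value):
        token = "tok_" + secrets.token_hex(8)
        self._token_to_value[token] = value
        return token

    def detokenize(self, token):
        return self._token_to_value[token]

vault = TokenVault()
token = vault.tokenize("078-05-1120")  # a well-known dummy SSN
assert vault.detokenize(token) == "078-05-1120"
```

Unlike encryption, where compromised infrastructure may also yield the key (as happened here), tokenization moves the secret into a separate lookup service that the attacker must breach independently.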

About the Suspect
The FBI has arrested Paige Thompson, a former software engineer at Amazon Web Services, and charged her with violating the Computer Fraud and Abuse Act. Thompson, known online under the pseudonym “erratic,” will appear at a hearing on August 1.

The criminal complaint states that after Thompson stole the data from Capital One servers, she posted about it on GitHub. A GitHub user who saw her posts alerted Capital One, which contacted the FBI after confirming a breach. On July 29, agents appeared at Thompson’s home with a search warrant and seized electronic storage devices containing a copy of the data.

In examining the GitHub file, Capital One determined the firewall misconfiguration allowed commands to reach, and be executed by, the server, which enabled an attacker to access folders or buckets of data in the bank’s storage space, the criminal complaint says. Computer logs showed the intruder connecting to the bank’s AWS folders through the firewall bug.

Capital One believes it’s unlikely Thompson used the data for fraud or disseminated it.

What You Should Do
Capital One will notify affected customers “through a variety of channels,” the company says. It plans to make free credit monitoring and identity protection available to those affected. That said, security experts strongly urge account holders to be cautious and monitor their accounts.

“While it looks like all the appropriate measures have been taken to mitigate the risk of fraud, Capital One customers should continue to be extremely vigilant,” says Leigh-Anne Galloway, Positive Technologies’ cybersecurity resilience lead. “Keep an eye on your bank accounts and any other connected accounts such as email addresses and immediately flag any suspicious activity to authorities or Capital One.”

Even if all the compromised data has been secured and accounted for, she adds, cybercriminals may still try to capitalize on this breach by sending phishing emails posing as bank officials or authorities. Victims should treat any incoming communication with suspicion.

As for businesses storing information in the cloud, security experts advise taking a closer look at security controls and processes related to protecting data in the cloud: “Organizations should regularly take an inventory of both what they’ve attached to their perimeter network(s) and — especially — regularly review the configurations of these components to ensure they are providing the minimum access necessary to facilitate key business processes,” says Rudis, who also advises scheduling regular penetration tests to ensure systems aren’t exposed.

Cloud security “can sometimes be less forgiving” given the magnitude of the cloud’s storage and processing power, adds BlackCloak CEO Dr. Chris Pierson. Data stores of the past were smaller and more distributed; today’s cloud instances present new challenges. “Given the changed dynamics of cloud environments, security and infrastructure teams must be able to continually monitor, scan, and protect the data they have and hold,” he says.

While many major cloud providers are building stronger security into their offerings, it’s still the business’s responsibility to handle risk management, monitoring, backups, and maintenance. Given that Capital One’s cloud software was not properly configured, it should be a warning to businesses to ensure security teams are trained and alerted to the danger of small issues like these having big consequences. 

Capital One estimates this data breach will cost about $100 million to $150 million in 2019, with costs primarily driven by customer notifications, credit monitoring, technology, and legal support. That said, it could end up costing far more: Equifax, the credit reporting giant that suffered a data breach affecting 147 million people in 2017, will pay up to $700 million in damages.

Related Content:

 


Kelly Sheridan is the Staff Editor at Dark Reading, where she focuses on cybersecurity news and analysis. She is a business technology journalist who previously reported for InformationWeek, where she covered Microsoft, and Insurance Technology, where she covered financial … View Full Bio

Article source: https://www.darkreading.com/cloud/capital-one-breach-affects-100m-us-citizens-6m-canadians/d/d-id/1335385?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Insecure Real-Time Video Protocols Allow Hollywood-Style Hacking

Lack of security in the default settings of Internet-enabled video cameras make co-opting video feeds not just a movie-hacker technique, but a reality for millions of cameras.

More than 4.6 million network-connected video cameras may be open to an attack that could co-opt their video feeds if the owners relied on the devices’ default settings, according to research by Internet of Things (IoT) security firm Forescout Technologies.

In a report published on July 30, the company’s researchers found that an attacker who already has some level of access to a smart building’s or corporation’s network could completely replace the video feeds from many types and configurations of IP video cameras because they rarely use encryption or authentication. A simple attack to reroute the video and restart the device can easily replace a video stream with attacker-provided data, the company states.

“Our main point is not to demonstrate that you take over a system, but that you can conduct a cyber-physical attack — you are disrupting functions in the physical world using cyber means,” says Elisa Costante, senior director of research for Forescout. “If you encrypt the protocols, none of this would be possible.”

Hackers co-opting video feeds to stymie corporate defenses is a staple of Hollywood movies. Unlike many attack techniques, which Hollywood studios often treat as some sort of techno-wizardry, hacking IP video cameras is often straightforward because most devices continue to be poorly secured.

Forescout’s attack, for example, relies on a common technique known as ARP poisoning, in which the attacker misdirects network traffic by sending an Address Resolution Protocol (ARP) packet that links an IP address with an attacker-controlled system. The effectiveness of the attack highlights how manufacturers continue to fail to secure network-connected devices — such as IP cameras — against even the easiest attacks.
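ARP's weakness is structural: a reply is just a packet that any host on the segment can forge, with no authentication field anywhere in it. A sketch that builds the 28-byte body of a spoofed ARP reply (the MAC and IP addresses are invented for illustration, and this constructs bytes only — nothing is sent):

```python
import struct
import socket

def build_arp_reply(sender_mac, sender_ip, target_mac, target_ip):
    """Build the 28-byte body of an ARP reply (RFC 826 layout).

    ARP has no authentication: any host on the segment can claim
    any IP-to-MAC binding, which is what makes the poisoning step
    of the camera attack possible. For illustration only.
    """
    return struct.pack(
        "!HHBBH6s4s6s4s",
        1,                   # hardware type: Ethernet
        0x0800,              # protocol type: IPv4
        6, 4,                # MAC length, IPv4 address length
        2,                   # opcode 2 = reply
        sender_mac, socket.inet_aton(sender_ip),
        target_mac, socket.inet_aton(target_ip),
    )

# The attacker "replies" that the camera's IP lives at the attacker's MAC.
attacker_mac = bytes.fromhex("deadbeef0001")  # invented addresses
victim_mac   = bytes.fromhex("aabbccddeeff")
pkt = build_arp_reply(attacker_mac, "192.0.2.10", victim_mac, "192.0.2.20")
```

Receivers update their ARP caches on the strength of this packet alone, so subsequent traffic for the claimed IP flows to the attacker — the rerouting step Forescout describes.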

While some manufacturers have secured their devices, tens of millions of IP-connected video cameras have been installed by businesses and consumers, many without thought to security, Forescout says.

Secure versions of the real-time streaming protocol (RTSP) exist but are often not implemented, the company’s report states.

“Unfortunately, these secure alternatives are not always available in IoT devices, are almost never configured by default, and are many times not enabled by the end users, who generally do not have all the knowledge required to secure RTP sessions in the first place,” the company says.

A scan for the unsecured RTSP port uncovered more than 4.6 million devices that exposed the real-time streaming protocol to the Internet, suggesting that those devices are likely to be misconfigured and have unencrypted streams. Such devices often pose a higher security risk because they are rarely managed in the same ways as computer systems, with little on-board security and very infrequent patching.
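At the protocol level, such a scan amounts to sending an RTSP OPTIONS request to TCP port 554 and checking whether the camera answers without demanding credentials. A minimal sketch of the request building and response handling (the helper names and the address are invented, and no traffic is sent here):

```python
def build_rtsp_options(url, cseq=1):
    """Build a minimal RTSP OPTIONS request (RFC 2326 syntax)."""
    return (f"OPTIONS {url} RTSP/1.0\r\n"
            f"CSeq: {cseq}\r\n"
            f"User-Agent: rtsp-probe\r\n\r\n").encode("ascii")

def response_requires_auth(response_bytes):
    """True if the server answered 401 Unauthorized, i.e. some
    authentication is at least configured for this request."""
    status_line = response_bytes.split(b"\r\n", 1)[0]
    return b" 401 " in status_line

# Illustrative target address (TEST-NET range, not a real camera).
req = build_rtsp_options("rtsp://192.0.2.10/stream1")
```

A device that answers 200 OK to anonymous requests like this one is the kind of misconfigured, unencrypted endpoint the 4.6 million-device scan counted.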

The worries come the same week that security firm Armis revealed more than a dozen flaws in a variety of versions of VxWorks, a real-time operating system (RTOS) made by embedded-software provider Wind River. The vulnerabilities could leave as many as 200 million devices vulnerable to attack, many of which are unlikely to be patched.

In Forescout’s report on its research, the company includes a video demonstrating how an attacker could sabotage an IP video stream to make security guards, for example, not see an intruder. Current security solutions are unlikely to be able to detect such attacks, the company says.

“The security challenges presented by these devices are forcing organizations to rethink their cybersecurity strategies,” the company states in the report. “Legacy security solutions are not enough to secure today’s networks because either they are unsupported by embedded devices or they are incapable of understanding the network traffic generated by these devices.”

Instead, companies need to focus on easily managed devices and configure them to use encryption, the report stated.

Related Content:

 



Article source: https://www.darkreading.com/insecure-real-time-video-protocols-allow-hollywood-style-hacking/d/d-id/1335386?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Apple iOS Flaw Could Give Attacker Access via iMessage

Google Project Zero researchers found an iOS vulnerability that could let an attacker snoop on a victim’s phone remotely.

Apple’s most recent update to iOS wasn’t simply to add features: It also patched a significant vulnerability discovered by Google Project Zero. Google security researchers Samuel Groß and Natalie Silvanovich found the vulnerability, designated CVE-2019-8646, which could allow a threat actor to gain access to iOS devices and read their contents using a malicious iMessage as an attack vector.

A malicious actor also could exploit the flaw to remotely read one-time passwords sent via SMS — a technique frequently used as part of a two-factor authentication scheme.

Google followed responsible disclosure and notified Apple in May. Apple patched the vulnerability within the 90-day window that Google allowed. Silvanovich will present details of the vulnerability in a Black Hat USA briefing, Apple iMessage Flaw Lets Remote Attackers Read Files on iPhones.

iOS users who subscribe to automatic updates should already have applied the patch; other iOS users are encouraged to update to iOS 12.4 immediately.

For more, read here.

 



Article source: https://www.darkreading.com/endpoint/apple-ios-flaw-could-give-attacker-access-via-imessage/d/d-id/1335388?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Post-Equifax settlement, NY updates data breach notification laws

The chickens are coming home to roost after Equifax’s 2017 data breach, and over the past week, those chickens have lobbed these projectiles:

Fine

The Federal Trade Commission last week announced that Equifax has agreed to pay $675 million – up to possibly $700 million – as part of a settlement over its massive security flub: failing to secure the huge amount of personal information stored on its network, leading to a breach that exposed millions of names and dates of birth, Social Security numbers, physical addresses, and other personal information that could lead to identity theft and fraud.

The settlement includes $300 million paid into a fund for credit monitoring services, for compensation to those who forked over money to Equifax to buy credit or identity monitoring services or who had other out-of-pocket expenses as a result of the breach.

Starting next year, it will also provide affected US consumers with six free credit reports per year for seven years (on top of the one free one they get every year from Equifax and the two other credit reporting agencies, Experian and TransUnion).

Finally, Equifax also agreed to pay $175 million to 48 states, the District of Columbia and Puerto Rico, as well as $100 million to the Consumer Financial Protection Bureau (CFPB) in civil penalties.

Law

New York passed two laws to beef up its data breach notification requirements to help shield its citizens from getting Equifax-ified again.

New York Governor Andrew Cuomo signed the Stop Hacks and Improve Electronic Data Security (SHIELD) Act on Thursday. It will go into effect on 21 March 2020. On the same day, he also signed the Identity Theft Prevention and Mitigation Services Act. That one goes into effect on 23 September 2019.

SHIELD expands the scope of information covered by current data breach notification law to include biometric information, plus email addresses and their corresponding passwords or security questions and answers.

Here’s State Senator Kevin Thomas, Chairman of Committee on Consumer Protection and the sponsor of the bill:

It is critical that our laws keep pace with the rapidly changing world of technology. The SHIELD Act raises security standards so that no more New Yorkers are needlessly victimized by data breaches and cyber-attacks.

The bill – New York Senate Bill S5575B/Assembly Bill A5635A – expands the definition of “private information”, i.e., data that, if breached, could trigger a notification requirement.

This is what’s considered “private information” under the new law, according to legal intelligence site JDSupra:

  • Personal information consisting of any information in combination with any one or more of the following data elements, when either the data element, or the combination of personal information plus the data element, is not encrypted or is encrypted with an encryption key that has also been accessed or acquired:
      ◦ Social Security number;
      ◦ Driver’s license number or non-driver identification card number;
      ◦ Account number, or credit or debit card number, in combination with any required security code, access code, password or other information that would permit access to an individual’s financial account; or account number, or credit or debit card number, if circumstances exist wherein such number could be used to access an individual’s financial account without additional identifying information, security code, access code, or password; or
      ◦ Biometric information, meaning data generated by electronic measurements of an individual’s unique physical characteristics — such as a fingerprint, voice print, or retina or iris image — or any other unique physical or digital representation of biometric data used to authenticate or ascertain the individual’s identity;
    OR
  • A user name or email address in combination with a password or security question and answer that would permit access to an online account.

That’s an expanded definition of “private information,” but JDSupra points out that it’s not as broad as laws in other states, such as California, Illinois, Oregon, and Rhode Island. In those states, certain health insurance identifiers are included along with medical information.
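The statutory definition can be approximated in code, which makes its two branches — personal information paired with an unencrypted sensitive element, or a credential pair on its own — easier to see. The field names below are invented for illustration; the statute's text, not this sketch, is authoritative.

```python
# Invented field names; a rough reading of the SHIELD Act definition,
# not legal advice.
SENSITIVE_ELEMENTS = {"ssn", "drivers_license", "account_number",
                      "card_number", "biometric"}

def is_private_information(record):
    """True if a record matches either branch of the definition:
    personal info plus an unencrypted sensitive data element, or a
    username/email paired with a password or security Q&A."""
    has_element = any(
        f in record and not record.get(f + "_encrypted", False)
        for f in SENSITIVE_ELEMENTS
    )
    credential_pair = (("username" in record or "email" in record)
                       and ("password" in record or "security_qa" in record))
    personal = "name" in record
    return (personal and has_element) or credential_pair

assert is_private_information({"name": "A. Example", "ssn": "000-00-0000"})
assert is_private_information({"email": "a@example.com", "password": "x"})
assert not is_private_information({"name": "A. Example"})
```

Note the encryption carve-out: a sensitive element that is encrypted (and whose key was not also taken) does not trigger the definition, mirroring the statute's wording.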

For anybody processing NY residents’ private info

The law applies to any entity that processes NY residents’ information. It increases civil penalties and widens the law’s reach to require breach notifications from “any person or entity with private information of a New York resident, not just to those that conduct business in New York State.”

According to the New York State Senate official website, the law also…

Requires reasonable data security, provides standards tailored to the size of a business, and provides protections from liability for certain entities.

Governor Cuomo:

As technology seeps into practically every aspect of our daily lives, it is increasingly critical that we do everything we can to ensure the information that companies are trusted with is secure.

Mandatory free ID theft protection, credit freezes

As for the second bill, the Identity Theft Prevention and Mitigation Services Act: it requires credit reporting agencies that have suffered a breach involving Social Security numbers – we’re looking at you, Equifax – to provide five years of identity theft prevention and mitigation services to affected consumers.

It also gives consumers the right to freeze their credit at no cost. Consumers used to have to pay for credit freezes, until they finally won the right to free freezes in 2018.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/D5ggPWpjbBQ/

US chases fraudulent bitcoin exchange BTC-e for $100m

Two years ago, the US government fined an international cybercriminal and his fraudulent bitcoin exchange over $100m. Now, it’s going after them for the money.

Attorneys for the US government filed a complaint in court last week against BTC-e and its operator Alexander Vinnik to recover civil penalties originally levied in 2017.

Authorities arrested Vinnik in July 2017 while in Greece on holiday with his family. At the same time, the US indicted him for laundering money through the site, and FinCEN levied civil penalties. It fined BTC-e $110m for facilitating ransomware and dark web drug sales, and fined Vinnik $12m for his role in the crimes. It was the first action that the regulator had taken against a foreign money services business operating in the US.

Opening in 2011, BTC-e served 700,000 users worldwide and was a popular money laundering tool for cybercriminals, according to the most recent indictment. They would use the exchange to convert money from cryptocurrency to fiat, including US dollars, euros, and rubles.

Legitimate cryptocurrency exchanges normally have to follow know-your-client rules by requesting official identification documents from clients. They also have to register with local regulators (in the US, that’s FinCEN). BTC-e wasn’t registered, and it also lacked even basic measures to identify its users, says the complaint:

To create an account, a user did not need to provide even the most basic identifying information, such as name, date of birth, address, or other identifiers. All BTC-e required to create a user account was a self-created username, password, and an email address.

Even though users created accounts under suspicious or suggestive usernames like “ISIS”, “CocaineCowboys”, “blackhathackers”, “dzkillerhacker”, and “hacker4hire”, the exchange failed to investigate them.

BTC-e used a selection of front companies to facilitate deposits and withdrawals from clients. This helped it avoid collecting information about users that would leave a central financial paper trail, the indictment alleges. It also helped it to cover up the fact that it did business with clients in the US.

Furthermore, when transferring cryptocurrencies between users, it used online mixing services, which aggregate currencies from many users and then redistribute them. This hides the ownership and history of otherwise-traceable bitcoin on the blockchain, like wiping a serial number.

All this activity enabled the company to profit by offering exchange rates less favourable than those of FinCEN-registered exchanges, according to court documents.

BTC-e processed over 300,000 bitcoin in funds stolen from Mt Gox, one of the earliest and most successful exchanges, which collapsed after a massive bitcoin theft in 2014. It also processed at least $3m in transactions from the CryptoLocker and Locky ransomware, according to FinCEN.

The latest complaint asks for the original $12m from Vinnik, but only $88,596,314 from BTC-e.

Vinnik is incarcerated in Greece, where the US, France, and Russia have been attempting to extradite him.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/n6rUDnziA4A/

Listening in: Humans hear the private info Siri accidentally records

Have you ever asked Apple’s personal voice assistant, Siri, if it’s always listening to you?

If so, you presumably got one of its cutesy responses. To wit:

I only listen when you’re talking to me.

Well, not just when you’re talking to Siri, actually. These voice assistant devices get triggered accidentally all the time, according to a whistleblower who’s working as a contractor with Apple.

The contractor told The Guardian that the rate of accidental Siri activations is quite high, particularly on Apple Watch and the company’s HomePod smart speaker. Of all the accidentally triggered recordings sent to Apple, those two devices capture the most sensitive data. There, human contractors listen to, and analyze, all manner of recordings, including private utterances of names and addresses.

The Guardian quoted the Apple contractor:

The regularity of accidental triggers on [Apple Watch] is incredibly high. The watch can record some snippets that will be 30 seconds – not that long, but you can gather a good idea of what’s going on.

The whistleblower says there have been “countless” instances of Apple’s Siri voice assistant mistakenly hearing a “wake word” and recording “private discussions between doctors and patients, business deals, seemingly criminal dealings, sexual encounters” and more. Those recordings often reveal identifying information, they said:

These recordings are accompanied by user data showing location, contact details, and app data.

If you aren’t muttering, “So, what’s new?” by this point, you haven’t been paying attention to the news about how much these devices overhear – and how little the vendors seem to care that it’s a privacy violation.

Over the past few months, news has emerged about human contractors working for the three major voice assistant vendors – Apple, Google and Amazon – listening to us as they transcribe audio files. As a series of whistleblowers have reported, Google Assistant, Amazon Alexa and Siri have all been capturing audio their owners never meant to record, after the devices are triggered by acoustic happenstance: word sound-alikes, say, or people chattering as they pass by in the street outside.

Accidental recordings: “technical problem” or “privacy invasion?”

It’s all done to improve the vendors’ speech recognition capabilities, and identifying mistaken recordings is part of that. However, the whistleblower said, Apple instructs staff to report accidental activations “only as a technical problem”, with no specific procedures to deal with sensitive recordings. The contractor:

We’re encouraged to hit targets, and get through work as fast as possible. The only function for reporting what you’re listening to seems to be for technical problems. There’s nothing about reporting the content.

All the big voice assistant vendors are listening to us

First, it was whistleblowers at Amazon who said that human contractors are listening to us. Next it was Google, and now Apple has made it a trifecta.

Earlier this month, Belgian broadcaster VRT News published a report that included input from three Google insiders about how the company’s contractors can hear some startling recordings from its Google Assistant voice assistant, including those made from bedrooms or doctors’ offices.

With the help of one of the whistleblowers, VRT listened to some of the clips. Its reporters managed to hear enough to discern the addresses of several Dutch and Belgian people using Google Home, in spite of the fact that some of them never said the listening trigger phrases. One couple looked surprised and uncomfortable when the news outlet played them recordings of their grandchildren.

The whistleblower who leaked the Google Assistant recordings was working as a subcontractor to Google, transcribing the audio files for subsequent use in improving its speech recognition. He or she reached out to VRT after reading about how Amazon workers are listening to what you tell Alexa, as Bloomberg reported in April.

They’re listening, but they aren’t necessarily deleting: in June of this year, Amazon confirmed – in a letter responding to a lawmaker’s request for information – that it keeps transcripts and recordings picked up by its Alexa devices forever, unless a user explicitly requests that they be deleted.

“The amount of data we’re free to look through seems quite broad”

The contractor told the Guardian that he or she went public because they were worried about how our personal information can be misused – particularly given that Apple doesn’t seem to be doing much to ensure that its contractors are going to handle this data with kid gloves:

There’s not much vetting of who works there, and the amount of data that we’re free to look through seems quite broad. It wouldn’t be difficult to identify the person that you’re listening to, especially with accidental trigger – addresses, names and so on.

Apple is subcontracting out. There’s a high turnover. It’s not like people are being encouraged to have consideration for people’s privacy, or even consider it. If there were someone with nefarious intentions, it wouldn’t be hard to identify [people on the recordings].

The contractor wants Apple to be upfront with users about humans listening in. They also want Apple to ditch those jokey, and apparently inaccurate, responses Siri gives out when somebody asks if it’s always listening.

This is the response that Apple sent to the Guardian with regards to the news:

A small portion of Siri requests are analyzed to improve Siri and dictation. User requests are not associated with the user’s Apple ID. Siri responses are analyzed in secure facilities and all reviewers are under the obligation to adhere to Apple’s strict confidentiality requirements.

Apple also said that a very small random subset – less than 1% of daily Siri activations – is used for “grading” (in other words, quality control), and the clips used are typically only a few seconds long.

For its part, Google also says that yes, humans are listening, but not much. Earlier in the month, after its own whistleblower brouhaha, Google said that humans listen to only 0.2% of all audio clips. And those clips have been stripped of personally identifiable information (PII) as well, Google said.

You don’t need an Apple ID to figure out who’s talking

The vendors’ rationalizations are a bit weak. Google has said that the clips its human employees are listening to have been stripped of PII, while Apple says that its voice recordings aren’t associated with users’ Apple IDs.

Those aren’t impressive privacy shields, for a few reasons. First off, Big Data techniques mean that data points that are individually innocuous can be enormously powerful and revealing when aggregated. That’s what Big Data is all about.

A few years back, MIT graduate students set out to see how easy it might be to re-identify people from three months of credit card data drawn from an anonymized transaction log. It took just 10 known transactions – easy enough to rack up if you grab coffee from the same shop every morning, park at the same lot every day and pick up your newspaper from the same newsstand – to identify somebody with better than 80% accuracy.
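The intuition behind that kind of re-identification can be sketched in a few lines of Python. This is a toy illustration, not the MIT study’s data or method: the usernames, shops and days below are invented, and each transaction is just a (shop, day) pair. The point is simply that a handful of externally known transactions can whittle an “anonymous” log down to one candidate.

```python
# Toy re-identification sketch: match a few known (shop, day) transactions
# against an "anonymized" log keyed only by opaque user IDs.
# All data here is invented for illustration.

# Anonymized log: user_id -> set of (shop, day) transactions.
log = {
    "user_001": {("coffee_shop", 1), ("parking_lot", 1), ("newsstand", 2)},
    "user_002": {("coffee_shop", 1), ("parking_lot", 1), ("newsstand", 2),
                 ("coffee_shop", 3)},
    "user_003": {("grocery", 1), ("coffee_shop", 2)},
}

def candidates(known, log):
    """Return the user IDs whose history contains every known transaction."""
    return [uid for uid, txs in log.items() if known <= txs]

# An attacker who observed the target at these four places/days:
known = {("coffee_shop", 1), ("parking_lot", 1),
         ("newsstand", 2), ("coffee_shop", 3)}
print(candidates(known, log))  # only user_002 matches all four
```

With realistic data volumes the same subset test, applied to millions of users, typically narrows to a single match after surprisingly few observations – which is the study’s central finding.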

But why get all fancy with Big Data brawn? People flat-out utter names and addresses in these accidental recordings, after all. It’s the acoustic equivalent of a silver platter for your identity.

Getting hot and sweaty with your honey while wearing your Apple Watch, or near a HomePod? Doing a drug deal, while wearing your watch? Discussing that weird skin condition with your doctor?

You might want to rethink such acoustic acrobatics when you’re around a listening device. That’s what they do: they listen. And that means that there’s some chance that humans are also listening.

It’s a tiny sliver of a chance that humans will sample your recordings, the vendors claim. It’s up to each of us to determine for ourselves just how much we like those vague odds of having our private conversations remain private.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/NBH7Od8K-QM/

Hackers target Telegram accounts through voicemail backdoor

As politicians should know by now, secure messaging apps such as Telegram can quickly become a double-edged sword.

On the one hand, a growing number of governments are so worried about its security capabilities, they try to ban the app. On the other, politicians who use the app themselves on the assumption of privacy can find their conversations exposed in the media.

The Brazilian Government’s Justice Minister Sergio Moro announced on 5 June 2019 that his smartphone had been hacked, four days before the politically compromising contents of his Telegram chats with a senior prosecutor started turning up as source material for articles in the media.

Since then, it has emerged that other Brazilian politicians, including President Jair Bolsonaro and Economy Minister Paulo Guedes, were among a total of 1,000 Telegram accounts targeted, which led to the arrest on 23 July 2019 of four suspects accused of being behind the attacks.

Voicemail… again

We’ll skip the contentious nature of the data hacked in this incident to focus on how the hack took place by exploiting one of the oldest weaknesses in the book – voicemail.

Voicemail? It’s not even part of the Telegram service, so it’s no wonder that some people didn’t see it coming.

Remember, Telegram is already vulnerable to account takeover/reset attacks of the sort that have troubled other services: SIM-swap attacks, in which crooks impersonate the victim to a mobile carrier and get a replacement SIM with the target’s phone number.

All that’s needed after that is to download the Telegram app and use the SMS verification message to access the user’s account.

Spoofing

But according to the testimony of one of the arrested suspects, Walter Delgatti Neto, there’s another, potentially more vulnerable, way to get those verification messages – via voicemail.

Accessing voicemail boxes turns out to be easier than it should be. Some people never set a four-digit code, and even those who do can be undone by crooks cycling through all 10,000 possibilities.

Many voicemail systems fight back by checking that the number making an access call belongs to the subscriber, but these numbers can easily be spoofed if the attacker knows the correct number.

If an attacker can access voicemail they can access verification messages, such as Telegram’s, which are sent to voicemail if the hacker’s target is on a call or doesn’t answer three times in a row.

Apparently, news of the weakness has spread on forums, leading to attacks on other valuable targets, including Puerto Rico Governor Ricardo Rosselló, whose position became untenable after his Telegram chats were recently leaked.

Importantly, according to a presentation at last year’s DEFCON convention, Telegram isn’t the only security service that might be susceptible to this weakness. Any service that allows SMS verification to be delivered by voice (which many do) could be at risk.

What to do?

Telegram users should ensure their voicemail is protected by a randomly generated PIN. They should also tighten up security by setting up both SMS verification and an additional password (go to Settings > Privacy and Security > Two-Step Verification), and enable a recovery email address.
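If you want a genuinely random voicemail PIN rather than a memorable one, any source of cryptographic randomness will do. A minimal sketch using Python’s standard-library secrets module is below; the short blocklist of commonly guessed PINs is our own illustrative addition, and carriers may impose their own PIN rules.

```python
# Sketch: generate a random numeric voicemail PIN using cryptographically
# strong randomness, rejecting a few commonly guessed values.
import secrets

# Illustrative examples of PINs that crooks try first.
WEAK = {"0000", "1111", "1234", "2580"}

def random_pin(length: int = 4) -> str:
    """Return a random digit string of the given length, avoiding WEAK PINs."""
    while True:
        pin = "".join(str(secrets.randbelow(10)) for _ in range(length))
        if pin not in WEAK:
            return pin

print(random_pin())
```

The design choice here is `secrets` over `random`: the latter’s Mersenne Twister is predictable and not meant for security-sensitive values.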

If security is critical, keeping phone numbers secret is also important, or at least not associating them with a real identity.

Political suicide?

But the biggest mystery of all is why politicians entrust sensitive chats to a proprietary public service.

This is, after all, an app whose encryption protocol, MTProto, has been challenged by doubters, while others point out that users must manually turn on end-to-end encryption through Secret Chats and hope that any data that does end up on Telegram’s servers is securely encrypted.

Most likely, politicians are like almost everyone else – they run on reputation and assumptions about security, and don’t realise that the world is now full of people who will happily prey on their naivety.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/JiHkgvqqGmE/