STE WILLIAMS

Oh ****… Sudo has a ‘make anyone root’ bug that needs to be patched – if you’re unlucky enough to enable pwfeedback

Sudo, a standard tool on Unix-y operating systems that lets select users run some or all commands as root, can be exploited to give superpowers to any logged-in user – if deployed with a non-default configuration.

This security hole, discovered by Joe Vennix at Apple Information Security, is only active if the pwfeedback option is enabled. This option shows an asterisk for each key pressed while entering a password. The good news is that pwfeedback is generally disabled by default.

Sudo is included in macOS, but this option was not enabled when we tried it on our Catalina box. However, a few Linux distributions – seemingly Mint and Elementary OS – do enable the option. The purpose of the feature, as its name implies, is to reassure users that they are not typing into a black hole.

If sudo is installed and vulnerable, any user can trigger the vulnerability, even if not listed in the sudoers list of those with sudo privileges.

Linux Mint is vulnerable to the flaw discovered in sudo

Like many programming blunders, this is a buffer overflow issue. “The code that erases the line of asterisks does not properly reset the buffer position if there is a write error, but it does reset the remaining buffer length. As a result, the getln() function can write past the end of the buffer,” the sudo developers explain. There is also a flaw that means the pwfeedback option is not ignored, even when reading from something other than a terminal device.
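
The sudo code in question is C, but the logic error is easy to model. In this hypothetical Python sketch (a toy illustration, not sudo's actual code), an erase routine that hits a write error resets the remaining length but not the write position, so the two bookkeeping values disagree and the next write lands past the end of the buffer:

```python
class LineBuffer:
    """Toy model of a fixed-size input buffer like the one pwfeedback uses."""

    def __init__(self, capacity):
        self.buf = bytearray(capacity)
        self.pos = 0              # index of the next write
        self.left = capacity      # bytes of space believed to remain

    def putc(self, byte):
        if self.left <= 0:
            raise OverflowError("buffer full")
        self.buf[self.pos] = byte  # in C, an out-of-bounds pos silently corrupts memory here
        self.pos += 1
        self.left -= 1

    def erase_line(self, write_error):
        # The bug: on a write error the remaining length is reset,
        # but the buffer position is not.
        self.left = len(self.buf)
        if not write_error:
            self.pos = 0


buf = LineBuffer(4)
for b in b"pass":
    buf.putc(b)

buf.erase_line(write_error=True)
# left now says "4 bytes free" while pos still points past the end of the
# buffer, so the next putc() indexes out of bounds. Python raises IndexError;
# C would quietly write past the allocation.
```

The mismatch between the two counters, rather than any single bad write, is what makes the flaw exploitable: an attacker who can provoke the write error controls what gets written at the stale position.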

You can tell if you are vulnerable by running sudo -l and checking the output: if the word pwfeedback appears under Matching Defaults entries, your system is potentially at risk. The next thing to check is the version number, via sudo --version. Versions 1.7.1 to 1.8.25p1 inclusive are vulnerable. The bug is fixed in sudo 1.8.31, available now, and versions 1.8.26 to 1.8.30 are not exploitable.
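
The version window lends itself to a mechanical check. Here is a minimal Python sketch of that comparison (it checks only the version number reported by sudo --version; it does not check whether pwfeedback is actually enabled, which is what sudo -l tells you):

```python
import re


def parse_sudo_version(s):
    """Turn a sudo version string like '1.8.25p1' into a comparable tuple."""
    m = re.match(r"(\d+)\.(\d+)\.(\d+)(?:p(\d+))?$", s)
    if not m:
        raise ValueError(f"unrecognised sudo version: {s!r}")
    major, minor, patch, p = m.groups()
    return (int(major), int(minor), int(patch), int(p or 0))


def version_is_vulnerable(s):
    """Versions 1.7.1 through 1.8.25p1 inclusive carry the bug; 1.8.26+ do not."""
    return (1, 7, 1, 0) <= parse_sudo_version(s) <= (1, 8, 25, 1)


# Feed it the number reported by `sudo --version`:
print(version_is_vulnerable("1.8.21p2"))  # prints True
```

Remember that a version inside the window is only exploitable when pwfeedback is turned on, so a True result here means "check your sudoers Defaults", not "you are owned".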

Our brand-new install of Linux Mint was indeed affected, with version 1.8.21p2 installed and pwfeedback enabled.

The solution is to disable pwfeedback in the sudoers file, as explained in the linked article. ®


Article source: https://go.theregister.co.uk/feed/www.theregister.co.uk/2020/02/05/sudo_bug_allows_privilege_escalation/

RIP FTP? File Transfer Protocol switched off by default in Chrome 80

Chrome 80 emerged from Google this week with a few more nails to hammer into the coffin of the venerable File Transfer Protocol (FTP).

While there has been something of a kerfuffle around Chrome of late, the eagle-eyed will have noted that version 80, which debuted in the stable channel yesterday, finally disables FTP support by default.

You can still switch it back on via an option or command line flag (such as --enable-ftp) but, to be honest, why would you? Google noted that usage in the browser was so low (yes, The Chocolate Factory is watching, always watching) that there wasn’t much point in improving support.

While the likes of FileZilla can replace what is being sliced from Chrome, the time has surely come to follow the 2017 example of the Debian gang and shut down those old servers once and for all.

It has been a death by a thousand cuts for FTP in Chrome. Version 72 snipped off fetching document sub-resources over the protocol. A bug in 74 dropped support for accessing FTP URLs over HTTP proxies and went down so well that version 76 removed proxy support for FTP entirely.

FTP will lumber on in the browser, zombie-like, for a few more months. Version 81 will switch it off for all Chrome installations (not just non-Enterprise ones) and Version 82 should remove the thing once and for all.

The File Transfer Protocol has its roots in the happier, hippier times of 1971, when astronauts were still bounding about on the Moon. Over the years, it gained support for TCP/IP, IPv6 and, crucially, some security extensions.

Security is the key thing here. For example, FTP doesn’t encrypt its traffic and anyone armed with a packet sniffer can read the content of transmissions. Solutions such as FTP over Secure Shell connections can help, but ultimately FTP itself is a protocol from a simpler and more trusting time.
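
The gap between the two is visible in Python's standard library, where ftplib offers both a plaintext client and a TLS-wrapped one. A minimal sketch of the secure variant (the host and credentials are placeholders):

```python
from ftplib import FTP_TLS


def fetch_listing(host, user, password):
    """List a directory over explicit FTPS rather than plaintext FTP."""
    ftps = FTP_TLS(host)        # control connection, upgraded via AUTH TLS
    ftps.login(user, password)  # credentials now travel encrypted
    ftps.prot_p()               # encrypt the data channel as well
    names = ftps.nlst()
    ftps.quit()
    return names
```

Without the prot_p() call, directory listings and file contents would still cross the wire in the clear, which is exactly the packet-sniffer problem described above.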

Google’s move may spur the last holdouts to type QUIT, particularly with the culling of the code in the Enterprise version of the browser. The likes of FTPS, SFTP and HTTPS will shunt data around in a far more secure fashion. ®


Article source: https://go.theregister.co.uk/feed/www.theregister.co.uk/2020/02/05/ftp_deprecated_chrome/

Hiring Untapped Security Talent Can Transform the Industry

Cybersecurity needs unconventional hires to help lead the next phase of development and innovation, coupled with salaries that aren’t insulting

Think of the hottest high-tech regions and two words likely come to mind: Silicon Valley. There’s no question that the area stretching from San Francisco to San Jose continues to be the undisputed world leader when it comes to technology innovation and development, and of course, tech talent. This is especially true for cybersecurity technology and talent. So, naturally, it’s typically the first place many cybersecurity employers look when recruiting.

However, there’s a bigger perspective I feel we are missing, even ignoring: Untapped talent.

We’ve all seen the statistics about the cybersecurity staff shortage. One specific report, The Cybersecurity Workforce Gap, published by the Center for Strategic and International Studies, reports that by 2022, “the global cybersecurity workforce shortage has been projected to reach upwards of 1.8 million unfilled positions.” Further, “Workforce shortages exist for almost every position within cybersecurity, but the most acute needs are for highly skilled technical staff.” Many other reports put that number above 3 million.

To me, this is both overwhelming and puzzling. It makes me wonder how much of the cybersecurity talent shortage is self-inflicted. Here are some of the variables in that equation that we as security professionals can address.

Hiring desires don’t align with salaries
A recent Forrester report calls out what many of us in the hiring industry have seen for years: “The deeper failure of bias, expectation, compensation, and commitment to effective recruiting and retention.”

Oftentimes, recruiters and hiring managers are looking for superheroes but paying them entry-level salaries. Forrester’s Chase Cunningham notes, “Job postings will require a bachelor’s degree with five to seven years of experience with all kinds of technology, and a master’s degree preferred, but by the way we only want to pay you $85,000 a year.”

This alone creates huge alignment problems in organizations and the industry as a whole. You can’t expect to hire world-class talent if you’re not willing to pay them what they’re worth, and what the market requires you pay them.

Unwillingness to challenge biases
Many people who do not have technical degrees are automatically and immediately disqualified from careers in cybersecurity. This is a serious problem. While I understand the technical nature of many positions in this space, one can have immense technical knowledge and talent without a computer science degree.

One of my industry colleagues told me that some of the best software engineers in his company had philosophy degrees, not engineering degrees. Cybersecurity also needs non-technical talent to help lead the next phase of what we need – strategists, leaders, product leaders, and facilitators to help companies better protect themselves.

One of the places I’ve personally seen such incredible talent is Northern Ireland. The country has such diversity in its talent pool, and most don’t realize it. This may be a shock, but Northern Ireland is now the top area in the world for investment in US cybersecurity development projects. The region boasts an impressive roster of international companies as well as innovative cybersecurity startups, and it’s all supported by world-renowned university research and a strong incubation and entrepreneurial ecosystem. 

Northern Ireland was also ahead of the game in foreseeing the need for cybersecurity education and training and has been investing heavily in it for two decades, with government, academia, and the private sector teaming up to encourage widespread adoption. The result is an absolute hot spot for world-class talent. We would not have known that this country was such an amazing pool of talent had we not started to challenge our assumptions about hiring in the cybersecurity industry.

The Bottom Line
The cybersecurity threat landscape doesn’t look to be changing any time soon, so the need for skilled talent will only continue to grow. But we need to start looking everywhere for talent, not just at the candidates and backgrounds we assume are the right fit.

Remember what Silicon Valley used to represent: that anyone, from any background, could create something from nothing and defy the odds. Technologies can be built by people with different viewpoints and qualifications and still drive huge innovation, the very innovation that was fueled by recognizing that talent can come from any country, experience level, and educational background.


Carla Wasko joined WhiteHat Security in June 2017. She brings more than 20 years of leadership experience in Human Resources to WhiteHat, where she reports directly to the CEO and is responsible for driving the strategy for People, Places and Culture. Her previous …

Article source: https://www.darkreading.com/risk/hiring-untapped-security-talent-can-transform-the-industry/a/d-id/1336925?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Keeping Compliance Data-Centric Amid Accelerating Regulation

As the regulatory landscape transforms, it’s still smart to stay strategically focused on protecting your data.

GDPR. CCPA. NYPA. Staying up to date on the proposed and implemented global compliance standards requires a glossary and possibly a legal degree. Adhering to these various standards necessitates a concerted, coordinated effort across an organization. While large businesses may have the luxury of entire teams devoted to ensuring compliance, the majority of small and medium-sized businesses are doing their best to interpret the regulations themselves and implement processes that address requirements in the least disruptive way possible.

And when new regulations are introduced or the business expands to geographic regions governed by a different set of regulatory standards, the process begins again. Interpret, comply, repeat. Keeping pace on the hamster wheel of compliance can be exhausting and disruptive, while also distracting from core business objectives in a manner that few companies can afford.

Instead of continuing this cycle, businesses need to rethink their compliance tactics. The best approach to thriving in an accelerating regulatory landscape is to strategically focus on the root of the challenge: Protecting your data. By taking a data-centric approach to security, companies can be better prepared to adapt to whatever regulatory environment they find themselves operating in.

Rather than focusing on securing networks, applications, and endpoints, data-centric security shifts an organization’s focus to securing the data itself. The approach emphasizes protecting what really matters — sensitive data assets — rather than trying to protect everything. There are many approaches to achieving this goal, but most are built around identifying, classifying, securing, and monitoring data throughout its lifecycle. This data lifecycle can be broken into three categories: data at rest, data in transit, and data in use.

Data at rest: Often residing on hard drives or in databases, data lakes, or cloud storage, this represents inactive data stored in any digital form. It is typically protected using perimeter-based access control and user authentication technologies, with additional protections such as data encryption added as warranted by the sensitivity of the data involved.

Data in transit: This designation represents data moving through a local device, private network, or public/untrusted space. Standard practice is to protect data in transit using transport encryption, an efficient and effective defense strategy assuming businesses adhere to proper protocols.

Data in use: Traditionally the least acknowledged of the three data segments, as it has historically lacked technology solutions practical enough for commercial use, data in use has become the point of least resistance for increasingly sophisticated attackers. Protection strategies for data in use commonly rely on nascent technologies including secure multiparty computation, homomorphic encryption, and secure enclaves.
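
For data in transit, "adhering to proper protocols" mostly means letting a vetted TLS implementation apply its defaults rather than rolling your own. A minimal Python sketch using only the standard library (the host name passed in would be a placeholder; no connection is opened until the function is called):

```python
import socket
import ssl


def open_tls_connection(host, port=443):
    """Open a TCP connection wrapped in TLS with safe defaults."""
    ctx = ssl.create_default_context()  # certificate validation and hostname checks on
    raw = socket.create_connection((host, port))
    return ctx.wrap_socket(raw, server_name=host) if False else ctx.wrap_socket(raw, server_hostname=host)
```

create_default_context() enables certificate validation and hostname checking out of the box; disabling either of them, a common shortcut in hurried integrations, reintroduces exactly the interception risk that transport encryption exists to prevent.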

It’s helpful to think of these three components as the data security triad. By viewing the data lifecycle in this holistic manner, organizations can eliminate protection gaps and more clearly recognize vulnerabilities in order to establish the thorough, flexible security frameworks that this type of regulatory environment requires. The tools and tactics may change over time, but the focus on protecting data at all points in its lifecycle remains the same. The introduction of new regulations will require making adjustments rather than overhauling an entire data protection strategy, which will allow organizations to remain focused on core business objectives.

It is important that a data-centric approach to security does not render the data locked and unusable. Privacy-preserving technologies can enable collaborative business practices while respecting the boundaries of regulated environments. Utilizing these types of innovative technologies allows companies to securely share data, employ third-party assets, and facilitate a number of other business functions that might otherwise be blocked by the recent swell of privacy regulations.

In the age of accelerating regulation, ensuring compliance requires protecting data at all times — whether at rest on the file system, moving through the network, or while it’s being used or processed. By centering security strategies on the data itself, organizations are better prepared to navigate the frequently shifting compliance landscape, which will remain a patchwork of regulations across regions and industries for the foreseeable future.


Dr. Ellison Anne Williams is the Founder and CEO of Enveil. She has more than a decade of experience spearheading avant-garde efforts in the areas of large-scale analytics, information security and privacy, computer network exploitation, and network modeling at the National …

Article source: https://www.darkreading.com/risk/keeping-compliance-data-centric-amid-accelerating-regulation/a/d-id/1336908?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Twitter admits to raid on users’ phone numbers

December’s story of the researcher who tricked Twitter’s Android app into matching random phone numbers to 17 million user accounts just took a turn for the worse.

This week, Twitter confirmed that during its investigation into Ibrahim Balic’s research, it discovered that others had also successfully tried the same technique, stating:

We became aware that someone was using a large network of fake accounts to exploit our API and match usernames to phone numbers.

Twitter owes Balic its thanks, because until it was closed this was an easy-to-exploit hole in users’ basic privacy.

The flaw related to Twitter’s contact upload feature, by which users upload their contact book to enable them to connect to other Twitter users whose email or phone number matches the data.

It’s a useful feature with a legitimate purpose that any social media platform would want to encourage – quickly finding people you already know.

Except that Balic discovered that when he uploaded two billion numbers, generated in a non-sequential way to make them appear more like a real contacts list, Twitter would reveal the identity of any matches.

The only limitations were that it only worked when using the Android app (web-based uploads were immune), and only for Twitter users who’d both added their phone numbers to the service and turned on the ‘Let people who have your phone number find you on Twitter’ option.
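
At its core, contact matching is just a set intersection between uploaded numbers and the platform's user table, which is why bulk uploads of generated numbers leak information. A toy Python sketch of the idea (all names and numbers here are invented):

```python
# Hypothetical server-side user table mapping phone numbers to handles
# (entirely made-up data for illustration).
users_by_phone = {
    "+15550001111": "@alice",
    "+15550002222": "@bob",
}


def match_contacts(uploaded_numbers):
    """Return the accounts whose numbers appear in an uploaded contact list."""
    return {n: users_by_phone[n] for n in uploaded_numbers if n in users_by_phone}


# A legitimate user uploads a real address book and finds friends.
# An attacker uploads millions of generated numbers and gets the same
# mapping back, turning the feature into a number-to-identity oracle.
print(match_contacts(["+15550001111", "+15550009999"]))  # → {'+15550001111': '@alice'}
```

The fix Twitter applied, described below, was to stop returning account names from this lookup, which breaks the oracle without removing the friend-finding feature entirely.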

By the time Twitter suspended his access on 20 December 2019, he’d claimed to have uncovered the numbers of millions of Twitter users in Israel, Turkey, Iran, Greece, Armenia, France and Germany, including one independently confirmed to belong to a senior Israeli politician.

As to who else might have been exploiting the same technique, the company said:

During our investigation, we discovered additional accounts that we believe may have been exploiting this same API endpoint beyond its intended use case. While we identified accounts located in a wide range of countries engaging in these behaviors, we observed a particularly high volume of requests coming from individual IP addresses located within Iran, Israel, and Malaysia. It is possible that some of these IP addresses may have ties to state-sponsored actors.

What to do

Twitter says it has fixed the issue by stopping account names being returned during searches, and apologized for not thinking of this sooner:

We’re very sorry this happened. We recognize and appreciate the trust you place in us, and are committed to earning that trust every day.

Users can check whether they’ve entered their phone number into Twitter (for example, to enable SMS two-factor authentication).

Find out if yours is searchable via More > Settings and privacy > Login and security > Discoverability and contacts, then untick ‘Let people who have your phone number find you on Twitter’.

This isn’t the first time Twitter’s got into bother over how it uses (or lets others use) data such as phone numbers.

In October, it admitted it had inadvertently allowed advertisers access to phone and email data as part of the company’s Tailored Audiences system, which is designed to feed promoted tweets into users’ timelines.

A year earlier was the mini-scandal that third parties had access to supposedly private direct messages.

The lesson is simply this: anything you tell a social media platform might one day become fair game for someone else. If that bothers you, act before someone else does.



Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/JjUJFBQ-IaQ/

Critical Android flaws patched in February bulletin

Google has patched some serious bugs in Android, including a couple of critical flaws that could let hackers run their own code on the mobile operating system (OS).

As with many new patch releases, the details about one of the most critical vulnerabilities, CVE-2020-0022, are not yet public. However, what Google does tell us in its February 2020 advisory is that it lies in the system component of Android, which contains the system apps that ship with the OS.

It’s a remote code execution bug in the context of a privileged process, giving the attacker a high level of access to the operating system, and it applies to versions 8.0, 8.1, and 9 of the Android Open-Source Project (AOSP), on which the various phone implementations of Android are based. It also looks like there’s another, less dangerous, vulnerability associated with this bug, which renders a phone subject to a denial of service (DoS) attack.

The other critical-ranked bug is CVE-2020-0023, an information disclosure vulnerability that applies to version 10 of the AOSP.

Overall, there are 25 bugs. Aside from the six in Android’s system component, there are seven in the Android Framework, which contains the Java APIs for the OS. All the Framework bugs are ranked high, with some extending back to version 8.0 of the AOSP. The worst could enable a malicious application to gain extra privileges by bypassing user interaction requirements, the developers said.

There were just two bugs at the kernel level, both rated high and both leading to escalation of privileges. An attacker using one of these bugs could execute arbitrary code in the context of a privileged process, the advisory said.

Finally, there were two sets of bugs relating to Qualcomm components. The first set involved open-source components. There were six bugs here, rated high, spanning the camera, the kernel, the audio subsystem, and the graphics. The second set involved closed-source components from Qualcomm. All four of those bugs were rated high, and Qualcomm provided a separate advisory for them.

The Android security bulletin contains two patch levels. The Framework and system groups fall under patch level 2020-02-01, while the kernel and Qualcomm patches are grouped under 2020-02-05. Google did this so that OEMs could fix a subset of vulnerabilities that were similar across all Android devices more quickly, it said in the advisory. However, device vendors really should patch the lot, it warned.

What to do

So, when can Android users get these patches?

Users of Google’s Pixel phones are likely to get them first. The company has already issued factory images and over-the-air (OTA) updates for phones going back to and including the Pixel 2, for which support ends this October. Users of other companies’ Android products will have to wait for those vendors to fold the patches into their own Android implementations.



Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/k2wIz5MF-3I/

Facebook will let parents see kids’ chat history, peer into inbox

Seven months after a crack formed in the keep-the-kids-safe bubble of Facebook’s Messenger Kids chat app, Facebook is beefing up the app’s Parent Dashboard with new tools, letting parents read their kids’ chat histories, see the most recent videos and photos they sent or received, and delete any content they find objectionable.

On Tuesday, product manager Morgan Brown said in an announcement that on top of the new tools and features for parents to manage their child’s experience in Messenger Kids, the company has also updated the app’s privacy policy to include additional information about its data collection, use, sharing, retention and deletion practices.

Facebook is pulling kids into that “what are you doing with my data?” conversation: it’s developed an in-app activity that educates them on what other people can see about them, such as that people they know may see their name and photo and that parents can see and download their messaging content.

Privacy guides in Messenger Kids.

New Parent Dashboard features

To get to the Parent Dashboard, tap the shortcut menu in the Facebook app and scroll to the Messenger Kids icon. If you have multiple kids using Messenger Kids, select the name of the child whose account you’d like to manage to access their specific dashboard. These are the new features you’ll find:

Recent Contacts and Chat History: See who your child is chatting with, whether they’re video chatting or sending messages, and how frequently those conversations happened over the past 30 days.

Log of Images in Chats: Peek into their inbox to see the most recent photos and videos your child has sent and received. There you can remove any inappropriate content from your child’s message thread and report it.

Reported and Blocked Contacts History: Access a list of the reporting and blocking actions your child has taken in the app. You’ll see a list of the contacts they’ve blocked and/or unblocked, whether they’ve reported any messages, any contacts they’ve reported, and the reason why. Parents will continue to be notified via Messenger if their child blocks or reports someone.

Remote Device Logout: Parents can see all devices where their child is logged in to Messenger Kids and log out of the app on any device. Facebook notes that this isn’t meant to control when kids have access to the app – that’s what Sleep Mode is for.

Download Your Child’s Information: Request a copy of your child’s Messenger Kids information, similar to how you can download your own information within the Facebook app. The download will include a list of your child’s contacts as well as the messages, images and videos they’ve sent and received. Your child will be notified through the Messenger Kids app when you request this information.

New ways for kids to block for themselves

Facebook has also simplified the way that kids block contacts in the app, enabling them to unblock somebody on their own if they want to restart one-on-one chatting. Parents will still be able to view chats with blocked contacts by looking at their kid’s inbox. Kids and their blocked contacts will remain visible to one another, and they’ll stay in shared group chats, but the blocker and the blocked contact won’t be able to message each other individually. Kids will also receive a warning if they return to, or are added to, a group chat that includes a blocked contact, and can leave group chats at any time.

Happy talk

The new features and privacy policy are built on what Facebook says is its continuing dialogue with “thousands of parents, parenting organizations, child safety advocates and child development experts about the need for a messaging app that lets kids have fun connecting with friends and family while giving parents control over the experience”.

With two out of three parents wishing they had more control over their kids’ online experiences,* we’ve continued our dialogue with parents and experts around the world to ensure we’re providing a messaging app that works for families.

You may well ask what that asterisk is all about. In fact, that “2 out of 3 parents” wanting-more-control statistic comes out of a survey commissioned by Facebook. It’s not an unbelievable number, by any stretch of the imagination, but it harkens back to another (paid) bunch of experts that the platform consulted with prior to the release of Messenger Kids.

Experts did not, in fact, initially welcome Messenger Kids with open arms. A 2018 Wired report found that Facebook funded its favorite child health experts to vet the app before it debuted while skirting the toughest nonprofits working in the field of child safety and development: Common Sense Media and the Campaign for a Commercial-Free Childhood. Those nonprofits told Wired that they’d only learned of the app weeks or days before its debut.

In January 2018, a coalition of 97 child health advocates asked Facebook to torch the app, citing “a growing body of research [that] demonstrates that excessive use of digital devices and social media is harmful to children and teens” and that the app is likely to “undermine children’s healthy development.”

Facebook hatched Messenger Kids just a month – December 2017 – prior to the ditch-it campaign, having decided to bring messaging to the age 6-12 clutch of Facebook users-to-be.

Messenger Kids was designed to be compliant with the Children’s Online Privacy Protection Act (COPPA). Congress enacted the legislation in 1998 with the express goal of protecting children’s privacy while they’re online. COPPA prohibits developers of child-focused apps, or any third parties working with such app developers, from obtaining the personal information of children aged 12 and younger without first obtaining verifiable parental consent.

It won’t have ads, Facebook promised, nor in-app purchases, and kids’ data isn’t collected for marketing: a good way to sidestep pesky legal entanglements like the class action that looked to sue Facebook after people’s kids spent hundreds of dollars on in-game purchases for Ninja Saga.

No siree, no gunslinging chickens or picnic-lugging bears, courtesy of your online-game-loving offspring, will drain your bank account via Messenger Kids, Facebook assured us.

It all sounded like a good, safe bubble to put the kids into, but that bubble has developed cracks. In June 2019, Facebook found what it called a “technical error”: a hole in the supposed closed-loop messaging system that allowed children to join group chats with people their parents hadn’t approved.

Hopefully the new tools and features won’t introduce any new cracks in that bubble. Unfortunately, it would be an understatement to say that software has a tendency to spring surprises on its developers, as do the tech platforms themselves.

Having said that, here’s another piece of armor to help protect the kids:

How to keep your children safe on their phones

If you’re concerned about what your kids can get at on their smartphones, then good – you should be. It’s scary out there. In this video, Matt Boddy explains how you can restrict what they can access.

(Watch directly on YouTube if the video won’t play here.)


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/YUhRiBvzy6c/

Someone else may have your videos, Google tells users

As the well-worn internet saying goes – there is no cloud, it’s just someone else’s computer.

This week, an unknown number of Google Photos users were alarmed to find that this can turn out to be true in surprisingly personal ways.

According to an email sent to affected users, between 21 and 25 November 2019 anyone using the Google ‘Download your data’ service might have experienced a serious glitch:

Unfortunately, during this time, some videos in Google Photos were incorrectly exported to unrelated users’ archives. One or more videos in your Google Photos account was affected by this issue.

Conversely, since this was a two-way issue, affected users might also find videos in their archives that do not belong to them.

The service is part of Google Takeout (or Google Takeaway) and can be used to download copies of a wide range of data relating to Google services, including photos and videos.

Google doesn’t state how many users this relates to but it’s safe to assume that if you used the function between those dates, you are probably affected.

One Google user who did was Duo Security co-founder and CEO, Jon Oberheide, who tweeted the news to the world after receiving the email this week:

After contacting Google for clarification, he was told that “unfortunately, we’re not able to provide a full list of impacted videos.”

Because the videos are now stored on other people’s computers, there is no obvious way of getting them back.

Google says it has now fixed whatever problem led to the issue and advises affected users to perform another data export of the same data while deleting any already downloaded. Re-downloading the data should overwrite any content as long as that archive itself hasn’t been backed up elsewhere.



Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/ANgXswF_DM0/

Malware infection attempts appear to be shrinking… possibly because miscreants are less spammy and more focused on specific targets

Attempts to infect computers with ransomware and other malware over networks are decreasing, reckons infosec outfit Sonicwall.

However, that may be because more and more attacks are tailored for individual, specific targets, rather than being spammed out, and thus not well detected by internet watchers and their honeypots.

A mere 9.9 billion of these malware attacks were picked up by Sonicwall in 2019, the American company claimed in its latest figures, saying that this represented a six per cent decrease on 2018’s figures. Ransomware specifically was down nine per cent to 188 million, apparently.

By attack, Sonicwall appears to mean an attempt to connect to a vulnerable network service to potentially exploit it. Yes, a small step above port scanning. Apply the usual seasoning of salt to these glossy vendor claims.

“Attacks,” the outfit said, “were more evasive with higher degrees of success, particularly against the healthcare industry, and state, provincial and local governments.”

Public sector organisations are becoming a more popular target among ransomware crooks because they’re perceived as being more likely to roll over and pay ransoms in order to get their files back. The private sector is not immune from meekly giving money to criminals, however, as a recent High Court judgment showed.

Sonicwall chief exec Bill Conner told The Register he had seen a “huge increase in encrypted threats in web apps and cloud apps” – meaning encrypted malicious code hidden in applications.


Interestingly, cryptojacking – malware that uses your device’s compute power to secretly mine cryptocurrency on behalf of lazy script kiddies – was apparently down 78 per cent by volume, as seen by Sonicwall, since July 2019.

This may or may not be related to revelations a little while ago that the average profit from cryptojacking malware is a measly $5.80, along with more recent warnings to sysadmins that lots of network traffic to and from Github and Pastebin could be an indicator of cryptojacking compromise.
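That last indicator can be turned into a quick, concrete check by tallying DNS lookups for those domains in your resolver logs. A rough sketch in Python, assuming a dnsmasq-style query log; the log format, domain watchlist, and any thresholds here are illustrative assumptions, not something from Sonicwall’s report:

```python
import re
from collections import Counter

# Domains flagged above as possible cryptojacking staging/update hosts.
WATCHLIST = ("github.com", "pastebin.com")

def count_watchlist_queries(log_lines):
    """Tally DNS queries per watchlist domain in dnsmasq-style log lines.

    Expected line shape (illustrative):
        Feb  4 10:00:01 dnsmasq[123]: query[A] raw.github.com from 192.168.1.10
    """
    hits = Counter()
    for line in log_lines:
        m = re.search(r"query\[A+\] (\S+) from", line)
        if not m:
            continue
        name = m.group(1).lower()
        for domain in WATCHLIST:
            # Match the domain itself and any subdomain of it.
            if name == domain or name.endswith("." + domain):
                hits[domain] += 1
    return hits
```

An unusually high count from a single internal host, especially one that has no business talking to code-hosting or paste sites, would be the kind of traffic pattern the warnings describe.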

In addition to all of these, Sonicwall also reckons in its 2020 brochure, out today, that microchip side-channel exploitation techniques are evolving beyond vanilla Meltdown and Spectre, saying that attacks such as TPM-fail may well be being “weaponised” in the near future.

Finally, while the alert corners of the infosec world tend to patch their IT estates promptly as and when new vulnerabilities become known about, it is the less up-to-speed among us who need the regular reminders. ®


Article source: https://go.theregister.co.uk/feed/www.theregister.co.uk/2020/02/04/sonicwall_threat_report/

This is not Huawei to reassure people about Beijing’s spying eyes: Trivial backdoor found in HiSilicon’s firmware for net-connected cams, recorders

This may shock you, but Huawei effectively built a poorly hidden, insecure backdoor into surveillance equipment that uses its HiSilicon subsidiary’s chips, it appears.

This security blunder could be exploited over the local network to inject commands into vulnerable devices.

A hardware hacker going by the name of Vladislav Yarmak explained on Monday how the tech giant left a remote access tool in firmware used in network-connected video recorders and security cameras. Of equal concern is the fact that HiSilicon seems uninterested in closing the hole, since it still hasn’t addressed a similar vulnerability discovered and reported in 2017.

To be clear, this security vulnerability is said to be present in the software HiSilicon provides with its system-on-chips to customers. These components, backdoor and all, are then used by an untold number of manufacturers in network-connected recorders and cameras.

This latest hole, as described by Yarmak, is pretty simple. The firmware opens a service on TCP port 9530. You connect to this port, and exchange some data to agree upon a randomly generated session key that’s used to encrypt the rest of your communications with the software. You then send a request, Telnet:OpenOnce, to the device to tell it to open a Telnet service. If all goes to plan, a Telnet daemon starts on TCP port 9527.

You then connect to that remote service with the username root and password 123456 – there are in fact six possible root passwords – and you’re in as the superuser, able to control the gizmo and issue shell commands to the underlying Busybox-based Linux operating system.

One of the passwords suggests this affects at least devices using HiSilicon’s Arm-based hi3518 system-on-chip. The full client-server exchange is detailed by Yarmak in the above link. A crucial point is that although both sides agree on a session key, the exchange relies on a pre-shared key stored in plaintext in the firmware, where anyone can find, extract, and use it. This port 9530 service does not appear to be exposed to the internet, only to the local network.
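If you want to check whether a recorder or camera on your own network exposes the backdoor service, a simple TCP probe of the ports involved is enough to raise a flag. A minimal sketch in Python; the address at the bottom is a placeholder, not a real device:

```python
import socket

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Ports involved in the backdoor: 9530 (command service),
# 9527 (debug/Telnet daemon once enabled), 23 (classic Telnet).
SUSPECT_PORTS = (9530, 9527, 23)

def check_device(host: str) -> dict:
    """Probe each suspect port and report which ones are reachable."""
    return {port: port_open(host, port) for port in SUSPECT_PORTS}

if __name__ == "__main__":
    # Placeholder address - substitute a camera/recorder on your own LAN.
    print(check_device("192.168.1.64"))
```

An open port 9530 is the strongest signal, since 23 and 9527 may stay closed until the Telnet:OpenOnce knock has been sent.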

It’s not a major threat, nor anything people need to fret about; it’s just another indicator of Huawei’s piss-poor approach to security.

HiSilicon and Huawei did not respond to requests for comment.


We’re told these backdoor shenanigans are nothing new for HiSilicon, as the manufacturer has been accused of enabling remote access in its firmware on purpose going back as far as 2013. The Telnet daemon used to be enabled by default in earlier versions of the firmware; now, since 2017, you have to unlock it by knocking on the software stack in a particular way.

“Devices with vulnerable firmware has the macGuarder or dvrHelper process running and accepting connections on TCP port 9530,” wrote Yarmak.

“More recent firmware versions had Telnet access and debug port (9527/tcp) disabled by default. Instead they had open port 9530/tcp which was used to accept special command to start telnet daemon and enable shell access with static password which is the same for all devices.”

Yarmak claims hundreds of thousands of devices may be open to this kind of issue, although a Shodan.io scan revealed just 13 with that magic port 9530 open. Then again, there may be many more open on local networks. This is a zero-day vulnerability because it seems Huawei wasn’t warned about it before this week’s public disclosure. Here’s how Yarmak put it:

It is not practical to expect security fixes for the firmware from the vendor. Owners of such devices should consider switching to alternatives.

However, if a replacement is not possible, device owners should completely restrict network access to these devices to trusted users. Ports involved in this vulnerability is 23/tcp, 9530/tcp, 9527/tcp, but earlier researches indicate there is no confidence other services implementation is solid and doesn’t contain RCE [remote code execution] vulnerabilities.

Chalk this up as yet another blow against Huawei as the Chinese telecoms giant tries to fight off allegations its gear can be remotely bugged by China’s government. ®


Article source: https://go.theregister.co.uk/feed/www.theregister.co.uk/2020/02/04/hisilicon_camera_backdoor/