
How to Avoid Technical Debt in Open Source Projects

Engineering teams have only a certain amount of capacity. Cutting down the volume of rework inherent in the open source business model begins with three best practices.

While the first step in any open source project is to understand your bill of materials, many companies have yet to create “health checks” that identify the components in the software they are building and buying.

Here’s the issue: Software is complex. It is developed by teams of engineers who use a variety of open source libraries for whatever functionality they need. Even though developers download and incorporate those libraries into the software, their use is not always reported or documented anywhere. It is possible to ask a developer for a list of what they are using at a given point in time, but without an automated way to collect that information as it is implemented, it can get lost.

But if enterprises continue to adopt open source as a business model — according to a recent report from Veracode, 95% of IT organizations rely on open source software — they need to focus on strategies to alleviate the “technical debt” involved. In other words, how can they lessen or avoid the additional rework associated with an open source business model?

Begin Here
Organizations should start by identifying which open source and commercial libraries, and which versions of those libraries, they are using, says Chris Eng, chief research officer at Veracode.

“The problem is that multiple development teams are spread out across the organization, which can span across different business units and geographies. You have different processes governing how all of those things work,” Eng says.

Shy of a central place where the entire company can put this information, it is extremely difficult to keep track of all the different libraries and versions that are being used. However, in order to assess risk as it relates to a vulnerability that is discovered in a particular library, organizations need to know what they are using and where it is being used.

Eng provides three best practices he says organizations should start with if they are looking for strategies to alleviate their technical debt.

1. Get open source risk under control: This step is the prerequisite to all others because you can’t secure something if you don’t know it’s there. Getting risk under control means finding all the applications that you have and then developing the habit of collecting that information.

If only that were as easy as it sounds. For some organizations, the process of understanding what is being used and where could be a multiyear effort. Eng says organizations will want to get automated tools and processes in place so that whenever they do a new build, add a new feature, or push something out to production, information is gathered about the bill of materials and the known vulnerabilities in those different libraries. 
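The automated collection Eng describes can be sketched in a few lines. This is a minimal illustration for a Python project with pinned dependencies; the file path and the output shape are illustrative, not any specific tool’s schema.

```python
import re
from pathlib import Path

def collect_bom(requirements_file):
    """Parse pinned dependencies (name==version) into a bill-of-materials list."""
    bom = []
    for line in Path(requirements_file).read_text().splitlines():
        line = line.strip()
        # Skip blank lines and comments.
        if not line or line.startswith("#"):
            continue
        match = re.match(r"([A-Za-z0-9_.-]+)==([A-Za-z0-9_.-]+)", line)
        if match:
            bom.append({"name": match.group(1), "version": match.group(2)})
    return bom
```

Run on every build and archived alongside the artifact, a record like this is what later steps (vulnerability cross-referencing, health checks) depend on.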

2. Cross-reference known vulnerabilities: Let’s assume organizations get to the point where they know all the libraries they are using. They are keeping that up-to-date and always know what their developers are using. Then they can start cross-referencing the known vulnerabilities in that library.

Places like the national vulnerability database and different tools can provide organizations with views into that information and actually perform the cross-referencing, Eng says. Developers can look up an application they are developing and have it report all of the libraries that it’s using, along with all of the known vulnerabilities in that library and the different severities.
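The cross-referencing step amounts to joining the bill of materials against an advisory feed. A minimal sketch, with a hypothetical in-memory feed standing in for data that would in practice come from the National Vulnerability Database or a commercial source:

```python
# Hypothetical advisory feed: library name -> affected versions and severity.
KNOWN_VULNS = {
    "examplelib": [
        {"cve": "CVE-0000-0001", "affected": {"1.0.0", "1.0.1"}, "severity": "HIGH"},
    ],
}

def cross_reference(bom):
    """Return every (library, version, CVE, severity) hit for the given BOM."""
    findings = []
    for entry in bom:
        for adv in KNOWN_VULNS.get(entry["name"], []):
            if entry["version"] in adv["affected"]:
                findings.append({
                    "library": entry["name"],
                    "version": entry["version"],
                    "cve": adv["cve"],
                    "severity": adv["severity"],
                })
    return findings
```

The output is exactly the per-application report Eng describes: every library in use, each known vulnerability in it, and the severity of each.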

3. Establish “health check” policies: Cross-referencing gives developers critical information about the risks they inherit by using these open source libraries, so they can prioritize what to fix based on that information. A health check might be a policy that says, “We don’t let anything go to production with inherited open source vulnerabilities that are high severity or above,” or, “We are not going to let anything go out that is using a version of a library that is more than a year old.”

Eng says some organizations go as far as identifying the specific sanctioned versions of libraries developers may use. If developers are detected using any versions other than those, they don’t get to release.
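A health-check gate of the kind described above can be expressed as a simple predicate run before release. This sketch uses the two example policies from the text; the severity ranking and the one-year age limit are illustrative policy choices, not fixed rules.

```python
from datetime import date, timedelta

SEVERITY_RANK = {"LOW": 1, "MEDIUM": 2, "HIGH": 3, "CRITICAL": 4}
MAX_LIBRARY_AGE = timedelta(days=365)

def release_allowed(findings, library_release_dates, today=None):
    """findings: list of {"library", "severity"} from cross-referencing.
    library_release_dates: library name -> release date of the version in use."""
    today = today or date.today()
    for f in findings:
        # Policy 1: block inherited vulnerabilities of high severity or above.
        if SEVERITY_RANK.get(f["severity"], 0) >= SEVERITY_RANK["HIGH"]:
            return False
    for released in library_release_dates.values():
        # Policy 2: block library versions more than a year old.
        if today - released > MAX_LIBRARY_AGE:
            return False
    return True
```

Wiring a check like this into the build pipeline is what turns an inventory into an enforceable policy.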

Only So Many Hours in a Day
Engineering teams have only a certain amount of capacity. They can be working on new features, fixing bugs, or doing maintenance. At the highest level, Eng says, software development is features versus maintenance, which includes bug reports and getting libraries up to date.

“If you have to do all this patching, which may involve reprogramming and will definitely involve testing, you are taking that capacity away from building new features,” he says.

What can help prevent that sort of technical debt buildup is not letting libraries get so far out of date.

“Make it part of the maintenance process that when a new version comes out, you build that time into the engineering process to actually do the upgrade at that time so you stay reasonably current,” Eng advises.

Image Source: deagreez via Adobe Stock

 

Kacy Zurkus is a cybersecurity and InfoSec freelance writer as well as a content producer for Reed Exhibition’s security portfolio. Zurkus is a regular contributor to Security Boulevard and IBM’s Security Intelligence. She has also contributed to several publications, … View Full Bio

Article source: https://www.darkreading.com/edge/theedge/how-to-avoid-technical-debt-in-open-source-projects/b/d-id/1335579?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

7 Big Factors Putting Small Businesses At Risk

Small organizations still face a long list of security threats. These threats and vulnerabilities should be top of mind.

(Image: Rawpixel.com – stock.adobe.com)

Cybercriminals are increasingly taking aim at smaller organizations. This puts small and midsize businesses (SMBs) in a tough spot. Faced with a long list of cyberthreats, they also are operating with smaller budgets and staff constraints, both of which can lead them to make poor security decisions.

Over the past year, Alert Logic has observed a “steady increase” in attacks and changes in attack methods affecting SMBs. An analysis of 5,000 attacks per day across its customer base from November 2018 to April 2019 reflected a variety of ways small businesses leave themselves exposed. Depending on the industry, SMBs typically invest less in security programs, says Jack Danahy, senior vice president of security at Alert Logic. Their weak spots can put them at risk.

“It is more likely that an attack focused at an older, unpatched vulnerability, or a relatively simple phishing attack, will find more success at these smaller organizations,” he explains. “So from my perspective, attackers are focusing on what they perceive as softer targets.” Danahy also says he has “no doubt” of a higher level of successful public attacks on smaller businesses.

As George Anderson, product marketing director at Webroot, points out, some of the threats SMBs face today are different from the security challenges they faced just a few years ago.

“I think the changes have been very dramatic,” he notes. As an example, he points to nation-state actors now targeting data SMBs hold. “That wasn’t very common four to five years ago,” Anderson explains, but activity has started to ramp up since it was first spotted back in 2016.

It’s imperative that small businesses know how to maximize their limited security resources. To do so, they must be well-versed in the threats and vulnerabilities putting them at greatest risk. While it’s possible for them to achieve the same security as large firms, different steps need to be taken. Reading up on SMB threats can help inform the policies and procedures they should put in place.

Here, we outline the attacks SMBs should be aware of and the vulnerabilities putting them at risk. Did we miss anything? Feel free to add your thoughts in the comments.

Check out The Edge, Dark Reading’s new section for features, threat data, and in-depth perspectives. Today’s top story: 5 Ways to Improve the Patching Process.

 

Kelly Sheridan is the Staff Editor at Dark Reading, where she focuses on cybersecurity news and analysis. She is a business technology journalist who previously reported for InformationWeek, where she covered Microsoft, and Insurance Technology, where she covered financial … View Full Bio

Article source: https://www.darkreading.com/risk/7-big-factors-putting-small-businesses-at-risk/d/d-id/1335581?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

‘Phoning Home’: Your Latest Data Exfiltration Headache

Companies phone enterprise customer data home securely and for a variety of perfectly legitimate and useful reasons. The problems stem from insufficient disclosure.

It seems that every day there’s another news headline about how companies are using consumer data. Facebook was in the news again recently when the FTC slapped it with a $5 billion fine based on investigations into the Cambridge Analytica scandal. FaceApp made headlines for potential connections to Russia. And already this year, Congress has held hearings with some of the biggest names in tech, digging into their practices around the use and protection of customer data.

The manner in which companies handle and use customer data has become a source of growing scrutiny. But consumers aren’t alone in needing to worry about how companies use their data. Every day, enterprise organizations put massive volumes of data in the hands of third-party vendors. In some cases, like with software-as-a-service applications, it’s explicit that enterprise data will live within a third-party environment. But with other products, particularly those that live within the enterprise data center or cloud infrastructure, it’s far less clear. And when factoring in the devices that employees themselves connect to the network (without the knowledge of IT), knowing exactly how, when, and for what purpose third-party vendors are collecting and using their data can be even more difficult.

Phoning Home
As highlighted in our recent security advisory, we have observed a growing frequency of vendors “phoning” or “calling” data home (the white-hat term for exfiltrating data) to their own environments — sometimes without the knowledge or permission of their customers. To be clear, phoning data home is not necessarily problematic. Companies phone customer data home for a variety of perfectly legitimate and useful reasons with the customer’s advance knowledge and approval, and do so securely through de-identification and encryption.

But phoning data home becomes problematic when enterprise customers are unaware that it’s happening. Unfortunately, that happens more than you’d think — and at times, the perpetrators are the ones you’d expect to take the best care of your data. Within our security advisory, we highlighted four cases from the past year where vendors (including two prominent security technology providers) were calling home their enterprise customers’ data without the customers’ knowledge or authorization. These cases, spanning two large financial services firms, a healthcare provider, and a multinational food services company, all illustrate the alarming need for businesses to have better visibility into how their data is being used and where it’s going — even with trusted security vendors.

To be clear, we don’t exactly know why these vendors are phoning home data. In all likelihood, it was either for a legitimate purpose or the result of a misconfiguration. But the fact that large volumes of data were traveling outbound from customer environments to vendor environments without customers’ knowledge or consent is problematic. Not only does the enterprise customer lose control of how the data is managed once it leaves its environment, this also potentially exposes it to major regulatory issues.

A Regulatory Headache
Although the United States doesn’t have a unified data privacy framework, many large enterprise organizations operate according to the EU’s General Data Protection Regulation (GDPR). Depending on the industry, they may also be subject to other data security or privacy regulations such as HIPAA, PCI, GLBA, FISMA, etc.

These regulations, GDPR in particular, require that organizations know exactly what data they have, the value of the data, how they are using it, and how they are protecting it. If the organization is unaware that a vendor is removing data from its environment, no matter how benign the reason, that certainty is eliminated.

This also gets at the heart of the processor/controller relationship. In many cases, an enterprise may be both a controller and a processor. As controller, enterprises must only appoint processors that guarantee compliance with GDPR. If an enterprise has no way of knowing what a vendor is doing with the data, then the enterprise cannot lawfully appoint the vendor and would risk penalties in doing so.

For organizations that fall into the processor category (and most do with respect to at least some of their data), any data called home by a vendor, even for a benign purpose, makes that vendor a subprocessor. If the organization is unaware the data is being called home, it is still responsible for the subprocessor’s actions and may be exposed to additional liabilities.

Mitigating Risks
There are several actions enterprises should take to protect their data from the risks associated with phoning home.

First, enterprises need to better monitor vendor activity on their networks, including active vendors, former vendors, or a prospective vendor post-evaluation. Organizations should also monitor egress traffic, especially from sensitive assets such as domain controllers, and match egress traffic to approved applications and services. Additionally, companies should track the deployment of software agents as part of any evaluation of products. More broadly, all enterprises need to understand the regulatory considerations of data crossing political and geographic boundaries. And they must track whether data used by vendors is in compliance with their contract agreements.
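The "match egress traffic to approved applications and services" step is, at its core, an allow-list comparison. A minimal sketch, with hypothetical endpoints and a simplified log format standing in for whatever the organization's network monitoring actually produces:

```python
# Hypothetical allow-list of approved vendor endpoints (host, port).
APPROVED_EGRESS = {
    ("backup-vendor.example.com", 443),
    ("monitoring.example.net", 443),
}

def flag_unapproved(egress_log):
    """egress_log: iterable of (src_host, dest_host, dest_port) records.
    Returns the records whose destination is not on the allow-list."""
    return [
        rec for rec in egress_log
        if (rec[1], rec[2]) not in APPROVED_EGRESS
    ]
```

Anything the function flags, especially outbound traffic from sensitive assets such as domain controllers, is a candidate for the "phoning home" review the article describes.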

At the highest level, companies need a new operating principle of accountability for their vendors. Ask questions. Understand vendor protocols for phoning data home. Find out how data is being used. Know where data is going and when it’s appropriate to phone data home to a vendor environment.

Again, most phoning home is well-intentioned and has a legitimate purpose. But until companies start paying more attention to their data and holding vendors more accountable, unauthorized phoning home represents a paramount risk to enterprise security — one of Cambridge Analytica proportions. 


Jeff Costlow is the CISO at ExtraHop. He started his career in computer security in 1997. Jeff has deep experience with networking protocols, a passion for secure software development and many years of software engineering under his belt. In his spare time, Jeff enjoys … View Full Bio

Article source: https://www.darkreading.com/endpoint/phoning-home-your-latest-data-exfiltration-headache/a/d-id/1335519?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Bad Actors Find Leverage With Automated Active Attacks

Once used only by nation-state attackers, automated active attacks have gone mainstream and allow the average cybercriminal to gain entry and engage in malfeasance, says Chet Wisniewski, principal research scientist with Sophos. Luckily, organizations are getting smarter at spotting these stealthy, customized attacks earlier than they used to.

Article source: https://www.darkreading.com/bad-actors-find-leverage-with-automated-active-attacks/v/d-id/1335548?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

New Confidential Computing Consortium Includes Google, Intel, Microsoft

The Linux Foundation plans to form a community to “define and accelerate” the adoption of confidential computing.

The Linux Foundation today announced plans to form the Confidential Computing Consortium (CCC) — a nonprofit organization of hardware vendors, cloud providers, developers, open source experts, and academics dedicated to defining and driving adoption of confidential computing.

Modern approaches to cloud computing address encryption of data at rest and in transit. Encrypting data in use is considered the next and most difficult step to fully encrypting the life cycle of sensitive data. Confidential computing will let encrypted data be processed in memory without sharing it with the rest of the system, reducing exposure and giving users more control and transparency.

Tech giants committed to the initiative include Alibaba, Arm, Baidu, Google Cloud, IBM, Intel, Microsoft, Red Hat, Swisscom, and Tencent. Organizations involved with the CCC will work to drive the confidential computing market, collaborate on technical and regulatory standards, and build open source tools to create a space for the development of trusted execution environments (TEE). The CCC will also serve as a foundation for education and outreach projects.

Some of these companies plan to make open source project contributions to the CCC, including Intel’s Software Guard Extensions SDK, Microsoft’s Open Enclave SDK, and Red Hat’s Enarx. A proposed structure for the consortium will have a governing board, a technical advisory council, and separate technical oversight for each of its technical projects.

“The earliest work on technologies that have the ability to transform an industry is often done in collaboration across the industry and with open source technologies,” said Jim Zemlin, executive director at The Linux Foundation, in a statement on today’s news.

Read more details here.


Dark Reading’s Quick Hits delivers a brief synopsis and summary of the significance of breaking news events. For more information from the original source of the news item, please follow the link provided in this article. View Full Bio

Article source: https://www.darkreading.com/cloud/new-confidential-computing-consortium-includes-google-intel-microsoft/d/d-id/1335587?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

30+ countries, 160,000 emails, $4.2m in cyber-heists… maybe it’s time for the Silence hacker crew to change its name

The rapidly growing hacking crew dubbed Silence has – in less than three years – gone from ransacking small regional banks in Eastern Europe to stealing millions from some of the largest international banks.

A report issued this morning by Singapore-based infosec outfit Group-IB claims that Silence, active since 2016, is now operating in more than 30 countries, and has so far been able to infiltrate banks’ computer networks to siphon at least $4.2m from compromised cash machines around the world.

Group-IB, which has monitored the cyber-crooks since their earliest days, says that as the Russian gang grew, so did the sophistication of their work. Now, having survived three years, Silence is operating as an extremely sophisticated and capable crew.

“Early on, Silence showed signs of immaturity in their tools, techniques, and procedures by making mistakes and copying practices from other groups,” the report, due to appear on Group-IB’s website today, recounts. “Now, Silence is one of the most active threat actors targeting the financial sector.”

When we last took a look at Silence, the crew was fresh off of its largest-ever financial hacking caper: nicking $3m from Bangladesh-based Dutch Bangla’s cash machines.

Since then, Group-IB estimates that the team has grown even more ambitious, sending out more than 170,000 emails to banks around the world, with a focus on Asia, where 80,000 messages were sent.

Those emails were often booby-trapped with links or attachments in an attempt to trick victims into downloading and opening one or more of the group’s preferred pieces of malware. The infected PCs connect back to a command-and-control server, and are then used to allow the hackers to move laterally around the bank’s computer networks.

The actual theft of the money is conducted through ATMs. As in the Dutch Bangla operation, other banks have reported that, once the miscreants get into the network, they gain control of the servers managing the cash machines and card processing systems.


This allows the attackers to direct money mules to specific ATMs that are then ordered to dispense cash. If the mules are caught (as they were with the Dutch Bangla heist) the hackers masterminding the operation are shielded from the cops.

As successful as this method has been, it has also attracted attention to the operation, and Group-IB says that it has forced the Silence crew to up their game by making their malware tools harder to trace and attribute. They do, however, still have some learning to do.

“Silence has made a number of changes to their toolset with one goal: to complicate detection by security tools. In particular, they changed their encryption alphabets, string encryption, and commands for the bot and the main module,” Group-IB notes.

“Silence has also made a move to including fileless modules in their arsenal, albeit much later than other APT groups, suggesting that the group is still playing catch-up compared to other cybercriminal groups.” ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/08/21/silence_hackers_continues_growth/

Stuff like sophisticated government spyware is scary and all – but don’t forget, a single .wmv file can pwn you via VLC

VideoLAN has issued an update to address a baker’s dozen of CVE-listed security vulnerabilities in its widely used VLC player software.

The VLC update includes patches to clear up flaws that range in impact from denial of service (read: application crashes) to remote code execution (i.e. malware installation). Users and admins can get fixes for all of the vulnerabilities by updating VLC to version 3.0.8 or later.
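When checking whether an installed copy is safe, version strings should be compared numerically rather than lexically (so that, say, "3.0.10" correctly ranks above "3.0.8"). A small sketch of that comparison, assuming plain dotted-numeric version strings:

```python
def is_patched(version, fixed=(3, 0, 8)):
    """True if the dotted-numeric version string is at or past the fixed release."""
    parts = tuple(int(p) for p in version.split("."))
    # Python compares tuples element by element, so (3, 0, 10) > (3, 0, 8).
    return parts >= fixed
```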

So far, no attacks exploiting these holes have been reported in the wild.

“While these issues in themselves are most likely to just crash the player, we can’t exclude that they could be combined to leak user information or remotely execute code,” VideoLAN offered in announcing the update. “ASLR and DEP help reduce the likeliness of code execution, but may be bypassed.”

Each of the 13 flaws would be exploited by opening a booby-trapped media file, such as vids in WMV, MP4, AVI, and OGG formats. In other cases, the flaws could be exploited via browser plugins by visiting a malicious webpage.

11 of the 13 vulnerabilities were uncovered and reported to VideoLAN by bug hunter Antonio Morales Maldonado of security firm Semmle. Of those 11 bugs, Maldonado reckons that five in particular – two use-after-free flaws and three out-of-bounds write bugs – are particularly dangerous, as they could potentially allow remote code execution if exploited in the wild.

CVE-2019-14438 is particularly interesting as it targets .ogg files.


“This vulnerability could be triggered by inserting specially crafted headers which are not correctly counted by the xiph_CountHeaders function. As a result, the total number of bytes that could be written is larger than expected, overflowing previously allocated buffers,” Semmle notes in its disclosure.

“In this case, the vulnerability risk is also increased due to the large amount of bytes that can be overwritten, and the possibility that it can also be turned into an OOB read.”

Two other remote code execution flaws were discovered by white-hats Hyeon-Ju Lee (who found CVE-2019-13602) and Xinyu Liu (CVE-2019-13962). Both of those would be triggered by launching a specially-crafted .MP4 file.

Maldonado’s other finds include three out-of-bounds read flaws (leading to information disclosure or an application crash) as well as two divide-by-zero and one null pointer dereference flaws that would crash the application. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/08/21/vlc_security_patch/

CISOs Struggle with Diminishing Tools to Protect Assets from Growing Threats

Most CISOs see the risk of cyberattacks growing and feel they’re falling behind in their ability to fight back, a new survey finds.

More than 80% of CISOs think that the risk of cyberattacks is increasing — and nearly a quarter believe that the attackers’ capabilities are outpacing their own, according to new research from Forbes in association with Fortinet. The reasons for the perceived disparity include shortages in budget and skilled professionals along with a threat attack surface that is quickly becoming larger and more sophisticated.

Artificial intelligence and increasing automation are among the tools CISOs are deploying to deal with increasing threat pressure while they work to increase their budgets and improve the training among security and IT staff to more adroitly deal with malicious activity.

Among the resources to be protected, customers’ personally identifiable information (PII) is listed as most critical, with 36% of those responding saying that it’s their primary concern. PII joins company intellectual property as assets CISOs say are at the top of the list of things to be protected — and the top of the list of assets criminals are most likely to target.

For more, read here.

Dark Reading’s Quick Hits delivers a brief synopsis and summary of the significance of breaking news events. For more information from the original source of the news item, please follow the link provided in this article. View Full Bio

Article source: https://www.darkreading.com/vulnerabilities---threats/cisos-struggle-with-diminishing-tools-to-protect-assets-from-growing-threats/d/d-id/1335584?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Apple iOS update ends in jailbroken iPhones (if that’s what you want)

For the first time in… well, in ages, anyway… a jailbreak exists for the very latest version of iOS!

Jailbreaking is where you exploit a security hole, or more likely a whole series of security holes, in what is essentially a carefully orchestrated cybersecurity attack on yourself, in order to liberate yourself from Apple’s locked-down attitude to iPhones.

Want to install your own apps? Want to modify locked system settings? Want to run network services like SSH or even a tiny web server? Want the freedom to delve more deeply into a running system than Apple will let you? Want to patch security holes on old and unsupported devices?

Want to run the risk of heading into the unknown and accidentally putting your iDevice at more risk than it was before?

Jailbreaking lets you do all of those things, typically by lagging behind the latest iOS updates on purpose, leaving as many holes open as you dare while the jailbreaking community tries to figure out ways to exploit them.

If you keep your iPhone bang up-to-date, you run the risk that by the time a working exploit has been discovered for version X, you’re already on X+2 or X+7 and the exploit no longer works.

And, yes, it’s a complicated irony that one of the oft-mentioned benefits of jailbreaking, namely that it means you can fix bugs as soon as you like without waiting for Apple, is usually achieved by deliberately avoiding bug fixes that Apple has already published.

Well, this time is different!

If you’ve been sticking with iOS 12.3, for example, in the hope of a jailbreak coming out, you face the unusual prospect of upgrading officially to 12.4 first.

Long-time Apple hacker and jailbreaker Pwn20wnd (the middle characters are both digits) just released an update to his popular Undecimus project, also known as unc0ver and touted as “the most advanced jailbreak tool.”

Right now, at least, you simply can’t jailbreak iOS 12.3 even though iOS 12.4 is open for jailbreaking business, and here’s why.

Bear with us, because there’s a metaphor coming.

Why bugs come back

You’re riding home on your bicycle, it’s cold and wet, there’s not too far to go, you’re already thinking lovingly of the electric heater in the bathroom (hipsters don’t use gas, remember?); suddenly, there’s a hissss….

…and your tyre goes flat.

You laboriously remove the offending wheel, take off the tyre, find the hole, patch the tube (hipsters repair rather than replace, remember?), pump it back up, put everything back together, ride on, feel like an achiever!

You’re colder and wetter than before, but smugly chatting in your imagination to the grandchildren you don’t yet have, saying, “When I was young, we had to fix our own…”; suddenly, there’s a hissss….

…and your tyre goes flat.

Double-punctures are more common than you might think, and often happen for the simple reason that the very act of applying a patch can be the cause of another failure, because it disturbs the status quo ante.

Perhaps you treated the symptom (a hole) but didn’t find the cause (a sliver of glass in the tyre), making another flat tyre almost inevitable, and soon?

Perhaps you introduced a new foreign body, such as a stone or another glass shard, while you had the tyre off the rim, making another flat almost inevitable, and soon?

Perhaps you dislodged or disturbed a previous patch, badly applied when you were in a hurry last time, making another flat almost inevitable, and soon?

Well, that’s what just happened to Apple, metaphorically speaking.

The SockPuppet exploit

Back in March 2019, a Google bug-hunter called Ned Williamson found and reported a bug, denoted CVE-2019-8605, in Apple’s kernel code.

Under Google’s Project Zero rules, details of bugs reported this way are suppressed for 90 days, or until a patch is broadly available, thus giving the affected vendor time to fix the problem before the bug is publicly disclosed.

The idea of the 90-day rule is that the crooks don’t get a free-for-all while the patch is being prepared.

Nevertheless, vendors still have genuine pressure on them to get security bugs patched, but not so much pressure that they are forced to act in haste, and thus perhaps to repent at leisure.

Anyway, Apple duly published patches within the deadline, issuing macOS 10.14.4 and iOS 12.3 on 13 May 2019.

These updates dealt with a raft of other security problems at the same time, but both operating systems notably received this fix:

KERNEL

Available for:  macOS Sierra 10.12.6, macOS High Sierra 10.13.6, macOS Mojave 10.14.4
Available for:  iPhone 5s and later, iPad Air and later, and iPod touch 6th generation

Impact:         A malicious application may be able to execute 
                arbitrary code with system privileges

Description:    A use after free issue was addressed with improved 
                memory management.

CVE-2019-8605:  Ned Williamson working with Google Project Zero

On 11 July 2019, presumably thinking that the danger was past, Williamson published a working exploit dubbed SockPuppet, a pun on the fact that the bug exists in low-level networking code.

(In the jargon, network connections are made between sockets, and sockets are commonly denoted by the abbreviation sock in networking code.)

This demonstration exploit was upgraded to a faster and more reliable version called SockPuppet2 on 22 July 2019.

And that’s where the puncture repair story in this case ought to have ended…

…except that it looks as though Apple’s most recent update to iOS, version 12.4, reintroduced the bug.

Whether Apple dislodged the earlier patch, introduced a new way to exploit the previous hole, or patched the symptom rather than the cause last time is not yet known, but the bug is back.

Ironically, Apple’s iOS 12.4 patch came out on 22 July 2019, the very same day as the new-and-improved SockPuppet2 demonstration exploit code.

That was a coincidence, of course, but it ended in trouble for Apple, because it made the recently-released unc0ver jailbreak possible.

Apple now needs to get iOS 12.4.1 out (let’s assume that’s what it will be called) as soon as possible, and not just because the company disapproves of jailbreaking and goes out of its way to prevent it.

A patch-to-the-patch-that-broke-the-patch is needed because there’s now a publicly-known exploit, and an open source jailbreaking toolkit that uses it, against the iOS version that the majority of iPhone owners are currently running.

What to do?

According to reports, the current jailbreak doesn’t work on the very latest iDevices.

Apparently, devices using Apple’s new A12 processor aren’t affected, so you can relax – for now, at least – if you have an iPhone XS, iPhone XS Max, iPhone XR, iPad Mini (2019) or iPad Air (2019).

The rest of us are vulnerable.

One obvious suggestion is “roll back to 12.3”, but there are two reasons not to do so: firstly, 12.4 fixed a lot of other potentially serious holes at the same time as accidentally re-enabling SockPuppet; secondly, Apple won’t let you.

Jailbreakers who already have a jailbroken device can use a bunch of tricks to enable downgrading, or more precisely to prevent Apple disabling it, but those of us who aren’t long-term jailbreakers are out of luck.

Apple prevents downgrades as an anti-jailbreaking measure, or else you could always and easily hack your phone by rolling back to a version that you know could be jailbroken and then rolling forwards again with the jailbreak installed.

Another suggestion is to jailbreak your own phone, and then look for community-contributed patches to tide you over until Apple comes to the update party.

We recommend against doing that – if you aren’t already familiar with the jailbreaking scene, then trying it out for the first time on a work phone or one you use regularly to run your personal life is probably a step too far.

In particular, we strongly recommend against some of the jailbreaking tricks currently showing up in online videos that promise a “jailbreak with no computer” – these typically require you to install unauthorised apps built using rogue Apple Developer Certificates.

As far as we can see, your phone can’t currently get jailbroken remotely, so crooks couldn’t install this jailbreak as a ‘crack’ against your will.

They’d need physical access to your device, they’d have to know your unlock code, and they would need to install a third party app by adding a device management profile that you would be able to spot later on.

For now, the simplest advice is probably the safest: keep your lock code to yourself, don’t let other people play with your phone, and get Apple’s next update as soon as it comes out…

…which is likely to be soon, so watch this space!

You can check for third-party device management by going to Settings > General and looking for a menu item called Device Management. If it exists, go into the option to see who’s been granted access to your phone. If it’s a work phone and it’s enrolled in a Mobile Device Management system like Sophos Mobile Control, you will see one or more entries in the Device Management menu – ask your IT team to tell you what to expect if you see something suspicious.

LEARN MORE ABOUT JAILBREAKING AND ROOTING

We recorded this Naked Security Live video to give you and your family some non-technical tips to improve your online safety, whichever type of phone you prefer.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/MWaHZOUkY_4/

No REST for the wicked: Ruby gem hacked to siphon passwords, secrets from web devs

An old version of a Ruby software package called rest-client that was modified and released about a week ago has been removed from the Ruby Gems repository – because it was found to be deliberately leaking victims’ credentials to a remote server.

Jussi Koljonen, a developer with Visma in Helsinki, Finland, discovered the hacked code in rest-client v1.6.13, and opened an issue to discuss the matter on the GitHub repo for the software. The gem, originally intended to help Ruby developers send REST requests to their web apps, was altered to fetch malicious code from pastebin.com that steals usernames, passwords, and other secrets from the client’s host machine.

According to Jan Dintel, a developer with Digidentity in The Hague, Netherlands, when the infected client is used to send a REST request to a non-localhost website, the malware siphons off the URL of that site along with environment variables that may include authentication tokens, API keys, and other secrets you really don’t want in the wrong hands. These details can be reused by the malicious code’s mastermind to hijack the victims’ accounts.
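To see why environment variables are such a tempting target, here’s a minimal Ruby sketch of the kind of filtering a credential-stealer might perform – the variable names and patterns are illustrative assumptions on our part, not taken from the actual payload:

```ruby
# Environment variables often hold exactly the secrets described above.
# These patterns and names are hypothetical examples, not the malware's.
SENSITIVE_PATTERNS = [/SECRET/i, /TOKEN/i, /API_KEY/i, /PASSWORD/i]

def sensitive_env(env = ENV)
  env.select { |key, _| SENSITIVE_PATTERNS.any? { |pat| key.match?(pat) } }
end

# Simulate a typical developer's environment
fake_env = {
  "HOME"                  => "/home/dev",
  "AWS_SECRET_ACCESS_KEY" => "wJalr...EXAMPLE",
  "GITHUB_TOKEN"          => "ghp_example",
}

puts sensitive_env(fake_env).keys.inspect  # what a malicious gem could exfiltrate
```

Any developer shell or CI system is likely to contain variables matching patterns like these, which is why leaking the environment alongside each request URL is so damaging.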

It also allowed arbitrary Ruby code to run on the infected host, and overloaded the #authenticate method in the Identity class to obtain and leak the user’s email address and password every time the function is called to log into a service.
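Ruby’s open classes make this kind of hijack straightforward. The sketch below uses a simplified, hypothetical Identity class to show how injected code can wrap an existing method, capture its arguments, and then call the original so the caller notices nothing – it illustrates the technique, not the malware’s actual code:

```ruby
# A stand-in for rest-client's Identity class (hypothetical, simplified).
class Identity
  def authenticate(email, password)
    "session-for-#{email}"   # pretend this logs into a service
  end
end

# What injected code can do: keep a reference to the original method,
# then redefine it to capture credentials before delegating.
# CAPTURED stands in for the attacker's remote collection point.
CAPTURED = []

class Identity
  alias_method :original_authenticate, :authenticate

  def authenticate(email, password)
    CAPTURED << [email, password]           # siphon the secrets...
    original_authenticate(email, password)  # ...then behave normally
  end
end

session = Identity.new.authenticate("dev@example.com", "hunter2")
puts session            # caller sees nothing unusual
puts CAPTURED.inspect   # attacker sees everything
```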


The creator of the cracked gem, Matthew Manning, a software developer based in Atlanta, Georgia, promptly apologized, saying that his rubygems.org account had been compromised.

“I take responsibility for what happened here,” he explained in a post on Hacker News. “My rubygems.org account was using an insecure, reused password that has leaked to the internet in other breaches. I made that account probably over 10 years ago, so it predated my use of password managers and I haven’t used it much lately, so I didn’t catch it in a 1Password audit or anything. Sometimes we miss things despite our best efforts. Rotate your passwords, kids.”

In an email to The Register, Manning said, “I believe this type of attack is called ‘credential stuffing’ which is a subcategory of brute force attack. I had no idea anything had happened until a security researcher emailed me yesterday, around the time the GitHub issue was opened.”

The CVE created for the incident is CVE-2019-15224. It’s estimated that only about 1,000 people downloaded rest-client v1.6.13, so the fallout from the incident is likely to be minimal.

The maintainers of rubygems.org removed not only rest-client v1.6.10 through v1.6.13 (released August 13 and 14), but a handful of other compromised gems with related code, including:

  • bitcoin_vanity: 4.3.3
  • lita_coin: 0.0.3
  • coming-soon: 0.2.8
  • omniauth_amazon: 1.0.1
  • cron_parser: 1.0.12, 1.0.13, 0.1.4
  • coin_base: 4.2.2, 4.2.1
  • blockchain_wallet: 0.0.6, 0.0.7
  • awesome-bot: 1.18.0
  • doge-coin: 1.0.2
  • capistrano-colors: 0.5.5

The incident recalls another compromised gem spotted last month, strong_password v0.0.7, and similar attacks on several JavaScript libraries distributed through the npm repository, like the compromises of the purescript-installer, electron-native-notify and event-stream.

When successful, attacks on developer accounts provide miscreants with a way to multiply their effort – a malicious library or module can turn a single hacked account into many when other developers incorporate the compromised code and others opt to use the resulting applications.

Since developer-focused attacks have become more common, software repositories like rubygems.org, npm, and PyPI have encouraged developers to use multifactor authentication to help defend their accounts. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/08/20/ruby_gem_hacked/