STE WILLIAMS

Is Machine Learning the Future of Cloud-Native Security?

The nature of containers and microservices makes them harder to protect. Machine learning might be the answer going forward.

Cloud-native architectures help businesses reduce application development time and increase agility, at a lower cost. Although flexibility and portability are key drivers for adoption, a cloud-native structure brings with it a new challenge: managing security and performance at scale. 

Challenges in the Cloud
The nature of containers and microservices makes them harder to protect, for several reasons:

1. They have a dissolved perimeter, meaning that once a traditional perimeter is breached, lateral movement of attacks (such as malware or ransomware) often goes undetected across data centers and/or cloud environments.

2. With a DevOps mindset, developers are continuously building, pushing, and pulling images from various registries, leaving the door open for various exposures, whether they are operating system vulnerabilities, package vulnerabilities, misconfigurations, or exposed secrets.

3. The ephemeral and opaque nature of containers leaves a massive amount of data in its wake, making visibility into the risk and security posture of the containerized environment extremely complicated. Sorting through interconnected data from thousands of services across millions of short-lived containers to understand a specific security or compliance violation in time is akin to finding a needle in a haystack.

4. With increased development speeds, security is being pushed later in the development cycle. Developers are failing to bake security in early, opting instead to add it on at the end, and ultimately, they are increasing the chance of potential exposures in the infrastructure.

With tight budgets and constant pressure to innovate, machine learning (ML) and AIOps (artificial intelligence for IT operations) are increasingly being built into security vendors' road maps because, at least for now, they are the most realistic way to reduce the burden on security professionals in modern architectures.

What Makes ML a Good Fit?
As containers are constantly being spun up and down on demand, there is no margin for error in security. An attacker has to be successful just once, and that is much easier in a cloud-native environment that is constantly evolving, especially as security struggles to keep up. This means runtime environments can now be compromised through insider hacks, policy misconfigurations, zero-day threats, and/or external attacks.

It is hard for a resource-starved security team to manually secure against these threats, at scale, in this dynamic environment. It may take hours or days before a security profile is adjusted, which is plenty of time for a hacker to exploit this window of opportunity.

Over the last few decades, we have witnessed tremendous progress in ML algorithms and techniques. It has now become possible for individuals who do not necessarily have a statistical background to take models and apply them to various problems.

Containers are a good fit for supervised learning models for the following reasons:

1. Containers have minimal surface area: Because containers are fundamentally designed for modular tasks and have smaller footprints, it is easier to define baseline activity inside them and decide what is normal versus abnormal. In a virtual machine, there could be hundreds of binaries and processes running, but in a container, the number is far smaller.

2. Containers are declarative: Instead of looking at a random manifest, DevOps teams can look at the daemon and container environment to understand exactly what that specific container would be allowed to do at runtime.

3. Containers are immutable: The immutability factor serves as a theoretical guardrail to prevent changes at runtime. For example, if a container starts running netcat all of a sudden, that could be an indicator of a potential compromise.

Given these characteristics, ML models can learn container behavior, making them more accurate when creating runtime profiles that assess what should and should not be allowed. Letting machines define pinpointed profiles and automatically spot indicators of potential threats improves detection. It also alleviates some of the burnout on the security operations center team, whose members no longer have to manually create specific rules for each container environment and can focus on response and remediation rather than manual detection.
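The baseline idea above can be sketched in a few lines. This is an illustrative toy, not any vendor's implementation: it assumes the only telemetry available is a list of process names observed inside the container, learns the set seen during a training window, and flags anything outside that set (such as the netcat example).

```python
# Illustrative sketch: because a container runs only a handful of processes,
# a learned baseline of "normal" process names stays small, and anything
# outside it can be flagged as anomalous.

def learn_baseline(observations):
    """Collect the set of process names seen during a training window."""
    baseline = set()
    for snapshot in observations:
        baseline.update(snapshot)
    return baseline

def detect_anomalies(baseline, snapshot):
    """Return processes in a runtime snapshot that fall outside the profile."""
    return sorted(set(snapshot) - baseline)

# Training window: this container only ever runs its web server and workers.
training = [
    ["nginx", "gunicorn"],
    ["nginx", "gunicorn", "gunicorn"],
]
profile = learn_baseline(training)

# At runtime, netcat suddenly appears: a possible indicator of compromise.
print(detect_anomalies(profile, ["nginx", "gunicorn", "nc"]))  # ['nc']
```

Real systems model far richer signals (syscalls, network flows, file access), but the small, stable footprint of a container is what keeps a profile like this tight enough to be useful.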

In this new world, security has to keep up with the ever-changing technology landscape. Teams must equip their cloud-native security tools to cut through noise and distractions, and find the insight they are looking for and need. Without ML, security teams find themselves stuck on details that don’t matter and missing what does.

Black Hat USA returns to Las Vegas with hands-on technical Trainings, cutting-edge Briefings, Arsenal open-source tool demonstrations, top-tier security solutions and service providers in the Business Hall. Click for information on the conference and to register.

Pawan Shankar has more than eight years of experience in enterprise networking and security. Previously, he worked for Cisco as an SE and a PM working with large enterprises on data center/cloud networking and security solutions. He also spent time at Dome9 (acquired by Check … View Full Bio

Article source: https://www.darkreading.com/cloud/is-machine-learning-the-future-of-cloud-native-security/a/d-id/1335206?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

18% of Enterprises Holding Back on Windows 10 Upgrade

Microsoft will officially end support for Windows 7 on January 14, 2020. Many large businesses aren’t ready.

With less than six months until Microsoft officially ends support for Windows 7 on January 14, 2020, the clock is ticking for the 18% of large enterprises that haven’t yet adopted Windows 10.

New research, conducted by Censuswide and commissioned by Kollective, shows nearly one in five large enterprises has yet to complete its migration. When Kollective reported on the state of Windows 10 migration in January 2019, 43% of businesses in the US and UK were still running Windows 7 and 17% of respondents didn’t know about Microsoft’s end-of-support deadline. Six months later, researchers found, 96% of organizations have begun their migration from Windows 7 to Windows 10.

While most (77%) businesses have finished the migration, nearly one in five large enterprises has not. Microsoft confirmed Windows 7's end-of-life four years ago; however, upgrading can be a lengthy process — some firms took up to three years to transition from Windows XP to Windows 7, Kollective reports. Microsoft is offering options to extend support for large organizations, but the cost of missing the deadline is an estimated $500,000 for an enterprise with 10,000 machines, researchers add.

The challenges don’t end when migration does. Many IT teams don’t realize how Windows 10 will affect their ongoing patching and update schedule, researchers found. Fifteen percent of IT pros are unaware of Windows-as-a-service and the need to continuously update endpoints.

Read more details here.

Dark Reading’s Quick Hits delivers a brief synopsis and summary of the significance of breaking news events. For more information from the original source of the news item, please follow the link provided in this article. View Full Bio

Article source: https://www.darkreading.com/cloud/18--of-enterprises-holding-back-on-windows-10-upgrade/d/d-id/1335249?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Software Developers Face Secure Coding Challenges

Seven in ten developers are expected to write secure code, but fewer than half receive feedback on security, a survey finds.

More organizations are adopting agile programming practices and secure development lifecycles, but most fail to provide developers the tools and processes they need to produce secure code. 

A newly published survey conducted by DevOps service provider GitLab found that 70% of programmers are expected to write secure code, but only 25% think their organization's security practices are "good." The gap between the expectations developers are held to and the reality of their work environments underscores the problems that companies face in securing their software, says Kathy Wang, senior director of security at GitLab.

“I do think that [on] the security side of the industry — the  state of it right now — we are still in a reactive mode,” she says. “There are a lot of companies out there that are moving toward the DevOps mindset, but I think most have not made the transition yet.” 

GitLab in its survey interviewed more than 4,000 developers, managers, and executives at software-producing companies, about 60% of whom are customers of GitLab, to suss out the trends affecting developers. The vast majority of companies are focused on some form of agile software development, with 50% using Scrum in some development groups, 37% using Kanban, and 36% using DevOps. Another 17% continue to use the more methodical waterfall development practice, the survey found.

Among the major issues respondents cite is security and how the production of secure code is handled at their companies. While agile methodology aims to break down barriers between groups (DevOps' push for a single development and operations pipeline being the most obvious example), companies have trouble in practice, the survey found.

“The idea that ‘everyone is responsible for security’ might be the ideal but it can also be part of the problem as ‘everyone’ can easily turn into ‘no one,'” the report stated. “Security professionals often complain about being on the outside, while developers and operations teams can resent being told how to prioritize their work.”

Testing 1-2-3

While 45% of companies have some form of continuous code deployment in the organization (one measure of agile development), half of developers believe that most vulnerabilities continue to be found only after merged code is exported into a test environment; they say they encounter the most delays during the testing stage of development.

Not catching software defects during the development process increases the cost of fixing the issues dramatically, Wang says. 

“We have application security teams and code scanning, but not every company is using those tools,” she says. “If you don’t use it, you are relying on manual code review and things are missed, which means you are finding things after the fact, after code is committed, and that is much more expensive.”

The survey found significant security benefits with a mature DevOps implementation: security teams are three times more likely to find vulnerabilities before code is merged. About a third of teams automated the use of static scans every time code is committed, and a bit more than a quarter had inline security features that checked code as it is written.

Scanning for out-of-date dependencies is the most common type of security check, with 56% of those surveyed using the feature. Only 35% of companies used static analysis security testing (SAST) and 22% used dynamic analysis security testing (DAST), according to the survey.

In all, testing coverage extended to more than 90% of code in the most mature 14% of DevOps teams. 

“You want to make sure that developers are as educated as possible about secure coding processes,” Wang says. “You want tools, and with DevOps, you have more advanced components that you want to deploy.”

The security metrics that respondents deemed to be most important were the severity of vulnerabilities, the time lapsed since a vulnerability was discovered, the mean time to resolution, and the number of vulnerabilities reported.

One particularly interesting tidbit: Developers who mainly work from remote locations rated the maturity of their organization's security practices higher than developers who work in the office. Wang did not have an explanation for the gap in perceived security practices.

Veteran technology journalist of more than 20 years. Former research engineer. Written for more than two dozen publications, including CNET News.com, Dark Reading, MIT’s Technology Review, Popular Science, and Wired News. Five awards for journalism, including Best Deadline … View Full Bio

Article source: https://www.darkreading.com/application-security/software-developers-face-secure-coding-challenges/d/d-id/1335247?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Ransomware attackers, US mayors say you should go jump in a lake

In May, the US city of Baltimore was partially paralyzed when a ransomware attack seized parts of the government’s computer systems.

The data-kidnappers demanded 13 Bitcoins, worth about US $100,000 at the time.

Would the city cough it up? Or would it tough it out, knowing that lining the attackers’ pockets only encourages them to attack other government systems, and that paying is no guarantee they won’t come back to gouge away more?

At the time, Baltimore Mayor Bernard “Jack” Young said that the city might eventually pay up, but at that point, he was leaning toward the “no” camp.

Eventually, Mayor Young didn’t just lean toward no: he signed up whole hog with the “Go toast yourself” camp, sponsoring a resolution unanimously approved by the US Conference of Mayors last month, calling on cities to not pay ransom to cyberattackers.

Be it resolved: You can go suck on a lemon

There are 1,407 US cities with populations of 30,000 or more that make up the membership of the nonpartisan Conference of Mayors. It’s not binding, but this is the resolution that they all agreed to at their 87th annual meeting last month in Honolulu:

Opposing Payment To Ransomeware [sic] Attack Perpetrators

WHEREAS, targeted ransomware attacks on local US government entities are on the rise; and
WHEREAS, at least 170 county, city, or state government systems have experienced a ransomware attack since 2013; and
WHEREAS, 22 of those attacks have occurred in 2019 alone, including the cities of Baltimore and Albany and the counties of Fisher, Texas and Genesee, Michigan; and
WHEREAS, ransomware attacks can cost localities millions of dollars and lead to months of work to repair disrupted technology systems and files; and
WHEREAS, paying ransomware attackers encourages continued attacks on other government systems, as perpetrators financially benefit; and
WHEREAS, the United States Conference of Mayors has a vested interest in de-incentivizing these attacks to prevent further harm,
NOW, THEREFORE, BE IT RESOLVED, that the United States Conference of Mayors stands united against paying ransoms in the event of an IT security breach.

The Conference of Mayors’ numbers are backed up by a report published in May 2019 by US cybersecurity firm Recorded Future. The report says that ransomware attacks against state and local governments, while on the rise, are underreported.

There’s no ransomware reporting requirement

Such attacks aren’t always publicly reported – there’s no equivalent of the UK’s watchdog, the Information Commissioner’s Office (ICO), or of the General Data Privacy Requirements’ (GDPR’s) strict rules (and breathtaking fines) about reporting breaches to ensure that anybody knows about ransomware attacks against government agencies.

Recorded Future’s Allan Liska gives a shout-out to local reporters on this front:

A lot of the information I was able to find was in local papers or local television news reports, which makes sense – most of these incidents are not ‘big enough’ to be considered national news, so local journalists would be the only ones covering them.

Liska noted that the cut-off for the Recorded Future report was the end of April 2019. But since then, there have been at least three new ransomware attacks against state and local governments: Lynn, Massachusetts (hit twice: once against its schools, and then against its online parking payment system); Cartersville, Georgia, where online bill pay was attacked; and Baltimore, where the May attack was at least the second time the city has been hit. The first (publicly reported) attack on Baltimore was in 2018, when attackers went after the city's emergency service dispatchers.

Just in Florida alone, we’ve seen these cities get hit over the past few months:

  • Riviera Beach, Florida, which agreed to pay attackers over $600,000 three weeks after its systems were crippled.
  • Lake City, Florida, which was hit on 10 June 2019 by Ryuk ransomware, apparently delivered via Emotet. Lake City officials agreed to pay a ransom of about $490,000 in Bitcoin.
  • Key Biscayne, Florida, which got clobbered by an Emotet-delivered Ryuk attack. The city reportedly hasn’t yet decided if it’s going to pay the ransom.

…and earlier this month, it was Georgia’s court system.

In fact, Liska said, he dug up ransomware attacks in 48 states and the District of Columbia. That leaves only the states of Delaware and Kentucky with no (publicly reported) ransomware attacks.

That doesn’t mean those two states haven’t been attacked, mind you, Liska said. He pointed to an example of a writeup of an attack against a Utah county that said that …

The FBI is aware of other ransomware attacks on other Utah governments.

… but he couldn’t find public reports of attacks against other Utah government agencies.

It’s a good reason to continue to support your local news outlets.

What to do?

For information about how targeted ransomware attacks work and how to defeat them, check out the SophosLabs 2019 Threat Report.

The bottom line is: if all else fails, you’ll wish you had comprehensive backups, and that they aren’t accessible to attackers who’ve compromised your network. Modern ransomware attacks don’t just encrypt data, they encrypt parts of the computer’s operating system too, so your backup plan needs to account for how you will restore entire machines, not just data.

For more on dealing with ransomware, listen to our Techknow podcast.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/WeFkAObXh_A/

FCC underwhelmed by carriers’ sluggish robocall efforts

Federal Communications Commission (FCC) Commissioner Geoffrey Starks has published the responses he's received from major voice service providers after calling for them all to give customers free, on-by-default robocall blocking services last month.

Thanks for the timely response, he said on Thursday. No thanks for the muddled, slow-moving mess, though:

Despite historically clamoring for new tools, it does not appear that all providers have acted with haste to deploy opt-out robocall blocking services. The Commission spoke clearly: we expect opt-out call blocking services to be offered to consumers for free. Reviewing the substance of these responses, by and large, carriers’ plans for these services are far from clear.

In June 2019, the FCC voted to allow carriers to deploy call blocking services and to offer them to consumers by default, on an informed, opt-out basis. It also made it clear that if carriers don’t get in line and implement Caller ID authentication through the SHAKEN/STIR framework by the end of this year, it’s going to churn out regulations to make them do so.

In response to that order, Commissioner Starks asked 14 telecoms to inform the Commission of their plans to offer free robocall-blocking services by default.

This isn’t the first time the carriers have been slammed for their lousy efforts on robocalls, which are driving everybody nuts – to the extent that lawmakers have actually made bipartisan efforts to do something about it. In May, the bipartisan Telephone Robocall Abuse Criminal Enforcement and Deterrence Act, or the TRACED Act, sailed through the Senate on a 97-1 vote.

The carriers have pushed back on SHAKEN/STIR, pointing out that it’s not a complete solution – it’s not going to be useful without universal adoption, and that means it won’t affect calls carried by international providers. Nor is it cheap. Nor does SHAKEN tell them anything about the content of a call or whether it’s legal.

Those, at least, were some of the reservations the carriers expressed in November 2018, after FCC Chairman Ajit Pai told them that he’d expect that within a year we’d all be able to get back to actually answering our phones without finding we’ve been tricked by illegally spoofed caller IDs.

One year on from Pai’s call would be November 2019: about four months from now. How are those carriers getting on with it?

Progress report: A bit of a slog

Here are some of the progress points:

AT&T is expanding its existing call-blocking system, called Call Protect, to provide free, default, automatic blocking of suspected fraud calls for all newly installed lines. It says it's also working on call-blocking and labeling tools for more customers in the coming months, also at no charge. Customers can expect to be notified via text message when automatic fraud blocking is added to their service.

T-Mobile’s Scam ID and Scam Block work automatically on all iOS and Android devices and are both free.

Comcast told Starks that it offers free, default tools at the network level that automatically block illegal and fraudulent robocalls. It also offers “a range of free robocall mitigation tools that its customers may opt in to using,” and it’s exploring how to make some of those tools available on an opt-out basis. Comcast didn’t give a timeline for when it would be offering those default tools.

Sprint said it would offer a free call-blocking application “in the near future.”

Put all the responses together, and it paints a picture that Starks has found to be underwhelming:

In our action last month, the Commission committed to studying this issue and delivering a progress report within a year. If we find that carriers are acting contrary to our expectations, we will commence a rulemaking. To that end, as I noted in my letters, I expect to be updated by carriers as progress is made on offering free call blocking services and recommend that carriers not stop until the job is finished.

Here’s the list of links to the 14 carriers’ letters in response to the FCC’s call for free, opt-out-based robocall blocking services for you to peruse.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/LfsQXCfgI8U/

Apple quietly removes Zoom’s hidden web server from Macs

In an embarrassing twist to the week-long saga of Zoom’s vulnerable web-conferencing app, Apple has issued a ‘silent’ update that automatically removes the software’s hidden web server from Macs.

Zoom released its own fix doing the same thing a day earlier, on 9 July 2019, but Apple remained unconvinced that this protected users who had either not updated their software or had deleted it before the company took this action.

Having hidden software removed from a platform like Apple's isn't a good look, and to add insult to injury, according to Apple expert Patrick Wardle, the removal was carried out using the macOS Malware Removal Tool (MRT).

Zoom later said it had worked with Apple to “test” the removal update, although to some people that will sound like a face-saving statement of the obvious.

Rinse and repeat

It’s fair to say, then, that last week was not a good one for anyone working at Zoom, whose web conferencing software boasts of having more than four million users across desktop and mobile platforms, including Windows (some of whose users are also affected).

The timeline of the vulnerabilities uncovered in Zoom, and the company's response to them, has become rather confusing since news of the issue was made public on 8 July 2019 by researcher Jonathan Leitschuh.

Naked Security has already covered much of this in an earlier story, including some basic mitigation against it.

We’ll summarise the increasingly confusing story since that coverage by noting that the vulnerabilities have now generated three advisories:

  • CVE-2019-13449 (the original denial-of-service flaw),
  • CVE-2019-13450 (webcam takeover, unpatched but mitigated by removing the web server described above), and
  • CVE-2019-13567 (a proof of concept making remote code execution possible).

The first and third issues should be fixed by updating to Zoom client version 4.4.2 on macOS (the software is also re-branded by RingCentral, in which case it’s version 7.0.136380.0312).

The aftermath

Applications are afflicted with security problems all the time, but the account offered by Leitschuh of his attempts to get the company’s attention when he first discovered the issue in March 2019 doesn’t read well.

First, it took him weeks to get a response before he says the company offered him a bug bounty on the condition he didn’t publicly disclose the problem.

After some toing-and-froing and the expiration of Leitschuh's 90-day disclosure deadline, a 'fix' was issued that turned out to have a workaround, at which point he made the flaws public.

Zoom responded in a statement, admitting that its website “doesn’t provide clear information for reporting security concerns,” and announcing imminent plans to launch a public bug bounty program.

It also painted a less tardy picture of its response to the flaws, without fully explaining why its engineers took the arguably risky step of running a local web server with an undocumented API in the first place.

For his part, Leitschuh recommends reporting flaws via third-party bug bounty programs rather than via Zoom’s. Either way, with researchers all over its software like a rash, Zoom has a job on its hands to restore trust.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/IpuuasImo2g/

New old Windows bug emerges, your ‘strong’ password is anything but, plus plenty more

Roundup Here is a brief look at some of the other security stories floating around right now.

Ruby gem strong_password tarnished

Earlier this month, an alert went out to Ruby on Rails developers after it was discovered that a popular package had been hijacked and injected with malicious code.

Tute Costa was going through the gems used for his Ruby application and checking for updates when he noticed that something was amiss with the strong_password package.

It was eventually concluded that the GitHub account managing the gem had been hijacked from its original owner and then had a bit of malicious code inserted. Costa alerted both the original owner and the Ruby on Rails security team.

“While waiting for their answers, I tried to understand the code,” Costa explained.

“If it didn’t run before (checking for the existence of the Z1 dummy constant) it injects a middleware that eval‘s cookies named with an ___id suffix, only in production, all surrounded by the empty exception handler _! function that’s defined in the hijacked gem, opening the door to silently executing remote code in production at the attacker’s will.”
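The pattern Costa describes, a middleware that evaluates attacker-named cookies, is a classic route to remote code execution: input an outsider controls is fed straight to the interpreter. A minimal Python sketch of the same anti-pattern (a hypothetical handler for illustration, not the actual Ruby payload):

```python
# Hypothetical request handler illustrating the anti-pattern Costa describes:
# evaluating data from a cookie hands code execution to whoever sets it.

def handle_request(cookies):
    """Dangerously eval any cookie whose name ends with '___id'."""
    results = []
    for name, value in cookies.items():
        if name.endswith("___id"):
            # An attacker who controls this cookie controls the process.
            results.append(eval(value))  # never do this with untrusted input
    return results

# A "benign" attacker-supplied payload; it could just as easily spawn a shell.
print(handle_request({"session___id": "40 + 2"}))  # [42]
```

Here the payload merely computes a number, but an attacker could just as easily pass code that opens a reverse shell, which is why eval on untrusted input is treated as an automatic red flag in code review.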

Eventually, order was restored and the package was put back in control of the original developer. Devs who use the strong_password gem are advised to update to version 0.0.8 or downgrade to version 0.0.6 to make sure the malicious code is removed.

Malware group exploits zero-day in antiquated versions of Windows

Normally, an active attack on an unpatched Windows vulnerability is going to be headline news. This one, however, warrants far less attention.

Researchers with ESET say that an Eastern European group known as Buhtrap has been launching targeted attacks aimed at CVE-2019-1132, a privilege escalation flaw in the win32k.sys component.

This sounds bad, but fortunately it's a non-issue for most anyone with even a relatively up-to-date PC. The flaw is only exploitable on 32-bit Windows 7 SP1 and earlier. This means anyone running Windows 10, Windows 8, or a 64-bit build of Windows 7 is in no danger.

“The exploit only works against older versions of Windows, because since Windows 8 a user process is not allowed to map the NULL page. Microsoft back-ported this mitigation to Windows 7 for x64-based systems,” ESET said.

Microsoft also included a fix in the SSU update accompanying this month’s Patch Tuesday bundle.

Full Brazilian (compromise) for routers

Avast is sounding the alarm after uncovering a massive compromise of routers in Brazil.

The security house estimates that 180,000 users have been hit with a malware attack that attempts to change the DNS settings on their routers, allowing the attackers to re-route traffic requests to sites of their choosing.

Users are advised to update their router firmware and run an antivirus scan to check for any additional malware infections.

New reports on Lake City ransomware fiasco

Another week, another piece of news about the town of Lake City and its battle with a nasty ransomware infection.

A report from local news station WINK finds that despite paying the demanded ransom last month, the city was not able to decrypt all of its locked files. Perhaps that was why one of the city’s IT managers was dismissed recently.

Color us shocked: criminals aren’t particularly trustworthy.

Microsoft slips telemetry files into security update, people lose their minds

A minor stir was raised on Patch Tuesday when the Windows 7 "security only" version of the monthly update was found to contain a telemetry component called Compatibility App. This led to an outcry that Microsoft was trying to sneak tracking tools into what should only be a security fix.

As Luta Security CEO Katie Moussouris notes, however, there are plenty of legitimate reasons for this.

In other words, relax nerds. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/07/15/security_roundup_120719/

Malicious code ousted from PureScript’s npm installer – but who put it there in the first place?

Another JavaScript package in the npm registry – the installer for PureScript – has been tampered with, leading project maintainers to revise their software to purge the malicious code.

After a week of reports of unexpected behavior, software developer and PureScript contributor Harry Garrood on Friday published his account of the affair.

The installer, invoked by typing npm i -g purescript from the command line, was designed to install PureScript, a programming language that compiles to JavaScript, on the user’s system using the npm command line interface. It gets used about 2,000 times a week.

According to Garrood, the installer was originally developed and maintained by Shinnosuke Watanabe (@shinnn), a developer based in Japan. The PureScript maintainers had disagreements with Watanabe about the upkeep of the installer and asked him to transfer the project to their control.

“He begrudgingly did so,” explained Garrood in his post, noting that the 0.13.2 PureScript compiler release that debuted on July 5th is the first since the project team took over management of the installer package. And that’s where the problems started.

The PureScript installer has dependencies also under the control of Watanabe, or rather it did until they were removed earlier this week: the npm packages load-from-cwd-or-npm and rate-map. Garrood says malicious code was introduced into each of these packages at separate times to break the recent revision of the PureScript installer – but not previous versions published by Watanabe.

“@shinnn claims that the malicious code was published by an attacker who gained access to his npm account,” explained Garrood. “As far as we are aware, the only purpose of the malicious code was to sabotage the PureScript npm installer to prevent it from running successfully.”

Compromised developer accounts are an ongoing concern across all the software package registries. Earlier this month, a Ruby gem (package) was hijacked. And in June, a vulnerability in an npm package was exploited to steal cryptocurrency, echoing a similar incident that came to light in November last year.
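Lockfiles are one of the standard defences against exactly this kind of silent tampering: npm records an integrity hash for each resolved dependency, so a re-published tarball with different contents will fail verification at install time. A minimal sketch of that idea, using invented package versions and hashes rather than the real contents of any lockfile:

```javascript
// Minimal sketch: flag dependencies in a package-lock-style object that
// lack an integrity hash. Versions and hashes below are illustrative
// only, not the real lockfile contents of the PureScript installer.
const lock = {
  dependencies: {
    "load-from-cwd-or-npm": { version: "3.0.4", integrity: "sha512-aaa..." },
    "rate-map":             { version: "1.0.2", integrity: "sha512-bbb..." },
  },
};

// Return the names of dependencies with no integrity field, i.e. the
// ones npm could not verify against a recorded hash at install time.
function unpinned(lockfile) {
  return Object.entries(lockfile.dependencies || {})
    .filter(([, meta]) => !meta.integrity)
    .map(([name]) => name);
}

console.log(unpinned(lock)); // → []
```

In practice `npm ci` enforces these recorded hashes automatically; the sketch only shows what the lockfile is protecting.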

But it’s not clear that Watanabe’s account was actually hijacked; this may just be a case of one developer lashing out at others over personal disagreements.

Garrood implies that Watanabe is to blame for the security lapse but stops short of accusing him explicitly. He calls the compromise a malicious act without attributing it to anyone. At the same time, he cites behavior that’s difficult to explain – he claims that Watanabe deleted a GitHub issue post on July 9 made by developer Jolse Maginnis indicating that his load-from-cwd-or-npm package is breaking the installer.


In his analysis of the malicious portion of load-from-cwd-or-npm, Garrood observes that the purpose of a specific conditional statement that had been added “seems to be to ensure that the malicious code only runs when our installer is being used (and not @shinnn’s).”
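Garrood's post describes but does not reproduce that conditional, so the following is a hypothetical reconstruction of the shape such a guard might take, not the actual code from load-from-cwd-or-npm. Every name in it is invented for illustration:

```javascript
// Hypothetical sketch of a guard that makes a dependency misbehave only
// when pulled in by one specific consumer. All names are invented; this
// is not the real code from load-from-cwd-or-npm.
function shouldSabotage(consumer) {
  // Only trigger for the new maintainers' installer package, leaving
  // the original author's own packages untouched.
  return consumer.name === "purescript-installer" &&
         consumer.maintainers.includes("purescript-team");
}

console.log(shouldSabotage({
  name: "purescript-installer",
  maintainers: ["purescript-team"],
})); // true

console.log(shouldSabotage({
  name: "purescript-installer",
  maintainers: ["original-author"],
})); // false
```

A gate of this shape would match Garrood's observation that only the releases published after the handover broke, while earlier versions ran fine.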

On Twitter, developer Vincent Orr chastised Garrood for insinuating that Watanabe is to blame, to which Garrood replied, “I’ve deliberately not assigned any blame, just relayed facts.”

Orr however suggests that’s inconsistent with mentioning Watanabe’s GitHub handle a dozen times.

The Register emailed Garrood and Watanabe seeking comment but we’ve not heard back.

We’ve also asked NPM to elaborate on whether it has investigated the incident or taken any action against Watanabe based on these allegations. No word yet. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/07/15/purescripts_npm_installer/

In memoriam – Corby Corbató, MIT computer science pioneer, dies at 93

Almost everyone’s heard of Linux – it’s the operating system kernel that’s behind a significant proportion of servers on the internet, including most of Google, Facebook, Amazon and many other contemporary online juggernauts.

In its Android flavour, Linux powers the majority of smartphones out there, and in one form or another it’s also the kernel of choice for many so-called IoT devices such as bike computers, home Wi-Fi routers, webcams, baby monitors and even door locks.

Most people who use Linux know that the name is a sort-of pun on Unix, the operating system that Linux most resembles.

And Unix, of course, is the operating system behind a significant proportion of the devices out there that don’t run Linux, being at the heart of Apple’s macOS and iOS systems, as well as the various and widely-used open source BSD distributions.

But nowhere near as many people realise that the name Unix was originally Unics, and was itself a pun on Multics, the ground-breaking multiuser operating system that gave rise to the Unix project itself.

Multics, in turn, was essentially Version 2 of a Massachusetts Institute of Technology (MIT) operating system called CTSS, short for Compatible Time-Sharing System.

We take it for granted these days

CTSS offered a whole new way of organising computation, one that we take for granted these days on our laptops, servers and phones.

You could run programs in the background as batch jobs, or in the foreground as interactive sessions, and that was “you (plural)”, because several users could be running interactive sessions at the same time.

Each user had the illusion that they had a computer of their own, with programs and users protected from trampling on each other’s stuff by a supervisor program that kept control of the hardware and distributed its resources between the various processes running alongside each other.

As with many next-generation projects, Multics was ambitious, using brand new hardware built specially for the purpose, and a large development team split between multiple organisations.

Indeed, the complexity of the Multics project is what led to Unix, which was an alternative, slimmed-down approach described by Unix co-creator Dennis Ritchie as “one of whatever Multics was many of”.

Ultimately, the leaner, meaner, cleaner, smaller-team approach of Unix won the day, which is why we casually use the term “Unix-like systems” these days as if it were a synonym for “anything that isn’t Windows” (and even Windows comes with a Unix-like subsystem now).

However, we could just as accurately, and with a more profound sense of history, say Multics-inspired or CTSS-derived instead.

As Dennis Ritchie said in the 1970s:

In most ways UNIX is a very conservative system. Only a handful of its ideas are genuinely new. In fact, a good case can be made that it is, in essence, a modern implementation of MIT’s CTSS system. This claim is intended as a compliment to both UNIX and CTSS. Today, more than fifteen years after CTSS was born, few of the interactive systems we know of are superior to it in ease of use; many are inferior in basic design.

The man behind it all

The man behind CTSS and Multics, the man who did the groundwork that made Unix happen, was Fernando José Corbató, better known simply as Corby.

Corby won the 1990 ACM A.M. Turing Award – the equivalent of a Nobel Prize in Computer Science:

For his pioneering work organizing the concepts and leading the development of the general-purpose, large-scale, time-sharing and resource-sharing computer systems, CTSS and Multics.

And that brings us to the sad news that Corby died yesterday at the age of 93.

Lest we forget

Amusingly, Corby is credited with “inventing” the computer password – Wired magazine, back in 2012, reported:

According to Corbató, even though the MIT computer hackers were breaking new ground with much of what they did, passwords were pretty much a no-brainer. “The key problem was that we were setting up multiple terminals which were to be used by multiple persons but with each person having his own private set of files […] Putting a password on for each individual user as a lock seemed like a very straightforward solution.”

So, even back in the 1960s when computer security was almost an oxymoron, Corby wanted the notion of privacy to be supported by technological controls that kept people out of each others’ stuff in deed as well as in thought…

…which is a legacy well worth remembering as we approach the 2020s and live in a world where we now have almost all the technological controls we need to keep our privacy truly private.

Ironically, however, we now seem to keep on arguing for contrary controls that run a very real risk of keeping other people out of our stuff in thought only, instead of in deed.

Fernando José Corbató, RIP.


Featured image of Corby with computer thanks to MIT Computer Science Artificial Intelligence Lab

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/vOmot0erDIE/

Blah blah Blaha: Slovak infosec firm ESET sues politico who called them ‘outrageous fascists’

Infosec company ESET is reportedly suing a member of the Slovakian Parliament for insulting it over social media.

According to Slovakian news outlet SME, ESET became fed up with the antics of local Marxist politician Ľuboš Blaha, who was allegedly describing the threat intelligence outfit’s owners as oligarchs who “own the media and pay politicians”.

He is also said to have claimed that ESET is linked to the US CIA, presumably to discredit the company as some kind of willing pawn to US foreign policy. The company has an extensive presence in the US market.

SME reporter Martina Raabova told The Register that the lawsuit against Blaha was filed by ESET because, in the infosec firm’s view, “he wrote several misleading statuses on Facebook about the company”.

Further reporting from Slovakia revealed that ESET asked Blaha to stop publishing the accusations of bribery, corruption and foreign collaboration or they’d sue. This prompted him to publish a Facebook video in which he said: “ESET is trying to silence me. This is their vision of freedom and democracy!”, going on to brand them “outrageous fascists”.

El Reg has not encountered any fascists, outrageous or otherwise, representing ESET at infosec industry events, that we know of at least.

Blaha’s beef with ESET seems to be that, as a local big business, they must be exploiting their wealth to support politicians who oppose his point of view. He appears to use social media to communicate directly and informally with his fan base, much like some other popular politicians in the English-speaking world.

Instances of security companies initiating lawsuits against politicians are really rather rare. Kaspersky has taken the US government to court for banning the public sector from using its products, though that is a biz-to-gov lawsuit rather than against the politicians who caused it.

A security product testing company, NSS Labs, sued Crowdstrike, ESET and a bunch of other firms last year, claiming that they were conspiring to stop product deficiencies becoming public – though that lawsuit has nothing to do with politicians being rude on Facebook.

ESET refused to comment, citing the ongoing legal case. ®

Bootnote

Google Translate renders ESET to “Isis” when translating from Slovakian to English. The name Eset is Slovakian for the Egyptian goddess Isis, who is the goddess of marriage and femininity. While the SME headline “Isis ran out of patience and sued Blaha” might suggest that the murderous bastards of the Islamic State terror group have finally turned to peaceful means of conflict resolution, sadly it’s just an error in translation.

Sponsored:
Balancing consumerization and corporate control

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/07/12/eset_slovakian_mp_lawsuit/