STE WILLIAMS

Second time lucky: Sweden drops Julian Assange rape investigation

A rape investigation involving everyone’s favourite cupboard-dwelling WikiLeaker, Julian Assange, has been dropped, Swedish prosecutors told the world’s press today.

Deputy director of public prosecutions Eva-Marie Persson told journalists that the case against Assange had been discontinued, more than nine years after allegations were first made against him by two complainants relating to incidents that allegedly took place in August 2010.

“The reason for this decision is that the evidence has weakened considerably due to the long period of time that has elapsed since the events in question,” said the Swedish Prosecution Authority in a public statement.

Illustration of Julian Assange

Assange fails to delay extradition hearing as date set for February

READ MORE

Citing previous decisions of the Swedish Supreme Court on the legal tests needed to proceed with a rape investigation, prosecutors said these rulings “imply that it is insufficient for the injured party’s version of events to be more credible than the suspect’s [evidence, on its own, to justify a prosecution*]; however, a credible assertion of events on the part of the injured party in combination with other facts that have emerged may be sufficient for a conviction.”

It appears that in Sweden there must be independent evidence that backs up an allegation before a prosecution can go ahead. An allegation alone, however compelling, appears not to meet the legal threshold.

Naomi Colvin, an Assange campaigner currently with the Bridges for Media Freedom campaign group, tweeted: “Without wanting to state the obvious, if the investigation had ever been conducted properly, this [today’s dropping of the investigation] would have happened many years ago.”

The campaign’s Greg Barns chipped in to tell The Register: “The decision by Sweden is the only one it could have taken. It finally recognises that Julian Assange’s adamant denial of wrongdoing is the truth. It is the US that must now be persuaded to drop its unfair and dangerous pursuit of Assange.”

After the passage of nearly 10 years, it seems local prosecutors took the view that the rape complainants’ recall of events plus the heavy international media coverage would not be enough to secure a conviction.

The rape investigation was reopened in May this year after Ecuador, which had been sheltering Assange in its London embassy, finally kicked him out of their broom cupboard. The prosecution had previously been discontinued in 2017, with the Swedes evidently having given up hope that they’d ever get their hands on Assange.

Sweden said one of the complainants had asked for the probe to be resumed in April this year.

Chief prosecutor Marianne Ny said at the time: “In view of the fact that all prospects of pursuing the investigation under present circumstances are exhausted, it appears that it is no longer proportionate to maintain the arrest of Julian Assange in his absence. Consequently, there is no basis upon which to continue the investigation.”

Assange remains an involuntary guest of HM Prison Belmarsh in southeast London, with American prosecutors seeking his extradition and trial on a charge of conspiracy to commit computer intrusion “for agreeing to break a password to a classified US government computer”. The Australian was remanded in custody as a flight risk, being refused bail, after famously entering Ecuador’s London embassy to evade the British justice system. That little stunt cost his rich backers more than £90,000 in forfeited bail sureties – and eventually earned him a 50-week prison sentence once British police captured him.

He faces a full extradition hearing at Westminster Magistrates’ Court in February 2020, with the inevitable appeal probably being heard at the High Court in the second half of next year.

Kristinn Hrafnsson, WikiLeaks editor-in-chief, said in a tweet from that organisation: “Let us now focus on the threat Mr Assange has been warning about for years: the belligerent prosecution of the United States and the threat it poses to the First Amendment.”

Elisabeth Massi Fritz, the lawyer for one of the complainants, told The Reg:

Had the previous prosecutors in charge of this investigation recorded and held more detailed interviews with the supporting witnesses, then those testimonies would have had a higher evidentiary value, and could have been used today. Unfortunately that is not the case.

My client feels that it’s unjust that Assange will never have to face a proper interrogation where he would be asked the same questions that she has had to answer so many times.

The right decision at this stage of the investigation would have been to question Assange in London, formally notify him of the allegations against him, and then proceed with an indictment.

The current prosecutor has done a thorough job since the preliminary investigation was resumed in May this year, and she deserves praise for that.

My client has done everything she could to get justice, and as her lawyer I have fought for her rights and interests for many years. It has been an honour to help her.

®

Bootnote

* The italics are The Register's insertion.

Article source: https://go.theregister.co.uk/feed/www.theregister.co.uk/2019/11/19/julian_assange_sweden_rape_investigation_dropped/

A Security Strategy That Centers on Humans, Not Bugs

The industry’s fixation on complex exploits has come at the expense of making fundamentals easy and intuitive for end users.

Too often, the human element of security is ignored or overlooked. As Martijn Grooten has pointed out, humans are features, not bugs, in information security. It's past time we acknowledged this reality and focused on improved usability for technical solutions and better communication outside the security community. With this one-two punch, the Internet Society's Online Trust Alliance estimates, over 90% of compromises could be prevented.

Certainly, this is not a novel concept. In his Black Hat 2017 keynote, Alex Stamos called for greater empathy toward users, acknowledging the industry’s fixation on complex exploits that has come at the expense of making the fundamentals easy and intuitive. While great research avenues have emerged and sophisticated advanced persistent threats (APTs) have been detected, the overemphasis on lower-probability, complex exploits comes at the expense of higher-probability, less-sophisticated tactics that are responsible for over 90% of data compromises.

The focus doesn't have to be one approach or the other, as equal attention on both research avenues could significantly affect security for the majority of the population. Researchers behind the 2019 Verizon “Data Breach Investigations Report” found that most attacks could be classified as nuisance attacks, which means solutions already exist to prevent them. For instance, by adding a recovery number to a Google account for two-factor authentication, researchers found they could block “100% of automated bots, 99% of bulk phishing attacks, and 66% of targeted attacks.”

If the fundamental technology solutions are well-known, why does digital literacy remain so low? This is where the human element — and especially usability and communications — has largely been ignored. For instance, despite the benefits of multifactor authentication (MFA), less than 10% of Gmail users enable it. Similarly, passwords remain a source of derision within the industry, as year-over-year default settings and poor password choices like “123456” and “password” continue to top the list, and have even been linked to high-profile breaches. This is why it is so essential to make the fundamentals, such as encryption, usable while also communicating their benefits.
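The weak-password problem described above is cheap to screen for at account creation. As a purely illustrative sketch (the six-entry blocklist and the `password_ok` helper below are invented for this example; a real deployment would check candidates against large breach corpora such as the Pwned Passwords dataset), a first-pass filter might look like:

```python
# Illustrative only: a real blocklist would come from breach corpora,
# not a six-entry set.
COMMON_PASSWORDS = {"123456", "password", "qwerty", "111111", "12345678", "abc123"}

def password_ok(candidate: str, min_len: int = 12) -> bool:
    """Reject passwords that are too short or on the common-password list."""
    return len(candidate) >= min_len and candidate.lower() not in COMMON_PASSWORDS
```

Even a check this crude would reject the year-over-year list-toppers the industry keeps deriding.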

In each case, there are usability and communication problems. According to a recent CyLab study, many survey respondents were not aware of password managers or found them hard to use. MFA suffers from similar usability problems, even though it is increasingly easy to use with limited delay. For the minority who do use MFA, those few seconds for authentication seem too long because they simply aren’t aware of the security benefits from that short pause. The perceived security-convenience trade-off becomes especially confusing for users when they learn how some of these “best practices” can be circumvented by attackers. Why introduce inconvenience if the Charming Kitten cyber warfare group may bypass it?
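Those "few seconds" of MFA friction typically come from a time-based one-time password. As a minimal standard-library sketch (the `totp` function name and defaults are ours; this follows the RFC 6238 HMAC-SHA1 variant), the machinery involved is surprisingly small:

```python
import hmac
import struct
import time
from hashlib import sha1

def totp(secret: bytes, timestep: int = 30, digits: int = 6, now=None) -> str:
    """Time-based one-time password per RFC 6238 (HMAC-SHA1 variant)."""
    counter = int((time.time() if now is None else now) // timestep)
    digest = hmac.new(secret, struct.pack(">Q", counter), sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation, RFC 4226
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: at T=59s, the 8-digit SHA-1 code for the ASCII
# secret "12345678901234567890" is 94287082.
```

The code changes every 30 seconds, so a phished value goes stale almost immediately, which is the security benefit users pay those few seconds for.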

The state of digital literacy is just another symptom of a broader problem. Security best practices generally fail the usability and user experience test, while the benefits and value of foundational security concepts remain underanalyzed or siloed within esoteric technical discussions.

Fortunately, it is not all doom and gloom. First, there is a growing awareness of the need for applied research on usable security. This targeted research can demonstrate the actual security benefits of proposed solutions and offer concrete value-added insights to encourage greater user adoption.

Next, there is similarly a data scarcity problem in information security research, hindering our ability to demonstrate (or reject) the benefits of various best practices. Securely sharing data and findings can help the community as a whole demonstrate the value-add. In addition, the growing emphasis on security by design can help relieve the burden on many users, if successful.

Finally, as beneficial as security conferences are, we need to break out of our own ecosystem and expand our footprint across different verticals as well as mainstream, consumer-focused forums. There are already positive signs that this momentum is growing, as security experts offer their expertise to schools, libraries, and senior centers as well as non-security tech events.

Improving the state of digital literacy should be a top priority for our industry. The security challenges aren’t going to let up any time soon as the proliferation of attackers and their techniques continues unabated. There are also significant national security, economic security, and societal benefits that can be gained through both greater research and greater outreach.

It may not be as sexy as finding the next hot exploit or APT, and that research definitely must continue. But we need to find greater balance between research and outreach, targeting those usable solutions that can address the compromises and attack vectors that affect the majority of the population. As a community, we are uniquely situated to address this gap by making advances in digital literacy and usable solutions an industry imperative.

Related Content:

Check out The Edge, Dark Reading’s new section for features, threat data, and in-depth perspectives. Today’s top story: “How Medical Device Vendors Hold Healthcare Security for Ransom.”

Dr. Andrea Little Limbago is the chief social scientist at Virtru, a data privacy and encryption software company, where she specializes in the intersection of technology, cybersecurity, and policy. She previously taught in academia before joining the Department of Defense, … View Full Bio

Article source: https://www.darkreading.com/endpoint/a-security-strategy-that-centers-on-humans-not-bugs/a/d-id/1336350?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

If You Never Cared About Security …

Oh, I used to feel that way. (Until a BEC attack.)

Source: Host Unknown

What security-related videos have made you laugh? Let us know! Send them to [email protected].

Beyond the Edge content is curated by Dark Reading editors and created by external sources, credited for their work. View Full Bio

Article source: https://www.darkreading.com/if-you-never-cared-about-security---/d/d-id/1336399?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Magecart Hits Macy’s: Retailer Discloses Data Breach

The retail giant discovered malicious code designed to capture customer data planted on its payment page.

Macy’s has confirmed a data breach following the discovery of Magecart malware on its checkout page and wallet page, which is accessed through My Account, the retailer reports.

In a letter to customers, the retailer says it was alerted to a “suspicious connection” between Macy’s and another website on October 15. An investigation determined malicious code was added to two macys.com web pages on October 7. The code was “highly specific” and only allowed a third party to capture data submitted by customers on the wallet page and checkout page if credit card data was entered and “place order” was clicked. Its teams removed the code on October 15.

During the week the malicious code was live, Macy's reports, cybercriminals may have accessed customer data (first name, last name, address, city, state, ZIP code, phone number, email address, and the payment card's full number, security code, and expiration month and year) if that data was typed into either of the affected web pages.

Customers who checked out, or interacted with, the My Account wallet page on a mobile device or the macys.com mobile app were not affected in the incident, the company reports.

Macy’s has reported the breach to federal law enforcement and hired a forensics firm to assist in the investigation. It has also shared affected payment card numbers with brands Visa, Mastercard, American Express, and Discover. The number of victims has not been confirmed.

Magecart is a constantly growing threat to retail websites. Recent data indicates the card-skimming threat has reportedly compromised more than 2 million victim websites and directly breached more than 18,000 hosts. Its many victims include, most recently, Procter & Gamble's First Aid Beauty, as well as other major companies including Ticketmaster and British Airways.
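For scripts loaded from third-party hosts, one widely deployed guard against this kind of tampering is Subresource Integrity (SRI), where the page pins a cryptographic hash of the expected script in the `integrity` attribute. A hedged sketch of computing that value (the `sri_hash` helper is our own naming):

```python
import base64
import hashlib

def sri_hash(script_bytes: bytes) -> str:
    """Return an SRI value suitable for a <script integrity="..."> attribute."""
    digest = hashlib.sha384(script_bytes).digest()
    return "sha384-" + base64.b64encode(digest).decode("ascii")
```

If an attacker later alters the hosted script, the browser refuses to run it because the hash no longer matches. Note that SRI only protects resources the page owner pins in advance; it would not, on its own, have stopped code injected directly into a site's own pages.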

Read more details here.

Dark Reading’s Quick Hits delivers a brief synopsis and summary of the significance of breaking news events. For more information from the original source of the news item, please follow the link provided in this article. View Full Bio

Article source: https://www.darkreading.com/threat-intelligence/magecart-hits-macys-retailer-discloses-data-breach-/d/d-id/1336400?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Booter boss behind millions of DDoS-for-hire attacks jailed

The US has sentenced a 21-year-old man from the US state of Illinois to 13 months in prison for running multiple distributed denial of service (DDoS) services with names that sound like somebody squeezed them out of a London youth subculture: ExoStresser, QuezStresser, Betabooter, Databooter, Instabooter, Polystress, and Zstress.

A profitable set of snazzily named services, at that: Sergiy P. Usatyuk has also been ordered to forfeit the more than half a million – $542,925 – that he made from the DDoS-for-hire scheme. That money came both from renting out his services and from space he sold to his brethren booter operators so they could advertise on his sites.

Also up for forfeiture: all the gear Usatyuk used to run his site-jamming floods, or which he bought with his ill-gotten loot – namely, dozens of servers and other computer equipment.

Usatyuk was convicted on one count of conspiracy to cause damage to internet-connected computers.

He and an unnamed buddy developed and ran the so-called booter services and related websites from around August 2015 through November 2017. They were behind the launch of millions of DDoS attacks against targeted victim computers that rendered targeted websites slow or completely zombified, and that discombobulated normal business operations. During just the first 13 months of the scheme, the users of the booters launched 3,829,812 attacks.

The bragging rights went up as advertising collateral: As of 12 September 2017, ExoStresser advertised on its website that the one booter service alone had launched 1,367,610 DDoS attacks and caused targets to suffer 109,186.4 hours of network downtime: some 4,549 days.

Booters – also known as stressers or DDoS-for-hire – are publicly available, web-based services that launch these server-clogger-upper attacks for a small fee or, sometimes, none at all.

As befits the “stresser this” and “stresser that” brand names for Usatyuk’s offerings, DDoS-for-hire sites sell high-bandwidth internet attack services under the guise of “stress testing.” DDoS attacks are blunt instruments that work by overwhelming targeted sites with so much traffic that nobody can reach them. They can be used to render competitor or enemy websites temporarily inoperable out of malice, lulz or profit: some attackers extort site owners into paying for attacks to stop.
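Real DDoS mitigation happens upstream at scrubbing centres and CDNs, but the core idea behind per-client rate limiting can be sketched as a token bucket (the `TokenBucket` class below is an illustrative construction, not any product's implementation):

```python
import time

class TokenBucket:
    """Allow a sustained `rate` of requests/sec with bursts up to `capacity`."""

    def __init__(self, rate: float, capacity: float, start=None):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.last = time.monotonic() if start is None else start

    def allow(self, now=None) -> bool:
        t = time.monotonic() if now is None else now
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (t - self.last) * self.rate)
        self.last = t
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False
```

A flood source burns through its burst allowance almost instantly and then gets throttled to the sustained rate, while ordinary visitors never notice the limit.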

One example is Lizard Squad, which, until its operators were busted in 2016, rented out its LizardStresser attack service. An attack service that was, suitably enough, given a dose of its own medicine when it was hacked in 2015.

You might remember Lizard Squad as the Grinch who ruined gamers’ Christmas with a DDoS against the servers that power PlayStation and Xbox consoles – an attack it carried out for our own good.

For our own good, as in, the attackers didn’t feel bad: some kids would just have to spend time with their families instead of playing games, one of them said at the time.

In similar anti-kid fashion, one of Usatyuk’s services – Betabooter – was rented by an attacker who launched a series of DDoS attacks against a school district in the Pittsburgh, Pennsylvania area, the Justice Department (DOJ) said on Friday. It not only disrupted the district’s computer systems; it also affected the computer systems of 17 organizations that shared the same computer infrastructure, including other school districts, the county government, the county’s career and technology centers, and a Catholic Diocese in the area, according to the indictment.

The DOJ noted that booter-based DDoS attack tools offer a low barrier to entry for users looking to engage in cybercrime. Indeed, hiring a service to paralyze your enemies’, your competition’s and/or your targets’ sites makes it as easy as simply handing over the money, no technical skill required.

In April 2018, when the world's largest DDoS-for-hire site – Webstresser.org – got busted, we got a look at how little money the crooks were being charged for all this mayhem. According to Webstresser's pricing table, archived before the site was taken down, memberships started at $18.99/month for the "bronze" level, went up to $49.99/month for the "platinum" service, and topped out at $102/month for "lifetime bronze."

In January 2019, Europol announced that it was coordinating the mop-up to track down Webstresser’s more than 151,000 registered users.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/UtMF4LwHAXg/

Sophos 2020 Threat Report: AI is the new battleground

AI is the new battleground, according to a report released by SophosLabs this week. The 2020 Threat Report highlights a growing battle between cybercriminals and security companies as smart automation technologies continue to evolve.

Security companies are using machine learning technology to spot everything from malware to phishing email, but data scientists are figuring out ways to game the system. According to the report, researchers are conceiving new attacks to thwart the AI models used to protect modern networks… attacks which are starting to move from the academic space into attackers’ toolkits.

One such approach involves adapting malware and emails with extra data that make them seem benign to machine learning systems. Another replicates the training models that security companies use to create their AI algorithms, using them to better understand the kinds of properties that the machine learning models target. That lets attackers tailor malicious files to bypass AI protections.

The other big AI-related worry is generative AI, which uses neural networks to create realistic human artefacts like pictures, voices, and text. Also known as deepfakes, these are likely to improve and present more problems to humans who can’t tell the difference. Sophos predicts that in the coming years, we’ll see deepfakes lead to more automated social engineering attacks – a phenomenon that it calls ‘wetware’ attacks.

Automation is already a growing part of the attack landscape, warns the threat report. Attackers are exploiting automated tools to evade detection, it says, citing ‘living off the land’ as a particular threat. This sees attackers using common legitimate tools ranging from the nmap network scanning product to Microsoft’s PowerShell in their quest to move laterally through victims’ networks, escalating their privileges and stealing data under the radar.

Online criminals are also tying up admin resources with decoy malware, which they can drop liberally throughout a victim’s infrastructure, the report warns. This malware carries benign payloads, enabling them to misdirect admins while they furtively drop the real payloads.

The third weapon in the attackers' automated arsenal is potentially unwanted applications (PUAs). Unlike the benign malware decoys, PUAs often don't garner much attention because they aren't classified as malware. Yet attackers can still program them to activate automatically and deliver damaging payloads at a time of their choosing, Sophos warns.

Automated attacks also pose a threat to machines exposing specific ports online, the report points out. By way of example, Sophos points to computers with public-facing remote desktop protocol (RDP) ports, which are common targets for brute-force password attacks. This is just one example of what the company calls 'internet background radiation' – the constant hubbub of online activity that contains an ocean of malicious traffic.
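Spotting that kind of RDP brute-forcing in authentication logs is a classic sliding-window count. A hedged sketch, assuming a hypothetical `(timestamp, source_ip, success)` event format (real Windows event logs need parsing first):

```python
from collections import defaultdict, deque

def brute_force_sources(events, window: float = 60.0, threshold: int = 5):
    """events: iterable of (timestamp, source_ip, success) login records.
    Return the set of IPs with >= threshold failures within any `window` seconds."""
    failures = defaultdict(deque)
    flagged = set()
    for ts, ip, success in sorted(events):
        if success:
            continue
        q = failures[ip]
        q.append(ts)
        # Drop failures that have aged out of the window.
        while q and ts - q[0] > window:
            q.popleft()
        if len(q) >= threshold:
            flagged.add(ip)
    return flagged
```

Flagged sources can then be fed to a firewall blocklist; the better fix, of course, is not exposing RDP to the internet at all.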

But while AI is the threat of tomorrow and automated technologies present real and present dangers, we shouldn’t ignore infections from the past. Malware that swept the internet years ago still highlights inherent insecurities across large swathes of online infrastructure. The report singles out ‘zombie’ WannaCry infections that are still lurking on many networks. These infections, based on variants of the original malware, show that there are still vast quantities of unpatched machines online.

The same goes for Mirai, the IoT-based botnet that swept the world in 2016 and still exists today. SophosLabs has seen Mirai-infected networks launching attacks on database servers using sophisticated strings of commands that can take over an entire system.

The report highlighted plenty of other threats, including a growing diversity of attacks on smartphone owners. Attackers are resorting to everything from SIMjacking to adware and ‘fleecing’ apps that charge exorbitant amounts for legitimate assets of very little value.

As technology evolves at a breakneck pace, one thing is certain: the creativity of the cybercrime community will continue to evolve with it. However, while companies may fret over tomorrow’s technologically sophisticated threats, the first place to begin any cybersecurity effort is with basic steps such as software patching, strict access policies, proper system and network monitoring, and user education. Measures like these don’t need sophisticated AI practitioners to implement, and they can save many headaches down the line.

You can download the SophosLabs 2020 Threat Report here.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/MkuqFZJIxhw/

Interpol: Strong encryption helps online predators. Build backdoors

Multinational police agency Interpol is due to say that tech companies deploying strong encryption helps paedophiles – unless they build backdoors for police workers.

Three people “briefed on the matter” told financial newswire Reuters yesterday that the agency would be issuing a statement this week condemning the use of strong encryption because it helps child predators.

The newswire reported that “an Interpol official said a version of [a] resolution introduced by the US Federal Bureau of Investigation would be released without a formal vote by representatives of the roughly 60 countries in attendance” at an Interpol summit held last week.

“Service providers, application developers and device manufacturers are developing and deploying products and services with encryption which effectively conceals sexual exploitation of children occurring on their platforms,” a draft of the resolution seen by Reuters said.

It continued: “Tech companies should include mechanisms in the design of their encrypted products and services whereby governments, acting with appropriate legal authority, can obtain access to data in a readable and useable format.”

While the statement may well read like the rantings of a demented senior citizen in some long-forgotten care home, it builds on similar statements from Western governments, police and spy agencies, as well as new international treaties. So-called “think of the children” rhetoric is a tried and trusted strategy for police workers who are determined to get their way with politicians.

Interpol ignored questions from Reuters, while the US FBI also reportedly shrugged off inquiries.

The agency has yet to issue the communique in question, though it is expected to be welcomed by Western governments increasingly fed up that their internal security agencies are unable to exercise China-style social control and surveillance over their populations.

Interpol counts every country in the world as a member except for North Korea, ironically given that rogue state’s general disregard for the rule of law online. While the agency is occasionally criticised by Western charities for allowing rogue states and dictatorships to abuse its processes, in general it is Western governments and their state agencies which are now using Interpol’s name to dilute vital encryption safeguards in the name of police convenience. ®

Article source: https://go.theregister.co.uk/feed/www.theregister.co.uk/2019/11/18/interpol_says_encryption_helps_paedos_barmy/

Human Nature vs. AI: A False Dichotomy?

How the helping hand of artificial intelligence allows security teams to remain human while protecting themselves from their own humanity being used against them.

Nobel Prize-winning novelist Anatole France famously opined: “It is human nature to think wisely and act foolishly.” As a species, we’re innately designed with — as far as our awareness extends — the highest, most profound levels of intellect, knowledge, and insight in our vast, infinite universe. But this does not equate to omniscience or absolute precision.

Humans are by no stretch of the imagination perfect. We feel pressured, we get stressed, life happens, and we end up making mistakes. It’s so inevitable, in fact, that it’s essentially hardwired into our DNA. And for better or worse, this aspect of human nature is both perfectly natural and resolutely expected. In most cases, the human predilection to screw up is evened out by a dogged pursuit of rectification. But in cybersecurity terms, this intent and journey happens all too slowly; this is a realm where simple mistakes can result in dire consequences at the blink of an eye.

To place this into context, a simple hack or breach can result in the loss of billions of dollars; the complete shutdown of critical infrastructure such as electric grids and nuclear power plants; the leak of classified government information; the public release of unquantifiable amounts of personal data. In many instances, these all too real “hypotheticals” — the collapse of economies, the descent of cities into chaos, the compromise of national security or the theft of countless identities — can all potentially be pinpointed back to human error around cybersecurity.

With so much at stake, it's not surprising that many CISOs are not confident in their employees' ability to safeguard data. That's because most of the cybersecurity solutions used by the workforce are difficult to use, prompting well-intentioned workarounds that open up new vulnerabilities as employees simply try to stay productive at their jobs.

Malicious actors don’t just realize this; they use it to their ultimate advantage. Employees are only human, and social engineers excel when it comes to exploiting our human nature. But we don’t want employees jettisoning their all-too-precious humanity in order to protect themselves against the ill-intentioned wiles of social engineers. Enter the helping hand of artificial intelligence (AI), which allows employees to remain human while protecting them from their own humanity being used against them. Adaptive security that’s powered by AI can make up for the human error that we know can and will happen.

Employees, myself included, need help staying secure in the workplace because we’re easily prone to distraction and being tricked. The goal is to never make mistakes or open companies up to vulnerability. But as France put it, our wisdom is sometimes superseded by our brash, spontaneous, emotionally-driven actions.

But can artificial intelligence really be trusted to make up for human error? Well, it all depends on who’s answering this question and their perception of AI. Those with a level-headed view of AI not deeply rooted in science fiction or Hollywood tropes mostly agree that AI is an effective tool for catching and circumventing careless human error because it’s unburdened by the feeble aspects, or foibles, of human nature and the cognitive limits of rationality inherent in it. As IBM’s Ginni Rometty puts it: “Some people call this artificial intelligence, but the reality is this technology will enhance us. So instead of artificial intelligence, I think we’ll augment our intelligence.”

AI bridges the gap between work productivity and security, bringing to fruition the concept of “invisible security” that creates a line of defense that can essentially be categorized as human-nature-proof. The fact of the matter is today’s threat vectors now morph at the speed of light, or the speed of human-enabled artificial intelligence. With the help of AI and machine learning, employees possess a strong fighting chance against the theft of corporate data by malicious actors who also utilize these high-speed models and algorithms to achieve their nefarious goals.

That being said, any debate that would position the trustworthiness of humans against AI is grounded on a false dichotomy in that AI has yet to advance to the level of sentience where it can truly act or function without human intervention.

Humans and AI actually make up for each other’s weaknesses — AI compensating for human nature’s cognitive limits of rationality and error, and humans serving as the wizard behind AI’s Oz, imbuing the technology with as much or as little power as we deem appropriate. When paired correctly and responsibly, human nature and AI can combine in a copacetic manner to foster the strongest levels of enterprise cybersecurity.

John McClurg is Blackberry’s chief information security officer. In this role, he leads all aspects of BlackBerry’s information security program globally, ensuring the development and implementation of cybersecurity policies and procedures. John comes to BlackBerry from … View Full Bio

Article source: https://www.darkreading.com/threat-intelligence/human-nature-vs-ai-a-false-dichotomy/a/d-id/1336329?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

How Medical Device Vendors Hold Healthcare Security for Ransom

While being pummeled by ransomware attacks, healthcare centers also face growing IoT-related threats. Here’s how they manage security amid a complex set of risks.

Some 491 ransomware attacks slammed US healthcare organizations in the first three quarters of 2019 alone, according to a recent report by Emsisoft. Cyberattacks on healthcare are reportedly already 60% higher than 2018 figures. The US Food and Drug Administration (FDA) just issued warnings about an urgent remote code execution vulnerability affecting millions more medical devices than initially thought.

And yet IT security teams at hospitals and healthcare centers are hamstrung in their efforts to defend against these threats, in part by vendors that fail to take security seriously.

Thomas August, CISO at John Muir Health, a healthcare system comprising two acute care hospitals, a behavioral health center, and community health practices throughout the east San Francisco Bay Area, has seen his peers wrestle with ransomware attacks. He has his own ideas on why organizations in his industry are such popular targets.

August points out that the devices on his networks are split between traditional IT systems in the billing and records functions, and advanced Internet of Things (IoT) devices in the healthcare delivery areas. Many of those IoT devices are built on software and operating systems that are archaic and unpatched (think Windows 95).

And then the news gets bad.

Many of the medical devices attached to the hospital network are managed, under contract, by the vendor.

“In the case of medical devices specifically, the vendors have historically not done a very good job of owning their end of the bargain,” August says. “They don’t allow health systems to patch. They don’t allow health systems to put anti-malware on them. They don’t allow health systems to change admin credentials. There’s a lot of things they don’t allow the health systems to do, and if we try to do it it breaks the warranty.”

These are the types of choices August faces: leave a radiology scanner open to vulnerabilities, or protect it with antivirus knowing that if the AV causes the scanner to malfunction, the device manufacturer will refuse to cover any repairs and the hospital will likely need to replace that million-dollar scanner. The usual security monitoring tools that work for other systems, such as SIEMs, also won’t work for these embedded systems.

As he talks about the impact of the situation, August doesn’t mince words. “In a lot of regards, the systems that we have are subject to the vendors really owning their responsibility here, and there’s nothing we can do about it,” he says. “It’s very, very, frustrating.”

But frustration doesn’t equate to inaction for August and other healthcare CISOs.

“For the most part, we segment them off and just keep them in their own private Idaho because there’s very little else we can do,” August says. “If I can’t keep certain devices from accessing the Internet by putting filters up, I can segment them in such a way that they have no way to get to the Internet, period.”

Faced with devices that vary widely in built-in security capabilities, update status, management responsibility, and ownership, proper segmentation is key to overall network health, August suggests.
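As a concrete illustration of the kind of segmentation August describes (the table name, subnets, and rules below are hypothetical, not drawn from John Muir Health’s actual configuration), an nftables policy on a gateway might allow a medical-device VLAN to reach internal imaging servers while dropping everything else, Internet included:

```
# Hypothetical nftables rules isolating a medical-device VLAN.
# 10.20.0.0/24 = unpatchable devices; 10.30.0.0/24 = internal imaging servers.
table inet medseg {
    chain forward {
        type filter hook forward priority 0; policy accept;

        # Devices may talk to the imaging servers only
        ip saddr 10.20.0.0/24 ip daddr 10.30.0.0/24 accept

        # Everything else from the device VLAN is dropped,
        # so the devices have no path to the Internet
        ip saddr 10.20.0.0/24 drop
    }
}
```

The design point is the default-deny final rule: rather than enumerating what the devices must not reach, the policy enumerates the one thing they may reach.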

But while unpatched IoT devices may be a key source of frustration, the critical sources of and reasons for ransomware infection lie elsewhere.


Curtis Franklin Jr. is Senior Editor at Dark Reading. In this role he focuses on product and technology coverage for the publication. In addition he works on audio and video programming for Dark Reading and contributes to activities at Interop ITX, Black Hat, INsecurity, and …

Article source: https://www.darkreading.com/edge/theedge/how-medical-device-vendors-hold-healthcare-security-for-ransom/b/d-id/1336388?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

GitHub Initiative Seeks to Secure Open Source Code

New Security Lab will give researchers, developers, code maintainers, and organizations a way to coordinate efforts on addressing vulnerabilities.

A new initiative that GitHub announced last week has focused attention on the urgent need across industries for more organized approaches to addressing security vulnerabilities in open source software.

Last Thursday GitHub launched Security Lab, an effort that seeks to provide researchers, maintainers of open source projects, developers, and organizations with a common venue for collaborating on security.

GitHub has dedicated a team of security researchers to Security Lab. The researchers will work with peers from several other organizations to find and report bugs in widely used open source projects. Developers and maintainers will be able to work together on GitHub to develop patches for disclosed flaws and to ensure coordinated disclosures after vulnerabilities have been properly patched.

To encourage broad participation, GitHub has made CodeQL publicly available; the company says the semantic code analysis tool can help security researchers find vulnerabilities in open source software using simple queries.
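To give a sense of what those queries look like, here is a minimal CodeQL query in the documented from/where/select style (a generic illustration, not one of Security Lab’s published queries), flagging calls to JavaScript’s eval:

```
// Minimal illustrative CodeQL query: find calls to eval() in JavaScript code.
import javascript

from CallExpr call
where call.getCalleeName() = "eval"
select call, "Possible code injection: call to eval()."
```

Real-world queries typically layer data flow analysis on top of such patterns, so that only calls reachable from untrusted input are reported.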

“If you’re a security researcher or work in a security team, we want your help,” GitHub said in a statement. “Securing the world’s open source software will require the whole community to work together.”

Among those that have committed to contributing their time and expertise to the effort are Google, Intel, Uber, HackerOne, and Microsoft, which last year purchased GitHub for over $7 billion. Each of these initial partners has committed to contributing to the effort in a different way, GitHub said, without elaborating.

GitHub did not respond immediately to a Dark Reading request for more information on partner participation and other aspects of the effort. In the announcement, however, it described the Security Lab initiative as being focused on the whole open source security life cycle. 

“GitHub Security Lab will help identify and report vulnerabilities in open source software, while maintainers and developers use GitHub to create fixes, coordinate disclosure, and update dependent projects to a fixed version,” it states.

A Major and Growing Concern
Vulnerabilities in open source software and components have become a major and growing enterprise security issue. Many development organizations use open source code heavily to accelerate software development, but few bother to check for vulnerabilities, keep track of flaw disclosures in open source components, or patch their software when fixes do become available. The situation is often exacerbated by many organizations’ failure to maintain a proper inventory of the open source components in their software stack. Troublingly, 40% of new open source vulnerabilities do not have an associated CVE, so they are not included in any database, GitHub said.

Research conducted by Synopsys in 2018 found open source code in over 96% of the codebases scanned for the study. Synopsys found some 298 open source components, on average, in each scanned codebase, up from 257 in 2017. In many cases, the scanned codebases contained substantially more open source than proprietary code.

Significantly, 60% of the codebases in the Synopsys study had at least one security flaw. Forty-three percent contained vulnerabilities more than 10 years old, and 40% had at least one critical security flaw.

Fausto Oliveira, principal security architect at Acceptto, says unpatched vulnerabilities in open source code present a major threat to organizations. “The adoption of open source components permits companies to have a faster turnaround for their software projects at a cheaper cost,” he says.

The downside is that adversaries are often as well informed or even better informed than security researchers of security vulnerabilities that are present in code components. “By having unpatched versions of open source components in production, an organization is offering a low-effort door into their infrastructure and services,” Oliveira says.

One way the Security Lab initiative seeks to address this is via the GitHub Advisory Database, which contains detailed information on advisories created on GitHub. Maintainers will be able to work privately with security researchers on developing security fixes, applying for CVEs, and structuring vulnerability disclosures, GitHub said.

To the extent that Security Lab is focused on addressing such issues, it is a good idea, says Thomas Hatch, CTO and co-founder at SaltStack. “My concern is that this is not the first time we have seen these sorts of efforts,” he says.

Many companies have tried over the years to secure open source code, but the level of attention required for such a massive undertaking can be deeply daunting, Hatch adds. “I don’t think this will solve all our problems, but it is a fantastic step in the right direction,” he notes.

Related Content:

Check out The Edge, Dark Reading’s new section for features, threat data, and in-depth perspectives. Today’s top story: “How Medical Device Vendors Hold Healthcare Security for Ransom.”

Jai Vijayan is a seasoned technology reporter with over 20 years of experience in IT trade journalism. He was most recently a Senior Editor at Computerworld, where he covered information security and data privacy issues for the publication. Over the course of his 20-year …

Article source: https://www.darkreading.com/vulnerabilities---threats/github-initiative-seeks-to-secure-open-source-code/d/d-id/1336394?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple