STE WILLIAMS

Failbreak: Bloke gets seven years in the clink for trying to hack his friend out of jail

A Michigan fella will spend up to seven years and three months behind bars – for trying to hack government IT systems in the US state to get a friend out of jail.

Konrads Voits, 27, of Ypsilanti, Michigan, received the 87-month sentence after he pleaded guilty to one federal charge of damaging a protected computer. He will also have to give up his laptop, four mobile phones, $385.49 worth of Bitcoin, and one “Green Integrated Circuit Component, Serial No. Y21A2123” in asset forfeiture.

Voits admitted to a 2017 attempt to break into the network and records systems of the Washtenaw County Jail. Using a combination of phishing emails, social engineering – calling employees to get their login passwords – and malware infections, Voits was able to get into the county’s record system and alter some inmate records with the aim of getting an unnamed inmate at the jail released early.


The US Dept of Justice’s Eastern District of Michigan office said the intrusion was only noticed after an employee looked through the records of the inmate and then alerted the county’s IT staff to the suspicious activity. From there, investigators were able to identify the employees who had been compromised (many had been directed to phishing sites or talked into downloading malware over the phone). The activity was eventually traced back to Voits, who had previous run-ins with police for drug possession and stalking.

Interestingly, it turns out that at least one of Voits’ earlier encounters with the local cops may have been while he was working on the jail hack. Five days before the county’s IT staff determined their network had been compromised, police found Voits with his laptop on the roof of a building in Ann Arbor directly across the street from one of the municipal offices targeted in the attack.

Believing Voits’ story that he was only “trying to get better reception,” officers let him go with a trespass notice. Ten days after the incident, he was collared and charged for the attack.

The DoJ estimated that investigating and cleaning up the infiltration – around 1,600 employees had their personal information compromised in Voits’ data-harvesting attempts – ended up costing around $235,000.

“Thanks to the quick response of the IT employees at Washtenaw County, and to the careful review of records by employees at the County Jail, nobody was actually released early,” the DoJ officials said last week.

“Washtenaw County spent thousands of dollars and numerous extra work hours responding to and investigating the breach.” ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/04/30/man_jailed_prison_break_hack_attack/

10 Security Innovators to Watch

Startups in the RSA Conference Innovation Sandbox competed for the title of “Most Innovative.”

“What did you see that was really innovative?”

That’s the question everyone who goes to a trade show hears on returning to the office. At the RSA Conference this month, one answer to the question comes in the annual Innovation Sandbox Contest. The ten finalists this year ranged from cloud offerings to RF security to software connectors, and each presented its own vision of what companies most need in order to be secure.

At the end of the presentations, a panel of judges chose the most innovative, and at the end of this article you can see which company took home the trophy.

One thing that’s worth noting is that the security industry is similar to the rest of the computer industry in that acquisition is a chief exit strategy for those starting companies. Of the ten companies participating in the contest, two have already been acquired by other firms — and that number could change by the time you read this.

Here are the 10 young companies that vied for the title of “Most Innovative” in 2018 at RSAC – and a look at the one that came away with the title.

Curtis Franklin Jr. is Senior Editor at Dark Reading. In this role he focuses on product and technology coverage for the publication. In addition he works on audio and video programming for Dark Reading and contributes to activities at Interop ITX, Black Hat, INsecurity, and …

Article source: https://www.darkreading.com/threat-intelligence/10-security-innovators-to-watch/d/d-id/1331679?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Slack Releases Open Source SDL Tool

After building an SDL tool for its own use, Slack has released it on GitHub under an open source license.

Security is a matter of friction — applying as much as possible to malign actors and processes, and as little as possible to legitimate users and applications. For software developers, any additional friction can seem too much and lead to teams working around, rather than with, the processes intended to provide built-in security. Slack is a fast-moving company that needs lightning-fast development cycles and secure software. It’s a situation that called for a tool they didn’t have. So they built one and released it as an open source application for anyone to use.

Slack has a small development team and a seemingly insatiable appetite for new capabilities and features; it’s not uncommon for the company to deploy code to production 100 times in a day. “Integrating security into products, with distinct steps and quite a bit of process, didn’t align with the way things worked here,” says Max Feldman, a member of the product security team at the company.

Feldman says that the development team looked at existing tools, including Microsoft’s, but that the tools either added too much overhead or were oriented toward a waterfall development process. “Process can be antithetical to rapid development,” says Feldman. His team’s challenge was, he says, to “bring best practices into Slack while remaining ‘Slack-y’.”

The new tool is intended to help Slack implement a security development lifecycle. The application, dubbed “GoSDL,” was described in depth in a recent company blog post. The goal, says Feldman, was to enable rapid and transparent development.

GoSDL is, he says, a fairly simple PHP application that allows any team member to begin the process of interacting with security. “The beginning of the process of a new feature is one where they can check whether they want direct security involvement,” Feldman says. If so, the feature is flagged “high risk,” not because of any actual risk but to make it high priority for security team action. If the security involvement box isn’t checked, it doesn’t mean that security steps aside, but their involvement begins with a series of questions about the impact on existing products and features.

Once the security team is involved it begins to put together risk assessments (high, medium, or low) for each component of the feature. The product engineer or manager is responsible for a component survey with additional checklists of potential issues.

All of the checklists and communications to this point are created in the PHP application running on the Slack platform. Once the lists reach the point of requiring action, the application generates a Jira ticket that creates the action item checklist.
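GoSDL itself is a PHP application, but the triage flow described above can be sketched in a few lines. The following Python sketch is purely illustrative — the `Component` type, the risk names, and the ticket format are assumptions based on the article’s description, not Slack’s actual code:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Component:
    name: str
    risk: str                          # "high", "medium", or "low"
    checklist: List[str] = field(default_factory=list)

def triage(feature: str, components: List[Component],
           wants_security_review: bool) -> List[str]:
    """Turn component surveys into ticket-style action items."""
    if wants_security_review:
        # Requesting direct involvement flags the feature "high" -- per the
        # article, a priority marker for the security team, not a real risk.
        priority = "high"
    else:
        # Otherwise take the highest risk rated across the components.
        order = {"high": 0, "medium": 1, "low": 2}
        priority = min((c.risk for c in components),
                       key=order.get, default="low")
    return [f"[{priority}] {feature} / {c.name}: {item}"
            for c in components for item in c.checklist]

items = triage("message-reactions",
               [Component("api", "medium", ["validate input lengths"]),
                Component("storage", "low", ["review retention policy"])],
               wants_security_review=False)
```

In the real tool, each resulting action item would become a Jira ticket rather than a string.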

“This empowers engineers and developers to evaluate their own security,” Feldman says. “We’ll be involved and help, but the more they’re versed in security, the better we are.” And that “better” is embodied in a cultural shift toward security, as well.

“One of the things we tried to do with the blog post and documentation is talk about the culture and how to use it,” Feldman says, adding that the “transparency and communication are an integral aspect of this; without them it could still work but it would be much different.”

It is important, he says, for security to be seen as a trusted partner in the development process rather than a blocking adversary. “The fostering of mutual trust between development and engineering is a goal. Engagement, getting familiar with people, meeting people as they join,” is critical, he says.

“For us the behavioral and cultural aspects are sufficient but we’ve tried with the blog post to clarify how it might be useful. We want to let teams integrate the tool and make things pleasant for everyone,” Feldman explains.

GoSDL is available on GitHub.

Related Content:


Article source: https://www.darkreading.com/endpoint/slack-releases-open-source-sdl-tool/d/d-id/1331682?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Speed at Which New Drupal Flaw Was Exploited Highlights Patching Challenges

In the rush to patch, organizations can create fresh problems for themselves.

The speed at which malicious attackers recently exploited a remote code execution flaw in the Drupal content management system (CMS) should serve as fresh warning about the need for organizations to test processes for quickly responding to vulnerability disclosures.

Drupal administrators last Wednesday rushed out an out-of-cycle security release warning about a highly critical vulnerability (CVE-2018-7602) affecting Drupal 7.x and 8.x versions. The new vulnerability — related to an even more severe and somewhat incompletely fixed flaw (CVE-2018-7600) from March — potentially gives threat actors multiple ways to attack a Drupal site, maintainers of the open source CMS platform warned.

They urged website owners and operators to immediately update to the most recent version of Drupal 7 or Drupal 8 core. Sites running 7.x were asked to upgrade to Drupal 7.59; those using 8.5.x to Drupal 8.5.3; and those on Drupal 8.4.x to Drupal 8.4.8. For organizations unable to update quickly enough, the Drupal administrators issued a security patch to mitigate the risk of the vulnerability being exploited.
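The advisory’s version floors are easy to encode for a quick audit of site inventory. This sketch (the function name and version-string format are assumptions, not part of any Drupal tooling) reports whether a given Drupal core version meets the patched release for its branch:

```python
# Minimum patched core releases for CVE-2018-7602, per the advisory.
PATCHED = {"7": (7, 59), "8.4": (8, 4, 8), "8.5": (8, 5, 3)}

def is_patched(version: str) -> bool:
    """True if `version` (e.g. "7.58" or "8.5.3") meets its branch's floor."""
    parts = tuple(int(p) for p in version.split("."))
    branch = "7" if parts[0] == 7 else f"{parts[0]}.{parts[1]}"
    floor = PATCHED.get(branch)
    # Unknown or end-of-life branches received no fix: treat as vulnerable.
    return floor is not None and parts >= floor
```

Tuple comparison handles the per-branch check; anything on an unlisted branch (say, 8.3.x) is flagged as needing an upgrade rather than a patch.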

But barely hours after the advisory was posted, attackers began actively exploiting the flaw to try, among other things, to upload cryptocurrency miners to vulnerable sites or to use compromised sites to launch distributed denial-of-service attacks. In virtually no time at all – and certainly before a vast majority of site owners had an opportunity to upgrade or apply mitigations – thousands of host systems around the world became potential targets for compromise.

The speed at which attackers attempted to take advantage of the newly disclosed Drupal flaw was in stark contrast to March, when it took about two weeks for the first attacks against CVE-2018-7600 to surface. Hacker activity around March’s so-called Drupalgeddon 2.0 was so low initially that it prompted security vendor Imperva to wonder if hackers were getting lazy.

“Unlike CVE-2018-7600, which took two weeks to exploit, CVE-2018-7602 was exploited within 24 hours,” says Koby Kilimnik, security researcher at Imperva. In fact, an exploit for CVE-2018-7602 was published publicly just a few hours after the vulnerability was disclosed, he says.

“The ongoing vulnerabilities announced around Drupal and the speed with which proof-of-concept exploit code was developed only further highlight the importance and need of organizations to understand their attack surface,” says Steve Ginty, senior product manager at RiskIQ.

Responding to such threats requires organizations to be able to quickly identify vulnerable assets — including those that are likely being managed by third parties — in order to secure them appropriately. “While organizations may not be able to patch these vulnerable platforms, visibility into the scope of the impact on an enterprise allows an organization to make an informed risk decision,” Ginty says.

The trend toward faster exploitation of vulnerabilities puts enterprises between a rock and a hard place. Faulty patches and badly implemented ones can create problems as great as, or greater than, the security issues they are meant to address. Many enterprises prefer to thoroughly test patches before putting them into production environments — a process that can take anywhere from a couple of days to several months, depending on the organization’s size. While that might be a safe approach, delaying patch deployment can expose organizations to considerable risk as well, as last week’s Drupal flaw showed.

“The challenge of maintaining security patches while preventing disruption of production systems is a huge problem for IT professionals,” says Justin Jett, director of audit and compliance for Plixer. Many security patches — including those for commonly used software like Drupal — do not alter the core functionality of the software and so can be deployed without too much risk.

“While major software releases can typically wait until thorough testing has been completed, minor security-related patches should be completed as soon as possible, if not immediately after the patch is made publicly available,” Jett says.

At the same time, past experience has shown that relying entirely on vendor patches is not always the best idea, says Imperva’s Kilimnik. “Vendors might be in a hurry to publish a patch without proper tests, so it could have a dangerous effect in your environment,” he says. “We cannot always predict how patching one system might affect the other,” so other mitigations might become necessary, he adds.

Related Content:

Jai Vijayan is a seasoned technology reporter with over 20 years of experience in IT trade journalism. He was most recently a Senior Editor at Computerworld, where he covered information security and data privacy issues for the publication. Over the course of his 20-year …

Article source: https://www.darkreading.com/attacks-breaches/speed-at-which-new-drupal-flaw-was-exploited-highlights-patching-challenges/d/d-id/1331681?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Old Worm, New Tricks: FacexWorm Targets Crypto Platforms

Malicious Chrome extension FacexWorm has reappeared with new capabilities, targeting cryptocurrency platforms and lifting user data.

FacexWorm, a malicious Chrome extension, has been rediscovered targeting cryptocurrency trading platforms and spreading via Facebook Messenger. The Cyber Safety Solutions team at Trend Micro reports it’s packing a few new capabilities, including the ability to steal user data.

The extension was first detected in August 2017 and returned the following April amid reports of increased appearances in Germany, Tunisia, Japan, Taiwan, South Korea, and Spain. Like the original, it sends socially-engineered links to friends of affected Facebook account holders.

Unlike the original, it steals accounts and credentials related to FacexWorm’s targeted sites. The attack takes potential victims to websites where it injects malicious cryptomining code and redirects to the attacker’s referral link for crypto-related referral programs. It hijacks transactions in trading platforms by replacing the recipient address with the attacker’s.

The attacker gets a referral incentive every time a victim registers an account, researchers report. Targeted websites include Binance, DigitalOcean, FreeBitco.in, FreeDoge.co.in, and HashFlare.

FacexWorm arrives on victims’ machines via socially-engineered Facebook links. Those who click are redirected to a fake YouTube page where they are prompted to install a codec extension (FacexWorm) to play a video. The extension requests privilege to access and edit data on the site.

If permission is granted, FacexWorm downloads malicious codes from its command-and-control server, opens Facebook’s website, and checks to see if the propagation function is turned on. If it is, the extension requests an OAuth token from Facebook and begins obtaining the target account’s friend list. Contacts who are online or idle are sent fake YouTube links.

“FacexWorm is a clone of a normal Chrome extension but injected with short code containing its main routine,” explains Trend Micro fraud researcher Joseph Chen in a blog post on the finding. The threat downloads more code from the C&C server when the browser is opened.

“Every time a victim opens a new webpage, FacexWorm will query its C&C server to find and retrieve another JavaScript code (hosted on a Github repository) and execute its behaviors on that webpage.” This JavaScript code, or miner, is an obfuscated Coinhive script connected to a Coinhive pool, configured to use 20% of the target system’s CPU power for each thread.

FacexWorm exhibits other malicious behaviors like stealing account credentials for Google, MyMonero, and Coinhive. This threat targets a total of 52 cryptocurrency platforms. When it detects anyone accessing any of them or using keywords like “blockchain” or “ethereum” in the URL, it redirects the user to a fraudulent webpage.
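The redirect trigger described above amounts to simple URL matching. The sketch below models that logic as Trend Micro describes it; the domain list is an illustrative subset of the 52 targeted platforms and the function is a hypothetical model, not the worm’s actual code:

```python
from urllib.parse import urlparse

# Illustrative subset of the targeted platforms named in the report.
TARGET_DOMAINS = {"binance.com", "hashflare.io", "freebitco.in"}
# Keywords in the URL that also trip the redirect, per the report.
TARGET_KEYWORDS = ("blockchain", "ethereum")

def triggers_redirect(url: str) -> bool:
    """True if a visited URL would trip the worm's redirect logic."""
    host = urlparse(url).netloc.lower()
    return (any(host == d or host.endswith("." + d) for d in TARGET_DOMAINS)
            or any(kw in url.lower() for kw in TARGET_KEYWORDS))
```

Matching on keywords as well as domains is what lets the worm catch users en route to crypto-related pages it has no explicit entry for.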

This threat only works in Chrome. If the malicious link is accessed through any browser other than the Chrome desktop version, it redirects to a random advertisement. Trend Micro has only spotted one Bitcoin transaction compromised by FacexWorm based on monitoring the attacker’s wallet. Researchers haven’t determined how much money the threat has generated.

Related Content:

Kelly Sheridan is the Staff Editor at Dark Reading, where she focuses on cybersecurity news and analysis. She is a business technology journalist who previously reported for InformationWeek, where she covered Microsoft, and Insurance Technology, where she covered financial …

Article source: https://www.darkreading.com/vulnerabilities---threats/old-worm-new-tricks-facexworm-targets-crypto-platforms/d/d-id/1331684?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

WhatsApp Founder to Depart Facebook Amid Privacy, Encryption Dispute

Jan Koum also plans to step down from Facebook’s board of directors.

Jan Koum, the founder of WhatsApp who sold his private messaging app to Facebook four years ago for $19 billion, reportedly plans to step down in the wake of increasing concerns over the parent company’s intentions for its user data, as well as efforts to weaken its encryption.

The Washington Post today reported that Koum also will leave his post on Facebook’s board of directors. Koum later wrote in a Facebook post that it “is time for me to move on,” but did not provide a timeline for his departure or any details.

According to The Post, the recent news of Cambridge Analytica’s abuse of Facebook users’ data exacerbated Koum’s discontent with how Facebook handles user data, given WhatsApp’s promise to its users that it would continue to emphasize and protect their privacy and data in the aftermath of the acquisition by Facebook.

Read more here.

Dark Reading’s Quick Hits delivers a brief synopsis and summary of the significance of breaking news events. For more information from the original source of the news item, please follow the link provided in this article.

Article source: https://www.darkreading.com/endpoint/privacy/whatsapp-founder-to-depart-facebook-amid-privacy-encryption-dispute/d/d-id/1331685?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

YouTube snags millions of bad videos, but is it getting the right ones?

On Monday, during its earnings call, Google’s parent Alphabet held up a shiny YouTube bauble.

Look at the results of our wondrous machine learning, Google CEO Sundar Pichai said in prepared remarks, pointing to automagic flagging and removal of violent, hate-filled, extremist, fake-news and/or other violative YouTube videos.

At the same time, YouTube released details in its first-ever quarterly report on videos removed by both automatic flagging and human intervention.

There are big numbers in that report: between October and December 2017, YouTube removed a total of 8,284,039 videos. Of those, 6.7 million were first flagged for review by machines rather than humans, and 76% of those machine-flagged videos were removed before they received a single view.

That’s a lot of hate stopped. However, it is unlikely to impress the parents of children gunned down in school shootings, who for years have endured the YouTube excoriations of Alex Jones.

Jones is a conspiracy theorist who for 5.5 years has been calling them liars. Since 2012, Jones has been pushing his skepticism about the massacre at Sandy Hook Elementary School, in Newtown, Connecticut, that left 20 students and six educators dead, with scores of videos on his Infowars YouTube channel that have never been removed.

In those videos, Jones has over the years said that the Sandy Hook shooting has “inside job written all over it,” has called the shooting “synthetic, completely fake, with actors, in my view, manufactured,” has claimed “the whole thing was fake,” said that the massacre was “staged,” called it a “giant hoax,” suggested that some victims’ parents lied about seeing their dead children, and promoted other conspiracy theories (Sandy Hook is only one of his many focuses).

Recently, Media Matters for America – a nonprofit dedicated to monitoring, analyzing, and correcting conservative misinformation in US media – announced that its own compilation of Jones’s conspiracy theories about Sandy Hook was flagged and removed.

The message the group saw when staff logged in to the nonprofit’s YouTube account included a section called “Video content restrictions” that described reasons why videos might be taken down:

YouTubers share their opinions on a wide range of different topics. However, there’s a fine line between passionate debate and personal attacks. As our Community Guidelines outline, YouTube is not a platform for things like predatory behavior, stalking, threats, harassment, bullying, or intimidation. We take this issue seriously and there are no excuses for such behavior. We remove comments, videos, or posts where the main aim is to maliciously harass or attack another user. If you’re not sure whether or not your content crosses the line, we ask that you not post it.

That made sense: the content is pretty objectionable. What didn’t make sense is that the original source material it borrowed from was untouched. Why were the original videos from which the compilation was made not flagged and removed too?

The group appealed, and the compilation video was reposted.

Google told me that the removal of Media Matters’ Alex Jones compilation video was simply a mistake, a false positive. From a statement sent by a Google spokesperson:

With the massive volume of videos on our site, sometimes we make the wrong call. When it’s brought to our attention that a video or channel has been removed mistakenly, we act quickly to reinstate it. We also give uploaders the ability to appeal these decisions and we will re-review the videos.

As we rapidly hire and train new content reviewers, we are seeing more mistakes. We take every mistake as an opportunity to learn and improve our training. This incident was a human review error.

Those concerned might have expected that both the compilation and the originals would be removed as a result of the appeal but, in fact, both are considered OK by Google and fall on the right side of YouTube’s community guidelines.

The mistake shows some positive aspects of YouTube’s approach to video removal – human judgement is being used when the machine learning algorithms falter and, in this case at least, the rules were applied even-handedly in the end.

It also highlights a question that Silicon Valley is just beginning to grapple with – if you are going to have community standards that are stricter than those required to meet your legal obligations, where do you draw the line?

Why are the videos that turn Sandy Hook parents into victims OK?

Is it simply the price of free speech or, as ex-Reddit mogul Dan McComas suggested only last week, that big organisations like Reddit, Facebook and Twitter are reluctant to make decisions, or to upset groups of users who are known to be volatile?

I think that the biggest problem that Reddit had and continues to have, and that all of the platforms, Facebook and Twitter, and Discord, now continue to have is that they’re not making decisions, is that there is absolutely no active thought going into their problems

Jones has, in fact, come close to being banned from YouTube. In February, The Hill reported that Infowars was “one strike away” from a YouTube ban.

Jones’s channel got its first strike on 23 February for a video that suggested that David Hogg and other student survivors of the mass shooting at Marjory Stoneman Douglas high school in Parkland, Florida, were crisis actors. The video, “David Hogg Can’t Remember His Lines In TV Interview,” was removed for violating YouTube’s policies on bullying and harassment.

The second strike was on a video that was also about the Parkland shooting. The consequence of getting two strikes within three months was a two-week suspension during which the account couldn’t post new content. A third strike within three months would mean Infowars would get banned from YouTube. At the time, Infowars had more than two million YouTube subscribers.

Last month, YouTube said it was planning to post excerpts from credible sources onto pages containing videos about hoaxes and conspiracy theories, in order to provide more context for viewers.

But when the Wall Street Journal provided examples of how YouTube still promotes deceptive and divisive videos, YouTube execs acknowledged that the video recommendations were a problem. The newspaper quoted YouTube’s product-management chief for recommendations, Johanna Wright:

We recognize that this is our responsibility [and] we have more to do.

Last Monday, two defamation lawsuits were filed against Alex Jones by Sandy Hook parents. Those parents claim that Jones’s “repeated lies and conspiratorial ravings” have led to death threats.

Another lawsuit has been filed against Jones by a man whom Infowars incorrectly identified as the Parkland, Florida, school shooter.

A cynical view might see nothing but profit as being responsible for YouTube keeping Jones’s content up. But that view doesn’t jibe with the strenuous technology efforts YouTube is putting into automatic filtering of violative content, and the platform certainly isn’t skimping on the mass hiring of humans to review flagged content.

So what does that leave? It leaves YouTube’s community policies. Jones has violated policies against bullying twice, but he evidently hasn’t violated policies a third, three-strikes-you’re-out time.

His videos contain no nudity or sexual content, no threats, and no violent or graphic content. Nor does Jones incite hatred against specific groups—at least, not those named in YouTube’s policies, which rule out promoting or condoning violence against “individuals or groups based on race or ethnic origin, religion, disability, gender, age, nationality, veteran status or sexual orientation/gender identity.”

But what about those Sandy Hook parents, who claim that Jones’s conspiracy theories have led to death threats?

This isn’t a technology problem – the machine-learning technology is doing the job it’s supposed to do and it’s getting better all the time.

This is a guidelines problem – “Sandy Hook parents” aren’t a named category in YouTube’s policies about protected groups so we’re stuck with content that casts grieving parents as liars.

“This can be a delicate balancing act, but if the primary purpose is to attack a protected group, the content crosses the line,” YouTube says. Delicate? You can say that again.

For all the progress made in machine learning it still looks like we’re only just beginning to figure out how best to police something like YouTube, with its billion monthly users, in all of their weird, outrageous, amusing, informative and occasionally deeply offensive variety.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/lvwPPewfwS8/

Google adds SSO verification check to G Suite

Last May, around one million Gmail and G Suite users using SAML single sign-on (SSO) were targeted by a clever type of phishing attack that Google seemed keen for everyone to know it had shut down within hours.

The speed of response was reassuring but also, in another way, unnerving. What was going on that Google felt it necessary to react in such an all-hands-on-deck way?

During the deceptively simple attack, G Suite users received an invitation to view what appeared to be a Google Doc file.

The request was convincing because it came from a known Gmail contact and the first part of the URL made it look as if it was hosted on Google’s platform.

Except that anyone clicking on the invite was secretly being logged into an account set up by the attackers, at which point they would have temporarily lost control over their G Suite email.

Behind the scenes, the attackers had set up a rogue Gmail or G Suite account, registered some applications to this through OAuth on Google’s cloud (available on a free trial!), and redirected access via an external server hosting an application under malicious control.

The attackers were simply exploiting legitimate permissions within OAuth and the way these help SSO to work, bypassing supposedly watertight security such as two-step verification along the way.

Not having entered any credentials, users might have considered what had happened innocuous until everyone in their contacts list started receiving emails from them asking that they click on the same bogus Docs file.

This was like phishing where the victim sees the hook but not the invisible line reeling them in.

The best answer Google can come up with to the problem will arrive from 7 May when G Suite users logging in using Chrome via SAML single sign-on (SSO) providers will start seeing a new prompt the first time they log in.

Once past the provider login, a ‘verify it’s you’ prompt will pop up to ask users whether they recognise the account they are being signed into.

Said Google:

This new screen adds that protection and reduces the probability that attackers successfully abuse SAML SSO to sign users in to malicious accounts.

This won’t impact individuals who sign in to G Suite services directly and those who use G Suite or Cloud Identity as their identity provider. The screen is also not shown on devices running Chrome OS.

The protection depends on using the Chrome browser, and the prompt should appear only once for a particular SSO provider.

It’s not a perfect defence because it’s still possible some users might be taken in by a phishing attack aimed at them in this way, but it’s certainly handy insurance.

With all the fuss over Gmail’s new privacy features it would be easy to miss security upgrades like this because they’re out of sight and mind.

Ironically, a motivation for moving to cloud email systems such as G Suite and Microsoft’s Office 365 is so that organisations can hand over at least part of the security job to the provider.

Last May’s attack serves as a warning that the need for diligence is never something organisations should try to outsource.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/VGXW5Ou9GzQ/

Brit healthcare system inks Windows 10 install pact with Microsoft

The UK government’s Department of Health and Social Care has inked a deal with Microsoft to upgrade all NHS machines to Windows 10 – in a supposed attempt to boost resilience following the WannaCry incident last year.


The upgrade is part of the department’s plans to splash a further £150m over the next three years to improve the NHS’s standing against attacks. No cost or timescales were disclosed for the Windows 10 project.

Under the plans, it will also spend £21m to upgrade firewalls and network infrastructure at major trauma centre hospitals and ambulance trusts, and £39m to address infrastructure weaknesses.

However, trusts have been criticised for not doing enough to strengthen systems one year on from the outbreak of WannaCry ransomware, which disrupted one-third of NHS trusts and led to 6,900 cancelled appointments.

So far every single one of the 200 NHS trusts in the UK assessed for cyber security resilience has failed an on-site assessment, while Public Accounts Committee head Meg Hillier recently gave trusts a lashing for failing to agree on an action plan.

Jeremy Hunt, the Health and Social Care Secretary, said: “We know cyber attacks are a growing threat, so it is vital our health and care organisations have secure systems which patients trust.

“We have been building the capability of NHS systems over a number of years, but there is always more to do to future-proof our NHS as far as reasonably possible against this threat. This new technology will ensure the NHS can use the latest and most resilient software available – something the public rightly expect.”

The Microsoft deal will also allow NHS trusts to update systems with the latest Windows 10 security features.

Windows 10 updates have not, of course, been without their share of bugs.

Microsoft pulled its support for Windows XP some four years ago. However, it later transpired that Windows XP machines weren’t necessarily the main vector in spreading WannaCry, with many XP machines simply crashing rather than spreading the infection.

Some researchers believe the bigger problem was unpatched machines on other versions of the operating system: Vista and Windows 7. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/04/30/department_of_health_inks_windows_10_upgrade_deal_with_microsoft/

3 Ways to Maximize Security and Minimize Business Challenges


The best strategy for choosing security tools and architecting networks is to focus on staffing and resources, risk tolerance, and business change.

While security professionals like to think that a “network is a network,” in truth, every network is bespoke – formed from accepted design patterns, business requirements, organic growth and designer preference. Consequently, it’s not feasible to choose security tools with the mindset of, “If I just had this network intrusion detection system (NIDS) and that user behavior analytics (UBA) tool, then I’d be secure for sure.” Why? Because that mindset ignores the specific challenges you need to solve to secure your particular network.

Figure 1: Unrefined data goes to all tools, resulting in poor detection and overburdened staff. (Source: Gigamon)

A better approach to choosing tools and architecting networks to minimize challenges and maximize security starts with three key areas:

IT Staffing and Talent
With IT staffing and talent, your aim is to understand skillsets and resource contention to determine if you can partner with IT to run security tools or if you’ll largely be on your own. Some questions and decisions to consider:

  • Is your IT organization skilled or unskilled?
    If they’re skilled, you might consider more homegrown, OSS-based tools and custom infrastructure that IT can manage. This approach would leave SecOps free to focus on using the tools, and it would change the staffing blend. If, however, your IT team is more of a help desk function with only a small number of engineer-level folks, you might need to staff SecOps to build and operate their own security infrastructure, develop their own tools and have the capacity to use those tools.
  • Is your IT organization staffed to take on security?
    Some organizations have large IT teams that can keep up with the change needed when running security infrastructure. If so, you might decide to have IT handle your tool deployment, management and architecture. If not, you might consider having IT largely handle getting you into the network, but with you taking responsibility for the tooling itself.

Organizational Tolerance and Need
In this area, it’s important to ask questions about the level of security risk that’s acceptable to your organization, and to determine what needs to happen and what should never happen.

  • What specific industry requirements do you have? What certifications must you adhere to?
    If you’re a public company, a hospital or a juice maker, you’ll have a certain set of requirements that guide your tool selection and operation. If, on the other hand, you’re a start-up with only reputation risk, you might choose a completely different path.
  • Are you starting from scratch or have you been in business for 40 years?
    If you’re a startup, have a new office or your old office burned down, you can do things the right way from the start. If your network is 40 years old and there are rumors that someone is still on token ring, you’re largely going to be trying to weave security in and pull apart the cruft.
  • Do you know what is most important to protect?
    If you know what your defensible space should be, you can super-tool around that small percentage that must not burn, and you can allow some burning on the edge. For example, do you care most about crypto material, but not at all about employee Bob’s laptop? Answers to these questions can significantly inform your strategy.

Business Change
Finally, how will you weather change? Everyone does this differently. Even if you can’t predict the future, understanding how your IT and overall organization might react to a major change is a great way to inform your current tool selection and give you a glimpse into how to react.

  • What happens if you’re acquired?
    Do you have an asset list today? If not, maybe you need to install some discovery capability onto the network.
  • What happens when there’s an incident?
    Do you know which tool got the data from VLAN13? Are you sure you collected both sides of the flow? There is nothing more depressing in an incident than realizing you didn’t collect the right data and that awesome vendor you need to fly in to help is going to remind you of your folly on the invoice.
  • What if your CISO leaves?
    Have you built a security infrastructure that can easily be explained and demonstrated to incoming leadership? What if the board wants an overview of how the company “does” security? You need to have not only a robust tool set, but a well-organized one. If you provide a network map that looks like a Jackson Pollock piece, you might find that you’re not going to get that new deception tool you want.
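The “both sides of the flow” question above can be checked mechanically rather than discovered mid-incident. A minimal sketch, assuming your capture tooling can already export flow records as 4-tuples – the record format, function name and addresses here are illustrative, not from any particular product:

```python
# Hypothetical sketch: given captured flow records as
# (src_ip, src_port, dst_ip, dst_port) tuples, report any flows for
# which only one direction of traffic was collected.

def one_sided_flows(records):
    """Return the set of flows seen in only one direction."""
    seen = set(records)
    # A flow is one-sided if its reversed tuple never appears in the capture.
    return {r for r in seen if (r[2], r[3], r[0], r[1]) not in seen}

records = [
    ("10.0.13.5", 51514, "192.0.2.10", 443),   # client -> server
    ("192.0.2.10", 443, "10.0.13.5", 51514),   # server -> client (both sides present)
    ("10.0.13.9", 40000, "198.51.100.7", 80),  # only one direction captured
]
print(one_sided_flows(records))  # -> {('10.0.13.9', 40000, '198.51.100.7', 80)}
```

Running a sanity check like this against each capture point – before an incident – is a cheap way to find out whether asymmetric routing or a mis-placed tap is silently dropping half of your flows.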

Security isn’t easy, and none of us should add to that challenge with a bad security infrastructure. If you start by asking questions related to staffing and resources, risk tolerance and business change, you can begin to home in on what security tools can best meet your unique network needs.

Jack is principal information security engineer at Gigamon, responsible for managing the company’s internal security team – conducting security operations, security architecture and incident response.

Article source: https://www.darkreading.com/partner-perspectives/gigamon/3-ways-to-maximize-security-and-minimize-business-challenges-/a/d-id/1331675?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple