UK.gov confirms it won’t be buying V-22 Ospreys for new aircraft carriers

Britain is not buying V-22 Osprey aircraft to fly from its new aircraft carriers, the government has confirmed.

“The V-22 Osprey is not part of the resourced plan to deliver the UK Carrier Strike capability,” said junior defence minister Earl Howe. “However, the Ministry of Defence will continue to explore a variety of options to augment the capabilities of the Queen Elizabeth Class carriers in future.”

Lord Howe was responding to written Parliamentary questions from Admiral Lord West, a Labour peer and former head of the Royal Navy.

Many commentators have speculated that V-22s could be bought by the UK to use for “long-range combat search and rescue” or “long-range high-speed delivery of mission essential spares and stores to the Queen Elizabeth class” aircraft carriers. Yesterday’s Parliamentary responses to Lord West will put those rumours to bed.

The Bell Boeing V-22 is a so-called tiltrotor aircraft. Powered by two helicopter-style rotors mounted on the ends of its wings, the Osprey can fly like a conventional aircraft or hover like a helicopter, meaning it can haul heavy loads and deliver them to relatively inaccessible areas. It can also perform air-to-air refuelling duties.

Lord Howe was unable to answer whether the US Marine Corps contingent which will be aboard HMS Queen Elizabeth on her maiden operational deployment will bring any V-22s along with their F-35Bs.

The Royal Navy will be deploying its Merlin helicopters aboard HMS Queen Elizabeth for all airborne jobs that can’t be done by the F-35B fighter jets aboard the carrier. These include planeguard (collecting fighter pilots from the sea if they bale out into it), routine personnel transfers and airborne radar surveillance of land and sea.

For the latter, the Merlins will be fitted with the Thales Crowsnest radar and are due to be operational by 2018. The Crowsnest Merlins will replace the Navy’s Sea King Mk.7 helicopters, known as “baggers” thanks to the big black sack on their sides that contains the air search radar.

A Royal Navy ‘bagger’ Sea King Mk.7. Note the radar ‘bag’ to the left. Crown copyright

Lord Howe also revealed that HMS Queen Elizabeth will reach her initial operating capability by December 2020. Previous promises that the warship would be sailing in March from Rosyth, where she was assembled, to her home base of Portsmouth have not been met. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/03/24/uk_rules_out_v22_osprey_queen_elizabeth_carriers/

Prioritizing Threats: Why Most Companies Get It Wrong

To stay safer, focus on multiple-threat attack chains rather than on individual threats.

We’ve all seen them — you might even have one open right now: an Excel spreadsheet with reds, greens, and yellows that tell you where your risk is. You probably follow the simple convention of focusing on the low-hanging fruit first and then drilling down as hard and as fast as you can on the critical and high items.

Sorry to say this, but you’ve been doing it wrong. You see, attackers are opportunistic and scrappy, yet we don’t seem to work those variables into our sea of reds and yellows. I refer to this as the “single versus multivariable risk assessment problem.” We have single rows with risk assigned and work them as if they are singular risks. Attackers, on the other hand, chain risks together. They leverage a low risk on a Web server and a low risk on a database server to get access to high-risk data. Two lows can equal a high? Yes, but your prioritization process doesn’t think that way.
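
To make that concrete, here is a toy Python sketch (not the author's model; the likelihood and impact numbers are invented) of how two findings that each look low in isolation combine into a materially higher chained risk once the chain ends at high-value data:

```python
# Toy illustration: two findings that each score "low" on their own can still
# form a high-risk chain once the chain ends at high-value data.
# Likelihood and impact are on a 0-1 scale; all numbers are invented.

def single_risk(likelihood, impact):
    """Risk of one finding viewed in isolation."""
    return likelihood * impact

def chain_risk(step_likelihoods, final_impact):
    """Risk of a chained attack: every step must succeed, but the payoff is the
    impact of whatever the last step exposes, not of any single finding."""
    chain_likelihood = 1.0
    for likelihood in step_likelihoods:
        chain_likelihood *= likelihood
    return chain_likelihood * final_impact

web_flaw = (0.6, 0.2)   # likely enough, low impact alone -> a "low" row
db_flaw = (0.5, 0.3)    # moderately likely, low impact alone -> a "low" row

print(single_risk(*web_flaw))                    # roughly 0.12
print(single_risk(*db_flaw))                     # roughly 0.15
print(chain_risk([0.6, 0.5], final_impact=0.9))  # roughly 0.27: the chain outranks either row
```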

What can you do to get a more accurate prioritization list? Focus on multiple-threat attack chains rather than threats alone. Grab a conference room, some coffee, and the leaders of each of your IT areas (network, infrastructure, application) and draw a simple diagram of your network from a 30,000-foot view. Then start assuming attacks succeed, one item from your threat list at a time. For example, assume the low-risk item in your spreadsheet describing a threat on your endpoints is exploited. Which other threats can the attacker now reach? Can they, say, exploit the medium-level threat on the file server because all users have birthright permissions that let them authenticate to it? OK, follow that threat. Now that the attacker is on the file server, which threats can they leverage next?

As you do this a couple of times and start with various threat entry points, you will start to see patterns emerge — threats that seem to be in every attack chain. That is where you should be prioritizing your work.

Let’s look at a real-world example from a client, using the endpoint-threat starting point above. What came out of the exercise was that the biggest threat repeated across all attack chains was the use of NTLMv1, an old Microsoft Windows authentication protocol with well-known weaknesses that attackers exploit to perform man-in-the-middle attacks and brute-force passwords — yet this threat was a low-risk, low-impact item in the client’s fancy Excel spreadsheet.

If you want even more accurate prioritization, at each step of the above process add how easy it is to detect this risk on a scale of 1 to 10, and the impact on the overall success of the attack on the same 1-to-10 scale. For example, if the medium-risk threat on the file server included access to the corporate intellectual property, and you have no ability to detect who accesses which files, detection is hard (10) and the severity is high (9 or maybe 10). The larger the totals, the more likely that attack chain is actually the high-risk attack chain. This quantitative weighting helps low-risk, high-impact threats bubble up more quickly.
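
Pulling the whole exercise together, the hypothetical sketch below scores each step of two invented attack chains for detection difficulty and impact, ranks the chains by their totals, and counts which threats recur across them; every chain, threat name, and score here is made up for illustration:

```python
# Hypothetical sketch of the whiteboard exercise: score each step of each attack
# chain for detection difficulty and impact (1-10), rank the chains, and count
# which threats recur across them. Chains, threat names, and scores are invented.
from collections import Counter

attack_chains = {
    "endpoint -> file server -> IP theft": [
        # (threat, detection difficulty 1-10, impact on attack success 1-10)
        ("phishing-prone endpoint", 6, 5),
        ("NTLMv1 authentication allowed", 8, 7),
        ("no auditing of file access", 10, 9),
    ],
    "endpoint -> web server -> database": [
        ("phishing-prone endpoint", 6, 5),
        ("NTLMv1 authentication allowed", 8, 6),
        ("over-privileged database service account", 7, 8),
    ],
}

def chain_score(steps):
    """Bigger totals mean harder to detect and more damaging, so higher priority."""
    return sum(detect + impact for _, detect, impact in steps)

for name, steps in sorted(attack_chains.items(),
                          key=lambda kv: chain_score(kv[1]), reverse=True):
    print(f"{chain_score(steps):3d}  {name}")

# The threats that appear in every chain are where the fixing effort should go.
recurrence = Counter(threat for steps in attack_chains.values()
                     for threat, _, _ in steps)
print(recurrence.most_common())
```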

This process isn’t hard. It isn’t overly complicated. It doesn’t need an actuary to provide a bunch of algorithms. But it works. It has an official name, failure mode and effects analysis (FMEA), and it has offshoot versions you may have heard of, such as Alex Hutton’s RiskFish and the bowtie method. All of these approaches ask you to focus on the process attackers actually use and to calculate (or at least qualitatively evaluate) the intersection of multiple risks while taking into account your ability to prevent or detect them. So stop using those multicolored Excel spreadsheets and start documenting multivariable risks in order to prioritize better.

Michael A. Davis has been privileged to help shape and educate the global community on the evolution of IT security. His portfolio of clients includes international corporations such as AT&T, Sears, and Exelon as well as the U.S. Department of Defense. Davis’s early embrace of … View Full Bio

Article source: http://www.darkreading.com/threat-intelligence/prioritizing-threats-why-most-companies-get-it-wrong/a/d-id/1328473?_mc=RSS_DR_EDT

US Senate Overturns Obama Consumer Privacy Rule

The FCC regulation, passed in October, was rejected in a 50-to-48 vote and is now in the House of Representatives.

The Federal Communications Commission (FCC) rule requiring internet providers to seek consumers’ permission before sharing their personal details suffered a setback yesterday when the US Senate voted to overturn it, Reuters reports. The repeal measure now goes to the House of Representatives for approval.

This decision – 50 for and 48 against – is seen as a victory for internet providers. Democratic members of the FCC and Federal Trade Commission described it as creating “a massive gap in consumer protection law as broadband and cable companies now have no discernible privacy requirements.” While FCC chief Ajit Pai gave assurances that consumers would continue to have privacy protections, the Consumers Union says the vote “is a huge step in the wrong direction, and it completely ignores the needs and concerns of consumers.”

The Obama administration introduced this rule in October, but the FCC delayed it from taking effect. Following its introduction, Republicans voiced concerns that such a rule would give undue advantage to Facebook, Twitter, and Google in digital advertising.

Democrat Ed Markey says, “Republicans have just made it easier for Americans’ sensitive information about their health, finances and families to be used, shared, and sold to the highest bidder without their permission.”

Read details here.

Dark Reading’s Quick Hits delivers a brief synopsis and summary of the significance of breaking news events. For more information from the original source of the news item, please follow the link provided in this article. View Full Bio

Article source: http://www.darkreading.com/vulnerabilities---threats/us-senate-overturns-obama-consumer-privacy-rule/d/d-id/1328479?_mc=RSS_DR_EDT

Sandia Testing New Intrusion Detection Tool That Mimics Human Brain

Neuromorphic Data Microscope can spot malicious patterns in network traffic 100 times faster than current tools, lab claims.

A project that started off as a medical study into cerebral palsy in children has yielded a technology that its creators say could help organizations detect cyberthreats 100 times faster than current products.

The technology, called the Neuromorphic Data Microscope, was developed by Boston-area startup Lewis Rhodes Labs (LRL) and fine-tuned with the active participation of researchers from Sandia National Laboratories.

The technology—currently implemented in the form of a PCIe-based processing card—can be used to inspect large volumes of streaming data and find patterns that match known bad behavior faster and more cost-effectively than is presently possible, according to the two organizations.

Typical intrusion detection systems sequentially compare relatively small chunks of network data against a library of known malicious patterns to spot threats. The Neuromorphic Data Microscope does the same pattern matching in a much faster, more parallel manner that mimics the way the human brain processes streaming data.

“One way of thinking about it is when you try matching patterns on a computer, it is a more serial process,” says David Follett, CEO and co-founder of LRL. “The brain is massively parallel.”

The brain streams data – such as the things within an individual’s range of vision – past stored memory in a very efficient way, helping the individual identify people, places, or things that are familiar.

The Neuromorphic Data Microscope takes the same approach to inspecting massive volumes of streaming network data and finding patterns that suggest malicious behavior. It accomplishes in a single processor card the same level of parallelism that would take multiple racks of traditional cybersecurity systems working in parallel to deliver.
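
Neither organization has published implementation details, so the following toy Python contrast is only meant to make the serial-versus-parallel point concrete: a naive loop that tests each signature in turn against a payload, versus a single scan that considers every signature at once through one combined pattern. The signatures are invented.

```python
# Toy contrast between serial signature checking and a single pass over the data.
# Invented signatures; this is not LRL's or Sandia's design.
import re

SIGNATURES = [b"cmd.exe /c", b"/etc/passwd", b"union select", b"powershell -enc"]

def serial_match(payload: bytes) -> list:
    """Typical naive IDS style: test each known-bad pattern one after another."""
    return [sig for sig in SIGNATURES if sig in payload]

# Fold every signature into one alternation so the payload is scanned once,
# however many signatures the library holds (conceptually closer to a parallel
# matcher amortising the work across patterns).
COMBINED = re.compile(b"|".join(re.escape(sig) for sig in SIGNATURES))

def single_pass_match(payload: bytes) -> list:
    """One scan of the data, all patterns considered together."""
    return COMBINED.findall(payload)

payload = b"GET /?q=union select HTTP/1.1\r\nUser-Agent: cmd.exe /c whoami\r\n"
print(serial_match(payload))       # matches listed in signature order
print(single_pass_match(payload))  # matches listed in order of appearance
```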

In its current form, the technology accelerates complex pattern matching by a factor of over 100 while using 1,000 times less power than conventional cybersecurity systems, Follett says. LRL will soon implement an ASIC version of the data microscope that will be capable of delivering a 10,000 times performance gain over current intrusion detection tools, he says.

Such capabilities will enable far more complex pattern matching and will allow organizations to spot attacks that are easy to miss currently, says Sandia computer systems expert John Naegle.

“We want to run much more complicated and sophisticated rules against our data to detect malicious types of patterns,” says Naegle. Because of the enormous computing resources it would take to run some of these rules, however, Sandia has had to make conscious decisions about what it can and cannot do with its available computing resources.

“This gives us the opportunity to drastically change the way we do cybersecurity,” Naegle says. “Right now tools are expensive, cumbersome, and very CPU-constrained” to allow for the kind of complex pattern matching Sandia has wanted to do. “This technology gives us an entirely different way to look at the problem and an entirely different way to look for suspicious traffic.”

Naegle describes the data microscope as similar in concept to the Snort open-source intrusion detection tool used by many organizations, including Sandia.

Organizations are under increasing pressure to find better and quicker ways of detecting malicious behavior on their networks. Cybercriminals are often able to circumvent pattern-matching, signature-based intrusion detection systems simply by making relatively small changes to their malware. So capabilities like those claimed by Sandia and LRL could make a big difference.

The idea for the data microscope evolved from a mathematical model that LRL researchers developed for comparing the brains of children suffering from cerebral palsy with brains that do not have the disorder. In using the model, the researchers realized they had developed a way of doing computing that mimicked the manner in which a human brain processes information, a description of the technology on LRL’s website noted.

Sandia is using the Neuromorphic Data Microscope for cybersecurity purposes. But it can be used in a wide range of other applications involving the use of massive volumes of streaming data, Follett says. Examples include applications such as image and video processing, consumer data analysis, fraud identification, and financial trading.

Jai Vijayan is a seasoned technology reporter with over 20 years of experience in IT trade journalism. He was most recently a Senior Editor at Computerworld, where he covered information security and data privacy issues for the publication. Over the course of his 20-year … View Full Bio

Article source: http://www.darkreading.com/attacks-breaches/sandia-testing-new-intrusion-detection-tool-that-mimics-human-brain/d/d-id/1328478?_mc=RSS_DR_EDT

America’s JobLink Suffers Security Breach

A third-party hacker exploited a flaw in America’s JobLink application code to access the information of job seekers from 10 states.

America’s JobLink (AJL) was recently the victim of a security breach when a hacker exploited a flaw in its application code to gain unauthorized access to information of job seekers in 10 states. AJL, a multi-state system which links job seekers with employers, has since identified and eliminated the code misconfiguration.

AJL said on March 21 that names, birthdates, and Social Security Numbers of applicants from Alabama, Arizona, Arkansas, Idaho, Delaware, Illinois, Kansas, Maine, Oklahoma, and Vermont were illegally accessed by an outside source. It explained that the code misconfiguration was introduced into the system through an update last October.

AJL is currently working with the FBI to apprehend the hacker while a forensic firm is carrying out a detailed examination of the hacked accounts.

Read more details here.

Dark Reading’s Quick Hits delivers a brief synopsis and summary of the significance of breaking news events. For more information from the original source of the news item, please follow the link provided in this article. View Full Bio

Article source: http://www.darkreading.com/vulnerabilities---threats/americas-joblink-suffers-security-breach/d/d-id/1328480?_mc=RSS_DR_EDT

Google slaps Symantec for sloppy certs, slow show of SNAFUs

Google’s Chrome development team has posted a stinging criticism of Symantec’s certificate-issuance practices, saying it has lost confidence in the company’s practices and therefore in the safety of sessions hopefully-secured by Symantec-issued certificates.

Google’s post says “Since January 19, the Google Chrome team has been investigating a series of failures by Symantec Corporation to properly validate certificates. Over the course of this investigation, the explanations provided by Symantec have revealed a continually increasing scope of misissuance with each set of questions from members of the Google Chrome team; an initial set of reportedly 127 certificates has expanded to include at least 30,000 certificates, issued over a period spanning several years.”

Googler Ryan Sleevi unloads on Symantec as follows:

“Symantec allowed at least four parties access to their infrastructure in a way to cause certificate issuance, did not sufficiently oversee these capabilities as required and expected, and when presented with evidence of these organizations’ failure to abide to the appropriate standard of care, failed to disclose such information in a timely manner or to identify the significance of the issues reported to them.

These issues, and the corresponding failure of appropriate oversight, spanned a period of several years, and were trivially identifiable from the information publicly available or that Symantec shared.”

The post gets worse, for Symantec:

“The full disclosure of these issues has taken more than a month. Symantec has failed to provide timely updates to the community regarding these issues. Despite having knowledge of these issues, Symantec has repeatedly failed to proactively disclose them. Further, even after issues have become public, Symantec failed to provide the information that the community required to assess the significance of these issues until they had been specifically questioned. The proposed remediation steps offered by Symantec have involved relying on known-problematic information or using practices insufficient to provide the level of assurance required under the Baseline Requirements and expected by the Chrome Root CA Policy.”

The upshot is that Google feels it can “no longer have confidence in the certificate issuance policies and practices of Symantec over the past several years” and it therefore proposes three remedies:

  • A reduction in the accepted validity period of newly issued Symantec-issued certificates to nine months or less, in order to minimize any impact to Google Chrome users from any further misissuances that may arise.
  • An incremental distrust, spanning a series of Google Chrome releases, of all currently-trusted Symantec-issued certificates, requiring they be revalidated and replaced.
  • Removal of recognition of the Extended Validation status of Symantec issued certificates, until such a time as the community can be assured in the policies and practices of Symantec, but no sooner than one year.

The first remedy will mean that Chrome stops trusting Symantec-issued certificates as outlined in a table in Google’s post.

Google reckons this plan will mean “web developers are aware of the risk and potential of future distrust of Symantec-issued certificates, should additional misissuance events occur, while also allowing them the flexibility to continue using such certificates should it be necessary.”

And of course it also gives developers time to arrange new certificates from whatever issuer pleases them most.
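
For administrators wondering whether their own sites would be caught by a shorter validity cap, a rough Python sketch for inspecting a live certificate’s issuer and lifetime follows; it assumes a recent version of the third-party cryptography package is installed, and the 270-day threshold is just an approximation of the proposed nine-month limit.

```python
# Rough sketch: fetch a site's certificate and report its issuer and validity
# window, to gauge exposure to a cap on accepted certificate lifetimes.
# Needs the third-party "cryptography" package; the 270-day check merely
# approximates the proposed nine-month limit.
import ssl
from cryptography import x509

def describe_cert(host: str, port: int = 443) -> None:
    pem = ssl.get_server_certificate((host, port))
    cert = x509.load_pem_x509_certificate(pem.encode())
    lifetime = (cert.not_valid_after - cert.not_valid_before).days
    print("Issuer:  ", cert.issuer.rfc4514_string())
    print("Valid:   ", cert.not_valid_before, "to", cert.not_valid_after)
    print("Lifetime:", lifetime, "days",
          "(longer than a ~270-day cap)" if lifetime > 270 else "")

describe_cert("www.example.com")
```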

Symantec has told The Register it is developing a response to Google’s allegations. We will add it to this story as soon as we receive it. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/03/24/google_slaps_symantec_for_sloppy_certs_slow_show_of_snafus/

Inside OpenSSL’s battle to change its license: Coders’ rights, tech giants, patents and more

Analysis The OpenSSL project, possibly the most widely used open-source cryptographic software, has a license to kill – specifically its own. But its effort to obtain permission to rewrite contributors’ rights runs the risk of alienating the community that sustains it.

The software is licensed under the OpenSSL License, which includes its own terms and those dating back to the preceding SSLeay license.

Those driving the project announced plans to shift to a new license in 2015 and now the thousand or so people who have contributed code over the years have started receiving email messages asking them to grant permission to relicense their contributions under the Apache Software License, version 2.

Theo De Raadt, founder of OpenBSD, a contributor to OpenSSL, and creator of LibreSSL – forked from OpenSSL in 2014 – expressed dissatisfaction with the relicensing campaign in a mailing list post, criticizing OpenSSL for failing to consult its community of authors.

“My worry is that the rights of the authors are being trampled upon, and they are only being given one choice of license which appears to be driven by a secret agreement between big corporations, Linux Foundation, lawyers, and such,” he explained in an interview with The Register via phone and email.

For years, OpenSSL went largely unappreciated, until the Heartbleed vulnerability surfaced in 2014 and shamed the large companies that depend on the software for online security to contribute funds and code.

The planned licensing change comes with the endorsement of Intel and Oracle, among the companies that pledged $3.9 million to the Linux Foundation as atonement. A portion of that funding transformed OpenSSL into something more than the shoe-string operation it had been for years.

Rich Salz, a member of the OpenSSL development team and senior architect at Akamai Technologies, in a phone interview with The Register, said that in the year before Heartbleed, two people were responsible for almost all of the changes being incorporated into OpenSSL. Now there are at least 150 contributing and making pull requests, he said.

Salz cited several reasons for seeking a new license for OpenSSL.

“If you read the SSLeay license carefully, it says among other things you cannot distribute this code under any other license,” he said. “What that means is for people who make derivations and want to license their changes, as long as their changes are derived from SSLeay license, they can’t.”

The license also includes advertising credit clauses, which Salz characterized as “obnoxious.” He said, “We want to move to a license that’s completely standard and well-known and widely accepted by the community, by the industry.”

A source familiar with software licensing, who asked not to be named because of lack of employer authorization, echoed Salz’s concerns, describing the SSLeay license as a contractual freak and a compliance nightmare. The license states that the code cannot be placed under another license, which makes it incompatible with some popular copyleft licenses because they stipulate additional terms can only ease restrictions, the source said.

The advertising clause mentioned by Salz, according to the source, requires that any mention of software including OpenSSL comes with attribution. “This does not say when you distribute, it says when you talk about it,” the source said. “That’s a restriction on use, which runs very contrary to the spirit of what the community has worked towards.”

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/03/24/openssl_asks_contributors_to_agree_to_apache_license/

‘Turkish’ hackers threaten to reset millions of iCloud accounts

A new band of hackers, styling itself the “Turkish Crime Family”, is claiming it has obtained the details of some 200m iCloud accounts, and that if Apple doesn’t pay a whopping ransom of $75,000 in bitcoin or ethereum (or $100,000 in iTunes gift cards) it will wipe the lot.

There are a few problems to face initially. First, Apple says its systems haven’t been breached. The company told Naked Security:

There have not been any breaches in any of Apple’s systems including iCloud and Apple ID. The alleged list of email addresses and passwords appears to have been obtained from previously compromised third-party services.

So 200m accounts obtained from previously compromised third party services is OK? Obviously not, but there’s no suggestion that Apple itself is responsible for any compromised security. The Turkish Crime Family itself appears to be new on the security scene, believed to have started life in Istanbul but now resident in Green Lanes, north London, according to one report. Helpfully, the organisation has a Twitter account.

Another curious facet of the alleged breach is the demand for payment in extremely traceable iTunes vouchers; why would you not ask for something with a less clean audit trail? The group itself disputes the amount that’s been reported and blames a media relations operative (presumably the same one who put an email address for media inquiries on the Twitter profile).

The organisation has posted what it claims is video evidence to the Motherboard site.

David Kennerley, director of threat research at Webroot, is among the first to wonder whether the threat is actually real.

There are a lot of questions that need to be answered such as, do these hackers really have access to the data they claim? How did they get hold of such a large amount of data? Was it a vulnerability in Apple’s infrastructure or breach of third-party tool or organisation? Or does the fault lie with good old password re-usage between sites and apps from a consumer side?

Wherever the data originates, assuming it’s genuine, Apple faces the decision of whether to pay the ransom or to tough it out. Whichever way it goes, it will want to take precautions to see that this never happens again. Kennerley says:

Whether [the breach] proves to be huge news, or no news at all – it’s always good to remind ourselves, no matter the reputation of the organisation that we trust to protect our digital lives we should always take extra measures to protect our own privacy and data.

Our advice would be to assume the data has been compromised somehow; if it turns out to be a hoax, the worst thing that can happen is that your data is more secure.

Precautions include:

  • If your data is stored anywhere online, assume it could be compromised by a faulty server, deliberate action or the host company going bust. Have a backup – so if your primary host is wiped for any reason you still have your data.
  • Use two factor authentication where possible. Apple encourages it, and here’s an article about the whys and wherefores that we wrote a few months ago.
  • Don’t use the same password everywhere. We can’t say this often enough but people still do it.
  • Pick a strong password – here’s our advice on how to do that, and there’s a quick generator sketch after this list.
  • Observe the standard security hygiene protocols – don’t click on links from unknown sources, go to the website independently.
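
On the strong-password point, here is a minimal sketch using Python’s standard-library secrets module; it is one way to generate something random enough to live in a password manager. The word list is only a placeholder, so substitute a proper diceware-style list for real passphrases.

```python
# A minimal way to act on "pick a strong password": let the OS's randomness do it.
import secrets
import string

def random_password(length: int = 20) -> str:
    """A long random password, intended to be stored in a password manager."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

def random_passphrase(words, count: int = 5) -> str:
    """A diceware-style passphrase built from whatever word list you supply."""
    return " ".join(secrets.choice(words) for _ in range(count))

print(random_password())
print(random_passphrase(["osprey", "merlin", "crowsnest", "carrier", "bagger", "rosyth"]))
```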

Finally, there are still people who believe their Apple hardware is completely safe from malware just because it’s Apple. It’s great kit and it works beautifully but nobody is safe – see our article on Apple security.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/KqeOj7OGonI/

News in brief: WikiLeaks drops more CIA documents; ISP privacy rules killed; Instagram launches 2FA

Your daily round-up of some of the other stories in the news

WikiLeaks releases more stolen CIA documents

WikiLeaks on Thursday released another tranche of documents stolen from the CIA that show that the agency has been creating tools to bypass devices from Apple for at least a decade.

This dump, dubbed “Dark Matter”, is the second from an archive known as “Vault7”, the first tranche of which was posted by WikiLeaks earlier this month. After the first tranche was posted, detailing attacks that would require physical access to devices, vendors pointed out that many of the exploits detailed in the documents had since been patched.

Earlier this week WikiLeaks offered to work with technology companies including Apple, Google and Microsoft to help them patch the vulnerabilities detailed in the CIA documents in return for a list of demands.

Naked Security will be looking in more detail at the documents and evaluating their contents tomorrow.

Senators overturn privacy rules for ISPs

American internet users will be losing a key protection for their data after senators voted on Thursday to repeal historic rules that required ISPs to get their customers’ permission before selling on sensitive data such as their browsing history to third parties.

The senators voted to approve a resolution that not only prevents the FCC’s privacy rules from going into effect, but also bars the FCC from ever enacting similar protections, the Washington Post reported.

While industry groups unsurprisingly welcomed the result, privacy campaigners deplored it. The ACLU said: “It is extremely disappointing that the Senate voted today to sacrifice the privacy rights of Americans in the interest of protecting the profits of major internet companies.”

Instagram finally rolls out 2FA

Instagram, the image-sharing social media platform owned by Facebook, has finally caught up with best-practice security advice and is rolling out two-factor authentication, the service said on Thursday.

We – and former president Barack Obama – are big fans of two-factor authentication, and we’ve been urging you to turn it on wherever possible since, well, forever. If you’re an Instagram user, enable it by going to settings and then turning on the option to require a security code under the new “Two-Factor Authentication” option.
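
For background, the rolling six-digit codes produced by most authenticator apps are time-based one-time passwords (RFC 6238). The sketch below illustrates that mechanism only; it says nothing about how Instagram generates or delivers its own codes, and the base32 secret is a made-up example.

```python
# Minimal TOTP (RFC 6238) generator: the kind of rolling six-digit code most
# authenticator apps produce. Illustrative only.
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, digits: int = 6, interval: int = 30) -> str:
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval              # 30-second time step
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                          # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))   # made-up example secret, as a QR code would encode
```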

At the same time, Instagram said it is adding a feature that puts a blurred screen over images that have been flagged as “sensitive” but which don’t violate its guidelines: you’ll have to tap if you want to see the photograph.

Unlike 2FA, there doesn’t seem to be a way for adults to opt out of Instagram’s modesty screen, so it remains to be seen how useful or annoying that becomes.

Catch up with all of today’s stories on Naked Security

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/_w7B1phdK5M/

Android Forums resets passwords after hack

Add Android Forums to the growing list of web properties that have suffered a security breach.

One in 40 members of the forum (2.5 per cent) were exposed by the hack. Moderators said they’ve been able to identify potentially compromised accounts, the passwords of which have been reset. Many of the affected accounts were older and half of them had never posted to Android Forums.

Information taken includes email addresses, hashed passwords, and password salts. The administrators speculate that targeted phishing emails from crooks may follow, so extra vigilance is advised. Even those not directly affected by the incident are advised to change their passwords as a precaution.
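
Android Forums hasn’t said which hashing scheme it used, but the purely illustrative sketch below shows why stolen salted hashes still warrant a password change when the hash is a fast one: with the salt in hand, an attacker can test dictionary guesses offline at speed.

```python
# Why "hashed passwords and salts" still means "change your password" if the
# hash is a fast one: the thief can test dictionary guesses offline.
# Illustrative only; the forum's actual hashing scheme hasn't been disclosed.
import hashlib
import os

def store(password: str):
    """Fast salted hash of the kind many older forums used (not recommended today)."""
    salt = os.urandom(16)
    return salt, hashlib.sha1(salt + password.encode()).hexdigest()

def offline_guess(salt: bytes, stolen_hash: str, wordlist):
    """What an attacker does with a leaked salt and hash: loop through guesses."""
    for guess in wordlist:
        if hashlib.sha1(salt + guess.encode()).hexdigest() == stolen_hash:
            return guess
    return None

salt, leaked = store("hunter2")
print(offline_guess(salt, leaked, ["password", "letmein", "hunter2"]))  # -> hunter2
```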

The Neverstill Team, which runs the site, apologised for the incident and promised to “reinvigorate” its security efforts. “Among our newest efforts is site-wide HTTPS support, as well as a new 2-step authentication requirement for our staff,” a statement by the developers added.

Android Forums’ breach notice

El Reg learned of the breach following a tip-off from a reader who was notified of the problem. Members of the site can find its breach notification statement here (registration required). ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/03/23/android_forums_breach/