
SOC in Translation: 4 Common Phrases & Why They Raise Flags

By keeping an ear out for catchphrases like “Just ask Stu” or “I’ve got a bad feeling about this,” CISOs can overcome the barriers that get between business leaders and their security teams.

Having worked in many different security environments, I’ve picked up on more than a few phrases that you hear only in the security operations center (SOC). These catchphrases frequently need translation — especially as CISOs and the entire C-suite look to get more involved with their organizations’ security practices.

Below are a few to listen for, along with what they mean for the business.

“That’s not the true source.”
The true source? When you hear this, someone is likely performing an investigation and has hit a confounding barrier. The translation: “I’m analyzing network traffic whose true origin is not what’s listed in the Source IP field.” (A minimal illustration follows the list below.) The cause is likely one of these conditions:

  • Proxy: A proxy device is masking the origin.
  • DNS recursion: DNS servers use recursive queries to resolve hosts not in their cache. This causes many DNS requests to appear to originate from a DNS server and not the origin client.
  • Unusual protocols/spoofing: Some protocols will actually communicate “backward” during their conversations (e.g., FTP active-mode data transfers). Visibility on the open Internet will also expose analysts to spoofed communications or to responses headed for victim networks around the world (e.g., DDoS backscatter).
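As a concrete illustration of the proxy case, here is a minimal sketch (Python, with invented field names and addresses; real sensors and SIEMs label these fields differently) of why the socket-level Source IP is often not the origin an analyst cares about:

    def likely_true_source(event):
        """Prefer a proxy-supplied client header over the socket-level source IP."""
        xff = event.get("x_forwarded_for", "")
        if xff:
            # X-Forwarded-For can carry a chain: "client, proxy1, proxy2"
            return xff.split(",")[0].strip()
        return event["src_ip"]  # fall back to what the sensor saw on the wire

    # Hypothetical event: the proxy at 10.0.5.2 shows up as the Source IP, not the client.
    event = {"src_ip": "10.0.5.2", "x_forwarded_for": "203.0.113.77, 10.0.5.2"}
    print(likely_true_source(event))  # 203.0.113.77

The DNS-recursion and spoofing cases are harder, since there is usually no header to fall back on, which is why they tend to surface as visibility gaps.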

If you hear “not the true source” a lot in your SOC, you may have visibility blind spots that are inhibiting the investigative process.

“Clear the channel before you start hunting.” 
OK, full disclosure: This one has been directed at me quite a bit, and I’ve heard this phrase in every SOC where I’ve worked. It translates to: “We have more alerts than we know what to do with and not enough analysts to deal with them. Please attend to all the alerts before you explore the data looking for your own outliers.” 

SOC managers who find they are uttering this phrase often should take a step back and consider:

  • Analysts’ morale. Analysts aren’t often satisfied with working “the channel.” You see, the channel never ends. Perhaps dedicated hunting time allotted per day/week regardless of the alert queue will keep analysts’ morale up. Also, you’ll be amazed what they’ll find. All of the biggest incidents I’ve ever been a part of did not start with clearing the channel; they all resulted from hunting.
  • Technology gaps. Analysts may not have the tools they need to conquer their alert volume. If it takes 30 minutes or longer to analyze an event, something is missing: at that rate, one analyst clears barely 16 alerts in an eight-hour shift.
  • Whether alert volume is too high. Analysts often don’t have the agility to “tune” alerts fast enough to keep up with the alert volume. Cumbersome change management results in no changes at all. 

Your very best analysts want to hunt. If these analysts move on to different opportunities due to job dissatisfaction, you’ll be saying this phrase even more.

“Just ask Stu.”
Well, replace Stu with the gold-star guy/girl in your organization who has all the answers. Here’s the translation: “Our daily activities and processes are so complicated and have evolved so rapidly, there’s only one guy who knows how the whole thing works. His name is Stu — go talk to him.” Every place where I’ve worked had a “Stu,” and in fact, Stu is the real name of one of them. (He knows who he is — hi, Stu!) If you’re a manager and hear this phrase, you must do two things: 

  1. If Stu likes money, give Stu a raise. If money isn’t what makes Stu happy, find out what does, and give him that. You can’t lose Stu.
  2. Build a process to capture everything Stu knows as artifacts in your system. Your system must become Stu. Capturing knowledge from your workforce is not a “one-time thing” — it’s a continuous process, and it’s never complete. 

If you’re a colleague of Stu’s, learn everything you can. Shoulder surf him. Steal his bash history. Read the books on his desk while he’s not there. Whatever it takes. Study him.

“I’ve got a bad feeling about this.”
Your analysts are developing an intuition! This is great! Translation: “I can’t describe it yet, but this (IP address, user-agent string, URI, username, etc.) just looks wrong.” When you spend enough time as an analyst slogging through the mundane task of reacting to events that turn into nothing burgers, there will come a time when you see something that makes you sit up in your chair and raises goosebumps on your arms. This is analyst intuition, and it’s very hard to code and train for (although some companies are getting there). Here are the best ways to foster it:

  • Make sure analysts have the context and the speed to ask the easy questions and get rapid results. This exposes them to a lot of data and affords them the opportunity to ask questions of the data that might not be asked otherwise.
  • Keep the analysts happy and caffeinated. Analysts with a grin on their face, and whose eyes are focused on the target, will reach the intuitive phase.
  • Training, conferences, communities. Expose the analysts to everything new and possible. New perspectives are often all it takes for an intuitive sense to bubble up.

Ask any SOC team, and members are sure to tell you they’ve heard these phrases at one point or another. The question is, which ones are they using at your organization? By keeping an ear out for these essential phrases and understanding the true meaning behind them, business leaders can overcome the barriers that get between them and their security teams.


Daniel Smallwood is senior security engineer at JASK, the company modernizing security operations with its Autonomous Security Operations Center (ASOC) platform. Prior to JASK, Daniel spent more than 16 years in security and software development for companies including Alert … View Full Bio

Article source: https://www.darkreading.com/operations/soc-in-translation-4-common-phrases-and-why-they-raise-flags/a/d-id/1331295?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Gartner Expects 2018 IoT Security Spending to Reach $1.5 Billion

Regulations, breach concerns will push spending to over $3 billion by 2021, analyst firm says.

Enterprises worldwide will spend $1.5 billion this year protecting their IoT networks and connected devices against a range of security threats, according to new estimates from Gartner.

That figure represents a 28% increase from the $1.2 billion spent on IoT security last year and reflects growing enterprise concern over vulnerabilities in IoT and connected networks. Gartner says by 2021, such concerns will push IoT security spending to over $3.1 billion.

“Exposure to IoT-based threats may come from unauthorized devices that connect to enterprise systems and that the enterprise does not control,” says Ruggero Contu, research director at Gartner. In that sense, IoT networks and devices present the same problem to enterprises that any shadow IT does, he says.

“There is also the potential issue of organizations being vulnerable to devices owned by third-party business partners or suppliers,” he says. One example is a business that uses facilities such as smart buildings, which it might not own and which may be a source of new threats.

IoT-based attacks are a reality, Contu says. A recent survey by CEB Inc., a research firm that Gartner acquired last April, found that nearly 20% of organizations with IoT networks have experienced at least one IoT-related attack already.

Enterprise IoT vulnerabilities pose a threat to the devices themselves, the data in them, the services they may handle, and the broader enterprise network. As malware like Mirai hammered home, attackers can also take advantage of vulnerable IoT devices to build massive botnets for launching DDoS attacks and distributing malware.

Concerns over such issues are growing as organizations across the spectrum are connecting more devices to the Internet to establish a pervasive digital presence, according to Gartner.

Operational Technology (OT) such as industrial control systems and elements of smart grids such as smart meters, vehicles, and smart buildings are all examples of enterprise IoT, Contu notes. “Internet connectivity allows for improvements in operations — such as predictive maintenance,” he says. It also enables the delivery of more customized services, improvements in customer experience, and data aggregation for business strategy improvements.

Despite the broadening IoT footprint, many organizations have not prioritized or implemented security best practices for protecting the environment, according to Gartner.

Where IoT security has been implemented, it is usually at the business-unit level with some cooperation from IT departments. However, there is little effort to coordinate IoT security via a common architecture or through a consistent security strategy, the firm said in its report. IoT vendor, product, and service selection at many enterprises remains largely ad hoc and few IoT security practices have been codified into policy or repeatable best practices.

IoT’s Inherent Insecurities

The lack of security by design in the IoT devices is another huge problem. Many devices that enterprises have begun connecting to the Internet have little by way of security protections and, worse, are not equipped even to receive OS updates, security patches and over the air fixes, Contu says.

Security vendor Avira points to 2017 scanning data from Shodan that showed more than 128.7 million IoT devices exposed to the Internet in the US alone, of which 25 million were vulnerable to a total of 45 different exploits.

“It’s what we always see with big pushes in technology,” says Bryan Singer, director of security services for IOActive. “There’s a race to be first to market and unfortunately, when that happens, we’re not paying enough attention to full supply chain security,” he says.

“The bottom line is that IoT devices and apps are so nascent right now that [they really don’t] represent the maturity of code design we see elsewhere,” Singer says. Hardware and software are not often fully vetted for security issues, and many IoT startups are just not disciplined enough when it comes to testing software design. “With the explosive growth of devices, we’re seeing explosive IoT vulnerabilities because devices are designed and deployed without security in mind.”

Troublingly, over the next few years expect to see a lot of companies turn to hosted services for their IoT data collection and management needs, Singer says. This will create new problems over data control and security, he says.

Brian Contos, CISO of Verodin, says that from a security industry perspective there was an expectation early on that the manufacturers of many of the products that constitute the IoT would bake security into their devices. Unfortunately, that has simply not happened in a broad enough manner, he says.

“And there are so many IoT device types and vendors that it’s challenging to determine what risks they bring and what levels of controls can be implemented to mitigate that risk,” he says. The net result is that, just as happened with IT security, everybody is playing catch-up with IoT security as well, Contos says.

Tools

Gartner predicts that concerns over IoT risks will drive spending for tools and services that can help organizations discover and manage IoT assets on the network, perform security assessments of IoT hardware and software, and conduct penetration testing. Professional services will account for $946 million of the $1.5 billion in total that organizations will spend on IoT security this year, according to Gartner. By 2021, IoT security service spending will more than double to nearly $2.1 billion.

IoT endpoint security tools, such as those for asset discovery and management, are another area where Gartner expects enterprises to spend a lot of money over the next few years. In 2018, organizations will spend upwards of $370 million on endpoint security, a figure set to exceed $630 million in 2021. Spending on products that secure IoT gateways will more than double, from $186 million this year to $415 million in 2021.

“It makes sense that IoT security spending is on the rise and will continue to increase moving forward,” says Tyler Reguly, manager of software development at Tripwire.

The enterprise attack surface just continues to broaden with everything, including kitchen gadgets such as slow cookers, coffee pots, and refrigerators being interconnected, he says. “These new devices with their lack of centralized management and no formal patching processes are an administrator’s worst nightmare.”


Jai Vijayan is a seasoned technology reporter with over 20 years of experience in IT trade journalism. He was most recently a Senior Editor at Computerworld, where he covered information security and data privacy issues for the publication. Over the course of his 20-year … View Full Bio

Article source: https://www.darkreading.com/endpoint/gartner-expects-2018-iot-security-spending-to-reach-$15-billion-/d/d-id/1331334?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Facebook fallout: How to protect your data

Is it time to end your Facebook life?

Not deactivate, mind you – actually end things once and for all.

In the wake of Facebook having failed to protect user data from being drained by Cambridge Analytica, we’re talking about what’s involved in permanently deleting data that Facebook holds on us.

That’s likely to be too extreme for many of us. But at the very least, it’s definitely time to check Facebook privacy settings, audit Facebook apps, and consider turning off API sharing.

But first, a quick recap: over the weekend, news emerged about Facebook having lost control of 50 million users’ data.

Facebook, after a week of questioning from investigative reporters at the New York Times and the Observer, suspended data analytics firm Cambridge Analytica and its parent company Strategic Communication Laboratories (SCL), as well as data analytics specialist and Cambridge Analytica founder Christopher Wylie.

How do we escape?

If you’re not ready to part with Facebook entirely, you should at least take a look at who and what you’re sharing your information with on Facebook. That would entail the obvious:

Check your privacy settings

We’ve written about this quite a bit. Here’s a good guide on how to check your Facebook settings to make sure your posts aren’t searchable, for starters.

That post also includes instructions on how to check how others view you on Facebook, how to limit the audience on past Facebook posts, and how to lock down the privacy on future posts.

Those are just part of our 3 ways to better secure your Facebook account, so it’s also worth checking out that article to make sure you’re doing all three.

Next, it’s time to….

Audit your apps.

You should always be careful about which Facebook apps you allow to connect with your account, as they can collect varying levels of information about you.

Case in point: the recent revelations about Cambridge Analytica center around an app, thisisyourdigitallife, that not only took personal data from the 270,000 users who willingly signed up for this personality test but also scraped the profiles of users’ friends – which is how we got to that astronomical number of 50 million users having their information plundered without permission.

Unless you’ve locked down your privacy settings correctly – see above – the apps, games and websites that your friends use can also access your personal details, photos and updates.

If you yourself have used Facebook to sign in to a third-party website, game or app, those services may continue to access your personal data.

To audit which apps are doing what:

1. On Facebook in your browser, drop down the arrow at the top right of your screen and click Settings. Then click on the Apps tab for a list of apps connected to your account. This takes you to the App Settings page.

2. Check out the permissions you granted to each app to see what information you’re sharing and remove any that you no longer use or aren’t sure what they are for.

3. Below the summary of which apps are sucking what out of your neck is an innocuous looking gray box called Apps Others Use, with this brief description: “People who can see your info can bring it with them when they use apps. Use this setting to control the categories of information people can bring with them.”

Click Edit and there you will find a list we call “Holy mackerel, people can get all that?!”

Make the changes and click Save to button up your privates.

If you’re using the Facebook app you can access the same information by pressing the burger menu at the bottom right of your app, then choosing Settings and Account Settings. You’ll then find a menu option for Apps from which you can remove or restrict apps.

Turn off API sharing.

The Electronic Frontier Foundation (EFF) put out this guide to opt out of platform API sharing.

It does so with an apology: we shouldn’t have to “wade through complicated privacy settings in order to ensure that the companies with which you’ve entrusted your personal information are making reasonable, legal efforts to protect it,” but, well, recent events make clear that we can’t leave it up to Facebook to protect our privacy.

1. As above, visit the App Settings page.

2. Click the Edit button under Apps, Websites and Plugins. Click Disable Platform.

3. If that’s too much, you can, again, limit what information can be accessible to apps that others use. See above!

And finally, if you’re ready to disengage entirely, there’s the cut-it-out-completely option:

Delete your profile.

This is a lot more serious than simply deactivating your profile. When you deactivate, Facebook still has all your data. To truly remove your data from Facebook’s sweaty grip, deletion is the way to go.

But stop: don’t delete until you’ve downloaded your data first! Here’s how:

1. On Facebook in your browser, drop down the arrow at the top right of your screen and click Settings.

2. At the bottom of General Account Settings, click Download a copy of your Facebook data.

3. Choose Start My Archive.

Be careful about where and how you keep that file. It does, after all, have all the personal information you’re trying to keep safe in the first place.
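If you want to keep the archive but not in the clear, one option is to encrypt it before filing it away. Here is a minimal sketch, assuming Python with the third-party cryptography package and an illustrative archive filename:

    from cryptography.fernet import Fernet  # pip install cryptography

    key = Fernet.generate_key()   # store this key somewhere safer than the archive itself
    cipher = Fernet(key)

    with open("facebook-data.zip", "rb") as f:        # the downloaded archive
        encrypted = cipher.encrypt(f.read())

    with open("facebook-data.zip.enc", "wb") as f:
        f.write(encrypted)

Anything that keeps the plaintext archive out of your Downloads folder will do; a password manager attachment or full-disk encryption works just as well.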

You ready?

Have you downloaded the data? Have you encrypted it or otherwise stored it somewhere safe? OK, take a deep breath. Here comes the doomsday button.

Go to Delete My Account.

There. That’s done. Now all you have to do is listen to friends and family lament your Facebook death. Maybe it will start some conversations about why you felt deleting your profile was necessary.

If you want to share your Facebook exodus stories with us in the comments section below, please do: we’re all ears.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/R-PEvvNd4DU/

Bomb hoax sent to 400 schools blamed on warring Minecraft gamers

A bomb threat email sent to about 400 schools and colleges in the UK is believed to have been a hoax sent by disgruntled Minecraft players who wanted to smear a rival’s server.

According to Sky News, some 24,000 threatening emails were sent, causing hundreds of schools to evacuate on Monday.

The messages looked like they came from Minecraft server VeltPvP, but the company said the account had been spoofed and that the company had nothing to do with it. VeltPvP apologized, saying that it’s being “harassed by a group of cyber criminals that are trying to harass us in anyway possible.”

North Yorkshire Police said in a statement that the bomb threats are considered to be a hoax.

Our Cybercrime Unit Detectives, supported by local officers, have looked at these incidents and it is not believed there is any genuine threat.

Humberside Police said 400 schools were affected across the country.

Sky News quoted Detective Superintendent Tony Cockerill:

We have spoken to all schools who have contacted us, reassured them that there is no need to evacuate and offered them security advice.

…and to Assistant Chief Constable Vanessa Jardine of Greater Manchester Police:

I want to reiterate that there is not believed to be any direct threat following these reports which at this stage are believed to be malicious hoax communications.

Sky News talked to somebody who it believes is one of those responsible for the emails. He said that the threats were meant to get the VeltPvP domain suspended.

Those involved in the gaming spat have claimed that their opponents have engaged in illegal acts to harass them.

One of the gamers allegedly behind the bomb threats accused VeltPvP of launching distributed denial of service (DDoS) attacks and of targeting other rival Minecraft servers. The gamers are also reportedly flinging abuse at each other via images and videos portraying individuals connected to VeltPvP.

Sky News quoted the alleged hoax spreader:

What that network has done is horrible.

You know what else is horrible? Terrifying children, their parents, their teachers and school personnel in 400 schools, all over a petty gaming squabble.

Horrible indeed. He told Sky News that he regretted frightening the children whose schools were evacuated:

It is horrible, it’s not the nicest thing.

The person purported to be behind the bomb threat, believed to be in the US, also told Sky News that he understood he could be arrested.

There are undoubtedly a horribly large number of people who hope that happens very soon, before more schoolkids get pulled into this mean-spirited, petty squabbling.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/R4Z73kE0sec/

Police ask Google for location data to narrow suspect lists

Police in North Carolina have hit on a simple if potentially controversial way to firm up suspect lists – use location data from Google to work out which devices were being used near the scene of crimes.

According to WRAL, police in Raleigh used warrants in at least four recent investigations to make the search giant reveal the IDs of every device within certain map locations.

Based on one or a combination of GPS, Wi-Fi and cellular location data, police were first given a list of anonymised time-stamped identifiers corresponding to every device within the map coordinates they were interested in.

In one example warrant, the area of interest was as small as 150 metres around specific GPS coordinates, covering two narrow time ranges of around an hour each.
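To give a sense of the arithmetic involved, here is a minimal sketch (Python, with invented device IDs, coordinates and timestamps; it does not reflect Google’s actual tooling) of filtering time-stamped location records down to devices seen within 150 metres of a point during a one-hour window:

    import math
    from datetime import datetime

    def haversine_m(lat1, lon1, lat2, lon2):
        """Great-circle distance in metres between two lat/lon points."""
        r = 6371000.0
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
        a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
        return 2 * r * math.asin(math.sqrt(a))

    # Hypothetical anonymised records: (device_id, lat, lon, timestamp)
    pings = [
        ("device-A", 35.7796, -78.6382, datetime(2017, 10, 7, 21, 15)),
        ("device-B", 35.7920, -78.7811, datetime(2017, 10, 7, 21, 30)),
    ]

    scene = (35.7791, -78.6389)   # illustrative coordinates, not from any warrant
    start, end = datetime(2017, 10, 7, 21, 0), datetime(2017, 10, 7, 22, 0)

    hits = [d for d, lat, lon, ts in pings
            if haversine_m(lat, lon, *scene) <= 150 and start <= ts <= end]
    print(hits)   # only device-A falls inside the 150 m / one-hour window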

Once police could see the location of each device, a second pass then focused on specific IDs, asking for additional data on their movements outside the initial search area.

Finally, police asked Google to supply the names and addresses tying the devices they were interested in to their real owners.

Google, it seems, is a common denominator because even when a device is not running Android, it is still very likely to be using Google services, for example the Gmail or Maps apps.

Stopping Android from reporting location data is also not as easy as some assume: a report last year found that even when location services were turned off, SIM chips removed, and all apps uninstalled from a device, Google could still triangulate that device’s location from cell tower data.

Innovative it might be, but news of the police technique has sharply divided opinion. WRAL explains:

City and county officials say the practice is a natural evolution of criminal investigative techniques.

Which sounds reasonable given that the examples documented by WRAL related to serious crimes, including two murders, a sexual assault and a possible case of arson. It doesn’t appear this was being used lightly.

However, legal experts have major reservations:

They’re concerned about the potential to snag innocent users, many of whom might not know just how closely the company tracks their every move.

The problem is that while treating device location data as evidence sounds logical, the inferences that can be drawn from it are fraught with danger.

The obvious limitation is proving that a device’s registered owner was the one using it at the time and location police are interested in. There are numerous instances where that might not be the case, including theft or a deliberate attempt to implicate an innocent person.

It’s not that this data is of no legitimate interest; the concern is how it might change the nature of police investigations should the technique become common practice.

It’s already standard for police to ask for text and phone call data for named suspects. This turns that on its head in an important way. WRAL quotes Raleigh defense attorney Steven Saad:

This is almost the opposite technique, where they get a search warrant in the hopes of finding somebody later to follow or investigate.

Put another way, if location data is requested while building a case based on a variety of evidence, that might be legitimate. The danger is that this data becomes the incriminating evidence from which the case is built.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/OGE9zgEEQsM/

Bitcoin’s blockchain tainted with links to child abuse imagery

Researchers from Germany’s RWTH Aachen and Goethe universities claim to have discovered links to child abuse images embedded within the Bitcoin blockchain.

The links were uncovered during an analysis of the non-financial content that users have knitted into the blockchain.

The blockchain is a ledger that records all of the Bitcoin transactions that have ever taken place, and it’s designed to make deletion of data a nearly impossible task.

It does this by making sure that every one of the ten thousand or so Bitcoin node operators (individuals or organisations who validate bitcoin transactions and blocks) has a complete copy of it.

There are node operators all around the world, in many different legal jurisdictions.

The researchers believe that the presence of the non-financial content they’ve discovered, and the absence of barriers preventing something even worse, could make possession of the blockchain illegal:

While most of this content is harmless, there is also content to be considered objectionable in many jurisdictions, e.g., the depiction of nudity of a young woman or hundreds of links to child pornography. As a result, it could become illegal (or even already is today) to possess the blockchain

Although court rulings do not yet exist, legislative texts from countries such as Germany, the UK, or the USA suggest that illegal content such as child pornography can make the blockchain illegal to possess for all users

As of now, this can affect at least 112 countries … This especially endangers the multi-billion dollar markets powering cryptocurrencies such as Bitcoin.

The ability to store non-financial data is part of Bitcoin’s design.
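One of the better-known mechanisms is the OP_RETURN output type, which lets a transaction carry a small arbitrary payload. The sketch below (Python, with a made-up output script) shows how such a payload can be read back out; the researchers also catalogue other, less tidy insertion methods that this example does not cover.

    import binascii

    def decode_op_return(script_hex):
        """Return the payload of an OP_RETURN output script, or None if it isn't one."""
        script = binascii.unhexlify(script_hex)
        if not script or script[0] != 0x6a:        # 0x6a is the OP_RETURN opcode
            return None
        if len(script) >= 2 and script[1] <= 75:   # simple single-byte push length
            return script[2:2 + script[1]]
        return None

    # OP_RETURN (6a), push 12 bytes (0c), then an ASCII payload
    example = "6a0c68656c6c6f20626c6f636b21"
    print(decode_op_return(example))               # b'hello block!'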

The researchers’ search through Bitcoin’s stash of non-financial data uncovered 274 links to websites hosting images of child abuse, of which 142 were so-called Dark Web sites (Tor hidden services). They also discovered an image embedded into the blockchain that depicts “mild nudity of a young woman” whose age is uncertain.

Alongside the links and images, they also claim to have uncovered a bizarre collection of other digital artefacts – discoveries that highlight the pros, cons and potential for legal landmines that this kind of storage creates.

The following items are just some of the things lurking inside every single copy of the blockchain:

  • A pair of leaked cryptographic keys
  • Software for breaking the copy protection of DVDs
  • The text of a book
  • A cross-site scripting detector designed to detect XSS vulnerabilities in online blockchain parsers (which demonstrates the potential for embedding parser-exploiting malware directly into the blockchain);
  • Wedding photos
  • Emails
  • Chat logs
  • Personally identifiable information including phone numbers, addresses, bank accounts and passwords
  • A backup of the WikiLeaks “cablegate” data

It’s a collection that raises questions (different questions, in different parts of the globe) about intellectual property, copyright, data privacy and data retention.

For example, in the European Union individuals have a right to ask for their personal data to be deleted if it’s not needed, or if it has been used unlawfully.

How will that work on an ownerless system that’s designed to be the digital equivalent of engraved stone tablets?

Deleting things from the blockchain isn’t impossible, but it is extremely hard (which is, of course, entirely the point) because every block of transactions is cryptographically linked to the blocks that came before it.
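A toy model of that chaining, written in Python and deliberately ignoring proof-of-work, difficulty and every other real-world detail, shows why editing an old block invalidates everything built on top of it:

    import hashlib

    def block_hash(prev_hash, data):
        """Each block's hash commits to the previous block's hash, chaining them together."""
        return hashlib.sha256((prev_hash + data).encode()).hexdigest()

    h1 = block_hash("0" * 64, "block 1 transactions")
    h2 = block_hash(h1, "block 2 transactions")

    # "Deleting" something from block 1 changes its hash...
    h1_edited = block_hash("0" * 64, "block 1 transactions, with content removed")
    print(h1 == h1_edited)                                      # False

    # ...so block 2 (and every block after it) no longer commits to a valid
    # predecessor and would have to be rebuilt and, on the real network, re-mined.
    print(h2 == block_hash(h1_edited, "block 2 transactions"))  # False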

In order to delete some unwanted data from the blockchain, 51% of the nodes on the Bitcoin network would have to agree to the deletion, and then recalculate all of the blocks that had been added to the blockchain after the original insertion of unwanted content.

If you want to delete something that was added two years ago then 51% of the nodes on the network have to redo two years’ worth of transactions (that’s a number in the hundreds of thousands).

And they can’t share the work of that recalculation between them – all of the nodes, thousands of them, would each need to redo all of that work independently.

Bitcoin bookkeeping is designed to be really hard work for computers, and the global Bitcoin network already consumes a vast amount of electricity just keeping up with business as usual (estimates go as high as 30 TWh per year).

For Bitcoin users, breathless analysts and blockchain-based startups, the technology’s lack of central control is a blessing. For node operators, it could yet prove to be the opposite.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/CoBx9S3Ztj4/

Symantec cert holdout sites told: Those Google Chrome warnings are not a good look

Many high-profile UK sites still use Symantec certificates just days before Google begins the process of dropping support for them in upcoming releases of its Chrome browser.

Google’s looming disavowal of digital certificates issued by Symantec will occur across two effective dates, April and October.

Symantec certificates issued prior to 1 June 2016 will stop working with the Chrome 66 (stable) release* on 17 April 2018. The Chrome 70 release, expected in the week of 23 October 2018 will spell the end of trust for all Symantec-issued certificates, as explained in a blog post by Google here.

Google announced plans last September to “reduce, and ultimately remove, trust in Symantec’s infrastructure in order to uphold users’ security and privacy when browsing the web” after its Chrome team lost “confidence in the trustworthiness of Symantec’s infrastructure” following a series of alleged infractions against industry best practice.

These sanctions were imposed by the community rather than Google alone. The road to this particular perdition is explained in a long thread on Mozilla’s Dev Security Policy mailing list here.

Who needs a fix?

Security researcher Scott Helme makes use of web crawlers to collect daily data on the top 1 million sites. He created a script to go through the certificates collected by his crawlers, before parsing them all to see who is still using a Symantec certificate that will soon be distrusted.

“There are 11 sites in the top 10,000 sites on the web that will break in M66, 502 in the top 1 million sites [unless they replace their certificate],” Helme reports. “M70 is further away [October] but there’s still 4,971 sites that will break when that version is released.”

Helme’s latest figures are an update from a similar exercise he carried out in February, when he discovered 8,000 sites that will stop working in April or October unless they replace their certificate. Within this group, 1,321 were due to stop working in April, a figure that has since dropped to 502. “This list is not exhaustive, there’s bound to be a few more that were missed,” Helme told El Reg. “What I can say though is that the ones in the list are definitely affected.”
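For readers who want to run a similar check themselves, here is a rough sketch of the idea using only Python’s standard library. The issuer list is a simplified guess at the Symantec brand family and the logic is far cruder than Helme’s crawler, so treat it as illustrative rather than a reproduction of his script:

    import socket
    import ssl
    from datetime import datetime

    SYMANTEC_BRANDS = ("Symantec", "GeoTrust", "Thawte", "VeriSign", "RapidSSL")
    M66_CUTOFF = datetime(2016, 6, 1)   # certs issued before this date break in Chrome 66

    def symantec_cert_status(hostname, port=443):
        """Return None if the cert isn't from the Symantec family, otherwise the
        Chrome release ("M66" or "M70") that will first distrust it."""
        ctx = ssl.create_default_context()
        with socket.create_connection((hostname, port), timeout=10) as sock:
            with ctx.wrap_socket(sock, server_hostname=hostname) as tls:
                cert = tls.getpeercert()
        issuer = dict(pair[0] for pair in cert.get("issuer", ()))
        org = issuer.get("organizationName", "")
        if not any(brand.lower() in org.lower() for brand in SYMANTEC_BRANDS):
            return None
        issued = datetime.utcfromtimestamp(ssl.cert_time_to_seconds(cert["notBefore"]))
        return "M66" if issued < M66_CUTOFF else "M70"

    print(symantec_cert_status("example.com"))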

Sites whose digital certificates are slated for disavowal with the release of Chrome 66 next month include the RAC.

The RAC’s SSL cert is slated for disavowal within days

Left as things stand, surfers using Chrome 66 will see a big, red warning when they visit the RAC’s website. This is already happening for users of beta versions of Google’s Chrome 66 browser technology or users of Chrome Canary (which is always several versions ahead).

Warning in Chrome 66 beta thrown up by the RAC’s Symantec-issued SSL cert

Surfers can just click past such warnings but this is undesirable.

In response to queries from El Reg, the RAC said it was aware of the issue and the offending digital certificate will be swapped out before Chrome 66 goes mainstream, on 19 April. “This is something our team is aware of and new certificates will be applied to our sites shortly, and ahead of the next Chrome coming out of beta,” a spokesman for the motoring organisation told El Reg.

Children’s charity the NSPCC is also affected by the same Symantec cert browser warning issue. El Reg also notified the NSPCC but we’ve yet to hear back from that quarter.

Children’s charity the NSPCC needs to change its digital cert within days

Several prominent UK organisations need to re-up their certificates before October. These include ScotRail and banks in the RBS Group (Natwest, Royal Bank of Scotland and Ulster Bank), retailer House of Fraser and broadband outfit Gamma Fibre Ethernet.

RBS digital cert warning in Google Chrome console

IT firm SonicWall also needs to swap out its digital certificate before an October deadline.

El Reg identified issues in the named sites on Tuesday 20 March after going through a list supplied to us by Helme in October. Other organisations that needed to swap out their certificates have since done so. Businesses that have crossed “changing our digital certificates” off their to-do list include the UK National Lottery and car-park firm NCP.

Reg readers can verify these findings by going to any of these sites and looking in the Developer Tools bundled with their Chrome browser. In the console there will be an error message confirming these sites will stop working with the release of either M66 (Chrome 66 in April) or M70 (Chrome 70 in October).

Symantec sold off its entire CA business to DigiCert last August in preparation for exiting the market. Website operators have the option to transition to DigiCert or other providers. Those using a Symantec certificate will need to replace it soon or risk inadvertently erecting digital barriers to prospective customers.

“My worry is that the wider community doesn’t seem fully prepared for the distrust and the impact it will have,” Helme warned. ®

* Chrome Beta users get access on March 15, 2018


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/03/21/google_chrome_distrust/

UK surgeon suspects his PC was hacked to target Syrian hospital

A British surgeon whose instructions over the internet helped to guide operations in war-torn Aleppo fears his PC was hacked in order to target a makeshift hospital that was subsequently bombed.

Consultant David Nott gave remote instructions via Skype and WhatsApp that helped doctors in Syria carry out operations in early September. Footage of the process at work was broadcast on the BBC’s Newsnight in September 2016.

Dr Nott reckons his computer was then targeted by hackers seeking to pinpoint the M10 hospital, the Daily Telegraph reports. Less than a month later, the hospital was destroyed by a bunker buster-type bomb allegedly dropped by Russian warplanes.

The strike on 3 October 2016 – which scored a direct hit on the trauma hospital’s operating theatre – killed two patients, injured medical workers and resulted in its permanent closure.

Dr Nott said he suspects the precise targeting was made possible after hackers lifted coordinates for the hospital held on his computer. The consultant had carried out many operations in the course of training local physicians while in Syria but supervised only one operation remotely. After taking advice from those working on the ground, he intends to abandon the practice.

Pinpointing the location of a system after hacking one of its components is plausible even though there are other possible explanations for what went down in Aleppo and plenty of doubt over whether the doctor’s suspicions are correct.

Computer scientist Professor Alan Woodward, of the University of Surrey in England, told El Reg: “The details are a bit light in how his computer was hacked, but assuming it was then it would be easy to find the IP address and phone number of recipients.

“I’m not sure it necessarily needed to be a hack whilst he was actually communicating – easier to do it once the hackers knew who he was and from there they could collect the data they needed.”

The Syrian government would likely have access to technology that could pinpoint the physical locations of phones or computers, Professor Woodward added.

“Frankly it’s amazing that any form of comms works in these areas under these conditions but it’s difficult not to conclude that the Syrian government can access the metadata needed to pinpoint mobile numbers and IP addresses.”

Hospitals in Syria have been targeted, possibly in a cynical attempt to drive out populations from disputed areas by making them close to uninhabitable because of a lack of healthcare. Local doctors might be watched by spies or through other means.

The M10 hospital has been bombed at least 17 times. Dr Nott nonetheless thinks the precise coordinates of the operating theatre were obtained by clandestine computer hacking. If this is what happened, it might well have taken place after the broadcast rather than during the operation itself. Dr Nott has since changed both his computer and mobile phone. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/03/21/syria_hospital_bombing_hack_theory/

Cybersecurity Spring Cleaning: 3 Must-Dos for 2018


Why ‘Spectre’ and ‘Meltdown,’ GDPR, and the Internet of Things are three areas security teams should declutter and prioritize in the coming months.

With each successive data breach, the stakes for companies seem to get higher and higher, with more individuals affected and the costs for remediation escalating. That’s why it’s no surprise that a report published last July by insurance giant Lloyd’s of London estimates that a theoretical global cyberattack could trigger roughly $53 billion in economic losses – a figure that is comparable to record-shattering natural disasters such as 2012’s devastating Superstorm Sandy.

This forecast has serious ramifications for information security teams. It demonstrates that organizations following the latest security best practices of 2017 may still need to overhaul their cybersecurity strategy to combat tomorrow’s newest, most highly evolved threats. With spring just around the corner, along with the May 25 deadline for the EU’s General Data Protection Regulation (GDPR), now is a good time for businesses to focus on “spring cleaning” their data and their data collection and protection policies.

Here are three areas where security teams can declutter and reprioritize for spring 2018.

Fallible Hardware, Beefed up Security
Just a few days into the new year, security experts disclosed a flaw dating back some 20 years in the processors underpinning the majority of computing devices, unveiling vulnerabilities for almost every individual and business the world over. Called Spectre and Meltdown, the bugs abuse speculative execution to let malicious code read sensitive data from memory it should never be able to access, handing attackers who breach the network perimeter another route for data exfiltration.

While it’s impossible to stop every threat from entering the network perimeter, security teams should seek out tools that can stop attempts at this kind of data exfiltration in their tracks. Among them are data loss prevention (DLP) tools, which offer a line of defense when the advanced threat detection capabilities guarding the network gateway fail.
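Dedicated DLP products inspect far more than this, but a toy illustration of the underlying idea, flagging outbound payloads that appear to contain payment card numbers, might look like the following (Python, with a deliberately naive pattern):

    import re

    CARD_PATTERN = re.compile(r"\b\d{16}\b")

    def luhn_ok(digits):
        """Standard Luhn checksum, used to weed out random 16-digit numbers."""
        total, alt = 0, False
        for d in reversed(digits):
            n = int(d)
            if alt:
                n *= 2
                if n > 9:
                    n -= 9
            total += n
            alt = not alt
        return total % 10 == 0

    def flag_outbound_payload(payload):
        """Return True if the outbound payload appears to contain card numbers."""
        return any(luhn_ok(m) for m in CARD_PATTERN.findall(payload))

    print(flag_outbound_payload("order ref 4111111111111111 shipped"))  # True: passes Luhn
    print(flag_outbound_payload("session id 1234567890123456"))         # False: fails Luhn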

New Regs, Increased Measurement Monitoring
It may seem counterintuitive to suggest that security teams “declutter” by doing more reporting on the activity taking place on their network. But the fact is, in the run-up to GDPR, if your existing security tools aren’t keeping tabs on potentially anomalous network traffic, especially traffic related to data collection, your company will be ill-prepared to meet the new GDPR compliance requirements and a bevy of other rules going into effect in the coming months.

Short- and Long-Term Strategy for Internet of Things
Even if your organization hasn’t yet embarked on a wide-scale IoT deployment, you probably will in the near future: IDC forecasts that worldwide spending on the Internet of Things will reach $772 billion in 2018. As teams continue to beef up their traditional enterprise networks, now is the time to also begin thinking about how they can secure the oncoming enterprise IoT.

What will this entail? Organizations can start by deciding whether IoT devices will leverage the same gateways and network defenses used for standard connectivity on their existing network. Teams may find it more effective to deploy a dedicated network and administration team to manage the high-frequency, low-energy, beacon-sensor transmissions that characterize the IoT in parallel with larger network connectivity.

The Better Business Bureau and the National Cyber Security Alliance offer a valuable checklist for digital spring cleaning strategies. But security teams will need to go above and beyond to make sure their plans, policies, and tools are ready to defend against current and future advanced threats. What better time than now to get started?

 

Paul Martini is the CEO, co-founder and chief architect of iboss, where he pioneered the award-winning iboss Distributed Gateway Platform, a web gateway as a service. Paul has been recognized for his leadership and innovation, receiving the Ernst Young Entrepreneur of The … View Full Bio

Article source: https://www.darkreading.com/partner-perspectives/iboss/cybersecurity-spring-cleaning-3-must-dos-for-2018/a/d-id/1331279?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Online Sandboxing: A Stash for Exfiltrated Data?

SafeBreach researchers extend leaky sandbox research to show how services like VirusTotal and Hybrid Analysis could be used to steal data from air-gapped systems.

Online sandboxing services are an extremely important resource for threat hunters to use to consolidate knowledge and collaborate about emerging threats. However, new research out this week shows that the tables can be turned: services like VirusTotal or Hybrid Analysis can themselves become a channel for exfiltrating data from heavily locked-down systems.

Published by researchers with SafeBreach, the report released today adds a new layer to previous work the team did on leaky sandbox services used by cloud-based antivirus platforms. Presented at Black Hat last summer, that earlier research demonstrated how researchers could exfiltrate data from air-gapped and otherwise disconnected systems by tricking cloud-enhanced antivirus agents into sending stolen data on to a criminal’s server.

This time around, the SafeBreach team used the same wolf-in-sheep’s-clothing principle to offer a proof of concept for how online sandboxing services could be used as an outbound channel for stolen sensitive data. Unlike the previous research, here the attacker never needs to communicate out of the sandbox in order to retrieve the stash.

Here is the way it works: a piece of custom malware is designed to be found on the system and uploaded to an online sandboxing service like VirusTotal, either by a human security analyst or a tool integrated with the online service. Unbeknownst to the victim, the stolen data is encoded and embedded in that malware. And so is a secret and searchable passphrase.

“One of the main objectives that needs to be addressed by the attacker is how to make the data that they incorporate into the malware visible to him – to be identified by him after the malware is uploaded to the server – but still be hidden enough that any other user cannot identify that the data is (taken) from an organization, or from any malicious contents inside it,” says Dor Azouri, one of the primary researchers for SafeBreach on this project.

Azouri explains that this can be achieved by developing a “magic” string of text and either encrypting or encoding it in such a way that the attacker can make a simple query into the public sandboxing platform in order to pull up that specific piece of malware that’s been uploaded to the database.

By passively collecting the data in this way, the attacker doesn’t need to emit outbound network traffic or run an HTTP server or authoritative DNS server, according to Azouri, noting that the process is very low-profile and can be done on even extremely isolated systems. The downside, he says, is that the attacker must know – or at least bet heavily – that their targeted victim is using an online sandbox engine as part of their normal security processes.
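There is no patch for a technique like this, but defenders can add friction before samples leave the building. One cheap, illustrative check (not part of the SafeBreach proof of concept, and using a hypothetical sample filename) is to look for unusually long base64-like runs in a binary before shipping it off to a public sandboxing service:

    import re

    BASE64_RUN = re.compile(rb"[A-Za-z0-9+/=]{120,}")   # length threshold is an arbitrary choice

    def suspicious_embedded_blobs(path):
        """Return long base64-looking runs found inside a binary, worth a second
        look before the sample is submitted to a public sandboxing service."""
        with open(path, "rb") as f:
            data = f.read()
        return [m.group(0)[:40] + b"..." for m in BASE64_RUN.finditer(data)]

    for blob in suspicious_embedded_blobs("suspect_sample.bin"):   # hypothetical filename
        print("possible smuggled payload:", blob)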

According to a VirusTotal spokesperson, the SafeBreach research presented here isn’t a bug in VirusTotal; it is just a way for attackers to misuse the same kind of upload, publish, and retrieve functionality found in any number of services widely used by enterprises today. Of course, SafeBreach’s point is that this method is meant to serve as the last-mile connection between an attacker and an otherwise well-fortified system. A sensitive system like that wouldn’t be using any other kind of SaaS-based information-sharing platform, but the security team might not think twice about sending off a sample to VirusTotal from it.

Nevertheless, VirusTotal’s spokesperson contends that this isn’t a likely or practical channel for attackers.

“The approach laid out by Safebreach does not make practical sense from an attacker perspective,” the spokesperson says. “As an attacker, you don’t want to use a channel that is being inspected by the whole security industry to exfiltrate data when there are thousands of other public internet services that could accomplish the same thing much more discreetly.”


Ericka Chickowski specializes in coverage of information technology and business innovation. She has focused on information security for the better part of a decade and regularly writes about the security industry as a contributor to Dark Reading.  View Full Bio

Article source: https://www.darkreading.com/cloud/online-sandboxing-a-stash-for-exfiltrated-data/d/d-id/1331327?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple