STE WILLIAMS

You *did* encrypt that USB drive… didn’t you? [Chet Chat Podcast 265]

In this episode of the Chet Chat podcast, Sophos expert Chester Wisniewski interviews New York technology journalist Paul Wagenseil about the latest security issues.

If you enjoy the podcast, please share it with other people interested in security and privacy and give us a vote on iTunes and other podcasting directories.

Listen and rate via iTunes...
Sophos podcasts on Soundcloud...
RSS feed of Sophos podcasts...


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/atF1pXL7RIw/

Manic miners, hideous hackers, frightful flaws, vibrating mock cock app shock – and more

Roundup Phew, we made it to the weekend. Let’s take a look at everything that went down in IT security beyond what we’ve already covered this week.

The week started badly after an anonymous individual managed to bork the Parity Ethereum wallet and lock up $280m worth of the crypto-currency – an act that may or may not have been accidental. And speaking of alt-coins and non-accidents, criminals are really keen to get you mining digital cash for them, using your computers and your electricity supply.

So-called drive-by-mining software, which uses the spare CPU cycles of a computer visiting a website, has been around for a while. A new strain of Monero-crafting JavaScript code called Papoto came to light after its developers rather stupidly offered it to an ethical white hat hacker in the UK – who promptly blew the whistle on Twitter…

Thankfully many antivirus and ad-blocking programs are getting good at spotting and blocking such code, but we’re certainly not out of the woods yet. Mining code running on smartphones is also on the rise, with one researcher finding that Google’s Play Store was once again hosting crafty coin crafters.
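Those blockers typically work from pattern lists, much like ad-block filter lists. Here's a minimal, illustrative sketch of that approach – the patterns and URLs below are made up for the example, not taken from any real blocklist:

```python
import re

# Illustrative patterns only -- real blockers ship far larger,
# community-maintained filter lists that are updated constantly.
MINER_PATTERNS = [
    re.compile(r"coinhive\.min\.js", re.I),
    re.compile(r"cryptonight", re.I),
    re.compile(r"/miner\.js(\?|$)", re.I),
]

def is_mining_script(url: str) -> bool:
    """Return True if a script URL matches a known miner pattern."""
    return any(p.search(url) for p in MINER_PATTERNS)

print(is_mining_script("https://evil.example/coinhive.min.js"))  # True
print(is_mining_script("https://cdn.example/jquery.min.js"))     # False
```

Pattern matching is cheap but easy to evade by renaming scripts, which is why some blockers also watch for sustained CPU usage from background scripts.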

One example is an Android crossword puzzle that was worryingly smart. To evade detection it only runs the coin mining code at night, when people are asleep, or when the phone is plugged in to charge – nothing kills a battery like persistent coin mining, so digging up cyber-dosh when hooked up to the mains is a neat idea.

Another miner was found in an Android app called Reward Digger: this one actually told users the coins were being generated for them, while not mentioning that it was also secretly mining coins for the developer. Mobile phones are going to become increasingly popular targets as processor speeds increase, and because few people use security software on their smartphones.

Hardcore hacking

Over to the Windows desktop world, and the headache of miscreants hijacking PCs via Dynamic Data Exchange (DDE) documents is getting much, much worse.

DDE has been around for decades – it first appeared in Windows 2.0 back in 1987 – and was a good idea at the time, allowing, for instance, an Excel spreadsheet to be embedded and editable in a Word document. The downside is that hackers have realized it is a very handy way to trick marks into executing malicious code smuggled into files.

Now McAfee has spotted that APT28 – aka the Fancy Bear crew thought to be part of Russian military intelligence – has adopted the technique. There are patches available from Microsoft to combat techniques exploiting DDE, so make sure you are fully protected.

Speaking of potential state-sponsored hacking, Symantec has spotted a new crew called Sowbug that’s going after government targets in South America and Southeast Asia, with successful attacks against Argentina, Brazil, Ecuador, Peru, Brunei and Malaysia in a two-year campaign.

The group is looking for specific government data relating to Asian policy, and is very stealthy, in some cases hiding out on networks for up to six months. It obfuscates its custom malware – dubbed Felismus – by disguising it as legitimate Windows and Adobe software.

It’s not known who is behind Sowbug, but it would be a country with advanced hacking capabilities interested in global policy towards Asia. Any guesses?

Meanwhile hackers managed to hijack and deface hundreds of school websites across the US with a pro-Daesh-bag message and images of Saddam Hussein on Monday.

“Team System Dz” – a hacking crew with plenty of form in this area – claimed responsibility for the mass defacements. Most of the affected organizations were hosted by web hosting firm SchoolDesk. An example of one of the hacks was recorded by defacement archive Zone-h.

Frightful flaws

On the flaw front, there’s news of an old flaw that might be much worse than first thought. Earlier in the month we reported on a flaw found in the library code of Infineon trusted platform modules, which are used to generate encryption keys in a huge number of devices, from computers and phones to security keys and identity cards.

At first people weren’t too worried because the keys generated weren’t that weak – you’d need around $30,000 of computer time to crack data secured by the vulnerable modules. But better techniques have since been developed, and as a precaution Estonia has announced that it is cancelling and reissuing every ID card in the country, because the cards rely on Infineon’s busted code.

Estonia is particularly touchy about it because it has one of the most internet-focused governments out there and is highly dependent on the cards. It also has Russia as a neighbor, and fears President Putin and his pals are coming to claim back the Baltic States – and may kick things off with a little meddling in the national ID card system.

Bracket Computing has added detection of advanced persistent threats to its Bracket Security Software product.

Dubbed ServerGuard, the software runs in what the company calls a metavisor, an agent-like software layer that sits between guest VMs and the hypervisor. The metavisor can monitor activity in a guest VM, but is immutable.

ServerGuard takes advantage of that position to inspect guests for changes that suggest the presence of malware, such as changes to files that can only be written with root access. Bracket’s CEO told The Register he feels that watching that sort of thing would have stopped plenty of recent attacks.

If ServerGuard sees the fingerprints of such an attack, policy-driven responses such as snuffing out a VM come into play. ServerGuard and the metavisor can run alongside on-prem or cloudy VMs.
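Bracket hasn’t published the metavisor’s internals, but the check it describes – spotting changes to files that should only ever be written with root access – resembles classic file-integrity monitoring: record a trusted hash of each watched file, then alert on drift. A hypothetical sketch of that idea (not Bracket’s actual implementation):

```python
import hashlib
from pathlib import Path

def hash_file(path: Path) -> str:
    """SHA-256 of a file's contents."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def baseline(paths):
    """Record a trusted hash for each monitored file."""
    return {str(p): hash_file(Path(p)) for p in paths}

def detect_changes(base):
    """Return the files whose contents no longer match the baseline."""
    return [p for p, digest in base.items() if hash_file(Path(p)) != digest]
```

In a real deployment the baseline would live outside the guest – exactly the advantage of Bracket’s position between the VM and the hypervisor, where the monitored system can’t tamper with the monitor.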

Another flaw story just came in, although this one is more psychological. For years we have been told to trust HTTPS sites as more secure, but hackers have got wise to that.

A lot of new phishing webpages are being set up with HTTPS enabled – about one every two minutes, according to security shop Wandera. The company scanned new security certificate applications for a day and found new TLS/SSL cert registrations coming in at an average rate of 587,436 an hour; of those, 38 an hour were affiliated with phishing sites.

Wandera warns that mobile users are particularly at risk, since the small screen makes URL checking a pain, and users may just see the HTTPS padlock on the phishing page and assume it is legit. The top domains for phishers were Apple, WhatsApp, Amazon and Netflix.
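One common heuristic for spotting such lookalike registrations is string similarity against well-known brand names. This toy sketch uses Python’s stdlib `difflib`; the brand list, threshold, and example domains are illustrative, not taken from Wandera’s methodology:

```python
from difflib import SequenceMatcher

# The brands most impersonated, per the article. Illustrative list only.
BRANDS = ["apple", "whatsapp", "amazon", "netflix"]

def suspicious(domain: str, threshold: float = 0.75) -> bool:
    """Flag domains that embed or closely resemble a known brand name."""
    name = domain.split(".")[0].lower()
    for brand in BRANDS:
        if name == brand:
            return False  # exact match -- could be the real thing
        if brand in name:
            return True   # brand embedded, e.g. "appleid-secure"
        if SequenceMatcher(None, name, brand).ratio() >= threshold:
            return True   # close misspelling
    return False

print(suspicious("appleid-secure.example"))  # True
```

Real detectors also handle homoglyphs (rn vs m, 0 vs o) and punycode domains, which simple ratio checks miss.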

And finally, a story that will send shivers down your spine in more ways than one. It turns out a software flaw might be recording remote lovers’ most intimate moments.

The problem comes with an app controlling a vibrator from teledildonics maker Lovense. The sex toy is designed so it can be controlled remotely over the internet, and it monitors the phone’s microphone to let you whisper sweet nothings in your partner’s ear while pleasuring them from afar.

One small problem however – the Android version of the app was also taking temporary audio recordings of the sounds around the smartphone, recording potentially telling noises. Thankfully the manufacturer assures us the sounds stay on your phone, not its servers, and the app has now been fixed to avoid generating the recordings. ®

Sponsored:
The Joy and Pain of Buying IT – Have Your Say

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/11/11/security_roundup/

Parity’s $280m Ethereum wallet freeze was no accident: It was a HACK, claims angry upstart

A crypto-currency collector who was locked out of his $1m Ethereum multi-signature wallet this week by a catastrophic bug in Parity’s software has claimed the blunder was not an accident – it was “deliberate and fraudulent.”

On Tuesday, Parity confessed all of its multi-signature Ethereum wallets – which each require multiple people to sign off on transactions – created since July 20 were “accidentally” frozen, quite possibly permanently locking folks out of their cyber-cash collections. The digital money stores contained an estimated $280m of Ethereum; 1 ETH coin is worth about $304 right now. The wallet developer blamed a single user who, apparently, inadvertently triggered a software flaw that brought the shutters down on roughly 70 crypto-purses worldwide.

That user, known on GitHub as devops199 (the account has since been deleted), claimed they created a buggy wallet and tried to delete it. Thanks to a programming blunder in Parity’s code, that act locked down all wallets created after July 20, when Parity updated the multi-signature wallet software following a $30m robbery.


One of those now-frozen Ethereum wallets belongs to Cappasity, a startup building an online marketplace for AR and VR 3D models. It says it had 3,264 ETH in the knackered Parity money store, worth about $1m at current prices, and isn’t likely to get the funds back any time soon. Cappasity amassed the Ethereum from punters buying ARtokens, which can be exchanged for designs when the souk launches later this year. The biz still has access to the Bitcoins it received for ARtokens.

Now Cappasity has alleged the wallet freeze was no accident: someone deliberately triggered the mass lock down, we’re told, and there’s evidence to prove it. By studying devops199’s attempts to extract and change ownership of ARToken’s and Polkadot’s smart contracts, Cappasity concluded the user was maliciously poking around, eventually triggering the catastrophic bug in Parity’s software.

“Our internal investigation has demonstrated that the actions on the part of devops199 were deliberate,” said Cappasity’s founder Kosta Popov in a statement this week.

“When you are tracking all their transactions, you realize that they were deliberate… Therefore, we tend to think that it was not an accident. We suppose that this was a deliberate hacking. We believe that if the situation is not successfully resolved in the nearest future, contacting law enforcement agencies may be the right next step.”

This rather gives the lie to the idea that this was a one-off accident. Instead, it looks as though devops199 was deliberately trying to break the multi-sig system and took a number of tries to do so.

While the Ethereum in the wallets is untouched, it is simply not accessible. Parity has yet to issue an update on its progress to recover the currency, and did not reply to requests for comment today. That’s not making customers like Cappasity very happy. If someone calls the cops on this, quite how the police would handle the case is unclear, given the current levels of cluelessness displayed by law enforcement on matters technical. So don’t hold your breath for a speedy resolution. ®

Sponsored:
The Joy and Pain of Buying IT – Have Your Say

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/11/10/parity_280m_ethereum_wallet_lockdown_hack/

The teen who bought a car bomb on the Dark Web

A British teenager was found guilty this week of trying to buy a car bomb on the Dark Web.

According to the UK’s National Crime Agency, the cops cuffed 19-year-old Gurtej Randhawa, of Wightwick, in the West Midlands, in May after he accepted a package delivered to his home address that he thought was a remote-detonated explosive device.

An investigation led by the National Crime Agency’s Armed Operations Unit (AOU) had indicated that Randhawa had tried to purchase what’s technically known as a Vehicle Borne Improvised Explosive Device (VBIED). NCA officers swapped out the package with an inert dummy device before they allowed it to be delivered to the address Randhawa had specified.

Investigators waited until he tried to test the device. Then, they arrested Randhawa and two women, aged 18 and 45, who were later released without charge.

Randhawa had earlier pleaded guilty to attempting to import explosives, but he denied maliciously possessing an explosive substance with intent to endanger life or cause serious injury. He was found guilty of the latter charge by Birmingham Crown Court on Tuesday, according to The Register.

Tim Gregory, from the NCA’s Armed Operations Unit, said in the NCA’s statement that the car bomb Randhawa tried to buy “had the potential to cause serious damage and kill many people if he had been successful in using it.” He also said that Randhawa wasn’t involved in organized crime. Nor was he linked to terrorism, Gregory said (though it’s hard to fathom a car bombing that wouldn’t fall under the category of terrorism).

Randhawa is in custody and will face sentencing on 12 January 2018.

As The Register notes, we don’t know how investigators first got wind of Randhawa’s plans. Was he already on a watch list? Was the bomb sniffed out while in transit? Or perhaps the would-be bomber tried to buy it off a Dark Web shop that had been infiltrated by law enforcement? We just don’t know, though those are all possibilities and they’ve all happened before.

When somebody gets busted on the Dark Web, people’s minds often turn to Tor, as in, has the FBI or another law enforcement outfit cracked it?

They have reasons to worry: as we’ve noted, although it’s difficult, there are attacks that can strip Tor users’ anonymity away. The most often cited is probably the correlation attack, a sophisticated technique rumoured to have been used in the 17-nation Dark Web bust Operation Onymous. Correlation attacks would likely rely on law enforcement or intelligence agencies having access to a significant number of Tor’s entry guard or exit node computers.

There are many, much simpler ways to get busted for criminal acts carried out on the Dark Web… Besides having a Dark Web-purchased car bomb delivered to your home address, that is. Crooks have given themselves away with missteps like these:

  • A suspected Dark Web drug lord was undone by his own beard. US cops managed to grab him without the hassle of extradition when he left France for the first time ever in order to attend a beard contest in the US.
  • People get caught when they slip out from under Tor and go somewhere on the regular web to get faster downloads. That’s how the US Department of Homeland Security (DHS) has identified several Tor users suspected of using a Dark Web site to post links to child abuse imagery: they allegedly got the material from a file-sharing service that offered faster downloads than Tor.
  • Sting operations. The mother of all Dark Web stings is arguably Playpen, the Dark Web site dedicated to child sex abuse that the FBI took over, turned into a honeypot, and used to inflict police malware onto tens of thousands of computers worldwide. It resulted in hundreds of criminal cases against Tor users that are still playing out in the courts.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/K3DA6oTJEmU/

WikiLeaks drama alert: CIA forged digital certs imitating Kaspersky Lab

The CIA wrote code to impersonate Kaspersky Lab in order to more easily siphon off sensitive data from hack targets, according to leaked intel released by Wikileaks on Thursday.

Forged digital certificates were reportedly used to “authenticate” malicious implants developed by the CIA. Wikileaks said:

Digital certificates for the authentication of implants are generated by the CIA impersonating existing entities. The three examples included in the source code build a fake certificate for the anti-virus company Kaspersky Laboratory, Moscow pretending to be signed by Thawte Premium Server CA, Cape Town. In this way, if the target organization looks at the network traffic coming out of its network, it is likely to misattribute the CIA exfiltration of data to uninvolved entities whose identities have been impersonated.

Eugene Kaspersky, chief exec of Kaspersky Lab, sought to reassure customers. “We’ve investigated the Vault 8 report and confirm the certificates in our name are fake. Our customers, private keys and services are safe and unaffected,” he said.

Hackers are increasingly abusing digital certs to smuggle malware past security scanners. Malware-slinging miscreants may not even need to control a code-signing certificate. Security researchers from the University of Maryland found that simply copying an authenticode signature from a legitimate file to a known malware sample – which results in an invalid signature – can result in antivirus products failing to detect it.
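Authenticode itself is a public-key scheme, but the core point of the Maryland finding – a signature copied onto different bytes no longer verifies, yet some scanners fail to check – can be illustrated with a toy stdlib-only "signer" (an HMAC stand-in, not real Authenticode):

```python
import hmac
import hashlib

KEY = b"toy-signing-key"  # stand-in for a vendor's signing key

def sign(data: bytes) -> bytes:
    """Produce a signature over the exact file bytes."""
    return hmac.new(KEY, data, hashlib.sha256).digest()

def verify(data: bytes, signature: bytes) -> bool:
    """A signature is only valid for the bytes it was computed over."""
    return hmac.compare_digest(sign(data), signature)

legit = b"legitimate installer bytes"
malware = b"malicious payload bytes"

sig = sign(legit)
print(verify(legit, sig))    # True
print(verify(malware, sig))  # False -- the copied signature is invalid
```

The researchers’ point was that an invalid signature should be treated as at least as suspicious as no signature, yet some antivirus engines effectively treated "signed-looking" as "safe."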


Independent experts reckon the CIA used Kaspersky because it’s a widely known vendor.

Martijn Grooten, security researcher and editor of industry journal Virus Bulletin, said: “The CIA needed a client certificate to authenticate its C&C comms, couldn’t link it to CIA and used ‘Kaspersky’, probably just because they needed a widely used name. No CA hacking or crypto breaking involved. Clever stuff, but not shocking. Not targeted against Kaspersky.”

Revelations about the abuse of digital certificates by the US spy agency came as Wikileaks released CIA source code and logs for a malware control system called Hive, as previously reported.

Security expert Professor Alan Woodward criticised the release with a reference to the Equation Group (NSA hacking unit)/Shadow Brokers leak. “Wikileaks is now releasing source for exploits in Vault 7. Do they remember what happened last time such exploit code was leaked? Standby for another WannaCry.” ®

Sponsored:
The Joy and Pain of Buying IT – Have Your Say

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/11/10/cia_kaspersky_fake_certs_ploy/

Microsoft president says the world needs a digital Geneva Convention

Microsoft president Brad Smith appeared before the UN in Geneva on Thursday to talk about the growing problem of nation-state cyber attacks.

Smith, also Redmond’s chief legal officer, last month publicly accused North Korea of the WannaCry ransomware attack.

During the UN session on internet governance challenges, Smith made the case for a cyber equivalent of the Geneva Convention. He started off by noting the sorry state of IoT security before arguing that tech firms and government each have a role to play in reining in the problem.

“If you can hack your way into a thermostat, you can hack your way into the electric grid,” Smith said, adding that the tech sector has the first responsibility for improving internet security because “after all we built this stuff”.

Microsoft is doing its bit by using a combination of technology and legal action to seize hacked domains at the centre of attacks. Redmond has helped customers in 91 countries by seizing 75 such domains, Smith said.

In addition, Microsoft spends $1bn on security innovation a year.

International tensions are increasingly spilling out into cyberspace. Recent examples include alleged Russian meddling, through leaks and social media propaganda, in last year’s US presidential election, and attacks on banks hooked up to the SWIFT banking network and on digital currency exchanges, supposedly by units of North Korean intelligence. Further back there’s the infamous Stuxnet sabotage campaign against Iranian nuclear facilities, a joint US/Israeli operation.

“Nation states are making a growing investment in increasingly sophisticated cyber weapons,” Smith said. “We need a new digital Geneva Convention.”

“Government should agree not to attack civilian infrastructures, such as the electrical grid or electoral processes,” he said, adding that nation states should also agree not to steal intellectual property.

Existing rules for political advertising in print and broadcast media should be extended to social media, Smith suggested. A framework to extend existing international law into the realm of cyber-conflict already exists in the shape of the Tallinn Manual.

Smith argued that tech companies needed to be neutral in cyber-conflict and help their customers wherever they might be.

Workers and consumers also have a part to play, particularly when it comes to resisting phishing emails.

“90 per cent of attacks begin with someone clicking on an email… We need to protect people from their bad habits,” he noted. ®

Sponsored:
The Joy and Pain of Buying IT – Have Your Say

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/11/10/microsoft_president_calls_for_digital_geneva_convention/

How did someone hijack your Gmail? Phishing, keylogger or password reuse, we’re guessing

Google has teamed up with computer scientists at the University of California, Berkeley, to find out how exactly hijackers take over its users’ accounts.

The eggheads peered into online black markets where people’s login details are bought and sold to get an idea of the root cause of these account takeovers and the subsequent theft of people’s sensitive personal information. Apparently, just over one in ten netizens have reported attempts by miscreants to commandeer their social network and email accounts.

Unsurprisingly, passwords are mainly stolen via phishing attacks or keyloggers, or are reused by people on multiple websites and services that are later hacked, spilling the keys to their other accounts. In a report published on Thursday, the team noted:

Our research tracked several black markets that traded third-party password breaches, as well as 25,000 blackhat tools used for phishing and keylogging. In total, these sources helped us identify 788,000 credentials stolen via keyloggers, 12 million credentials stolen via phishing, and 3.3 billion credentials exposed by third-party breaches.

While our study focused on Google, these password stealing tactics pose a risk to all account-based online services. In the case of third-party data breaches, 12 per cent of the exposed records included a Gmail address serving as a username and a password; of those passwords, 7 per cent were valid due to reuse. When it comes to phishing and keyloggers, attackers frequently target Google accounts to varying success: 12-25 per cent of attacks yield a valid password.

However, because a password alone is rarely sufficient for gaining access to a Google account, increasingly sophisticated attackers also try to collect sensitive data that we may request when verifying an account holder’s identity. We found 82 per cent of blackhat phishing tools and 74 per cent of keyloggers attempted to collect a user’s IP address and location, while another 18 per cent of tools collected phone numbers and device make and model.

By ranking the relative risk to users, we found that phishing posed the greatest threat, followed by keyloggers, and finally third-party breaches.
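The 7 per cent reuse figure is why many services now check submitted passwords against breach corpora. A common approach is the k-anonymity range query popularised by Have I Been Pwned’s Pwned Passwords service: the client sends only the first five hex characters of the password’s SHA-1 and matches the suffix locally. This toy sketch simulates the scheme offline against a tiny, made-up corpus:

```python
import hashlib

# Toy stand-in for a breach corpus, keyed by the first five hex chars of
# each password's SHA-1. Real corpora hold billions of entries.
CORPUS = {}

def _add(password: str) -> None:
    digest = hashlib.sha1(password.encode()).hexdigest().upper()
    CORPUS.setdefault(digest[:5], set()).add(digest[5:])

for leaked in ("password1", "letmein", "hunter2"):
    _add(leaked)

def is_breached(password: str) -> bool:
    """Range-query the corpus: only the 5-char prefix identifies the bucket,
    so the full hash is never revealed to the corpus holder."""
    digest = hashlib.sha1(password.encode()).hexdigest().upper()
    return digest[5:] in CORPUS.get(digest[:5], set())
```

In the real service the `CORPUS.get(...)` step is an HTTPS request returning every suffix in that bucket, which is what preserves the querier’s privacy.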

Per Thorsheim, an infosec bod who founded the PasswordsCon conference, praised Google’s “solid research.”

“I’m impressed,” he told us. “This is very useful for both research and practical improvements. Having said that, I’m afraid many don’t have the mandate, budget or understanding that this isn’t just a threat to Google; it is a threat to almost anything online.”

Google has applied insights gleaned from its research to better protect its user accounts, we’re told: for example, through its recently announced advanced protection program that uses two-factor authentication tokens. It hopes other online services take a look at the findings and shore up their defenses, too. Above all, Google’s indirectly saying: if your Gmail account gets hacked, it’s your fault for losing your password, and not because we did a Yahoo!

The research was presented at this year’s Conference on Computer and Communications Security (CCS) conference under the title, Data breaches, phishing, or malware? Understanding the risks of stolen credentials. ®

Sponsored:
The Joy and Pain of Buying IT – Have Your Say

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/11/10/google_password_hijack/

Inhospitable: Hospitality & Dining’s Worst Breaches in 2017

Hotels and restaurants are in the criminal crosshairs this year.

The good news for this year is that the megabreaches at large retail chains like the ones that plagued Target, Home Depot, TJX and the like have been largely absent from the news cycles in 2017. But that doesn’t mean we’re out of the woods with point-of-sale breaches just yet. In fact, the hackers may be turning their sights to hoteliers and restaurants as department stores, grocery chains and other traditional retailers start to improve their security practices. The following high-profile incidents are evidence of this mounting trend. 

 

Ericka Chickowski specializes in coverage of information technology and business innovation. She has focused on information security for the better part of a decade and regularly writes about the security industry as a contributor to Dark Reading. View Full Bio

Article source: https://www.darkreading.com/endpoint/inhospitable-hospitality-and-dinings-worst-breaches-in-2017/d/d-id/1330325?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Why Common Sense Is Not so Common in Security: 20 Answers

Or, questions vendors need to ask themselves before they write a single word of marketing material.

I believe it was Voltaire who noted that “common sense is not so common.” Most of us can relate to this sentiment quite well, just based on the various scenarios we encounter day to day. But I think that this quote is particularly apt for the security field.

For example, how many times have you seen breaches or intrusions that could have been prevented with simple industry best practices? Or, how many times have you encountered people who are fixated on a particular type of product or service, even though it may not fit well with their security architecture or where they are maturity-wise as an organization?

I find this is particularly true within the security market. Amid all the confusion and noise, I would expect security vendors to try and rise above the madness in an attempt to articulate their true value. Instead, it just looks like everyone is trying to shout the same message a bit louder than the next person. So in the spirit of trying to be helpful, let me present: 20 questions security vendors need to ask themselves before they write a single word of marketing material.

Image Credit: DuMont Television/Rosen Studios. Public domain, via Wikimedia Commons.

  1. Do you know your buyers? Ramming hype and buzz down people’s throats may win you some meetings and get you some attention. But it generally won’t get you the attention of those who would seriously evaluate your solution. The real decision makers have been around the block a few times and are not as easily fooled as you might think.  It pays to understand them, and then to tailor your marketing material to your intended audience.
  2. Do you understand the value of your solution? If you don’t understand the value that your solution provides to a security team, you can’t begin to explain why they should buy it.
  3. Do you understand where your solution fits within a security program? If you don’t know how your solution fits within my security program, how can you position it to me in a sales meeting?
  4. Does your solution add value to the security program? I have too many tools as it is. If I am going to acquire another one, it really needs to add some value.
  5. Do you know what life is like day to day in a security program? If you don’t understand the challenges I face, don’t be surprised if I roll my eyes when you tell me you have the solution I’ve been waiting for.
  6. Does your solution help to ease the pain or merely create more? If you’ve ever worked in an operational security position, you know the pain I’m talking about.  If I think a solution is going to add to that pain, I’m not going to buy it.
  7. Is your solution an alert cannon/false-positive generation system? I have plenty of alerts and false positives already.  If your plan is to deluge me with even more, count me out.
  8. Do you understand the problems your buyers face? How can you proffer a solution if you don’t grasp what the real problems are in the first place?
  9. Does your solution solve one of those problems? It is far easier to get my attention and make a sale when your solution solves a problem I am actually looking to solve.
  10. Do your buyers understand what problem you solve? This is perhaps one of the most important points when it comes to marketing materials. Try communicating the value of your solution in the language of the buyer. If I can’t understand what you’re selling me, it’s going to be hard for you to pitch it to me.
  11. Are buyers in the market for solutions that solve the problem you solve? Some security products are solutions looking for a problem. A solution has to be both good and relevant in order to sell.
  12. Does your solution really do what you say it does? It is possible to keep up appearances for some amount of time, but sooner or later, the truth comes out.
  13. How many markets do you really cover? Don’t tell me you span 10 different security markets. There is no way that is true, so save your breath.
  14. Are you anchoring your product marketing around the buzzword of the day? Be careful — once this buzzword passes, so might the value of your messaging!
  15. Are you practicing ambulance chasing? Tempted to say that your solution would have prevented the latest big breach to hit the news? Don’t. No one wants to hear it.
  16. Are you running after the latest shiny object du jour? Big data? Security analytics?  Artificial intelligence? Next-generation endpoint? Do you really know what those words you’re throwing around mean, and how they are internalized by your potential buyers?
  17. Is that really a white paper, or is it a sales pitch in ink form? I enjoy reading white papers that provide me some insight and value and pique my interest in a solution. Not marketing fluff disguised as a white paper.
  18. Does your marketing material feature studies from institutes that are known to find whatever results you want them to, for the right price? Yeah … just keep moving right along.
  19. Are you prepared to give meaningful content at your events or in your webinars? What goes a long way when I’m making a buying decision is whether I actually learn something.
  20. After an interaction with you, do prospective buyers leave with something they can take home and apply? For example, do you offer analytical techniques they can use to identify suspicious or malicious traffic?

What questions do you want vendors to ask before they make their sales pitch? Share your thoughts in the comments. 

Related Content:

Join Dark Reading LIVE for two days of practical cyber defense discussions. Learn from the industry’s most knowledgeable IT security experts. Check out the INsecurity agenda here.

Josh is an experienced information security leader with broad experience building and running Security Operations Centers (SOCs). Josh is currently co-founder and chief product officer at IDRRA. Prior to joining IDRRA, Josh served as vice president, chief technology officer, … View Full Bio

Article source: https://www.darkreading.com/endpoint/why-common-sense-is-not-so-common-in-security-20-answers/a/d-id/1330351?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

6 Steps for Sharing Threat Intelligence

Industry experts offer specific reasons to share threat information, why it’s important – and how to get started.

Image Source: Bee Bright via Shutterstock

Threat information-sharing first started getting more attention and interest in the cybersecurity industry after the 9/11 terror attacks.

So you’d think by now it would be a routine process, especially with the volume of high-profile data breaches in the past few years. But while there has been much progress between the federal government and the vertical flavors of the Information Sharing and Analysis Centers (ISACs), threat information-sharing has still been put on the back burner by many organizations.

“What’s happened is that CISOs are so busy today that information sharing has become the kind of thing that they know will make them a better CISO, or at least a better person, but they put it off,” says Paul Kurtz, founder and CEO of TruStar Technology. “They don’t always recognize the benefits of information sharing.”

[See Paul Kurtz discuss threat intelligence-sharing best practices at Dark Reading’s INsecurity conference].

Kurtz says the key principles of threat information-sharing are:

1. Information sharing is not altruistic. The objective of data exchange is to identify problems more quickly and mitigate attacks faster. When an industry vertical shares common threat data and other companies in the field don’t have to reinvent the wheel, everyone benefits.

2. Information sharing is also not about breach notification. Organizations need to share event data early in the security cycle – before an event happens – such as information about suspicious activity.   

3.  Sharing data with other organizations about exploits and vulnerabilities is legal so long as you don’t share personally identifiable information. For example, a victim’s email address is usually not shared. Typical types of information that are fair game include suspicious URLs, file hashes, and IP addresses. The Cybersecurity Information Sharing Act of 2015 provides more detail here.

4.  The sharing system must be easy to use. Make sure the system is user-friendly and can easily integrate with your established workflow within a SOC, a hunting team, or a fraud investigation unit.    
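Point 3 above – share indicators, not PII – is often enforced mechanically before records leave the building. A rough sketch of a scrubbing pass (the regex, record shape, and field names are illustrative, not from any particular sharing platform):

```python
import re

# Matches common email address forms in free text. Real scrubbers use
# broader pattern sets covering names, phone numbers, and account IDs.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")

def scrub(record: dict) -> dict:
    """Redact email addresses from text fields; keep URLs, IPs, and hashes."""
    clean = {}
    for key, value in record.items():
        if isinstance(value, str):
            value = EMAIL_RE.sub("[REDACTED-EMAIL]", value)
        clean[key] = value
    return clean

event = {
    "indicator": "198.51.100.7",
    "note": "Phish sent to alice@example.com linking hxxp://bad.example/login",
}
print(scrub(event)["note"])
```

Running the sketch leaves the indicator and defanged URL intact while the victim’s address comes out as `[REDACTED-EMAIL]` – the shareable/non-shareable split the Act draws.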

Greg Temm, chief information risk officer at the Financial Services Information Sharing and Analysis Center (FS-ISAC), cautions that organizations need to have patience with threat intel-sharing.

“Threat intelligence takes time,” Temm says. “We might have lists of suspicious activity, but what we really want are the reasons why threat actors are making their attacks. What’s really significant is whether the bad threat actors are working for a nation state, are cybercriminals in it for the money, or possibly hacktivists looking to make a political point. Getting to the bottom of that takes a combination of the shared data, analytics, and the threat intelligence tradecraft.”

Neal Dennis, a senior ISAC analyst at the Retail Cyber Intelligence Sharing Center (R-CISC), says companies that don’t know where to start or don’t have deep pockets for security tools should contact their industry ISAC. “A lot of our members are smaller retail companies that don’t have the resources of a Target or Home Depot, so it makes sense for them to seek out the retail ISAC for threat information and guidance on potential tools to deploy,” Dennis says.

Here are some tips on how to get started with sharing threat intelligence.

 

Steve Zurier has more than 30 years of journalism and publishing experience, most of the last 24 of which were spent covering networking and security technology. Steve is based in Columbia, Md. View Full BioPreviousNext

Article source: https://www.darkreading.com/threat-intelligence/6-steps-for-sharing-threat-intelligence-------/d/d-id/1330386?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple