STE WILLIAMS

Why Cybersecurity Burnout Is Real (and What to Do About It)

The constant stresses, from advanced malware to zero-day vulnerabilities, can easily turn into employee overload with potentially dangerous consequences. Here's how to turn down the pressure.

Cybersecurity is one of the only IT roles where there are people actively trying to ruin your day, 24/7. The pressures are well documented. A 2018 global survey of 1,600 IT pros found that 26% of respondents cited advanced malware and zero-day vulnerabilities as the top cause of the operational pressure that security practitioners experience. Other top concerns include budget constraints (17%) and a lack of security skills (16%).

As a security practitioner, you face the constant possibility of a late-night phone call, any day of the week, alerting you that your environment has been breached and that customer data has been exposed across the web. Today, a data breach is no longer just a worst-case scenario; it's a matter of when, a reality that weighs heavily on everyone, from threat analyst to CISO.

Mental Health Effects
The constant stresses of cybersecurity can easily turn into employee overload with potentially dangerous consequences:

  • Sharpness will suffer. Security pros can become “asleep at the switch,” causing them to overlook important details that can greatly increase the chances of a serious security incident.
  • Burnout and employee turnover. The constant pressures will lead some employees to quit or take an extended hiatus, which isn’t necessarily a bad thing. I took a break from security for an opportunity to explore something new and exciting in cloud computing. I returned to security refreshed after a short break. Being able to recognize when you’re burnt out from the daily security grind is a key part of keeping your career longevity and mental health intact.  

How Organizations Can Help
There are some crucial precautionary steps organizations can take to help reduce cybersecurity practitioners’ stress and help keep mental health in good form.

  • Add more security leaders. Don’t just appoint one person. Being a sole security leader is an unmanageable stress burden in and of itself. Worse, if that individual leaves, the supporting team is left to try and pick up the pieces on their own. There should be multiple leaders to share the load and tackle specific problems.
  • Give your team a Zen space to take a break. It's common in our industry to provide employees with a place to socialize and play games. Ensuring your teams can take real breaks away from their screens every two hours can also help them stay calm, alert, and rejuvenated. Overworked practitioners are prone to mistakes, and they'll burn out much quicker.
  • Call in backup. In-house security teams can get overwhelmed quickly, and limited resources can sometimes force practitioners to wear too many hats. Testing, prevention, and even response activities can be alleviated by enlisting a managed security services provider. (Disclaimer: Trustwave is one of many companies that offer these services.) External resources can support your team by handling time-consuming menial tasks or by focusing on critical remediation steps during a security crisis. Having an extra set of hands can make a world of difference to your internal security team.
  • Training and preparation. Practitioners must stay calm, cool, and collected in the middle of a cyber-war scenario. Being prepared for the speed and the intensity of an incident will make it far easier to remain focused when the worst-case scenario happens. It’s paramount that your security team has up-to-date, on-hand playbooks and can reference recent training experiences during chaos. Providing executives with a playbook of their own and ensuring you are aligned with your PR and communications teams can also help make sure your organization is ready to respond to a security incident.

If you are a C-level executive or manage a group of security practitioners, you must think about the pressures on your team and how to help them manage those pressures. You should also consider your security maturity level, the tools and training your team has at its disposal, the partners you might have on board to help you, and how effective that help might be.

As a practitioner, be vocal about your mental well-being and stress level. Be self-aware, and don't be ashamed. Reach out to your organizational leaders to let them know what kind of resources, support, or training you need to do your job to the best of your ability. You will find that this transparency improves the quality of both your work and your personal life immensely.


Chris Schueler is senior vice president of managed security services at Trustwave where he is responsible for managed security services, the global network of Trustwave Advanced Security Operations Centers and Trustwave SpiderLabs Incident Response. Chris joined Trustwave … View Full Bio

Article source: https://www.darkreading.com/threat-intelligence/why-cybersecurity-burnout-is-real-(and-what-to-do-about-it)/a/d-id/1333906?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Human Negligence to Blame for the Majority of Insider Threats

In 98% of the assessments conducted for its research, Dtex found employees exposed proprietary company information on the Web – a 20% jump from 2018.

Nearly two-thirds (64%) of insider threats are caused by users who introduce risk through careless behavior or human error, according to new research from Dtex. This compares with 13% of threats caused by compromised credentials and 23% caused by users intent on harming the organization.

“That 64% number is huge and something we think companies should focus on,” says Rajan Koo, head of Dtex’s insider threat research team. “We find that by reducing the number of negligence incidents, companies can cut down on the potential of their employees being compromised.”

In related research released this week, Endera reported that companies suffer from at least three workforce-related incidents per week, adding up to 156 incidents per year. And, according to Egress Technologies, more than four out of five companies (83%) have had employees expose customer or business data.

Lock Down Those Links
In 98% of the assessments conducted for its research, Dtex found employees exposed proprietary company information on the Web – a 20% jump from 2018. Typically, an employee sends a document to a colleague or third-party company via an insecure link, using file-sharing tools that are unsanctioned by the company, Koo says.

“What happens is people will send a link from their personal Google Docs or Dropbox account, not realizing that the link is not secure,” he explains. “In our research, we’ve found that these documents get indexed on Google and other search engines so the bad guys can easily find them publicly on the Web. We recommend that people lock down any links they send with a user name and password.”
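
As a rough, assumption-laden illustration of that advice, the Python sketch below checks whether a shared link hands content back to a completely unauthenticated client. The URL is hypothetical, and a service that answers with a login page and an HTTP 200 would slip past this naive check, so treat it as a first pass only.

```python
# Minimal sketch: does this sharing link serve content without credentials?
# The URL is hypothetical; real services may redirect to a login page that
# still returns HTTP 200, which this naive check would miss.
import requests

def link_is_public(url: str) -> bool:
    """Return True if the link serves a body to an unauthenticated client."""
    resp = requests.get(url, allow_redirects=True, timeout=10)
    return resp.status_code == 200 and len(resp.content) > 0

if __name__ == "__main__":
    shared_url = "https://example.com/shared/quarterly-report"  # hypothetical link
    if link_is_public(shared_url):
        print("Readable without authentication - lock it down.")
    else:
        print("Link appears to require authentication.")
```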

The study also found that in 95% of the assessments, employees looked to circumvent company security policies – a notable jump from 60% last year. In many instances, people are using private VPNs and Tor browsers in the hope of shielding their activities, Koo says. While employees are often simply looking to bypass security so they can do their work more efficiently, Dtex has found the use of such tools is often motivated by malicious intent.

Dtex also runs assessments that track whether a person is a flight risk, which Koo defined as a person with a “propensity to leave.” The company found employees engaging in such behavior in 97% of its assessments. 

“What we’ll do is track people who have spent a lot of time updating their LinkedIn profile or posting their resume and then watch to see if they’ve made a data transfer to a USB,” Koo says. “In almost every organization, people tagged with a high propensity to leave typically take data with them. For each organization we’ve studied, we find at least one example of this a year.”
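
The correlation Koo describes can be pictured with a toy rule like the sketch below. The event format, thresholds, and two-week window are hypothetical illustrations, not Dtex's actual analytics.

```python
# Toy flight-risk rule: flag a bulk USB transfer that follows a recent
# resume/LinkedIn update by the same user. All fields and thresholds are
# hypothetical, for illustration only.
from datetime import datetime, timedelta

def flight_risk_alerts(events, window=timedelta(days=14), min_bytes=500_000_000):
    last_leave_signal = {}   # user -> time of most recent "propensity to leave" signal
    alerts = []
    for e in sorted(events, key=lambda ev: ev["time"]):
        if e["type"] in ("resume_update", "linkedin_update"):
            last_leave_signal[e["user"]] = e["time"]
        elif e["type"] == "usb_transfer" and e.get("bytes", 0) >= min_bytes:
            signal_time = last_leave_signal.get(e["user"])
            if signal_time is not None and e["time"] - signal_time <= window:
                alerts.append((e["user"], e["time"]))
    return alerts

events = [
    {"user": "jdoe", "type": "linkedin_update", "time": datetime(2019, 2, 18, 9, 0)},
    {"user": "jdoe", "type": "usb_transfer", "bytes": 2_000_000_000,
     "time": datetime(2019, 2, 20, 17, 30)},
]
print(flight_risk_alerts(events))   # [('jdoe', datetime.datetime(2019, 2, 20, 17, 30))]
```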

Koo says security pros have become really good at protecting the perimeter from malware attacks. But as the perimeter erodes with more people working from home, the introduction of cloud-based apps, and the entrance of a younger, digitally fearless workforce who may log onto a corporate network from an insecure outside network, a new crop of user behavior intelligence platforms has surfaced.

These platforms enable companies such as Dtex, Endera, and others to leverage user behavior analytics to more efficiently detect insider threats.

Avivah Litan, a vice president and distinguished analyst at Gartner, says this emerging field of user behavior analytics has been a missing piece in corporate security profiles – until now.

“Dtex and other companies, along with the traditional SIEM vendors, have solutions … that sit on the user’s device and can see things that you can’t see from the cloud,” Litan says. “Companies need to take a look at monitoring users, but do it in a way that respects privacy.”


Steve Zurier has more than 30 years of journalism and publishing experience, most of the last 24 of which were spent covering networking and security technology. Steve is based in Columbia, Md. View Full Bio

Article source: https://www.darkreading.com/threat-intelligence/human-negligence-to-blame-for-the-majority-of-insider-threats-/d/d-id/1333937?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Attack Campaign Experiments with Rapid Changes in Email Lure Content

It’s like polymorphic behavior – only the changes are in the email lures themselves, with randomized changes to headers, subject lines, and body content.

A new email Trojan campaign spotted by security researchers has added another twist in evasive attacker behavior: Researchers with GreatHorn report that the waves of attacks they’ve observed since yesterday afternoon are rapidly randomizing the email content characteristics of their lures.

“Masquerading as a confirmation on a paid invoice, the attack is sophisticated in that it lacks the consistency of a typical volumetric attack, making it more challenging for email security tools to identify and block,” says E.J. Whaley, solutions engineer at GreatHorn. 

Attackers have long leaned on metamorphic and polymorphic malware techniques to make swift changes to their code in order to evade detection. It was only a matter of time before they started applying that philosophy to the scam delivery vehicles themselves. This approach contrasts with what Whaley says is typical of most Trojan phishing email lures, where attackers stick to a single pattern with "slight customizations." Instead, these attacks switch up subject lines, email content, email addresses, display name spoofs, and destination URLs.

“Body content generally follows a pattern that confirms the receipt of a payment for an invoice but uses slightly different language to evade capture,” Whaley says.

While the subject lines vary, they all seem to cluster around references to receipts or invoices. Sometimes the attacks are very targeted — using spoofed names of a fellow employee of the target — and sometimes they use random names, sent from numerous compromised email accounts. The bulk of the compromised accounts belong to South American companies.
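
The defensive takeaway is that matching exact subject lines or templates is futile against this kind of randomization; detection has to key on the lure's theme and on mismatches between the display name and the sending address. The sketch below is a minimal, hypothetical illustration of that idea, not GreatHorn's method.

```python
# Hypothetical heuristic: flag invoice/receipt-themed mail whose display name
# does not match the sending address. Field names and rules are illustrative
# only; a real filter would combine many more signals.
import re

THEME = re.compile(r"\b(invoice|receipt|payment|paid)\b", re.IGNORECASE)

def looks_like_invoice_lure(subject: str, display_name: str, from_address: str) -> bool:
    themed = bool(THEME.search(subject))
    # A familiar display name on an unrelated sending address is a spoofing hint.
    name_token = display_name.split()[0].lower() if display_name else ""
    mismatched = bool(name_token) and name_token not in from_address.lower()
    return themed and mismatched

print(looks_like_invoice_lure(
    "Re: confirmation of paid invoice #83321",   # randomized wording, same theme
    "Alice Smith",                               # spoofed colleague name (hypothetical)
    "billing@randomcompany.example"))            # prints True
```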

The attackers are trying to trick users into downloading a Word template with a VBA downloader Trojan embedded within. GreatHorn says the attack was detected in about one in 10 accounts in its user base.

Paid invoice attacks like these are increasingly at the top of phishers' playbooks. Last year, six of 10 phishing messages used some variation of "invoice" as their subject.


Ericka Chickowski specializes in coverage of information technology and business innovation. She has focused on information security for the better part of a decade and regularly writes about the security industry as a contributor to Dark Reading.  View Full Bio

Article source: https://www.darkreading.com/attacks-breaches/attack-campaign-experiments-with-rapid-changes-in-email-lure-content/d/d-id/1333938?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Facebook hoax? Can you sniff out gas station card skimmers using Bluetooth?

There’s a “helpful tip” making the Facebook rounds, and it’s a little bit helpful but a lot not so much.

It’s about using Bluetooth to detect credit card skimmers at gas stations:

Here is a helpful tip:
When you pull up to a gas station to fill your car. Search your phone for Bluetooth devices. If a sequence of letters and a sequence of numbers shows up in your device list do not pay at the pump. One of the pumps have a card reader installed. All card readers are bluetooth.

The post refers to a card “reader,” but what it means is card “skimmer.”

The former is a legal way for you to pay, while the latter is a piece of thief-ware, be it a plastic gadget clumsily glued on to the face of an ATM or gas pump, or technology that's installed internally.

Credit card skimmers are devices that capture details from a payment card’s magnetic stripe, then (sometimes) beam them out via Bluetooth to nearby crooks.

The “sometimes” is just one thing that makes this viral post less than helpful.

Security journalist Brian Krebs has cataloged all sorts of skimmers, including some that send information to fraudsters’ phones via text message.

So convenient! …and so not Bluetooth.

From a thief's point of view, Bluetooth has limitations, notably its short range, which means any thief using a Bluetooth-enabled skimmer needs to hang around nearby.

It also means that anybody else using Bluetooth in the vicinity could get an eyeful of “Oooo, payment card details up for grabs!”

That includes, of course, all of us law-abiding, viral-post-reading phone users.

So yes, the post is correct in saying that the Bluetooth sensor on a mobile phone can indeed be used to detect some card skimmers, but it’s incorrect because these sensors can’t detect them all.

As Naked Security’s Paul Ducklin points out, some skimmers use Wi-Fi, some use the mobile phone network, and others just store their data quietly on an SD card that the crooks come back for later on.

But that’s only one thing that makes this viral post less than helpful.

Bluetooth names tell you “everything and nothing”

Your phone may well pick up on nearby Bluetooth devices, but the names alone don’t really help, Paul says:

Just doing a scan for nearby Bluetooth device names tells you everything and nothing. You might as well decide if a gas station is crooked based on whether the fuel price ends in an odd or an even number of cents per gallon, and here’s why: sniffing or skimming devices might not show up at all, or they could have innocent-sounding names like “Car radio” or “My iPhone”.

On the other hand, the perfectly harmless video game that the kid in the next car is playing might be announcing itself with some sort of scary-looking autogenerated name like “AF09E856”.
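
For the curious, here is roughly what the scan the viral post suggests looks like in code, using the bleak library as an assumed tooling choice. It underlines Paul's point: the scan only sees Bluetooth Low Energy advertisers (skimmers built on classic Bluetooth serial modules will not appear at all), and the names it does return prove nothing either way.

```python
# What "search your phone for Bluetooth devices" amounts to in code, using the
# bleak library (pip install bleak). This only discovers Bluetooth Low Energy
# advertisers, and the names are whatever each device chooses to broadcast -
# "Car radio", "My iPhone", or an autogenerated string like "AF09E856".
import asyncio
from bleak import BleakScanner

async def main():
    devices = await BleakScanner.discover(timeout=5.0)
    for d in devices:
        print(d.address, d.name or "<no name>")

asyncio.run(main())
```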

Two green tips that really do flummox skimmers

If you want to stop skimmers dead in their tracks – whether they beam data wirelessly, send it by text message, or hoard it on an SD card – there's an age-old technology that the thieves haven't yet managed to crack remotely – it's called cash:

If you think that the chance of being skimmed is lower if you go to the cashier and pay, then simply do that every time. If you’re worried about gas station skimming in general, you can always use cash — as it says on the bill, ‘This note is legal tender for all debts, public and private.’

Using sweet green cash (that’s the color in the US, at any rate!) is one way to avoid getting your payment card skimmed at the gas pump.

Here’s another green technology that blocks gas-stop skimmers: a bike!

That’s Paul’s solution:

Switch to a bicycle, like I did, and laugh in the face of gas stations for ever.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/W1QD6frjlgA/

Sorry, we didn’t mean to keep that secret microphone a secret, says Google

Earlier this month, Google attempted to cozy up to harried commuters with the news that they could thenceforth ask their Nest home security and alarm system if, say, they needed an umbrella, or how gnarly their commute would be.

All that’s being brought to you courtesy of Google Assistant being enabled on Nest Guard, which is the alarm, keypad, and motion-sensor component in Google’s Nest Secure system.

Of course, for those smart devices to respond to their owners, they need to hear their voices… and to hear their voices, the gadgets, obviously, need a microphone.

A microphone that’s been there all along, but which Google completely left out of product documentation, as you can see on this archived page that predated Google’s 4 February 2019 announcement.

Well, that's just dandy, some Nest owners said. Google's had a secret microphone planted in our houses all this time.

In its announcement, Google added that Nest Guard does have one built-in microphone, but that it’s not enabled by default.

Google, of course, focused on the benefits to consumers, not the “surprise! We planted a secret surveillance device in your home!” takeaway that some of its users are now experiencing.

On Tuesday, the company told Business Insider that the omission of the microphone from product documentation was made in “error.”

A Google spokesperson:

The on-device microphone was never intended to be a secret and should have been listed in the tech specs. That was an error on our part.

An honest mistake? Possibly. But when you’re a company that does things like “accidentally” scoop up data transmitted over consumers’ unsecured Wi-Fi networks, including emails, as Google did in the Wi-Spy Street View data breach… and then it turns out that the breach was far from a surprise, given that Google staff knew about it for years…

…well, it’s understandable how a Google “woops!” might not sound particularly credible to some consumers.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/W3pe999DOCM/

Hacker Lauri Love denied bid to get computers back

Hacker Lauri Love has failed to get his computers back six years after the UK’s National Crime Agency took them as part of a criminal investigation.

In 2013, British authorities arrested Love for allegedly hacking into US institutions and seized his computers. However, he wasn't charged because the information on his computers was encrypted.

Love has been trying to get his equipment back, including two laptops, several storage devices and a tower PC. He sued the NCA in February under the Police (Property) Act 1897, which allows people to retrieve property seized during criminal investigations.

Unfortunately for Love, District Judge Margot Coleman didn't deem the application valid. According to a report in the Register, Coleman drew a distinction between the equipment and the encrypted data held on it. She said:

I think you conceded – certainly I’ve made the finding – the information contained on that hardware is not yours. And you’re therefore not entitled to have it returned to you; it doesn’t belong to you.

Coleman criticised Love – who represented himself against NCA barrister Andrew Bird – for being evasive and refusing to answer questions, instead countering with other questions. She refused to accept his commitment not to decrypt the information on the computers if they were returned. She said:

His refusal to answer questions about the content of the computers has made it impossible for him to discharge the burden of establishing that the data on his computers belongs to him and ought to be returned to him.

The NCA had harvested 124GB of data from Love's computers onto a separate drive but has been unable to decrypt the data because he won't provide the decryption keys. The UK courts had ordered him to hand them over in 2016, but then threw out the ruling on appeal because the NCA used a civil action rather than normal police powers in its case.

The hacker had already launched an earlier legal attempt to retrieve his equipment, which he abandoned after being arrested again in 2015 as part of an extradition claim by the US.

The US government wants to prosecute him on US soil. Separate charges filed by the State of Virginia, the State of New York, and the State of New Jersey accuse him of hacking the Department of Energy, the Federal Reserve, NASA, the Environmental Protection Agency, the US Army and the US Missile Defense Agency. He also hacked HHS, the U.S. Sentencing Commission, FBI’s Regional Computer Forensics Laboratory, Deltek, Inc. and Forte Interactive, Inc, according to the charges.

After a UK court initially granted the extradition order, Love explained that he would commit suicide if he was sent to the US. He was diagnosed with several conditions including Asperger’s Syndrome, and he finally won his request not to be extradited on appeal.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/7XnB3NkE6mc/

Password managers leaking data in memory, but you should still use one

Researchers have uncovered a surprising security weakness in password managers – several popular products appear to do a weak job at scrubbing passwords from memory once they are no longer being used.

An analysis by Independent Security Evaluators (ISE) uncovered the problem to different degrees in versions of 1Password, Dashlane, LastPass and KeePass.

The good news is that all managers successfully secured passwords when the software wasn’t running – when passwords, including the master password, were sitting in the database in an encrypted state.

However, things went downhill a bit when ISE looked at how these products secure passwords in both the locked state (running prior to entering the master password or running after logging out), and the fully unlocked state (after entering the master password).

Rather than generalise, it’s best to describe the issues for each product.

1Password4 for Windows (v4.6.2.626)

This legacy version keeps an obfuscated version of the master password in memory which isn’t scrubbed when returning to a locked state. Under certain conditions, a vulnerable cleartext version is left in memory.

1Password7 for Windows (v7.2.576)

Despite being the current version, it was rated by the researchers as less secure than 1Password4 because it decrypts and caches all database passwords rather than one at a time. 1Password7 also fails to scrub passwords from memory, including the master password, when moving to a locked state. This compromises the effectiveness of the lock button and requires the user to exit the program completely.

Dashlane for Windows (v6.1843.0)

Dashlane exposes only one password at a time in memory, until a user updates an entry, at which point the entire database is exposed in plaintext. This remains true even when the user locks the database.

KeePass Password Safe (v2.40)

Database entries are not scrubbed from memory after each is used, although the master password was, thankfully, not recoverable.

LastPass for Applications (v4.1.59)

Database entries remain in memory even when the application is locked. Furthermore, when deriving the decryption key, the master password is “leaked into a string buffer” where it is not wiped, even when the application is locked (note: this version is used to manage application passwords and is distinct from the web plugin).

Clearly, if passwords – especially master passwords – are hanging around in memory when the application is locked, this raises the possibility that malware could steal this data after infecting a computer.

The counter-argument is that if malware infects your computer, pretty much everything on that system is at risk whether it’s obfuscated in memory or not. No security application can possibly guarantee to defend against this sort of threat.
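
For readers wondering what "scrubbing" actually involves, the sketch below shows the general principle in Python: keep the secret in a mutable buffer and overwrite it the moment it is no longer needed. This is an illustration of the idea only, not how any of the products above are built, and it ignores copies that higher layers may make.

```python
# Illustration of memory scrubbing: the secret lives in a mutable bytearray
# and is overwritten with zeros as soon as it has been used. Immutable str or
# bytes objects cannot be reliably wiped, which is why no str copy is made here.
def with_secret(get_secret, use_secret):
    buf = bytearray(get_secret())          # secret exists only in this buffer
    try:
        return use_secret(buf)             # caller works directly on the buffer
    finally:
        for i in range(len(buf)):          # zero the buffer in place
            buf[i] = 0

# Toy usage: measure a "master password" without leaving a plaintext copy behind.
length = with_secret(lambda: b"correct horse battery staple", len)
print(length)   # 28
```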

The response?

Some of the affected vendors have publicly defended their products, claiming that the issues discovered by the researchers are part of complex design trade-offs.

LastPass also claimed it had cured the problems found in its product and pointed out that an attacker would still require privileged access to a user’s PC.

Is this the end for password managers?

In short, no. Our advice is to continue using password managers because the issues found are still heavily outweighed by the known advantages of using one and will probably be tidied up through updates anyway.

What matters is that researchers prod these products for weaknesses and that the vendors do everything they can to fix them as quickly as possible.

If in doubt, one idea is to shut down (i.e. close) a password manager when it’s not being used.

And, of course, don’t forget to use two-factor authentication whenever you can. That way, even if someone has your password, they still can’t log in as you.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/Ayoo31z3lLM/

Welcome to the sunlit uplands of HTTP/2, where a naughty request can send Microsoft’s IIS into a spin

Updated Oops! Microsoft has published an advisory on a bug in its Internet Information Services (IIS) product that allows a malicious HTTP/2 request to send CPU usage to 100 per cent.

An anonymous Reg reader tipped us off to the advisory, ADV190005, which warns that the condition can leave the system CPU usage pinned to the ceiling until IIS kills the connection.

In other words, a denial of service (DoS).

HTTP/2 is a major update to the venerable HTTP protocol used by the World Wide Web and is geared toward improving performance, among other changes. Windows Server 2016 was the first Microsoft server product to support it, and Windows 10 (versions 1607 – 1803) is affected by the issue.

The problem, according to Microsoft, is that the HTTP/2 spec allows a client to specify any number of SETTINGS frames with any number of SETTINGS parameters. Those parameters usually include helpful stuff like the characteristics of the sending peer, and different values for the same parameter can be advertised by each peer.

Excessive settings can make things go a bit wobbly as IIS works on the request and sends the CPU usage sky high until a connection timeout is reached and the connection closed.
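
To see why a server-side cap matters, here is a small sketch using the pure-Python h2 library (an assumed tool choice, unrelated to Microsoft's stack). Nothing in the protocol stops a client from queueing SETTINGS frame after SETTINGS frame, which is exactly the behaviour the new thresholds are meant to bound. The code only serialises frames locally and never talks to a server.

```python
# Protocol mechanics only: HTTP/2 lets a client re-advertise its settings as
# often as it likes, and each update below queues another SETTINGS frame that
# a peer would have to process. This builds bytes in memory; it is not an
# exploit and opens no connections.
import h2.connection
import h2.settings

conn = h2.connection.H2Connection()
conn.initiate_connection()

for streams in range(1, 6):
    conn.update_settings(
        {h2.settings.SettingCodes.MAX_CONCURRENT_STREAMS: streams}
    )

preface_and_settings = conn.data_to_send()
print(f"{len(preface_and_settings)} bytes of connection preface and SETTINGS frames queued")
```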

The good news is that this week’s “non-security update” deals with the problem. Microsoft flung out patches on 19 February in the form of KB4487006, KB4487011, KB4487021 and KB4487029 to deal with it.

The company has added the ability to set thresholds on the number of HTTP/2 SETTINGS in a request but has declined to set any defaults, leaving it to the IIS Admin to configure.

This is assuming that administrators can actually find the setting. The link for the Knowledge Base article (KB4491420) that Microsoft suggested users review went nowhere at the time of writing, and the current documentation for IIS cheerfully tells admins that there are no new configuration settings specific to HTTP/2.

We’ve contacted Microsoft to learn more and will update with any response.

The issue itself was discovered by Gal Goldshtein of F5 Networks. ®

Updated to add at 15:13 UTC

After we brought the broken link to its attention, Microsoft posted the support article detailing how to define those pesky thresholds.

Alas, there is no cosy GUI for admins. You’ll need to edit a couple of registry entries and reboot to see the thresholds applied. As promised, Microsoft is not about to define any presets for the values. It’s up to the admin to decide.

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/02/21/http2_iis_microsoft/

Data breach rumours abound as UK Labour Party locks down access to member databases

The UK’s Labour Party has been forced to lock down access to membership databases and campaign tools over concerns the info was being sucked up by breakaway MPs, in a possible breach of data protection laws.

The party’s general secretary, Jennie Formby, yesterday said Labour had “become aware of a number of attempts to access personal data” on its systems by “individuals who are not, or are no longer, authorised to do so”.

The inference was that one or more of the Labour MPs that have this week left the party to form The Independent Group had slurped members’ details to take with them for use in future campaigns.

Under the UK’s Data Protection Act 2018 (s170), it is an offence to obtain or retain personal data without the consent of the controller – which means someone downloading a database of members’ deets is likely to find themselves in hot water.

Formby noted this in her email – which was shared on Twitter by political journo Robert Peston.

“Anyone accessing, using or otherwise processing data without authority or for an unauthorised purpose is at risk of action by the [Information] Commissioner’s Office,” the message read.

Formby also pointed out that the info will likely reveal a person’s political opinions, which makes it “special category” data that is entitled to increased protections under the law.

However, a data controller also has responsibilities to make sure data is properly protected, which includes ensuring that people who aren’t entitled to access data are unable to do so. The General Data Protection Regulation (PDF) states the controller is responsible for ensuring personal data is:

“Processed in a manner that ensures appropriate security of the personal data, including protection against unauthorised or unlawful processing and against accidental loss, destruction or damage, using appropriate technical or organisational measures (‘integrity and confidentiality’).”

Last year, Bupa was fined £175,000 after one of its staffers made off with more than half a million customers’ personal information and tried to sell it on the dark web. The ICO said the firm should have had measures in place to stop the bulk download.

No doubt in light of these responsibilities, Labour shut off access to Organise, the party’s volunteer management and comms tool, and Contact Creator, the tool used to produce materials and monitor campaigns – irking volunteers and other Labour MPs, like Walthamstow’s Stella Creasy, in the process.

But Formby’s email suggested that the info had already been accessed – and it isn’t clear whether this occurred before the person, or people, had left the Labour Party. If it happened after, the party could be open to criticism for having failed to revoke access to the databases.

It’s also possible – depending on how the data was obtained – that charges of a breach of the Computer Misuse Act could be levelled at the miscreant(s).

In November, Mustafa Ahmet Kasim – a car industry worker who used a colleague’s login details to snag customer data and pass it to phone scammers – was sentenced to six months in prison after pleading guilty to the charge of causing a computer to perform a function with intent to secure or enable unauthorised access to personal data.

More broadly, the party could also face questions over the number of people with access to its databases, which appears to include MPs and both paid and voluntary campaigners.

Despite Formby’s strong words in the email to MPs, it isn’t clear whether the party has reported the incident to the ICO. When asked, the ICO didn’t confirm either way, but did offer this statement:

Organisations have a legal duty to ensure the security of personal data they hold. Any organisation which believes personal data it holds has been accessed illegally should report the matter to the ICO.

The party’s sudden interest in data protection follows years of sailing close to the wind when it comes to laws on direct marketing. David Lammy MP was fined £5,000 after 35,000 automated calls were made during his campaign to be named mayoral candidate – and Labour is not alone in this.

The Labour Party did not respond to a request for comment. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/02/21/data_breach_labour_locks_down_member_databases/

Black-hat sextortionists required: Competitive salary and dental plan

Extortionists are promising salaries of more than a quarter of a million pounds to skilled infosec folk willing to put on a black hat, according to research outfit Digital Shadows.

Those salaries are on offer to people willing to blackmail and extort money out of “high net worth individuals” – and at the upper end of the scale have even reportedly topped £840,000.

A group of mischief-makers calling themselves “thedarkoverlord” would post job advertisements “with specifications and salaries that would rival those offered by most corporate businesses. Recruits were tempted with £50,000 ($64,000) per month, with add-ons and a final salary after the second year of £70,000 ($90,000) per month,” Digital Shadow said.

“Those with Chinese, Arabic or German skills could earn an added 5 per cent on their salary or commission,” the firm added.

The report, titled A Tale of Epic Extortions, describes how these particular criminals target rich folk through the usual vectors of compromised credentials and scanning their known public presences for vulnerabilities, ready to deploy ransomware.

On top of that, the crims studied by the firm aren’t above sextortion, the dark art of monetising sexually explicit photos and videos of a victim by threatening to reveal them publicly unless large sums of money are handed over. Digital Shadows reckoned “the scale and persistence of the campaigns rocketed over 2018”, claiming to have “collected and analyzed a sample of sextortion emails in which 89,000 addresses received over 790,000 sextortion attempts”.

“The extortionist provides the user with a known password as ‘proof’ of compromise, then claims to have video footage of the victim watching adult content online, and finally urges them to pay a ransom to a specified Bitcoin (BTC) address,” said the report. “A later iteration of the campaign involves the extortionist trying to support their credibility by sending another email that refers to a Cisco ASA router vulnerability (CVE-2018-0296). The extortionist suggests that the vulnerability allowed them to access the victim’s machine.”

Compromised creds were found being traded on a forum called TheRealDeal; one particular group of miscreants using the handle “thedarkoverlord” was the focus of Digital Shadows’ research. Once TheRealDeal folded, thedarkoverlord reappeared on the KickAss black hat forum, allegedly selling “stolen data” to “other extortionists and fraudsters”.

The infosec company also pointed to thedarkoverlord’s use of crowdfunding techniques, in particular from the Hiscox insurance company hack of April 2018 where it threatened to leak information about claims brought over the 11 September 2001 terrorist attacks on the US. Digital Shadows said thedarkoverlord “began crowdfunding the publication” of decryption keys via Bitcoin, pressuring the insurance company to pay up before the great unwashed met the payment target for publication.

“In true TDO style, the use of a crowdfunding platform has allowed them to increase their publicity in the online community, while providing an additional revenue stream for their extortion antics,” said the researchers.

To mitigate against such attacks and techniques, Digital Shadows recommended that folks use a password manager (going with the flow of current advice to use them despite mostly theoretical weaknesses in some implementations), enable multi-factor authentication for web-facing accounts, make regular backups to mitigate against ransomware, and properly configure security options on NASes. As for sextortion emails, the researchers simply say “users should ignore these emails” while keeping a close eye out for signs that such emails are part of a mass campaign rather than specific, targeted blackmail attempts. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/02/21/black_hats_sextortion_275k_salaries_helpers/