
Russian state-sponsored hackers have been sniffing Middle East defence firms, warns Trend Micro

The Russian hacking crew known variously as APT28, Fancy Bear and Pawn Storm has been targeting defence companies with Middle Eastern outposts, according to Trend Micro.

A new report from the threat intel firm says that the Russian state-backed hacking outfit went on a spree of targeting defence firms in the Middle East back in May last year. Using credential-phishing tactics, APT28 used the email accounts of targets it had already hacked to fire phishing emails at further targets, exploiting known contacts for a higher strike rate.

According to Trend, around 38 per cent of the attacks fired off by the Russians were targeted at defence companies, with banking, construction and government targets making up the main portion of the others.

“Surprisingly, the list also included a couple of private schools in France and the United Kingdom, and even a kindergarten in Germany,” commented the threat intel firm.

Further, Trend said APT28 were port-scanning mail servers, including Microsoft Exchange Autodiscover boxen, on TCP ports 443 and 1433 in the hope of finding vulnerable machines to exploit and use as staging posts in their ongoing campaign.


Close examination of APT28’s spam-sending tactics revealed that they like using VPNs to try and hide their traces, with Trend stating: “Pawn Storm regularly uses the OpenVPN option of commercial VPN service providers to connect to a dedicated host that sends out spam. The dedicated spam-sending servers used particular domain names in the EHLO command of the SMTP sessions with the targets’ mail servers.”
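Trend's report doesn't publish detection code, but the EHLO detail gives defenders something concrete to hunt for in their own mail logs. The following is a minimal, purely illustrative Python sketch that tallies EHLO/HELO hostnames seen in SMTP session logs against an allow-list; the log format and hostnames are assumptions made up for the example.

```python
import re
from collections import Counter

# Hypothetical allow-list of EHLO hostnames your own senders are known to use.
EXPECTED_EHLO = {"mail.example.com", "smtp.example.com"}

def suspicious_ehlo_counts(log_lines):
    """Tally EHLO/HELO hostnames in SMTP logs that aren't on the allow-list.

    Assumes each log line contains something like 'EHLO some-host.example'.
    """
    pattern = re.compile(r"\b(?:EHLO|HELO)\s+(\S+)", re.IGNORECASE)
    counts = Counter()
    for line in log_lines:
        match = pattern.search(line)
        if match:
            host = match.group(1).strip().lower()
            if host not in EXPECTED_EHLO:
                counts[host] += 1
    return counts

if __name__ == "__main__":
    sample = [
        "2020-03-19T10:01:02 connect from 203.0.113.5 EHLO bulk-sender.example",
        "2020-03-19T10:01:09 connect from 198.51.100.7 EHLO mail.example.com",
    ]
    for host, n in suspicious_ehlo_counts(sample).most_common():
        print(f"{host}: {n} session(s)")
```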

What should you do if you’re targeted by APT28? Trend’s advice was straightforward: keep an eye on your infrastructure for any unusual access patterns, patch your systems as and when updates become available from vendors, and educate your employees not to click on links in unexpected emails.

APT28 was recently and publicly called out by Western governments for its hacking campaigns against Georgia, a former Soviet republic that has been leaning away from Vladimir Putin’s neo-Soviet Russia in recent years. ®


Article source: https://go.theregister.co.uk/feed/www.theregister.co.uk/2020/03/19/apt28_middle_east/

Facebook Got Tagged, but Not Hard Enough

Ensuring that our valuable biometric information is protected is worth more than a $550 million settlement.

On January 29, Facebook agreed to a $550 million settlement of a class-action suit based on violations of Illinois’ Biometric Information Privacy Act (BIPA). The settlement will compensate Facebook users in Illinois for Facebook’s use of facial recognition technology, known as “tagging,” without the user’s consent and in violation of BIPA. While many people were surprised by the amount of the settlement, more were shocked that Facebook agreed to pay it.

The technology at issue was the nearly automatic tagging of friends and acquaintances in photos that users uploaded to Facebook. During the uploading process, Facebook’s systems scanned the pictures, found matches using facial recognition technology, and suggested that users “tag” their Facebook friends who resembled those in the photographs. Given the number of photos that have been uploaded to Facebook, many speculate Facebook could have faced about $35 billion in fines under BIPA. Rather than balking at the $550 million settlement, perhaps we should ask why the amount wasn’t larger.

Over the past few years, there has been a substantial increase in the number of laws that protect personal information, including biometrics, throughout the world. However, there are relatively few specific biometric privacy laws in the United States. Biometrics is the measurement and analysis of unique physical or behavioral characteristics such as fingerprints, DNA, or voice patterns, particularly as a means of validating an individual’s identity. Accordingly, biometric privacy is the right of an individual to keep their biometric information private and to control how that information may be collected and used by third parties. This freedom arises out of a person’s general right of privacy.

The right of privacy is one of the most hotly debated topics in the Bill of Rights. Often, the debates over the right of privacy involve people’s religious beliefs, social mores, and opinions about what people can do in their own homes. But, in this instance, the right of privacy confronts something even more powerful and more difficult to overcome — the desire of businesses to make more money by using the resources available to them.

In this case, the resource is information: data about individuals and what makes each of them unique, including their DNA, facial features, fingerprints, and voices. Consequently, this right-to-privacy debate is over whether people get to control how businesses collect and use their personal information.

Facebook was using facial recognition to add a component to its product to keep people interested, stay on its site longer, and give its advertisers more opportunities to market products. And it worked. For instance, my friends and I troll Facebook the day after an event to see what pictures of ourselves have been posted. In doing so, we also view advertisements on our feeds, and many of us have purchased some items we’ve seen.

So, what’s so wrong with that? In reality, Facebook’s practice probably isn’t that offensive to many people. We expect our pictures to be posted and for other people to recognize us. We also accept that most companies are constantly trying to entice us to buy their products.

But what if you had to give your fingerprints to enter a building you were visiting, and the building manager sold those fingerprints to a third party on the Dark Web? Our fingerprints and other biometric information are specific to us; therefore, their unauthorized use can have disastrous effects. You don’t have to watch crime shows to imagine how these fingerprints could be used by nefarious actors.

It’s fair to say most people would not be happy about the sale of their fingerprints, but would that sale be illegal? It depends. Biometric privacy laws are meant to protect individuals from having their fingerprints and other biometric information stolen or used in an unauthorized manner; where such a law applies, it provides a definitive answer about the legality of the sale.

I believe I should be able to control all uses of my personal information. I don’t want people or businesses using my name, telephone number, or email address without my consent, but I’m even more protective of my biometric information. It is unacceptable to think that the DNA I provide to a genetic testing agency to learn about my ancestors could be used for other purposes. I just want to know if my family truly came from Ireland. I don’t want a pharmaceutical company reaching out because it got my results and wants to sell me a drug for a disease that runs in my family.

To avoid these types of liabilities, businesses that wish to utilize biometrics should first determine whether BIPA or another biometric privacy law applies to their situation. Compliance under each of these laws is slightly different. If BIPA applies, then the business is required to obtain the kind of informed consent referenced above. To that end, businesses must:

  • Provide written notice to affected individuals of the collection and use of the biometrics, including the specific reason for collection and use of the information and how long it will use and retain the biometric information (before collecting the biometrics).

  • Obtain each individual’s written consent to such collection and use of the biometrics (again, before collecting the biometrics).

  • Keep the biometric information confidential and only disclose the information if the individual consents, it is required for the completion of a financial transaction requested by the individual, or disclosure is required by law, warrant, or subpoena.

  • Institute appropriate administrative, technical, and physical safeguards for the protection of biometric information in its care.

  • Implement retention and destruction policies documenting that the biometrics will only be retained for so long as they are needed or within three years of the individual’s last interaction with the business, whichever occurs first, and ensuring that the information is appropriately disposed of at the end of such period.

Businesses should be guided by the basic principle of “only collect that which you need and only keep it for so long as it is needed,” and they cannot sell, lease, or otherwise profit from another person’s biometric information.
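In engineering terms, the retention rule above reduces to a simple check. The following Python sketch is purely illustrative (and certainly not legal advice); the record fields and the flat three-year window are assumptions made for the example.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Illustrative only: the three-year window mirrors the BIPA retention rule
# described above, but this is not a compliance implementation.
RETENTION_LIMIT = timedelta(days=3 * 365)

@dataclass
class BiometricRecord:
    subject_id: str
    purpose: str                  # specific reason disclosed before collection
    consent_given_at: datetime    # written consent obtained before collection
    last_interaction_at: datetime
    still_needed: bool            # business still needs the data for its purpose

def must_destroy(record: BiometricRecord, now: datetime) -> bool:
    """Destroy when no longer needed, or three years after the individual's
    last interaction with the business, whichever occurs first."""
    past_retention = now - record.last_interaction_at > RETENTION_LIMIT
    return (not record.still_needed) or past_retention

if __name__ == "__main__":
    rec = BiometricRecord(
        subject_id="user-42",
        purpose="building access",
        consent_given_at=datetime(2016, 5, 1),
        last_interaction_at=datetime(2016, 12, 1),
        still_needed=True,
    )
    print(must_destroy(rec, datetime(2020, 3, 19)))  # True: past the 3-year window
```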

I hold that more states should follow Illinois’ example and enact biometric privacy laws so individuals have control over the use of their biometrics and companies that use biometric information without consent can be held accountable. Furthermore, states that have enacted these laws should be more proactive in enforcement. A $35 billion fine will have a far greater deterrent effect than a $550 million settlement. I say, tag a few companies hard. The others will fall in line, and our information will be protected.


Billee Elliott McAuliffe is a member of Lewis Rice practicing in the firm’s corporate department. Although she focuses on information technology and privacy, Billee also has extensive experience in corporate law, including technology licensing, cybersecurity and data privacy, … View Full Bio

Article source: https://www.darkreading.com/risk/facebook-got-tagged-but-not-hard-enough/a/d-id/1337285?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

DDoS Attack Targets German Food Delivery Service

Lieferando delivers food from more than 15,000 restaurants in Germany, where people under COVID-19 restrictions depend on the service.

Cybercriminals have launched a distributed denial-of-service (DDoS) attack against German food delivery service Takeaway.com (Lieferando.de), demanding two bitcoins (about $11,000) to stop the flood of traffic. The attack has now stopped, according to a report from BleepingComputer.

The COVID-19 pandemic has caused Germany to implement severe restrictions on the restaurant industry. As a result, Germans have grown more reliant on delivery services, which are still operating. One of these is Lieferando, which delivers food from more than 15,000 restaurants.

Founder and CEO Jitse Groen shared an update on the incident via Twitter, along with a note from the attackers indicating they planned to target other websites. The company’s German division then announced its systems had entered maintenance mode to ensure data security during the attack. Food orders were accepted but couldn’t be processed; Lieferando had to issue customer refunds.

Security experts anticipate these types of acts, intended to exploit essential services in times of crisis, will continue as restrictions due to COVID-19 remain in place. “Deplorably, we will likely see a further avalanche of cyberattacks targeting most susceptible online businesses,” says ImmuniWeb founder and CEO Ilia Kolochenko. As a result, many organizations may be forced to pay cybercriminals or invest in DDoS protection services to defend against advanced attacks.

Read more details here.


Dark Reading’s Quick Hits delivers a brief synopsis and summary of the significance of breaking news events. For more information from the original source of the news item, please follow the link provided in this article. View Full Bio

Article source: https://www.darkreading.com/attacks-breaches/ddos-attack-targets-german-food-delivery-service/d/d-id/1337359?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Facebook accidentally blocks genuine COVID-19 news

Fake news, bogus miracle cures: Facebook has been dealing with a lot, and COVID-19 isn’t making it any easier.

Like many other companies, Facebook is trying to keep its employees safe by allowing them to opt for working remotely, so as to avoid infection.

But when humans are taken out of the content moderation loop, automated systems are left running the show. Facebook denies that a recent content moderation glitch had anything to do with workforce issues, but it does say that overzealous automated systems were to blame.

On Tuesday, Guy Rosen, Facebook’s VP of Integrity, confirmed user complaints that valid posts about the pandemic (among other things) had been blocked by mistake by automated systems.

On Wednesday, a Facebook spokesperson confirmed that all affected posts have now been restored. While users may still see notifications about content having been removed when they log in, they should also see that posts that adhere to community standards are back on the platform, the spokesperson said.

Facebook says it routinely uses automated systems to help enforce its policies against spam. The spokesperson didn’t say what, exactly, caused the automated systems to go haywire, nor how Facebook fixed the problem.

They did deny that the issue was related to any changes in Facebook’s content moderator workforce, however.

Regardless of whether the blame should lie with humans or scripts, The Register reports that it took just one day for COVID-19 content moderation to flub it. On Monday, Facebook had put out an industry statement saying that it was joining Google, LinkedIn, Microsoft, Reddit, Twitter, and YouTube to scrub misinformation contained in posts about COVID-19. (Speaking of which, just for the record, health authorities say that neither drinking bleach nor gargling with salt water will cure COVID-19).

We are working closely together on COVID-19 response efforts. We’re helping millions of people stay connected while also jointly combating fraud and misinformation about the virus, elevating authoritative content on our platforms, and sharing critical updates in coordination with government healthcare agencies around the world. We invite other companies to join us as we work to keep our communities healthy and safe.

Within one day, its automated systems were, in fact, squashing authoritative updates. From what The Register can discern, the systems-run-amok situation was first spotted by Mike Godwin, a US-based lawyer and activist who coined Godwin’s Law: “As an online discussion grows longer, the probability of a comparison involving Nazis or Hitler approaches one.”

On Tuesday, Godwin said that he’d tried to post a non-junky, highly cited story about a Seattle whiz-kid who had built a site that tracks the pandemic as it spreads, updated minute by minute.

When Godwin tried to share the story on Facebook, he got face-palmed.

Other users reported similar problems. One of them carries quite a bit of Facebook cred: Alex Stamos, formerly Facebook’s chief security officer and now an infowar researcher at Stanford University, weighed in.

A Facebook post about keeping its workers and its platform safe said that the company had asked any workers who can work from home to do so. However, that’s not an option for all of the company’s tasks, it specified:

For both our full-time employees and contract workforce there is some work that cannot be done from home due to safety, privacy and legal reasons.

According to Stamos, content moderation is one of the tasks that can’t be done at home due to Facebook’s privacy commitments. So which is it: were content moderators sent home as Stamos suggested, leaving the machines in charge? How does that jibe with Facebook’s statement that staffing had nothing to do with the glitch?

Either way, this crisis is exposing some kinks in the human/script content moderation process that need to be worked out. Facebook workers have a lot on their plate when it comes to keeping users connected with family, friends and colleagues they can no longer see face to face, and when it comes to keeping us all properly informed, rather than drinking bleach or wasting our time on other snake-oil posts.

The last thing we need is to be kept from reading about things that whiz-kids are cooking up. Let’s hope that Facebook gets this figured out.



Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/yXPQA1Px0uY/

Delayed Adobe patches fix long list of critical flaws

Notice anything missing from last week’s Microsoft Patch Tuesday?

Obscured by a long list of Microsoft patches and some fuss about a missing SMB fix, the answer is Adobe, which normally times its update cycle to coincide with the OS giant’s monthly schedule.

It’s mostly a practical convenience – admins and end-users get all the important client patches at once, which includes Adobe’s ubiquitous Acrobat and Reader software.

And yet March’s roster was Adobe-less. This week the company made amends, issuing fixes for an unusually high total of 41 CVE-listed vulnerabilities, 21 of which are rated critical.

It’s not clear what caused the delay, although it might simply be down to the sheer number of flaws and the need to finalise patches before making them public.

The two patching hotspots are the 22 CVEs in Photoshop and 13 in Acrobat and Reader.

Of these, 16 uncovered in Photoshop/CC for Windows and macOS are rated critical compared to a more modest 9 in Acrobat and Reader.

That said, Reader is ubiquitous on Windows and Macs, which is why admins will probably zero in on those flaws as the top priority.

The Acrobat/Reader criticals include five use-after-free CVEs, a buffer overflow, memory corruption, a stack-based buffer overflow, and an out-of-bounds write.

Interestingly, these cluster heavily around only two categories in the recently (and completely) revised MITRE Corporation Common Weakness Enumeration (CWE) Top 25 list of most dangerous software weaknesses, specifically CWE-119 and CWE-416.

The first of those generic programming weaknesses, CWE-119 (Improper Restriction of Operations within the Bounds of a Memory Buffer), is by some distance the most common class of software weakness as measured by the number of CVEs associated with it and their severity.

A similar concentration of CWE-119 weaknesses is true for many of the critical flaws in Photoshop. The answer for Acrobat/Reader DC is to update to version 2020.006.20042 (APSB20-13), while for Photoshop it’s version 20.0.9 for Photoshop CC 2019, and version 21.1.1 for Photoshop 2020.

Most of the Acrobat/Reader flaws allow arbitrary code execution which would be exploited by persuading users to open a malicious PDF, so these should be patched as soon as possible.

At least there is some good news – as far as anyone knows, none of the vulnerabilities are being exploited in the wild.



Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/TahXlWxB91g/

Cryptojacking is almost conquered – crushed along with coinhive.com

Cryptojacking may not be entirely dead following the shutdown of a notorious cryptomining service, but it isn’t very healthy, according to a paper released this week.

Cryptomining websites embed JavaScript code that forces the user’s browser to begin mining for cryptocurrency. The digital asset of choice is normally Monero, which is often used in cybercrime because of its enhanced anonymity features.

Some cryptomining sites sought the visitor’s permission to co-opt their browser, often in exchange for blocking ads. Others did it surreptitiously (which is what we call cryptojacking). Either way, one name kept cropping up in these cases: Coinhive.

Coinhive provided Monero cryptomining scripts for use on websites, retaining 30% of the funds for itself. It showed up on large numbers of cryptomining and cryptojacking sites. Researchers tracked them with a tool called CMTracker.

Monero underwent a hard fork and its price plummeted. This contributed to Coinhive shuttering its service in March 2019, claiming that falling prices made it economically unviable.

Given Coinhive’s popularity, how prevalent is cryptojacking now? That’s what researchers at the University of Cincinnati and Lakehead University in Ontario, Canada explored in their paper, called Is Cryptojacking Dead after Coinhive Shutdown?

The researchers checked 2,770 websites that CMTracker had previously identified as cryptomining sites to see if they were still running the scripts. They found that 99% of sites had ceased activities, but that around 1% (24 sites) were still operating with working scripts that mined cryptocurrency. Manual checks on a subset of the sites found that a significant proportion (11.6%) were still running Coinhive scripts that were trying to connect to the operation’s dead servers.

So, where do these new scripts come from? The researchers found them linking back to eight distinct domains with names like hashing.win and webminepool.com. Searching on the eight domains surfaced 632 websites using their scripts. By far the most popular was minero.cc.
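The paper’s tooling isn’t public, but a rough version of the same check (does a page pull a script from a known miner-hosting domain?) is easy to sketch in Python. The domain list below uses the names mentioned above plus Coinhive; everything else, including the helper name and the sample page, is illustrative.

```python
import re

# Miner-hosting domains named above; extend as needed.
MINER_DOMAINS = {"coinhive.com", "hashing.win", "webminepool.com", "minero.cc"}

SCRIPT_SRC = re.compile(r"<script[^>]+src=[\"']([^\"']+)[\"']", re.IGNORECASE)

def find_miner_scripts(html: str):
    """Return the external script URLs in a page that point at known
    cryptomining domains. A very rough heuristic, not a CMTracker replacement."""
    hits = []
    for src in SCRIPT_SRC.findall(html):
        if any(domain in src.lower() for domain in MINER_DOMAINS):
            hits.append(src)
    return hits

if __name__ == "__main__":
    page = '<html><script src="https://coinhive.com/lib/coinhive.min.js"></script></html>'
    print(find_miner_scripts(page))  # ['https://coinhive.com/lib/coinhive.min.js']
```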

Browser-based cryptominers often seek out certain online properties like movie streaming sites to help ensure that victims stay connected, the paper said. However, they can use tricks like hidden pop-under windows to maintain a connection even after the user closes a browser tab, and technologies like WebSockets, WebWorkers and WebAssembly to make connections more robust and take direct advantage of client hardware.

The researchers said:

Cryptojacking did not end after Coinhive shut down. It is still alive but not as appealing as it was before. It became less attractive not only because Coinhive discontinued their service, but also because it became a less lucrative source of income for website owners. For most of the sites, ads are still more profitable than mining.

Will browser-based cryptojacking stay suppressed? A lot depends on its profitability. Should Monero or some other cryptojacking-friendly currency grow sufficiently in value, there will doubtless be another rush to capitalise on it.

This study didn’t look at server-side cryptojacking. This has been a scourge for companies like Tesla, which saw cryptojacking hackers compromise its cloud-based servers in early 2018. Something similar happened to the LA Times. The advantage in those attacks is that the servers keep mining, whereas a home user may shut down their laptop or desktop at the end of the day.



Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/9hDRNF7x_6k/

Oh-so-generous ransomware crooks vow to hold back from health organisations during COVID-19 crisis

Updated The operators behind the DoppelPaymer and Maze ransomware have stated that they will not target medical organisations during the current pandemic.

Lawrence Abrams, who runs the security news site Bleeping Computer, reports that he made contact with “the operators of the Maze, DoppelPaymer, Ryuk, Sodinokibi/REvil, PwndLocker, and Ako Ransomware infections to ask if they would continue targeting health and medical organizations during the outbreak.”

The DoppelPaymer operators responded that “we always try to avoid hospitals, nursing homes … we always do not touch 911 (only occasionally is possible or due to missconfig in their network) … if we do it by mistake – we’ll decrypt for free.”

Maze operators also responded, sending Abrams a “press release” stating that “We also stop all activity versus all kinds of medical organizations until the stabilization of the situation with virus.”

These statements are no cause for celebration. According to security company Coveware, the top ransomware types at the end of 2019 were Sodinokibi, Ryuk, Phobos, Dharma and DoppelPaymer. If you suffer a ransomware attack, chances are it will come from a crew that has made no such promise.

Chart: Most common ransomware attacks in late 2019, figures from Coveware

Better news is that Emsisoft and Coveware are offering free help to healthcare providers during the outbreak.

Emsisoft also includes an appeal to the criminals, stating: “We also know you are humans, and that your own family and loved ones may find themselves in need of urgent medical care… Please do not target healthcare providers during the coming months and, if you target one unintentionally, please provide them with the decryption key at no cost as soon as you possibly can. We’re all in this together, right?”

Ransomware is hugely disruptive, in effect closing down access to IT systems and data for those affected. What needs to be done? In its report on ransomware in 2019 in the US, Emsisoft said lax security standards were largely to blame, noting that “[while] 966 government agencies, educational establishments and healthcare providers were impacted by ransomware in 2019, not a single bank disclosed a ransomware incident. This is not because banks are not targeted; it is because they have better security.”

Well, unless they are Travelex, perhaps. Emsisoft added that “unless governments improve their cybersecurity posture, cyberattacks against them will continue to succeed.”

Could organisations easily do more to protect themselves? Microsoft has reported that 1.2 million Office 365 accounts are compromised every month, a figure that could be cut by 99.9 per cent if organisations enforced multi-factor authentication.

Security is hard and can be inconvenient, but getting some basic best practices in place might at least reduce the number of times ransomware slingers rake in the cash and crow over their “success”.®

Updated at 1300 to add:

Shortly after we published this piece, a threat analyst from Emsisoft contacted us to note that Maze’s operators had announced just a few days ago that it had hit a medical research company in London. The Reg has seen a screenshot from a workstation of the affected company. MazeOps, come on. Don’t be that *£$*@$$.


Article source: https://go.theregister.co.uk/feed/www.theregister.co.uk/2020/03/19/ransomware_crooks_promise_to_hold_back_from_health_organisations_during_crisis_so_generous/

Quantifying Cyber Risk: Why You Must & Where to Start

Quantifying cybersecurity risks can be a critical step in understanding those risks and getting executive support to address them.


Risk. According to Merriam-Webster, the word has several meanings. First is “possibility of loss or injury: PERIL.” A little further down the list comes “the chance of loss or the perils to the subject matter of an insurance contract, also: the degree of probability of such loss.” Now, from a business perspective, we’re getting somewhere.

The cybersecurity world is accustomed to talking about risk in colorful terms. “Code red,” “condition yellow,” and the like have long been used to discuss the immediate risk environment. But as cybersecurity has become an issue for business executives as much as technology managers, the language has changed and the risk conversation has become a quantitative one.

A sign of maturity

Brian Riley, senior director of global cyber risk management at Liberty Mutual, says, “Putting numbers or metrics around risk allows you to have a different level of conversation about what that means.” He explains that the differences not only allow the conversations to take place with different business groups, but are also indicative of a growing maturity in the field of cyber risk.

One sign of cybersecurity maturity is adoption of a common language and analytical framework to describe risk in terms other lines of business understand.

There are a number of organizations that have developed such tools. For example, the International Organization for Standardization (ISO) and the National Institute of Standards and Technology (NIST) have created sweeping, comprehensive standards. And a tool like the Factor Analysis of Information Risk (FAIR) is a practical framework that helps organizations uphold those standards – specifically the ones that relate to cyber risk.

Frameworks make the team work

The FAIR model builds its framework on a series of definitions, beginning with assets and continuing to risks, which are broadly defined as the probability that a loss will occur to an asset. Various kinds of loss, such as productivity, replacement, and reputation losses, are defined, as are the types of threats that can lead to them. Those threats are then placed into a multi-dimensional context of severity and likelihood, all expressed in numbers rather than descriptive terms.
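To make that concrete, here is a small, purely illustrative Python sketch of the underlying quantification idea: sample how often loss events occur and how much each one costs, then report annual loss in dollars at a few percentiles. It is not the FAIR standard itself, and the frequency and loss ranges are invented.

```python
import random

def simulate_annual_loss(freq_low, freq_high, loss_low, loss_high, trials=100_000):
    """Very rough, FAIR-flavoured Monte Carlo: sample how many loss events
    occur in a year and how much each costs, then report dollar percentiles."""
    totals = []
    for _ in range(trials):
        events = random.randint(freq_low, freq_high)  # loss event frequency
        total = sum(random.uniform(loss_low, loss_high) for _ in range(events))
        totals.append(total)
    totals.sort()
    pct = lambda p: totals[int(p * (len(totals) - 1))]
    return {"p50": pct(0.50), "p90": pct(0.90), "p99": pct(0.99)}

if __name__ == "__main__":
    # Made-up inputs: zero to four phishing-driven incidents a year, $50k-$400k each.
    print(simulate_annual_loss(0, 4, 50_000, 400_000))
```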

While Liberty Mutual’s Riley makes extensive use of FAIR in his work, it’s not the only tool he brings to the job.

“The work that Mitre has done with their ATT&CK framework of adversarial tactics, techniques, and common knowledge creates a taxonomy that allows an organization to think about specific attack tactics and the security controls that can be applied to those tactics in a repeatable way,” he says. That consistent repeatability is something professionals see as critical not only for addressing risk within cybersecurity but for talking about risk within the larger context of the business.

“I think [FAIR] helps you put the cyber risk on the same level as other elements of risk that we’re addressing at the enterprise level,” Amit says. “So if I’m looking at financial risk, operational risk, a competitive landscape, these are all at the end of the day quantified to a degree with some sort of a range that revolves around the specific threat.”

He explains, “Here’s a scenario. Here’s our exposure. Here’s the risk associated with that.” And expressing that risk in a quantified way that other professionals within the business can understand means that the risk can be addressed by the entire organization.

Steve Durbin, managing director of the Information Security Forum, says that the common understanding within the organization is critical. “The challenge for security is to be able to translate security metrics into a form of reporting that is relevant and understandable to a senior audience and aligns with and supports the assessment of business performance and ultimately business risk,” he says.

That assessment will, at some level, need to be expressed in the dollars and cents terms that are the core of executive discussion.

Brass tax

“For board-level metrics, analytics data must often be combined with some sort of cost-benefit analysis,” says Heather Paunet, vice-president of product management at Untangle.

Amit agrees, giving an example of the discussion a CISO can have with the executive board.

“Here’s why I’m trying to reduce this certain scenario’s risk from $2 million to $800,000. So I’ve got a $1.2 million risk reduction. And in order to perform that activity, I’m asking for investment in the magnitude of $200,000. So $200,000 for $1.2 million. Now you’re starting to make sense as far as my return on investment,” he says.
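In round numbers, the pitch quoted above is simple arithmetic. A trivial sketch, using only the figures from the quote:

```python
def risk_reduction_roi(risk_before, risk_after, investment):
    """Return (absolute risk reduction, reduction per dollar invested)."""
    reduction = risk_before - risk_after
    return reduction, reduction / investment

if __name__ == "__main__":
    # The scenario quoted above: $2M risk brought down to $800k for a $200k spend.
    reduction, ratio = risk_reduction_roi(2_000_000, 800_000, 200_000)
    print(f"${reduction:,.0f} reduction, ${ratio:.0f} of risk reduced per $1 invested")
```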

And boards of directors are increasingly interested in having CISOs and risk managers make sense in board meetings. “It would be a very foolish board indeed today that said it had no interest in understanding the company’s security posture and what steps were being taken to protect its critical assets,” says Durbin.

Fortunately for both boards and the professionals charged with providing information, “The industry is gradually maturing in this space to have more quantifiable metrics around what risks look like across most frameworks,” according to Riley.

The point, ultimately, is what cybersecurity professionals can do about the risks they see. “It’s our job to figure out what portion of a loss scenario, what portion of the risk element that we’re measuring do we have control of in that loss scenario,” says Amit. He asks, “What is it that you have control over that can change the outcome of a particular scenario?”


Curtis Franklin Jr. is Senior Editor at Dark Reading. In this role he focuses on product and technology coverage for the publication. In addition he works on audio and video programming for Dark Reading and contributes to activities at Interop ITX, Black Hat, INsecurity, and … View Full Bio

Article source: https://www.darkreading.com/theedge/quantifying-cyber-risk-why-you-must-and-where-to-start-/b/d-id/1337335?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Achieving DevSecOps Requires Cutting Through the Jargon

Establishing a culture where security can work easily with developers starts with making sure they can at least speak the same language.

When it comes to developer and security teams, the word of the day is friction. On one hand, developers are focused on creating and moving as fast as possible. On the other, security teams are typically injected into the process at inopportune times to remediate software vulnerabilities.

This disjointed dynamic interrupts the flow and speed at which developers like to operate, causing them to see security as a roadblock rather than a group they should be working with hand in hand.

Business leaders understand the general importance of establishing a culture of “DevSecOps” within their environment, where developers and security work seamlessly together to accomplish a common goal. But getting them on the same page is a mounting challenge. Both have been traditionally fragmented areas of business and work in deeply technical environments with completely different agendas.

As with any department, developers have jargon they use that is foreign to those not deeply involved in the intricacies of software development. Establishing a culture where security can work easily with developers starts with making sure they can at least speak the same language.

To help bridge the gap, I broke down some of the common initiatives and terms that developers use and security pros should know to help get them on the same page and best integrate security into their processes.

  • “Shifting left”: This should really be called “expanding left” (but that’s an argument for another day). Shifting left simply means that software and systems testing should happen early in the life cycle to catch defects and bugs quickly. With developers focused on speed and innovation, finding bugs late in the software development process slows them down too much. Shifting left aims to keep mistakes from impeding development.

This matters from a security standpoint for a few reasons. With increasingly fast DevOps cycles, developers inevitably have a greater responsibility to secure code because security demands their attention more frequently. Traditional security gates simply aren’t enough to keep up with this fast pace, hindering not just speed but innovation, too. Security pros also have to shift left to help embed security right into development processes. When this happens, an organization goes from a DevOps to a DevSecOps culture, where both teams contribute significant value by working in tandem.  

• “Continuous integration”: Continuous integration (CI) means that code changes are automatically integrated from multiple contributors for one piece of software. For developers, this process is incredibly interactive and quick. When you hear developers talk about CI, they mean the changes being made across their teams are being implemented automatically to improve the software in development as a collaborative effort.

For security, this is valuable to know because of the real opportunity to enforce secure coding practices and vulnerability assessments at this stage. For example, static code analysis early in the software development life cycle can help identify bugs that would normally hit production. When vulnerabilities are removed preproduction, it’s a win not only for the security folks but the developers as well because they don’t have to rewrite the code further down the line (or roll anything back).

• “Regression testing”: Regression testing is pretty simple. It’s the end-to-end testing of a new or updated application to make sure that updates or modifications don’t negatively impact how the application operates for end users. And it’s a process where security should definitely be involved.

During regression testing, security should seize the opportunity for collaboration. Just as developers want to test the product for functionality issues, security should examine the app to identify weaknesses that could be exploitable. This way, proper security gates can be put in place to mitigate vulnerabilities before production.

When security is involved in the regression testing process, the application is tested from both a functionality and a vulnerability standpoint before being rolled out — resulting in high-performing and more secure software.

• “Canary rollout” and “failing forward”: These two pretty much go hand-in-hand. A canary rollout is when developers roll out a new or updated software version to a small subset of users to assess how well it operates prior to a mass rollout. Failing forward is when developers find an issue during a canary rollout and make adjustments on the fly rather than rolling back to an old version or impeding development.

Just as developers test for performance issues within a smaller environment during a canary rollout, security can be involved in the process as well by monitoring a small subset of users within a lower risk environment. It’s almost like a test drive to see how well the final product will fare once unleashed into the wild.
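As a rough illustration of the canary idea described in the last item, the Python sketch below deterministically routes a small percentage of users to the new build by hashing their IDs; the function names, version labels, and the 5% figure are made up for the example.

```python
import hashlib

def in_canary(user_id: str, percent: float) -> bool:
    """Deterministically place roughly `percent` of users in the canary group
    by hashing the user ID into a 0-99 bucket. Same user, same answer."""
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100
    return bucket < percent

def choose_version(user_id: str) -> str:
    # Roll the new build out to ~5% of users; everyone else stays on stable.
    return "v2-canary" if in_canary(user_id, 5) else "v1-stable"

if __name__ == "__main__":
    users = [f"user-{i}" for i in range(1000)]
    canary = sum(choose_version(u) == "v2-canary" for u in users)
    print(f"{canary} of {len(users)} users routed to the canary build")
```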

Achieving DevSecOps
It obviously takes more than learning new terms and initiatives to embed security into developer processes, but one of the most pivotal first steps security teams can take is collaborating closely with developers to understand where security should be involved to add real value.

It’s understandable that development teams want to innovate as quickly as possible, and security teams should be sympathetic to that. It’s also understandable that security teams want developers to help ensure their innovations don’t ignore proper security, and development teams should be sympathetic to that.

The key, and the only path forward, is a combined approach where both sides work together to achieve the common goal of quickly pushing out innovative software that is also as secure as possible. Until that mindset shift happens, however, both parties will continue to point to the other as hindering progress.


Mario DiNatale serves as head of platform security at ZeroNorth. Prior, he was CTO at Kyber Security and CIO of the Town of Hamden. In addition, Mario acts as a mentor and adviser to numerous startups. Even while acting in an executive capacity, he still remains regularly … View Full Bio

Article source: https://www.darkreading.com/application-security/achieving-devsecops-requires-cutting-through-the-jargon/a/d-id/1337235?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Cyber Resilience Benchmarks 2020

Here are four things that separate the leaders from the laggards when fighting cyber threats.

These days, companies that want to compete must go digital. But the digital world has become complex and, in some cases, downright scary. Protecting against the Web’s ever-present threats — via cybersecurity — is a tough problem for businesses of any size. But going about it the wrong way by sinking money into the wrong or ineffective solution can result in more than a depleted bank account. It can also ruin the company’s brand, reputation, and future earning potential.

The recent “Third Annual State Of Cyber Resilience” report published by Accenture examines how organizations are dealing with their cybersecurity needs and the techniques they can use to do it better. The gap between the top organizations and the laggards is huge: the top firms make the most of their security investments, while the laggards have much lower threat detection rates, greater adverse impacts and longer downtimes after a cyberattack, and more customer data exposed. Accenture says companies experience an average of 22 security incidents annually, and closing the performance gap represents a potential saving of $6 million per year for the laggards.

Here are four things that separate the leaders from the laggards:

1. They use the right metrics.
As costs rise and the number of third-party threats grows, it’s even more critical that the money spent on security actually delivers effective and efficient results. Companies that get digital right spend to enhance operational speed, extract value from new investments, and sustain what they have. The laggards zero in on measuring their cyber resilience, but the leaders want to know how quickly they’re getting to that destination. In fact, leaders say, the top three metrics of cybersecurity success emphasize speed.

According to Accenture, leaders take pride in how fast they can detect a security breach, mobilize a response, and return to business as normal. They also measure their resiliency — the number of systems that were compromised or stopped, and for how long — and how accurately they were able to pinpoint cyber incidents. While leaders look for speed of threat detection, mitigation, and recovery, the nonleaders are more concerned with the outcomes they want to achieve: cyber operational technology (OT) resiliency, repetition (the portion of breaches that come from repeated attempts of the same type), and cyber IT resiliency.
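As a toy illustration of those speed metrics, the Python sketch below computes mean time to detect and mean time to recover from a handful of incident timestamps; the incident records and field names are hypothetical, not drawn from the Accenture data.

```python
from datetime import datetime
from statistics import mean

# Hypothetical incident records: when the compromise started, when it was
# detected, and when normal operations resumed.
incidents = [
    {"start": datetime(2020, 1, 3, 2, 0), "detected": datetime(2020, 1, 3, 9, 30),
     "recovered": datetime(2020, 1, 4, 18, 0)},
    {"start": datetime(2020, 2, 11, 14, 0), "detected": datetime(2020, 2, 11, 16, 0),
     "recovered": datetime(2020, 2, 12, 1, 0)},
]

def hours(delta):
    return delta.total_seconds() / 3600

mttd = mean(hours(i["detected"] - i["start"]) for i in incidents)      # mean time to detect
mttr = mean(hours(i["recovered"] - i["detected"]) for i in incidents)  # mean time to recover
print(f"MTTD: {mttd:.1f} h, MTTR: {mttr:.1f} h")
```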

The nonleaders ought to rethink their priorities to gauge and ramp up how fast they detect, respond, and recover from cyber threats. They should replicate the methods leaders use to assess cybersecurity performance to attain higher levels of resilience.

2. They value speed.
Bouncing back from a security incident quickly is critical to minimizing damage and reducing the impact on the organization. That’s why leaders who embrace speed say that 83% of the security incidents they experienced made little or no impact on their organization’s operations.

These leaders make the most of current technology. Artificial intelligence (and machine learning) was cited as the No. 1 source to detect and respond to incidents quickly. Such tools enable security leaders to find and remediate damage nearly three times faster than companies that don’t use such tools, the report finds.

The nonleaders should think hard about putting money into technologies that enable them to measure their cybersecurity performance through metrics such as faster detection, faster mitigation, and shorter recovery times.

But there are a lot of vendors and tools out there, and many of them are unclear about exactly what benefits they can offer in terms of time to mitigate. If they are clear, they typically only talk about known attack patterns. However, since the threat landscape keeps evolving, organizations must ensure proper safeguards against emerging patterns too. Because there’s no time to waste in mitigating the effects of an attack, companies must carefully scrutinize their security-provider service-level agreements and make sure they align with the company’s needs.

3. They reduce impacts.
The third point relates to the second, in that failing to take advantage of the most advanced security technology means that attacks can last longer and create greater disruption and higher costs for an organization. Fifty-five percent of the top companies had a business impact that lasted for more than a day. Nearly all (93%) of the laggards made the same claim. Getting the organizational impact down to less than a day is hard — even the leaders struggle to do it — but right now it’s a more urgent challenge for the nonleaders who have plenty of room to up their game.

One of the big reasons for failure is that many organizations operate with low degrees of security automation and rely on humans to fend off attacks. However, as anyone who’s been paying attention to cybersecurity knows, human error is one of the most-cited reasons for things to go terribly wrong.

That’s one reason why, over the last year, 13% of the security leaders faced charges of regulatory violations versus 19% of the nonleaders. Also, 19% of the latter incurred fines, as opposed to only 9% of leaders. Given that the EU’s General Data Protection Regulation can levy fines of over $100 million for violations, it’s clear that noncompliance could result in fines that are even higher than the already considerable downtime costs.

4. They’re team players.
When quizzed on how much collaboration matters, 79% of the respondents in the Accenture survey opined that working with law enforcement, government, and the broader security community will be essential to fighting cybercrime in the future. On that note, organizations that do this best — the ones that employ more than five ways to unite strategic partners, the security community, and internal resources to grow awareness and understanding of cybersecurity issues — are twice as good at protecting themselves against attacks as those who take a less thorough approach.

On top of this, corporate governance is also undergoing some changes. Reporting security matters to the CEO has increased by 8 percentage points, but reporting to the board has shrunk by 12%. Direct reports to the CIO are down about 5% year-on-year — reducing a possible conflict of interest between both realms — with a general drift to the CTO of about 10% over the same period, the Accenture report highlights.

Staff and employee training is one more big area for improvement. Thirty percent of the security leaders said they train more than three-quarters of the people who need training on new security tools. Among nonleaders, the figure is only 9%.

Conclusion
If there’s anything the Accenture report shows, it’s that everyone — even the security leaders — can do better. Whether they are leaders or laggards, organizations should look hard at where they’re falling short and make every effort to improve.

In every case, putting money into boosting operational speed, extracting value from security investments, and stewarding what they have will put an organization on the right road to effective cybersecurity. Those who do this best tend to choose advanced technologies that help them detect and respond to cyberattacks fast. Once they settle on a security solution, they roll it out quickly.

In fact, the number of leaders who invest over one-fifth of their budget in advanced technologies has grown twofold over the past three years. As a result, these leaders have become more confident in their ability to extract more value from their investments and are outperforming companies that don’t take the same rigorous, proactive approach to cybersecurity.


Marc Wilczek is a columnist and recognized thought leader, geared toward helping organizations drive their digital agenda and achieve higher levels of innovation and productivity through technology. Over the past 20 years, he has held various senior leadership roles across … View Full Bio

Article source: https://www.darkreading.com/attacks-breaches/cyber-resilience-benchmarks-2020/a/d-id/1337288?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple