
When Compliance Isn’t Enough: A Case for Integrated Risk Management

Why governance, risk, and compliance solutions lull companies into a false sense of security, and how to form a more effective approach.

The governance, risk, and compliance (GRC) approach to risk management is proving insufficient as companies grapple with myriad tools amid a false sense of security. Instead they now are turning to integrated risk management (IRM) and risk quantification to inform strategies.

“What we are seeing, and have seen over the last five years, is a pivot away from more of a compliance-focused approach around IT and security risk that you’d typically find in a GRC program, or even in utilizing GRC technology,” says John Wheeler, global research leader for Gartner’s Risk Management Technology division. His focus is on IRM, which involves different ways to address risk and, potentially, to transfer it through vehicles such as cyber insurance.

GRC, now around for nearly two decades, stemmed from a growing need to address the broad landscape of compliance mandates security pros face year after year, Wheeler says. While helpful in meeting those mandates, companies that invested heavily in GRC-specific tools found themselves with a “potpourri” of products, each either purpose-built for a specific compliance requirement or limited in its ability to understand risks unique to the organization.

“For many organizations, they may have a false sense of security,” he adds. “If they think they are compliant with regulations, risks are addressed … [this] couldn’t be further from the truth.”

It is imperative that companies understand their individual risk profile, Wheeler continues; out of that will come a greater ability to meet the compliance mandates relevant to the business. Rather than focus on GRC, many are turning to IRM so they can see how IT risk – and cybersecurity requirements and posture – fits into and aligns with broader operational risk.

“[IRM is] taking it beyond technology into the realm of people and process risk, and ultimately all the way up to overall strategic risk of an organization, such that they can understand their security and IT risk aligned with where the organization is headed strategically,” he explains.

IRM is a “forward-looking risk posture” in that it considers the most strategic initiatives a business is taking on, and where it’s headed, as opposed to reporting on historical security incidents. While past events are important and can inform an enterprise approach to security, they make up only a small piece of the picture – and one senior executives and board members can’t fully appreciate as it has little relevance to what they’re hoping to achieve in the future.

Context is Key: Why IRM is Different  

The core of IRM is the ability to perform risk assessment at an asset-based level, which aligns with the IT or cybersecurity world, says Wheeler, who spoke about the approach at this week’s FAIR Conference, held in Washington, D.C. Most IT and security pros assess the risk of their hardware, software, and data assets to determine which of these are most critical.

“That is important, but what they lack is context of how those assets are also tied into the broader business,” he says. They need to take the risk assessment of a given process, and the people involved, and tie those into asset-based risk assessment to realize how they intersect.

For example, you may have a server on the network deemed critical, but in reality, it doesn’t support any critical business processes, so it doesn’t need to be highly ranked. At the same time, you may have an asset labeled non-critical, located outside the core network and tied into a highly critical business process. For that reason, it will need to be treated differently. These risk assessments can help IT better understand how different systems relate to one another; in doing this, they can better prioritize their work efforts and resource allocation, he adds.
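The re-ranking Wheeler describes can be sketched in a few lines. In this illustrative example (all asset names, scores, and process mappings are hypothetical, not from the article), an asset's priority is driven by the criticality of the business processes it supports, with the raw asset score as a fallback:

```python
# Hypothetical sketch: re-rank assets by the criticality of the business
# processes they support, rather than by asset-level assessment alone.

ASSETS = {
    # A "critical" server on the core network that supports no business process.
    "core-db-server": {"asset_score": 9, "processes": []},
    # A "non-critical" box outside the core network tied to a critical process.
    "branch-file-share": {"asset_score": 3, "processes": ["payroll"]},
}

PROCESS_CRITICALITY = {"payroll": 10}

def business_priority(asset):
    """Priority = highest criticality of any supported business process;
    fall back to the raw asset score when nothing is tied to the asset."""
    info = ASSETS[asset]
    process_scores = [PROCESS_CRITICALITY[p] for p in info["processes"]]
    return max(process_scores) if process_scores else info["asset_score"]

# The nominally "non-critical" asset now outranks the "critical" server.
ranked = sorted(ASSETS, key=business_priority, reverse=True)
```

The point of the sketch is the inversion: once process context is added, the asset labeled non-critical rises to the top of the work queue.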

IRM is helpful in informing the development of new products and services, says Wheeler, as it provides a vertical view of risk through the company. This is “essential” in helping businesses address digital risk management as it relates to the creation and delivery of new digital products and services, an issue of great importance to CEOs who want to use these to grow.

“To do that effectively, they need to have that vertical view of risk down through the organization to give them better understanding and visibility into the risk they face with digital products and services,” Wheeler says. “Not only for developing a business case, but then as it progresses from business case to design and delivery, understanding how risk profile changes.”

Navigating Shifts and Challenges

Wheeler acknowledges adopting IRM comes with its obstacles: while security pros can use tools and methodologies to better quantify risk, he says, it will never be precise in its calculation.

“It’s unlike, say, financial risk, when you get into credit risk or market risk, where you can be very precise in the amount of risk that needs to be mitigated or transferred,” he explains. The goal of this exercise should be “directionally correct,” as he puts it, instead of entirely exact. With that expectation, organizations can focus on creating and maintaining a dialogue around IT and cybersecurity risk, and make decisions based on the directionally correct data they have.
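A "directionally correct" estimate of this kind is often produced with a FAIR-style Monte Carlo simulation: model loss-event frequency and loss magnitude as distributions, simulate many years, and report percentiles rather than a single precise figure. The sketch below is illustrative only; the distributions and parameters are invented, not from the article:

```python
# Illustrative FAIR-style sketch: annual loss = (event count) x (loss per event),
# simulated many times to get a directionally correct range, not a precise number.
import random

random.seed(1)  # make the sketch reproducible

def simulate_annual_loss(trials=10_000):
    losses = []
    for _ in range(trials):
        events = random.randint(0, 4)  # hypothetical incidents per year
        loss = sum(random.uniform(50_000, 250_000) for _ in range(events))
        losses.append(loss)
    losses.sort()
    return {
        "median": losses[trials // 2],       # typical year
        "p90": losses[int(trials * 0.9)],    # reasonable worst case
    }

estimate = simulate_annual_loss()
```

Reporting a median and a 90th percentile gives the business a range to discuss and act on, which is exactly the dialogue Wheeler recommends, without pretending to the precision of credit or market risk models.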

He also points to a shift occurring within many organizations: more and more risk is borne by people within the business, as opposed to technology experts and leaders. At the same time, technology is becoming a frontline activity as it supports products and services. This accountability will drive a desire within the business to be engaged and to understand the risk.

With that engagement comes the need for a shared understanding. IT and security pros can provide risk data, but everyone must keep the focus on the risk itself rather than on the process of calculating it. In his experience, Wheeler says, much of the conversation between business and technology devolves into a debate over how a risk figure was calculated – which sidesteps the goal of addressing risk in a way that drives the business forward.

Related Content:

Check out The Edge, Dark Reading’s new section for features, threat data, and in-depth perspectives. Today’s top story: “The Beginner’s Guide to Denial-of-Service Attacks: A Breakdown of Shutdowns.”

Kelly Sheridan is the Staff Editor at Dark Reading, where she focuses on cybersecurity news and analysis. She is a business technology journalist who previously reported for InformationWeek, where she covered Microsoft, and Insurance Technology, where she covered financial … View Full Bio

Article source: https://www.darkreading.com/risk/when-compliance-isnt-enough-a-case-for-integrated-risk-management/d/d-id/1335917?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Twitter’s new policy bans financial scams

Around about a year ago, it looked like Elon Musk was promoting a great deal: send a little bit of Bitcoin to the wallet of a blue-checkmark verified Twitter account, and get back 10x your money!!!!

…except, of course, he wasn’t. It was a scam: some flimflammer had gotten hold of a verified account, kept the handle (Knip), and changed the display name next to “Promoted by” to read “Elon Musk.”

At the time, Naked Security’s Maria Varmazis wondered how in the world the behavioral red flags of the hijacked account hadn’t set off any warning bells at Twitter:

This verified account was inactive for a few months and then suddenly sprang to life, tweeting about cryptocurrency and asking for deposits. The display name was changed and the avatar was reset. In isolation, just one of these behaviors might not mean much, but in series, they paint a picture of an account that’s likely up to no good.

We don’t know what kept Twitter from spotting a string of behavior that led up to such an egregious scam: whoever it was had made withdrawals of at least $3,000 from the $10,000 worth of Bitcoin in their wallet at the time Maria checked.

Crackdown on scams

But now, we’re pleased to report that Twitter is finally cracking down on these kinds of financial scams.

On Monday, the platform unveiled a new policy that prohibits using “scam tactics” to weasel money or private financial information out of others. It’s outlawing behavior that involves deceiving others into sending money or personal financial information via phishing, deception or fraud.

One of the examples of scam tactics that Twitter listed matches the Elon Musk scam: Deceiving others into sending money or personal financial information by operating a fake account or by posing as a public figure or an organization.

Twitter calls this type of fraud a “relationship/trust-building scam,” which sounds a lot like what we refer to as confidence scams. These are scams that involve a conman or woman gaining their victim’s trust, whether it’s by pretending to be Elon Musk or the love of your life. They try to convince their marks to send money, whether it’s because they have spare money/Bitcoins they want to sprinkle upon their fans out of the goodness of their hearts, or they need to buy airfare to visit or bail money when they purportedly get arrested en route, or for any other of an endless variety of boo-hoo stories.

Don’t try to pull any of that on Twitter, its new policy says:

Using scam tactics on Twitter to obtain money or private financial information is prohibited under this policy. You are not allowed to create accounts, post Tweets, or send Direct Messages that solicit engagement in such fraudulent schemes.

Here are some other examples Twitter gave of prohibited, deceptive tactics:

Money-flipping schemes. You may not engage in “money flipping” schemes (for example, guaranteeing to send someone a large amount of money in return for a smaller initial payment via a wire transfer or prepaid debit card).

Fraudulent discounts. You may not operate schemes which make discount offers to others wherein fulfillment of the offers is paid for using stolen credit cards and/or stolen financial credentials.

Phishing scams. You may not pose as or imply affiliation with banks or other financial institutions to acquire others’ personal financial information. Twitter said to keep in mind that other forms of phishing to obtain such information are also in violation of its platform manipulation and spam policy.

It’s been too easy to pose as somebody else on Twitter

Twitter’s new policy doesn’t come a day too soon.

Cryptocurrency giveaways and other types of financial scams have exploded on Twitter, where it’s been ridiculously easy for fraudsters to impersonate celebrities and influencers.

While the Twitter user names that show up in your URL are unique, display names are personal identifiers that show up on your profile page and on your posts. Users can set them to anything, and unfortunately, that means that fraudsters can pretend to be somebody you trust, including a cryptocurrency somebody.

For example, we’ve seen it happen to the popular exchange BitStamp, to Litecoin founder Charlie Lee, and to Vitalik Buterin, co-founder of Ethereum.

What’s still OK to post?

Financial disputes are still OK on Twitter. It’s just when accounts engage in deceptive scamming, phishing or other fraud tactics that Twitter’s stepping in. These are the types of financial disputes in which it’s not going to intervene:

  • Claims relating to the sale of goods on Twitter.
  • Disputed refunds from individuals or brands.
  • Complaints of poor quality goods received.

See something? Report it

If you spot fraudulent financial content, you can report it, like so:

  • Select Report Tweet from the little gray dropdown arrow.
  • Select It’s suspicious or spam.
  • Select the option that best tells Twitter that the Tweet is suspicious or spreading spam.
  • Submit your report.

What Twitter might do to malfeasants

There are a number of actions Twitter might take when it finds users violating these policies:

  • Anti-spam challenges that might ask for additional information or for the account to solve a reCAPTCHA.
  • Blacklisting URLs. Twitter may flag potentially unsafe URLs with a warning and even block them from being posted.
  • Tweet deletion and temporary account locks. First offenders might just get their Tweets deleted or a temporary account lock. Repeat offenders will be permanently suspended.
  • Permanent suspension. Twitter’s going to permanently suspend accounts that commit “severe” violations, which it says includes things like operating accounts where the majority of behavior is in violation of its policies, or playing Whack-A-Mole by creating accounts to replace or mimic a suspended account.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/RF4--MivPx4/

Patch released for Windows-pwning VPN bug

VPN vendor Forcepoint has patched a security flaw that could have given attackers unfettered access to its users’ Windows computers.

Security company SafeBreach Labs discovered the vulnerability in Forcepoint’s VPN client software. The software used to be called the Stonesoft VPN client before Raytheon Websense rebranded as Forcepoint and bought it in 2016. It provides a secure connection between Windows endpoints and the Forcepoint Next Generation Firewall. You’d use it to log in securely to your company’s servers over public Wi-Fi, for example.

The vulnerability lies in the client software’s choice of directory paths when loading a critical software module. It loads on bootup as sgvpn.exe, which is an executable digitally signed by Forcepoint, running under a privileged NT AUTHORITY\SYSTEM account.

sgvpn.exe then tries to find another file called sgpm.exe, which is the VPN’s policy manager. It looks in two locations: C:\Program.exe and C:\Program Files (x86)\Forcepoint\VPN.exe.

The problem is that it isn’t supposed to look in those locations.

In its article detailing the bug, Forcepoint explained that the incorrect directory paths are due to an unquoted search path vulnerability: sgvpn.exe builds a command line containing the executable’s path and an argument that tells the operating system how to run it.

Windows best practice dictates that when a command line contains an executable path with spaces, the path be wrapped in quotes so the operating system can unambiguously separate the executable from its arguments. Because Forcepoint didn’t do that, Windows misinterpreted the command and searched the erroneous directories.
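The resolution behavior can be illustrated with a short sketch. When a command line is unquoted, Windows tries each space-delimited prefix of the string (appending “.exe” where needed) until it finds an executable. The sketch below mimics that candidate-generation logic in Python; the full install path shown is an assumption for illustration, not taken from Forcepoint’s documentation:

```python
# Sketch (not Forcepoint's code): which executable paths Windows would try,
# in order, for an UNQUOTED command line containing spaces.

def unquoted_candidates(command_line):
    """Return the candidate executable paths Windows tries, in order,
    stopping at the first prefix that already names an .exe file."""
    parts = command_line.split(" ")
    candidates = []
    for i in range(1, len(parts) + 1):
        prefix = " ".join(parts[:i])
        if prefix.lower().endswith(".exe"):
            candidates.append(prefix)
            break  # the intended executable; the rest are arguments
        candidates.append(prefix + ".exe")
    return candidates

# Hypothetical unquoted command line resembling the article's example:
cmd = r"C:\Program Files (x86)\Forcepoint\VPN Client\sgpm.exe /service"
candidates = unquoted_candidates(cmd)
```

An attacker who can plant a file at an earlier candidate path, such as C:\Program.exe, wins the race. Quoting the path (`"C:\Program Files (x86)\...\sgpm.exe" /service`) removes the ambiguity entirely.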

This flaw enables an attacker to insert their own sgpm.exe file in one of the incorrect locations, and the sgvpn.exe executable will run it. Because sgvpn.exe runs under an account with administrative privileges, the attack code would have administrative access to the system.

The flaw also allows attack code to execute natively on the system without any checks. Because Forcepoint signed sgvpn.exe, an attacker can evade application whitelists that only run code signed by approved developers, SafeBreach explained.

Because sgvpn.exe loads on startup, it also means that an attacker could introduce a persistent attack, the company added:

…once the attacker drops a malicious EXE file in one of the paths we mentioned earlier, the service will load the malicious code each time it is restarted.

Exploiting the bug wouldn’t be easy for an attacker that didn’t already have some foothold on the system, because it would take administrative privileges to get the attack file into the targeted directories in the first place. If an attacker already has administrative access to your system, you’re already in trouble.

Nevertheless, Forcepoint gave the bug CVE-2019-6145 and a base severity score of 6.7 (Medium).

What to do?

According to its knowledge base article, published 19 September 2019, the company has patched the flaw in version 6.6.1 of the Forcepoint VPN Client for Windows.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/urFdXrwNd_M/

Google wins landmark case: Right to be forgotten only applies in EU

Be careful what you put online, we constantly tell kids: the internet never forgets.

Well, unless you’re European, that is. In Europe, people have the right to ask the internet to develop select amnesia when it comes to what Google Search captures and retains in its expansive maw.

It’s called the right to be forgotten (RTBF): a right bestowed in 2014 when the European Court of Justice (ECJ) ruled that people are entitled to having the internet forget them.

Since 2015, Google and the French data privacy regulator, CNIL, have been wrestling over how wide a net that implies. Does the amnesia only include results returned to Europeans? Or does it pertain to Google’s worldwide list of domains?

On Tuesday, the ECJ ruled in Google’s favor: RTBF is EU-only, it decreed.

Recap

In June 2015, the French data protection agency told Google that it doesn’t care if a URL’s got .fr, .uk or .com glued to the end. If a European makes a legitimate request to be forgotten in search results, make it so on all your search engines in all countries, it said.

Google’s response came a month later: Ain’t happening, it said. Google filed an informal appeal saying that it would defy CNIL, that the ECJ’s ruling wasn’t global in nature, and that any move to make it so would be “a troubling development that risks serious chilling effects on the web”.

In September 2015, CNIL rejected Google’s appeal, saying that its decision didn’t mean that CNIL was trying to apply French law extraterritorially:

It simply requests full observance of European legislation by non-European players offering their services in Europe.

In February 2016, facing fines from the CNIL, Google gave in, extending RTBF to all its domains. It did what EU privacy regulators had been asking it to do and what France legally forced it to do: it submerged RTBF search results on all domains, making it impossible for people to simply hop off the .fr version of Google to go find the material on another Google domain – say, Google.com.

That same year, Google introduced a geoblocking feature that prevented European users from being able to see delisted links. But it also challenged the €100,000 ($109,901; £88,376) fine that CNIL had tried to impose, and that’s how the battle wound up in front of the European Court of Justice.

How the RTBF works

The RTBF allows European citizens to ask search engines not to display specific URLs linked to their name if the information contained on those webpages is “inadequate, irrelevant or no longer relevant, or excessive in relation to the purposes for which they were processed.”

Filling in Google’s link removal request form doesn’t enable people to completely scrub the web clean of any mention of them, but it can make it much harder to find things people would rather not pop high up in search results.

From the get-go, the RTBF has raised loads of thorny questions as it balances an individual’s privacy rights against freedom of speech. Some of those questions: Does it matter if the subject is a celebrity or a politician? Who determines the shelf life of data? Do people have a right to bury heinous crimes, such as murder? What if a person runs for office after hosing down their search results?

Google unenthusiastically launched the RTBF form in May 2014. In short order, it said that it was going to flag the censored search results that would result.

Regardless, Google was inundated with RTBF requests. One man who tried to kill his family wanted a link to a news article about it taken down. Other requests came in from a politician with a murky past and a convicted paedophile. By the end of the first day, 12,000 Europeans had submitted the form. For a while, the rate hummed along at 10,000 requests per day. Nearly a third of the requests related to a fraud or scam, one-fifth concerned serious crime, and 12% were connected to arrests having to do with child abuse imagery.

By May 2018, the initial flood had ebbed. Google was refusing a majority of them anyway: it was accepting between 42% and 44% of the requests per year. According to its most recent transparency report, as of 7 Sept. 2019, it had cumulatively granted 45% of RTBF search requests, or about 846,000 links.

The decision

The court decision on Tuesday held that search engines such as Google aren’t required to honor delisting requests worldwide:

Currently, there is no obligation under EU law, for a search engine operator who grants a request for de-referencing made by a data subject … to carry out such a de-referencing on all the versions of its search engine.

That doesn’t let search engines off the hook completely, though: the court included a reminder that they’re expected to have measures in place “discouraging internet users from gaining access, from one of the Member States, to the links in question which appear on versions of that search engine outside the EU.”

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/XQWHqX4_TKA/

Microsoft rushes out fix for Internet Explorer zero-day

Windows users always struggled to live securely with Internet Explorer – and now that it’s been superseded in Windows 10, it’s as if they’re struggling to live securely without it.

Witness this week’s rush by Microsoft to patch two high-priority flaws affecting IE versions 9 to 11, one of which is a zero-day the company says is being exploited in real attacks.

The zero-day (CVE-2019-1367) was reported to Microsoft by Clément Lecigne of Google’s Threat Analysis Group. It’s a remote code execution (RCE) flaw in the browser’s scripting engine that could allow an attacker to:

… install programs; view, change, or delete data; or create new accounts with full user rights.

No further details have been made public in the advisory, but as with most browser vulnerabilities, exploitation would involve luring unpatched users to a malicious website.

No big deal?

Because IE is only used by a few percent of users, in theory this minimises the scope of the flaw.

However, because IE code still lurks in every version of Windows, including Windows 10, the number of people actively using it might not be the whole story.

Some will have activated it on their Windows 7 and 8 computers in the past, which means they could still be vulnerable if it’s set as the default browser or they can be persuaded to visit an infected website using it.

On Windows 10, IE has to be consciously activated, so anyone who’s not done this should be OK because Microsoft’s Edge or another unaffected browser will be the default.

Interestingly, the update must be done manually, during which the installer assesses whether the user’s system needs it or not – this implies Windows 10 users at least should be safe.

IE scripting flaws aren’t exactly unheard of, as demonstrated by a proof of concept exploit from earlier this year, or CVE-2018-8653 from late 2018.

Microsoft Defender flaw

The second part of this week’s update patches CVE-2019-1255, a denial of service vulnerability in Windows’ built-in security engine, Microsoft (formerly Windows) Defender.

Essentially, an attacker could exploit this “to prevent legitimate accounts from executing legitimate system binaries.” In other words, to stop it from working correctly.

The updated version is Microsoft Malware Protection Engine version 1.1.16400.2.

IE 10 support ended in January 2016. As for version 11, as far as we can tell from Microsoft’s documentation, this will be supported for as long as the versions on which it is integrated are themselves supported. For some Windows 10 versions, that implies support far into the future.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/JKx5VMBH6xs/

Column

The Future of Account Security: A World Without Passwords?

First step: Convince machines that we are who we say we are with expanded biometrics, including behaviors, locations, and other information that makes “us” us.

Passwords
Passcodes
Passphrases
No Passwords
Two factors
Three factors
Biometrics
Facial (smack head against phone…)
Prove you exist
Prove it’s you that’s proving you exist …

… at which point most people will have thrown the device away, gone back to their notepad or Post-it note, and dug out the old password from years back that still gets them into everything under the sun, which, let’s face it, is probably root/calvin, root/toor, admin/letmein, or something akin to these.

We’re all creatures of habit. We look for simplicity and ease of use because we’re inundated on an hourly basis by applications, systems, phones, cars, fridges, and even the toaster asking us to identify ourselves before we get any meaningful service or a warmed-up waffle.

And herein lies the problem: A long time ago, someone, somewhere, in a mainframe (probably in another galaxy) decided that we needed to associate each human with an account, unique to them, and then protect it (little did they know) with a code that only that one human would ever be able to use or remember.

There’s short-sighted, and then there’s not being able to see to the end of your nose. The password flaw is something its inventor, the recently deceased Fernando Corbató, was keen to point out. This is, let’s face it, on par with the Y2K flaw but without the immediate consequences. We keep living with the issue; heck, we have a World Password Day (the first Thursday in May) on which we actually celebrate that we can’t fix something that’s arguably been the bane of our existence since the ’60s! The password is to us what the common cold is to healthcare.

The challenge is one of balance. We need/want safety and security, but we like privacy (that is debatable, I know). We also want usability (as shown by all the blinky stuff we keep buying in the hopes of an easier life). Unfortunately, these three forces are acting upon ALL the various options out there vying for supremacy on the password battlefield, and, presently, no one has really come up with something that would keep all parties happy. Remember, our audience is everyone from the NSA/Mossad folks securing their systems to my mother and her computer login to Tesco supermarkets for home crockery delivery. Whatever we come up with must solve this entire spectrum of users.

Some progress has been made in the realm of passwordless solutions, some of which do a fantastic job of uniquely managing credentials in a manner that allows for seamless transactions across multiple platforms. Others can take existing credential techniques and mask them behind a much more collaborative, intuitive, and manageable front end, creating vaults that actually do work, and solutions that tie together all the myriad technologies out there. But, in the end, what they are doing is helping to navigate the mess beneath a well-built veneer on 1960s foundations: a set of credentials assigned to us, by us, for us, or for our use still has to be part of any access solution. Only now, instead of one mainframe, we have 1,001 apps, systems, websites, programs, ERP systems, etc., all clamoring to understand who we are and whether we should be allowed in.

So, what are we to do? What are our options, and where will we be 5-10 years from now? Will we still be fending off “Summer2019!” as the default corporate password, or will we have finally put the ’60s to rest and moved on?

In the short term, we have to convince the machines that we are who we say we are, so let’s take biometrics and expand it to include behaviors, locations, and other information that makes “us” who we are to the outside world.
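One way to picture that expansion is a weighted combination of signals rather than a single pass/fail password check. The sketch below is entirely hypothetical (the signal names, weights, and threshold are invented for illustration): each signal contributes a match score, and access requires the combined confidence to clear a bar.

```python
# Hypothetical sketch: combine biometric, behavioral, and contextual signals
# into one authentication confidence score. Weights and threshold are invented.

SIGNAL_WEIGHTS = {
    "fingerprint": 0.5,      # biometric
    "typing_cadence": 0.2,   # behavior
    "known_location": 0.2,   # context
    "known_device": 0.1,     # context
}

def confidence(signals):
    """signals maps signal name -> match score in [0, 1]."""
    return sum(SIGNAL_WEIGHTS[name] * score for name, score in signals.items())

# A session from a known place on an unrecognized device can still clear the bar.
session = {"fingerprint": 1.0, "typing_cadence": 0.8,
           "known_location": 1.0, "known_device": 0.0}
authenticated = confidence(session) >= 0.75
```

The appeal of this shape is graceful degradation: one weak or missing signal lowers confidence instead of locking the user out, which is closer to "being ourselves" than reciting a secret.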

Long term, take that concept of “us” and who we appear to be and start to look at our very existence, our experiences, our lives, and our memories. I’m talking about taking neural information directly from the gray matter between our ears – information that would demonstrate we know the location, the bank, the account, the office, and the card – and, if we’re smart about it, correlating that with the device itself knowing “us.” Therefore, our very existence and interactions become our key. Essentially, we don’t have to prove who we are – we just have to be ourselves.

Will that solve all the password problems we collectively grapple with daily? Probably not, but it should at least eradicate 123456 or the more complex version of adding 789.

Related Content:

Check out The Edge, Dark Reading’s new section for features, threat data, and in-depth perspectives. Today’s top story: “‘Playing Around’ with Code Keeps Security, DevOps Skills Sharp.”

Chris is one of the world’s foremost experts on counter threat intelligence and vulnerability research within the information security industry. He has led or been involved in information security assessments and engagements for the better part of 20 years and is credentialed … View Full Bio

Article source: https://www.darkreading.com/risk/the-future-of-account-security-a-world-without-passwords-/a/d-id/1335846?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

The Beginner’s Guide to Denial-of-Service Attacks: A Breakdown of Shutdowns

DoS attacks come in many varieties (not just DDoS). This simple set of descriptions will help you understand how they’re different – and why each and every one is bad.

Denial-of-service (DoS) is a basic cyberattack mechanism that prevents a victim from doing business by denying them access to their network, servers, or customers. It’s an attack concept so simple that many different variations have arisen on the single basic theme.


These variations, like weeds rising up to choke a garden, arise to choke out the productive applications in an enterprise ecosystem. And, like weeds, there are many different varieties of these thorny, choking vines from the underworld ready to make your security life miserable.

It’s important to know the different sorts of DoS attacks because they have different remedies. Just as different weedy plants can be dealt with in different ways, the counter-measures for DoS attacks are different depending on whether they target the network or applications, and precisely which method of attack they use.

One thing you might have noticed is that we’ve referred to DoS attacks rather than DDoS. The reason is that DDoS (Distributed Denial of Service) is a particular sort of DoS attack, one in which the attack comes from many different sources so that it’s more difficult to defend against.

Whether distributed or from a single source, DoS attacks can be divided into three broad categories based on the part of the infrastructure under attack. First are application-layer attacks, which take aim at application servers or parts of the application software stack. Next come protocol attacks, which use one of the basic networking protocols or mechanisms – ARP, SYN, or ping – to do their dirty work. Finally, there are the DoS attacks most widely assumed when people talk about DoS: the volumetric attacks, which simply try to use sheer traffic volume of one sort or another to choke off access to a victim’s network.
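The category matters because the counter-measures differ. A classic first-line defense against volumetric floods is rate limiting, sketched below as a token bucket (parameters are arbitrary and illustrative). Note its blind spot: a low-and-slow application-layer attack stays under the threshold entirely, which is exactly why the categories above need different remedies.

```python
# Illustrative token-bucket rate limiter, a typical volumetric-flood defense.
import time

class TokenBucket:
    def __init__(self, rate, capacity):
        self.rate = rate          # tokens refilled per second
        self.capacity = capacity  # maximum burst allowance
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self):
        """Permit one request if a token is available; refill over time."""
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=10, capacity=5)
# A burst of 20 back-to-back requests: roughly the first 5 get through.
results = [bucket.allow() for _ in range(20)]
```

In practice one bucket would be kept per source IP, which is also why distributed attacks are harder: each bot stays under its own limit.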

Before we head off into this rogues’ gallery, one absence should be noted: You won’t find a discussion of ransomware here. It’s true that ransomware is, technically, a denial-of-service attack, since it denies the victim access to their own data. It has grown and expanded so much, though, that it deserves its own article, and it will have one.

In addition, it works in one way that’s very different from the DoS attacks we’ll discuss here: while all of these block customer access to applications and data, they don’t alter the data or applications themselves. Ransomware, conversely, alters files and systems in ways that prevent users from interacting with them, destroying their value to the user – and sometimes destroying the files and systems themselves. Each type of attack is damaging, but the differences make treating them separately worthwhile.

Let’s take a look at these dangerous and irritating pests, with a special eye toward understanding how they differ and how defense should differ, as well.


Curtis Franklin Jr. is Senior Editor at Dark Reading. In this role he focuses on product and technology coverage for the publication. In addition he works on audio and video programming for Dark Reading and contributes to activities at Interop ITX, Black Hat, INsecurity, and …

Article source: https://www.darkreading.com/edge/theedge/the-beginners-guide-to-denial-of-service-attacks-a-breakdown-of-shutdowns/b/d-id/1335904?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Google takes sole stand on privacy, rejects new rules for fear of ‘authoritarian’ review

Google has blocked a proposed revision of the charter of the Privacy Interest Group (PING), a part of the W3C web standards body, over concerns that establishing an unchecked “authoritarian review group” will create “significant unnecessary chaos in the development of the web platform.”

The PING’s job is to ensure that technical specifications recommended by the W3C respect the privacy of people using the web. The group does so by providing a “horizontal review,” in which members make suggestions to the authors of technical specifications to ensure the web tech being developed takes privacy into account.

The group, which currently counts 68 participants, has been trying to adjust the agreed terms under which it operates.

In June, PING polled the 450 or so W3C members about the proposed new charter. Voting closed on August 4. According to an individual familiar with the situation, only 26 W3C members responded, and Google alone among them objected. Because the group requires unanimous consensus, the new charter was not adopted.

A copy of Google’s formal objection was posted to the PING mailing list on Monday. “Although we would like the PING to take a strong role in horizontal review, we are uncomfortable investing it with Process authority without more experience,” Google’s note says.

Shortly thereafter, Chris Wilson, a project manager at Google and W3C participant, posted a follow-up message to add additional context, presumably out of concern that Google’s objection might be read as hostility to privacy.

“To be clear: Google does NOT have concerns about the PING reviewing web platform specifications,” he said. “…We do have slight concerns about the additional workload that might entail for the PING group, but we have been actively working to increase our participation in the PING to help account for that.”

The new charter isn’t all that different from the current one. Nick Doty, a privacy researcher and doctoral student at UC Berkeley’s School of Information and a PING member, provided The Register with a diff to compare the two documents.

“The new charter is not dramatically different from the existing one,” Doty said in an email. “It includes providing input and recommendations to other groups that set process, conduct reviews or approve the progression of standards and mentions looking at existing standards and not just new ones. I think those would all have been possible under the old charter (which I drafted originally); they’re just stated more explicitly in this draft. It includes a new co-chair from Brave, in addition to the existing co-chairs from the Internet Society and Google.”

Doty said he’s not surprised there would be discussion and disagreement about how to conduct horizontal spec reviews. “I am surprised that Google chose to formally object to the continued existence of this interest group as a way to communicate those differences,” he said.

“I don’t know why Google representatives chose to object to this charter, but I do hope the now expressed interest in reviews and the deliverables of the Interest Group will lead to more investment in PING and in Web privacy.”

As The Register has heard, the issue for Google is that more individuals are participating in PING and there’s been some recent pushback against work in which Google has been involved. In other words, a formerly cordial group has become adversarial.

The required context here is that over the past few years, a broad consensus has been building around the need to improve online privacy. Back in 2014, not long after Edward Snowden’s revelations about the scope of online surveillance transformed the privacy debate, the Internet Engineering Task Force published an RFC declaring that pervasive monitoring is an attack on privacy. That concern has become more widespread and has led to legislation like the California Consumer Privacy Act (opposed by Google) and efforts by companies like Apple, Brave, and Mozilla to improve privacy by blocking ad tracking.


“The strategic problem for Google, with Apple, Brave, Mozilla, Samsung all blocking tracking, is how to preserve their business advantages and share price while appearing to be ‘pro privacy,'” said Brendan Eich, CEO of Brave, in a message to The Register.

Facing efforts to block browser fingerprinting and change the way HTTP cookies work, the ad biz last month floated a set of proposals to address privacy concerns without starving advertisers of cookie data. The company’s “Privacy Sandbox” met with mixed reviews from privacy groups and academics, who characterized Google’s claims about the privacy risks of cookie blocking as “privacy gaslighting.”

Eich expressed skepticism of Google’s charter objection. “The W3C is obligated to help fix interoperation problems these privacy protections create, at a minimum; engineering in better privacy over time is even better, but interop is enough of a justification, and no ‘utopia first! then we can solve lesser problems’ objections should be tolerated,” he said.

The Register asked Google for comment. We’ve not heard back. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/09/25/google_privacy_wc3/

Hot patches for ColdFusion: Adobe drops trio of fixes for three serious flaws

Adobe has released an update to clean up a trio of vulnerabilities in ColdFusion, its long-running web application platform.

The security update addresses three CVE-listed vulnerabilities discovered in both ColdFusion 2016 and ColdFusion 2018. Two of the bugs open up the software to critical remote code execution risks, while the third flaw allows less serious information disclosure.

The first of the critical bugs has been assigned CVE-2019-8073. The flaw is described as a command injection issue that would allow an attacker to execute arbitrary code on the vulnerable system. Discovery of the flaw was credited to Badcode of bug-hunting crew Knownsec 404 Team.
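Adobe has not published the mechanics of this ColdFusion flaw, but command injection as a class is easy to illustrate. The Python sketch below shows the general vulnerable pattern (splicing untrusted input into a shell string) alongside the safer alternatives; the filename and attacker payload are made up for the example, and the ColdFusion bug itself may work quite differently.

```python
import shlex
import subprocess

# Untrusted input spliced into a shell command string: the shell parses
# the ';' and would run the attacker's `echo pwned` as a second command.
user_input = "report.txt; echo pwned"
unsafe = f"cat {user_input}"  # vulnerable pattern; never run with shell=True

# Safe pattern: pass an argument vector. No shell parsing happens, so the
# whole string is treated as one literal (if odd-looking) argument.
result = subprocess.run(["echo", user_input], capture_output=True, text=True)

# If invoking a shell truly is unavoidable, quote the input first.
quoted = f"cat {shlex.quote(user_input)}"
```

Here `result.stdout` contains the attacker’s string verbatim rather than the output of a smuggled command, and `quoted` wraps the input in single quotes so the shell cannot interpret the semicolon.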


The second of the critical bugs is designated CVE-2019-8074, a path traversal vulnerability that allows code execution by bypassing access controls (in other words, nothing stops the injected commands from being executed). It was discovered and reported to Adobe by Daniel Underhay of Aura Information Security. Ben Reid of Techlegalia Pty. and Pete Freitag of Foundeo were also credited with helping to report the vulnerability.
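Again, the ColdFusion specifics aren’t public, but path traversal as a class follows a familiar shape: a request containing `../` sequences escapes the directory the application meant to serve from. The generic Python sketch below shows the standard defense, resolving the requested path and confirming it still sits under the permitted base; the `/var/www/uploads` web root and the `is_safe` helper are hypothetical, not part of ColdFusion.

```python
import os

# Hypothetical web root for the example.
BASE = os.path.realpath("/var/www/uploads")


def is_safe(requested: str) -> bool:
    # realpath collapses "../" sequences (and symlinks) before the check,
    # so encoded-traversal tricks can't slip past a naive string compare.
    full = os.path.realpath(os.path.join(BASE, requested))
    return full == BASE or full.startswith(BASE + os.sep)


ok = is_safe("report.pdf")            # stays inside the web root
escape = is_safe("../../etc/passwd")  # climbs out of the web root
absolute = is_safe("/etc/passwd")     # os.path.join discards BASE entirely
```

Checking the resolved path, rather than pattern-matching on the raw request, is the important part; a traversal bug is usually exactly a missing or bypassable version of this check.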

The third flaw, tracked as CVE-2019-8072, is classified as an information disclosure vulnerability but is described as a security bypass. Because it wouldn’t on its own allow arbitrary code execution, it is not considered a critical risk, but whenever a security bypass is exposed, patching is a very good idea. Discovery was credited to Pete Freitag of Foundeo.

Those using ColdFusion 2018 will want to get the Update 5 release, while those using ColdFusion 2016 should get Update 12 to patch up the bugs, as well as make sure they have JDK 8u121 or higher. For both versions, Adobe recommends admins also get the latest version of JDK/JRE in order to ensure the patches are properly installed.

Adobe’s next scheduled update is set to take place on October 8, when it will join Microsoft and SAP in dropping the monthly Patch Tuesday security update. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/09/25/coldfusion_patches_adobe/