STE WILLIAMS

Multiple HTTP/2 DoS flaws found by Netflix

Netflix has identified several denial of service (DoS) flaws in numerous implementations of HTTP/2, a popular network protocol that underpins large parts of the web. Exploiting them could make servers grind to a halt.

HTTP/2 is the latest flavour of HTTP, the application protocol that manages communication between web servers and clients. Released in 2015, HTTP/2 introduced several improvements intended to make sessions faster and more reliable.

Updates included:

  • HTTP header compression. In previous HTTP versions, only the body of a request could be compressed, even though for small web pages the headers, which often include data such as cookies and are always sent in text format, could be bigger than the body.
  • Multiplexed streams and binary packets. This made it easier to download multiple items in parallel, speeding up rendering of web pages made up of many parts.
  • Server Push. This means the server can send across cacheable information that the client might need later, even if it hasn’t been requested yet.

Features like these can help reduce latency and improve search engine rankings. The problem is that more complexity means more opportunity for bugs.

Netflix explains this in its writeup of the issue:

The algorithms and mechanisms for detecting and mitigating “abnormal” behavior are significantly more vague and left as an exercise for the implementer. From a review of various software packages, it appears that this has led to a variety of implementations with a variety of good ideas, but also some weaknesses.

There are eight of those weaknesses, all with their own separate CVE number and nickname.

Some flaws are reminiscent of other non-HTTP/2 DoS attacks.

Ping flooding, for example, is well-known and understood in DDoS circles – it’s where you keep on asking a server, “Are you there?” even though you know the answer perfectly well and are asking simply to make the server waste time doing work to no purpose.

In the HTTP/2 version (CVE-2019-9512), repeated ping requests may force the server to queue up responses, fall behind and even stop responding.

There are three other flooding attacks.

The reset flood (CVE-2019-9514) opens multiple streams and sends invalid requests on all of them to generate RST_STREAM responses. Servers are supposed to use RST_STREAM frames to terminate a stream when, for example, a browser cancels a file download or navigates away from a page. A server forced to process lots of them can suffer a denial of service condition.

A settings flood, dubbed CVE-2019-9515, sends a stream of SETTINGS frames to the peer. The server is supposed to reply to every SETTINGS request, so this causes a similar situation to the aforementioned ping flood.

The third type of flood is an empty frames attack (CVE-2019-9518). This sends a constant stream of frames with an empty payload, and the server wastes time handling them.
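The flood attacks above all exploit the same asymmetry: a frame is trivially cheap for the attacker to emit but obliges the receiver to do work. Here's a rough sketch (ours, built from the frame layout in RFC 7540, not taken from any exploit code) of what a single unit of flood traffic looks like on the wire:

```python
import struct

# HTTP/2 frame type codes from RFC 7540
PING, SETTINGS, DATA = 0x6, 0x4, 0x0

def h2_frame(frame_type, payload=b"", flags=0, stream_id=0):
    """Raw HTTP/2 frame: a 9-byte header (RFC 7540, section 4.1) plus payload.

    Header fields: 24-bit payload length, 8-bit type, 8-bit flags,
    31-bit stream identifier.
    """
    length = len(payload)
    header = struct.pack(">BHBBI", length >> 16, length & 0xFFFF,
                         frame_type, flags, stream_id)
    return header + payload

ping_unit     = h2_frame(PING, b"\x00" * 8)   # peer must queue a PING ACK
settings_unit = h2_frame(SETTINGS)            # peer must queue a SETTINGS ACK
empty_unit    = h2_frame(DATA, stream_id=1)   # zero-payload frame, still parsed

print(len(ping_unit), len(settings_unit), len(empty_unit))  # 17 9 9
```

Repeating one of these units is all a "flood" consists of: nine or seventeen bytes from the attacker, while the peer must parse each frame and, for PING and SETTINGS, generate and queue an acknowledgement.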

There are several other attacks that rely on manipulating the server by creating invalid, unusual or pointless packets.

The data dribble attack (CVE-2019-9511) uses multiple streams in a way that forces the server to queue up data in small chunks. This can chew through CPU and memory resources.

The resource loop attack (CVE-2019-9513) constantly switches the priority of multiple streams, putting needless load on the server’s priority-shuffling code.

The zero-length headers leak attack (CVE-2019-9516) sends data headers flagged as empty, even though it takes memory to send and receive the data block that says, “Here is an empty item.” If the server keeps the as-received headers in memory, rather than dumping them because they ultimately decode to nothing, an attacker may be able to chew through memory on the server.

Finally, an internal data buffering attack (CVE-2019-9517) asks the server for a large file, but doesn’t open the necessary data channel required by the protocol for the server to reply with the data. If the server loads up some or all of the requested data into internal buffers but then has nowhere to send it, it may bog down due to the increased load on memory.

This is an important issue for the overall health of the web because 25% of websites use HTTP/2.

A list of affected vendors is shown in the associated CERT note, together with links to vendor websites for updated information.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/lnFwWgp85fY/

Did Facebook know about “View As” bug before 2018 breach?

A recent court filing indicates that Facebook knew about the bug in its View As feature that led to the 2018 data breach – a breach that would turn out to affect nearly 29 million accounts – and that it protected its employees from repercussions of that bug, but that it didn’t bother to warn users.

There was a class action lawsuit – Carla Echavarria and Derrick Walker v. Facebook, Inc. – filed within hours of Facebook’s revelations last September that attackers had exploited a vulnerability in its “View As” feature to steal access tokens: the keys that allow you to stay logged into Facebook so you don’t need to re-enter your password every time you use the app.

Reuters reports that the lawsuit in question actually combined several legal actions, presumably including the one filed on the same day as Facebook disclosed the breach.

The breach

As Naked Security’s Paul Ducklin explained at the time, the View As feature lets you preview your profile as other people would see it.

This is supposed to be a security feature that helps you check whether you’re oversharing information you meant to keep private. But crooks figured out how to exploit a bug (actually, a combination of three different bugs) so that when they logged in as user X and did View As user Y, they essentially became user Y. From Paul:

If user Y was logged into Facebook at the time, even if they weren’t actually active on the site, the crooks could recover the Facebook access token for user Y, potentially giving them access to lots of data about that user.

That’s exactly what attackers did: they took the profile details belonging to some 14 million users, including birth dates, employers, education history, religious preference, types of devices used, pages followed and recent searches and location check-ins.

According to Reuters, another 15 million users had only their name and contact details exposed. The attackers could also see posts and lists of friends and groups of about 400,000 users.

Facebook knew about it and “failed to fix it for years”

On Thursday, in a heavily redacted section of the filing in the US District Court for Northern California, the plaintiffs said that Facebook knew about, and failed to fix, the vulnerability for years.

What’s even worse: the plaintiffs allege that Facebook could and did protect its own employees from the fallout, leaving everybody else as sitting ducks.

Reuters quoted the filing:

Facebook knew about the access token vulnerability and failed to fix it for years, despite that knowledge.

Even more egregiously, Facebook took steps to protect its own employees from the security risk, but not the vast majority of its users.

Facebook hadn’t responded to requests for comment as of Friday afternoon. It’s also given out scant details about the breach since initially disclosing the attack. All that it’s said is that the breach affected a “broad” spectrum of users. It hasn’t broken down the numbers by country.

The court wants those details: Judge William Alsup told Facebook in January that he was willing to allow “bone-crushing discovery” in the case to uncover how much user data was stolen. According to Law360, Alsup said that he’s sympathetic to users’ concerns and that’s worth “real money”, as opposed to “some cosmetic injunctive relief.”

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/3KG4K9pSsio/

KNOB turns up the heat on Bluetooth encryption, hotels leak guest info, city hands $1m to crook, and much, much more

Roundup Let’s run through all the bits and bytes of security news beyond what we’ve already covered. Also, don’t forget our articles from this year’s Black Hat, DEF CON, and BSides Las Vegas conferences in the American desert.

KNOB opens door to Bluetooth snooping: Microsoft’s Patch Tuesday dump included the disclosure of a security flaw in the Bluetooth protocol. The design blunder affects more than just Microsoft products, though: it is at the heart of the Bluetooth specification. The flaw therefore affects any gear using Bluetooth chipsets that implement the standard; more than 14 such vulnerable chips have been identified, including parts from Intel, Broadcom, Apple, and Qualcomm.

The security hole is dubbed Key Negotiation of Bluetooth, or KNOB for short – and even though we’ve thought long and hard about making jokes about this, sadly, we’ve come up with nothing. It involves a shortcoming in the process that two devices use to establish a secret key between themselves to encrypt data exchanged over the air. It is possible for a nearby miscreant-in-the-middle to force a pair of gadgets to agree on a key with only 8 bits of entropy, allowing the wireless snooper to decrypt their subsequent communications using brute force.

Boffins Daniele Antonioli, Nils Ole Tippenhauer, and Kasper Rasmussen described [PDF] their eavesdropping technique in a paper presented at the USENIX Security Symposium in the US this month. CMU CERT explained the method using the “Alice and Bob” key analogy:

To establish an encrypted connection, two Bluetooth devices must pair with each other and establish a link key that is used to generate the encryption key. For example, assume that there are two controllers attempting to establish a connection: Alice and Bob. After authenticating the link key, Alice proposes that she and Bob use 16 bytes of entropy. This number, N, could be between 1 and 16 bytes. Bob can either accept this, reject this and abort the negotiation, or propose a smaller value.

An attacker, Charlie, could force Alice and Bob to use a smaller N by intercepting Alice’s proposal request to Bob and changing N. Charlie could lower N to as low as 1 byte, which Bob would subsequently accept since Bob supports 1 byte of entropy and it is within the range of the compliant values. Charlie could then intercept Bob’s acceptance message to Alice and change the entropy proposal to 1 byte, which Alice would likely accept, because she may believe that Bob cannot support a larger N. Thus, both Alice and Bob would accept N and inform the Bluetooth hosts that encryption is active, without acknowledging or realizing that N is lower than either of them initially intended it to be.

And the upshot of all that?

An unauthenticated, adjacent attacker can force two Bluetooth devices to use as low as 1 byte of entropy. This would make it easier for an attacker to brute force as it reduces the total number of possible keys to try, and would give them the ability to decrypt all of the traffic between the devices during that session.
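To put numbers on that, here's a toy model (our simplification of the CERT description above, not real Bluetooth stack code) of the negotiation and the resulting keyspace:

```python
def negotiated_entropy(alice_proposal, bob_max, mitm_rewrite=None):
    """Toy model of the entropy negotiation described above: Alice proposes
    a byte count, and Bob accepts the smaller of that and what he supports.
    A miscreant-in-the-middle (the KNOB attack) rewrites the proposal
    in transit, so both sides converge on the attacker's value.
    """
    proposal = alice_proposal if mitm_rewrite is None else mitm_rewrite
    return min(proposal, bob_max)

honest = negotiated_entropy(16, 16)                    # both want 16 bytes
knobbed = negotiated_entropy(16, 16, mitm_rewrite=1)   # Charlie rewrites N to 1

print(2 ** (8 * honest))    # 2**128 possible keys: infeasible to brute-force
print(2 ** (8 * knobbed))   # 256 possible keys: trivial to brute-force
```

This also shows why the official mitigation of a seven-octet floor helps: even a successful downgrade then leaves 2**56 keys to grind through rather than 256.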

Oops. That’s pretty upsetting. It means nearby miscreants can potentially snoop on or tamper with Bluetooth connections to keyboards, speakers, and other gizmos. Thus far, though, the vulnerability has only been exploited in the lab, according to the Bluetooth specification folks. However, it would be wise to install patches for your gadgets as they become available. Microsoft and Apple have both released fixes for their products to thwart KNOB attacks – the official solution being to enforce a “minimum encryption key length of 7 octets [seven bytes] for BR/EDR connections,” according to the Bluetooth team. Expect more from other vendors, hopefully.

“The attacking device would need to intercept, manipulate, and retransmit key length negotiation messages between the two devices while also blocking transmissions from both, all within a narrow time window,” the Bluetooth spec people noted. “If the attacking device was successful in shortening the encryption key length used, it would then need to execute a brute force attack to crack the encryption key.”

Choice chopped by open server: Choice Hotels is the latest organization to be stung by a poorly configured cloud database. It was revealed this month that some 700,000 Choice hotel guest records were left in a MongoDB instance exposed to the public internet.

Open-cloud-bucket hunter Bob Diachenko sniffed out the leak and notified the biz, which said that the exposed archive included records containing names, email addresses, and phone numbers among other things. It is understood hackers accessed the server, scrambled the data, and demanded that Choice pay a ransom of $3,850 or so (depending on the price of Bitcoin) to restore the info. No word on whether that demand will be met.

Dating apps risk hooking up with stalkers: Mobile dating apps are sharing a dangerous amount of personal information with the general public, according to a report from Pentest Partners.

The Brit infosec outfit estimates as many as 10 million users could be prone to having their whereabouts tracked thanks to the locating features offered by dating apps. As the report notes, this is particularly dangerous for members of the LGBT+ community who could find the tools used to target and harass them. The team recommends developers make the apps less precise in their locations, and give lonely-hearts the ability to tag themselves rather than use GPS.

GitLab patches trio of critical bugs: GitLab has pushed out versions 12.1.6, 12.0.6, and 11.11.8 of the repository management software, mitigating three critical security flaws, our pals at software-engineering sister site DevClass report.

Credit Karma glitch sends strangers’ report data: A website glitch at credit-monitoring service Credit Karma appears to have caused an accidental exposure of some user records. TechCrunch reported that a number of users on Reddit and other public forums complained that when they asked for their own credit reports, they were instead shown those of other users on the site. This is annoying on its own, but since we are talking about personal credit reports, it is also an exposure of sensitive personal data.

Credit Karma has since fixed the issue, which sounds like a classic caching cock-up. No word on just how many people had their records shared with strangers.

Lenovo patches EOP bug: Sorry to bear the bad news, but your patching duties might not be done if you use or administer some Lenovo notebooks. The Chinese computing giant said that it had fixed an information disclosure vulnerability in one of the hardware controllers in Thinkpad notebooks that can potentially lead to firmware tampering and an escalation of privileges. This isn’t considered a huge risk, as you would have to already be in control of the notebook with an administrator account to exploit the flaw, but it is still worth taking a minute or two to install the fix.

Warren wants probe of Equifax’s sweetheart settlement: US Senator Elizabeth Warren (D-MA) took some time off the presidential campaign trail to ask America’s trade watchdog [PDF] why it struck a settlement deal with disgraced credit monitor Equifax that was so bad many of the affected can’t even claim a payout.

“The FTC has the authority to investigate and protect the public from unfair or deceptive acts or practices, including deceptive advertising,” Warren writes. “Unfortunately it appears as though the agency itself may have misled the American public about the terms of the Equifax settlement and their ability to obtain the full reimbursement to which they are entitled.”

Danabot malware goes under the microscope: Researchers at Webroot’s H3 Collective have done a detailed teardown of Danabot, an online-bank-account-draining nasty that has been circulating for a little over two years. The dissection found that the malware has not only become more sophisticated in targeting victims for account theft, but may also be used as the first step for ransomware infections.

“It continues to evolve its geo targets as more affiliates get added, and has branched out to test ransom functionality,” H3 writes. “This change in tactics certainly aligns with other shifts we’ve observed in which criminals are performing more recon upfront to profile a victim’s worth before executing ransomware from a domain controller.”

Saskatoon loses $1m to fraud scam: The city of Saskatoon in Canada has admitted it was tricked by an online fraudster.

Officials say someone contacted one of its offices claiming to be a contractor working on a project for the city. The person asked that the account for an outgoing payment be changed to one controlled by the fraudster. As a result, the city said, CAN$1.04m ($1.15m, £860,000) in construction bills were sent out to the criminal’s account rather than the actual contractor.

“Our focus at this time is on recovery of the funds. We have experts engaged from our internal auditor, the banks affected, and the Saskatoon Police Service,” said city manager Jeff Jorgenson. “Additionally we have external and internal experts poring over financial transactions and processes to do everything reasonably possible to protect the City from any further attacks.” ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/08/19/security_roundup/

Teen TalkTalk hacker ordered to pay £400k after hijacking popular Instagram account

One of the crew who hacked TalkTalk has been ordered to hand over £400,000 after seizing control of a high-profile Instagram account following a hack on Aussie telco Telstra.

Elliott Gunton, 19, pleaded guilty to breaching a Sexual Harm Prevention Order (SHPO), Computer Misuse Act crimes and money laundering at Norwich Crown Court. He was sentenced on Friday 16 August.

The Instagram account he targeted, @adesignersmind, was used by an Australian designer to post innocuous lifestyle content – until Gunton got his hands on it. Boasting to his girlfriend in chat messages later found by police, the teenager bragged that he had “jacked a 1.2M IG”. The account, meanwhile, had auto-replies configured to send abuse to people interacting with its content.

It took two weeks for the hapless designer to regain access to it, with prosecuting barrister Kevin Barry telling the court: “He was both mortified by the hack and the content put on his account. It caused him considerable stress and anxiety.”

Gunton admitted illicitly finding his way into the systems of Telstra, according to the Eastern Daily Press which attended the sentencing hearing.

He was said to be adept at “social engineering and exploitation of the network provider’s inadequate systems”, using that access to compromise social media accounts. He was also accused of preparing to carry out SIM-swap attacks as part of his account-compromising operation.

On top of the social media chicanery, Gunton had also pleaded guilty to money laundering. Police workers became suspicious when a house raid and subsequent examination of his computers and devices revealed a Bitcoin wallet. Police said the wallet contained £407,359.35 worth of Bitcoin at the time of the seizure – which Gunton has now been ordered to hand over.

As we previously reported, the Bitcoin was the proceeds of Gunton’s crimes. After compromising Instagram accounts, he would then trade the account details on cybercrime forums, earning thousands of pounds at a time thanks to his status as a “highly respected member”.

Gunton also pleaded guilty to breaking his SHPO after police found the popular CCleaner disk cleanup and file deletion utility on his laptop. A standard condition of SHPOs prohibits deleting one’s internet history or otherwise obscuring it, so that even unskilled police employees are able to trawl through it for any evidence of wrongdoing.

The SHPO was imposed when Gunton was being investigated for his part in the TalkTalk hack of 2016, to which he pleaded guilty. Police said they had found indecent images of children on the then 16-year-old’s devices. Gunton had applied to have his SHPO removed, which triggered an increase in no-notice police visits to inspect his browsing history. It was the discovery of CCleaner that triggered the full investigation in the latest case.

Defending Gunton, barrister Matthew McNiff said the SHPO had stopped his client from taking a job at a “multinational accounting firm”, but added, addressing the full spectrum of Gunton’s criminality: “It is not incorrect to describe him at the time as a young man, both in years and maturity… He has evolved from someone isolated from society into an individual who no longer sits in his room.”

Sentencing him, His Honour Judge Stephen Holt said: “It is quite plain over the last 18 months you have grown up and matured considerably.”

Gunton, 19, of Longland Close, Old Catton, Norwich, admitted five charges including money laundering and Computer Misuse Act offences. He was sentenced to 20 months, though was immediately freed thanks to time spent in prison on remand. A 3.5-year community order was also imposed to restrict his internet and software use. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/08/19/elliott_gunton_400k_repay_instagram_hacking_telstra/

Compliance Training? What Compliance Training?

Employees can run … but they can’t hide. Or can they?

Source: Twist and Shout Communications 

What security-related videos have made you laugh? Let us know! Send them to [email protected].

Beyond the Edge content is curated by Dark Reading editors and created by external sources, credited for their work.

Article source: https://www.darkreading.com/edge/theedge/compliance-training-what-compliance-training/b/d-id/1335554?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Modern Technology, Modern Mistakes

As employees grow more comfortable using new technologies, they could inadvertently be putting their enterprises at risk. And that leaves security teams having to defend an ever-expanding attack surface.

As employees grow more comfortable using new technologies, they could inadvertently be putting their enterprises at risk.

Indeed, myriad innovations have made it easier for people to do their jobs more efficiently, says Todd Peterson, security evangelist at One Identity. At the same time, they’ve also made it easier for them to “play.”

“Employees can now access any websites, such as fantasy sports, gambling, entertainment channels, and collaborate and share information using cloud storage tools easily with a single click,” says Sai Chavali, security strategist at ObserveIT.  

But with all that know-how and tech at their fingertips comes a trade-off.

“Technologies are making it easier for people to do their job, but generally security is not at the forefront of people’s minds,” Peterson says.

Much of that has to do with security’s reputation of getting in the way of digital transformation. Peterson cites multifactor authentication as an example.

“Because of added security features, a user has to login with multifactor authentication, and now the user has another hoop to jump through,” he says. “Then we are back to the old days of technology being cumbersome and hard to do.”

Often that leaves employees making a choice between doing what’s right and doing what’s easy. As the computing landscape changes and businesses move to a more dynamic, cloud-delivered, self-service model, the attack surface their security teams have to defend increases.

Changing Behaviors
Austin Murphy, VP of managed services at CrowdStrike, agrees with Peterson about the two sides of emerging tech. “The upside of users becoming more comfortable with emerging technologies has its benefits, but also comes with some attack considerations,” he says.

For example, employees’ adoption of social platforms for communications has drastically expanded attackers’ options for social engineering. In response, enterprises are doubling down on scanning email for spam and malicious content, but “oftentimes they have no visibility into communications sent over Skype, LinkedIn, Facebook, and others,” Murphy says.

Users are also installing their own software, which can infect their devices when the installer comes from an untrusted source.

“It is common for attackers to find common utilities such as FTP clients or video conversion software, package or wrap malicious code into the installer, and then upload their packed installer to a free software download site, knowing that users may find their malicious version of the software installer before they find the legitimate original,” Murphy says.
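A defensive counterpart to that trick is verifying a download against the publisher's advertised checksum before running it. A minimal sketch (the helper name and usage are ours, not from any particular vendor's tooling):

```python
import hashlib

def sha256_matches(path, expected_hex):
    """Hash a downloaded file in chunks and compare it to the digest the
    publisher advertises. A repackaged, trojanized installer will not match.
    """
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in 64 KB chunks so large installers don't need to fit in memory.
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest() == expected_hex.lower()
```

Only install if this returns True; a mismatch means the file is not the one the publisher hashed, whatever the download site claims.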

Additionally, employees are increasingly feeling entitled to work from anywhere and have access to anything at any time, according to Nick Bennett, director of professional services at FireEye Mandiant.

“Employees [also] feel entitled to use work assets for non-work activities, and they are bypassing protections that are in place, making themselves more susceptible to phishing attacks,” Bennett says.

The issue is twofold. Employees are using corporate-issued workstations for personal use, even if they are at home. When they bring that workstation back to the enterprise, they are also putting the business at risk, Bennett explains. In addition, “employees are also using non-corporate assets to access the corporate network on a device that is unmanaged by the enterprise,” Bennett says.

Detecting User Behavior in a Modern Workforce
Adjusting to the behaviors of a modern workforce means expanding the security team’s focus to include defending against insider threats. Early attempts failed to address this, says Joe Payne, CEO at Code42.

“Today, people want employees to collaborate more with data,” he says. Early technologies, such as data loss prevention (DLP), were focused on prevention, which hindered the flow of data necessary for collaboration and led to the “zero trust” best practice.

What organizations can do is focus more on detection and response. “It’s possible to track all the data movement in the enterprise so that you can build some basic rules and detect near real time when people are taking egregious amounts of data,” Payne says.
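As a toy illustration of the kind of "basic rule" Payne describes, the sketch below tallies bytes each user moves off-network and flags egregious volumes. The 500 MB threshold and the event format are arbitrary assumptions of ours, not any product's defaults:

```python
from collections import defaultdict

# Illustrative only: a real deployment would feed this from endpoint or
# network telemetry and tune the threshold per role and per time window.
THRESHOLD_BYTES = 500 * 1024 * 1024  # assumed 500 MB ceiling

def flag_egregious_egress(events):
    """events: iterable of (user, bytes_moved) tuples.
    Returns users whose total data movement exceeds the threshold."""
    totals = defaultdict(int)
    for user, nbytes in events:
        totals[user] += nbytes
    return sorted(user for user, total in totals.items()
                  if total > THRESHOLD_BYTES)

events = [("alice", 200 * 1024 * 1024),
          ("bob", 400 * 1024 * 1024),
          ("bob", 300 * 1024 * 1024)]
print(flag_egregious_egress(events))  # ['bob']
```

The point is the detect-and-respond posture: data keeps flowing for collaboration, and only the outliers generate an alert for review.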

Because the modern workforce can have a mix of full-time employees, remote employees, contractors, and third-party vendors, all of whom must have access to technology and data to do their jobs efficiently, organizations need to protect against both internal and external threats, ObserveIT’s Chavali says.  

“Organizations need to implement cybersecurity controls that not only keep the bad guys from coming in, but also proactively detect insider threats by gaining visibility into the users’ behavior,” he says.


Image Source: frender via Adobe Stock

 

Kacy Zurkus is a cybersecurity and InfoSec freelance writer as well as a content producer for Reed Exhibitions’ security portfolio. Zurkus is a regular contributor to Security Boulevard and IBM’s Security Intelligence. She has also contributed to several publications, …

Article source: https://www.darkreading.com/edge/theedge/modern-technology-modern-mistakes/b/d-id/1335532?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Tough Love: Debunking Myths about DevOps & Security

It’s time to move past trivial ‘shift left’ conceptions of DevSecOps and take a hard look at how security work actually gets accomplished.

The security community talks a lot about DevSecOps — look at any vendor’s marketing materials. But very few are suggesting any significant changes to the way that security is practiced. DevOps is a fundamental shift in the way we think about building software and is intended to dramatically improve speed and quality. At its core, DevOps prescribes the following:

  • Get work flowing by breaking it into small pieces
  • Create tight feedback loops
  • Create a culture of innovation and learning

The exact problems that used to plague software are still major problems for security. Most organizations only do security on a subset of their code, such as the “critical” or “externally facing” applications and APIs, leaving vulnerability backlogs rampant across the rest of the portfolio and attack visibility at practically zero. The truth is, a few DevOps myths are making the problem worse. Here are five:

Myth 1: “DevOps Is All About Automation.” Automation is often the most visible part of DevOps, and it may be all that security folks perceive. The reality is that automation can also help explain to the security team how your software factory actually works. Challenge security to experiment with applying DevOps techniques to their own work and help them get started on their own DevOps journey.

Myth 2: “We Need to Shift Security Left.” “Shift left” is a catchy phrase, but don’t just push legacy security tools onto developers who don’t have the background to use them. Instead, help developers find modern security tools that integrate into the DevOps pipeline, produce accurate and usable results, and aren’t just DevOps lipstick on traditional security tools. Hopefully, your security team will come to realize that security can achieve the same kind of DevOps transformation and benefits as development.

Myth 3: “Increasing Velocity Reduces Security.” Security doesn’t actually have anything to do with velocity. But often the same automation and processes that lead to high-quality software can be leveraged by security to deliver assurance much faster and more reliably. DevOps actually establishes the infrastructure that security has been lacking for so long. By turning security into code, organizations can dramatically increase the speed, coverage, and accuracy of their security efforts.
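As one hedged illustration of "turning security into code": a policy can be expressed as an automated check that runs on every commit. The pattern and the pass/fail policy below are invented for this sketch, not any particular product's behavior:

```python
import re

# A toy "security as code" rule: fail the build when something that looks
# like a hardcoded credential slips into source. The pattern is deliberately
# simple; real secret scanners use many more heuristics.
SECRET_PATTERN = re.compile(
    r"""(password|api[_-]?key|secret)\s*=\s*['"][^'"]+['"]""",
    re.IGNORECASE)

def scan_source(text):
    """Return the 1-based line numbers that look like hardcoded secrets."""
    return [i for i, line in enumerate(text.splitlines(), 1)
            if SECRET_PATTERN.search(line)]

sample = 'user = "app"\napi_key = "hunter2"\n'
print(scan_source(sample))  # [2]
```

Because the rule is code, it runs at pipeline speed on every change, which is the sense in which velocity and assurance can rise together.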

Myth 4: “DevOps Makes Security Harder.” Many security folks are already overwhelmed with the scale and complexity of their challenge, and they may view DevOps as making their job harder. They may unwittingly slow, derail, or undermine your DevOps transformation, so bring them along on the journey. Include the security team in your planning. Have them read “The Phoenix Project” and have regular interaction with development teams.

Myth 5: “Developers Don’t Care about Security.” This is one of the most pernicious myths in DevOps. After teaching application security to thousands of developers, my impression is that developers are smart, interested, and curious about security. They want to learn to do things right and avoid security vulnerabilities. Consider that in many organizations, security is poorly defined, a moving target, and full of landmines that can derail projects or careers. Consider becoming part of a development team for a time and working to make security explicit and fully automated so that anytime applications are “clean” they can go into production at whatever velocity the project wants.

DevOps has been widely adopted and has achieved impressive results across a wide array of organizations. It’s time to move past trivial “shift left” conceptions of DevSecOps and take a hard look at how security work actually gets accomplished. You’ll find massive bottlenecks, wasted effort, duplicated work, feedback loops of weeks, months, or years, and many other problems. Let’s call an end to security exceptionalism — there’s nothing that special here. The vast majority of security is working on well-understood problems such as injection, path traversal, cross-site scripting, etc. Let’s unleash the power of DevOps and radically improve security.


Check out The Edge, Dark Reading’s new section for features, threat data, and in-depth perspectives. Today’s top story: Modern Technology, Modern Mistakes.

A pioneer in application security, Jeff Williams is the founder and CTO of Contrast Security, a revolutionary application security product that enhances software with the power to defend itself, check itself for vulnerabilities, and join a security command and control …

Article source: https://www.darkreading.com/endpoint/tough-love-debunking-myths-about-devops-and-security/a/d-id/1335511?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

US Chamber of Commerce, FICO Report National Risk Score of 688

While the score was up for large businesses and down for small firms, the report urges all to prioritize third-party risk management.

The United States’ National Risk Score is 688, report the US Chamber of Commerce and FICO in their Q2 “Assessment of Business Cyber Risk” (ABC). This marks little overall change from the previous quarter’s score of 687, though large businesses and small firms saw greater change.

ABC’s National Risk Score is the revenue-weighted average of the FICO Cyber Risk Score for nearly 2,400 small, midsize, and large companies. Scores range from 300 to 850 and reflect the probability of a business being hit with a material data breach within the next 12 months: the higher the score, the lower the likelihood the organization will experience a breach.
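To see how a revenue-weighted average works, here is a toy illustration with made-up numbers (the report’s underlying per-company scores and revenues are not public, so the data and function name below are purely hypothetical). Each company’s score is weighted by its revenue, so large firms dominate the national figure:

```javascript
// Hypothetical sketch of a revenue-weighted average score.
// Company data here is invented for illustration only.
function revenueWeightedAverage(companies) {
  const totalRevenue = companies.reduce((sum, c) => sum + c.revenue, 0);
  const weightedSum = companies.reduce((sum, c) => sum + c.score * c.revenue, 0);
  return weightedSum / totalRevenue;
}

// A large firm with a lower score pulls the weighted average
// well below the small firm's score.
const sample = [
  { revenue: 900, score: 649 },  // large business (made-up figures)
  { revenue: 100, score: 736 },  // small firm (made-up figures)
];
const nationalScore = revenueWeightedAverage(sample);  // 657.7
```

This is why, as the report notes, large and small businesses can move in opposite directions while the headline number barely changes.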

Since last quarter, the average risk score for large organizations rose from 643 to 649; small ones saw their average score drop from 740 to 736. “While these scores reveal the nation’s cybersecurity risk was virtually unchanged, FICO and the Chamber urge businesses to do more to measure and manage risk posed by third parties,” officials said in a statement.

Third-party risk management was a highlight of the second-quarter report, which states a growing percentage of security incidents against organizations stem from initial compromise against third parties. Attackers can leverage this trusted relationship to gain access, move laterally, and escalate privileges to get to their targets. The ABC report urges businesses to build a framework for third-party categorization, develop a workflow to address the criticality of each risk, frequently assess high-impact suppliers, and ensure the appropriate transfer of risk.

Read more details here.

 


Dark Reading’s Quick Hits delivers a brief synopsis and summary of the significance of breaking news events. For more information from the original source of the news item, please follow the link provided in this article. View Full Bio

Article source: https://www.darkreading.com/risk/us-chamber-of-commerce-fico-report-national-risk-score-of-688/d/d-id/1335561?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Subcontractor’s track record under spotlight as London Mayoral e-counting costs spiral

Concerns have been raised over a key supplier of an e-counting system for the London Mayoral elections in 2020.

The contract, split between Canadian integrator CGI and Venezuelan-owned Smartmatic, will cost nearly £9m – more than double the £4.1m procurement cost of the system at the last election in 2016.

During a July hearing about the 2020 elections at the London Assembly Oversight Committee, members heard that Smartmatic, which builds and sells electronic voting tech, had worked on the Scottish elections.

However, the London Assembly has since confirmed to The Register that Smartmatic was not involved. The company was also recently blamed for a number of technical glitches in the Philippine elections.

The London Assembly was told costs had increased because the new vote-counting system offered better functionality than the previous procurement.

A spokesperson for the Greater London Authority said: “This was a decision taken by the Greater London Returning Officer. The contract was awarded following a standardised process that considered bids according to value for money and service quality.”

CGI is the lead contractor for the London elections and was also the lead contractor responsible for the Scottish elections.

The Greater London Authority added that while it hadn’t “technically” worked on the Scottish elections, Smartmatic’s current client director for the London elections did as part of a different organisation “so the London elections will benefit from same experience, expertise and knowledge gained in previous major elections”.

The Authority said that as a candidate, the Mayor had no involvement in the choice of contractor.

However, Caroline Pidgeon, member of the London Assembly for the Liberal Democrats, told The Register she was concerned.

“With the cost of electronic counting for the GLA elections doubling and with a number of other concerns about how transparent the process is, it is only right to take a careful look at whether electronic counting is in fact the right approach for such an important set of elections.

“The Electoral Commission have long called for a proper cost benefit analysis of electronic counting for the GLA elections – it is time they were listened to.”

Pascal Crowe, democracy officer for scrutiny group Open Rights Group, said the body had previously raised concerns about the procurement with the Greater London Returning Officer.

“This means that private companies are using our democracy as a user-testing exercise for their products.” He added that Smartmatic has “a poor track record”.

The Register has asked Smartmatic and CGI for comment. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/08/19/cost_and_delivery_concerns_raised_over_mayoral_ecounting_system/

iFrame clickjacking countermeasures appear in Chrome source code. And it only took *checks calendar* three years

Three years ago, Google software engineer Ali Juma proposed that Chrome should be modified to ignore recently moved iframe elements on web pages as a defense against clickjacking.

Clickjacking, a form of online attack also known as user-interface redressing, involves modifying web page elements to hijack click events so they hit an attacker-designated page element. The goal generally is to trigger ad or affiliate payments, to expose information, or to install malicious code.

Juma in his proposal didn’t specifically mention clickjacking, though he linked to a YouTube video illustrating just that. The problem, he explained, is that an Inline Frame, or iframe – an embedded web page element that can be tied to another origin (domain) – can be made to move suddenly so that it covers another web page element (such as a button or link), thereby intercepting the click event intended for the covered part of the page.
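Stripped to its essentials, the attacker-side trick is just repositioning. This sketch (the function name and element handling are hypothetical, not taken from any real exploit) shows how a page script could snap an invisible iframe over whatever the user is about to click:

```javascript
// Hypothetical sketch: move a cross-origin iframe so it exactly covers
// a target element (e.g. a "Play" button) just before the user clicks.
// The frame receives the click instead of the element beneath it.
function coverTarget(frame, target) {
  const r = target.getBoundingClientRect();
  frame.style.position = 'fixed';
  frame.style.left = `${r.left}px`;
  frame.style.top = `${r.top}px`;
  frame.style.width = `${r.width}px`;
  frame.style.height = `${r.height}px`;
  frame.style.opacity = '0';  // invisible, but still receives pointer events
}
```

Because the iframe is cross-origin, neither the embedding page’s author tools nor the user can easily tell that the click went somewhere other than intended – which is exactly the gap the Chrome intervention aims to close.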

“Web authors might not be sufficiently incentivized to fix this problem themselves, since they benefit from clicks on ads,” Juma wrote. “This means that addressing this problem may require user agents [browsers] to intervene.”

Google, which also benefits from clicks on ads, hasn’t prioritized fixing this problem either, given that the proposal has languished for three years. But the delay appears to have had more to do with shifting job responsibilities and technical capabilities that Chrome lacked at the time.

Certainly some of the search king’s Chrome engineers have made it clear that they recognize clickjacking represents a serious threat and something needs to be done to prevent it.

We IOv2 you, too

At the start of the year, a technological piece of the puzzle fell into place. In January, a web API called IntersectionObserver v2 (IOv2) landed in Chrome developer builds and reached the general public via Chrome 74 in May. Other browsers have yet to adopt it. IOv2 adds the ability to check whether the target page element is visible in a computationally efficient way – it looks for the intersecting geometry of interface elements on a web page – and is intended as a defense against fraud.

“When the iframe processes the click event, it has no way to determine that its content was not faithfully displayed on the screen,” the W3C’s explainer says. “Using IntersectionObserver V2, code running inside the iframe can get a strong guarantee from the implementation that its content was completely visible and unmodified for some minimum length of time before the click.”
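From inside the iframe, that guarantee is exposed through two IOv2-specific observer options, `trackVisibility` and `delay`, plus the `isVisible` flag on each entry. A minimal sketch (assuming a browser that implements IOv2; the function and callback names are hypothetical) might look like:

```javascript
// Sketch of IOv2 usage from code running inside an iframe.
// trackVisibility and delay are the v2 additions; isVisible is only
// meaningful when trackVisibility is enabled.
function watchOwnVisibility(target, onTrustedView) {
  const observer = new IntersectionObserver((entries) => {
    for (const entry of entries) {
      // isVisible is true only if the content was fully visible,
      // unoccluded, and visually unmodified for the delay period.
      if (entry.isVisible) onTrustedView(entry);
    }
  }, {
    threshold: [1.0],
    trackVisibility: true,  // IOv2: request occlusion/visual-effect tracking
    delay: 100,             // IOv2: minimum reporting delay is 100 ms
  });
  observer.observe(target);
  return observer;
}
```

A payment or ad iframe could then refuse to act on a click unless `onTrustedView` has fired recently, giving it the “strong guarantee” the W3C explainer describes.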

In May, Google engineer Stefan Zager revived Juma’s idea and expanded on it, citing the utility of IOv2 to implement the iframe movement monitoring scheme.

“IOv2 is not a complete solution: it will notify an iframe when its contents are hidden or visually altered, but it doesn’t detect when an iframe has moved within the viewport of its embedding page,” Zager’s proposal says. “An iframe can bounce all around the viewport, but as long as it’s continuously visible, IOv2 will not trigger any notifications.”

The intervention he proposes, ignoring input events targeting recently moved cross-origin iframes, would help make clickjacking more difficult. The capability has been tested in a few Chrome Canary builds but isn’t publicly available at the moment unless you are building Chromium from source code. It doesn’t yet have a target release date. ®
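As a rough sketch of the proposed behaviour – an assumption about the shape of the logic, not Chrome’s actual implementation, and with an invented 500 ms cooldown – the bookkeeping could look like:

```javascript
// Hypothetical sketch of "ignore clicks on recently moved iframes".
// Chrome's real threshold and internal tracking are not public here.
const MOVE_COOLDOWN_MS = 500;  // invented figure for illustration

class FrameClickGuard {
  constructor(now = Date.now) {
    this.now = now;
    this.lastMoved = new Map();  // frameId -> timestamp of last observed move
  }
  // Called whenever layout observation reports that the iframe's position
  // within the embedding page's viewport changed (the case IOv2 alone
  // cannot detect, per Zager's note).
  recordMove(frameId) {
    this.lastMoved.set(frameId, this.now());
  }
  // Deliver the click only if the frame has been stationary long enough.
  shouldDeliverClick(frameId) {
    const t = this.lastMoved.get(frameId);
    return t === undefined || this.now() - t >= MOVE_COOLDOWN_MS;
  }
}
```

A bouncing iframe would keep resetting its timestamp and never receive clicks, while a stationary one behaves normally – which is the essence of the intervention.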


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/08/19/clickjacking_countermeasures_chrome/