STE WILLIAMS

Baldr malware unpicked with a little help from crooks’ bad opsec

The world of malicious software is a mirror universe to the world of legitimate software.

Some of the software is very good (at being bad) and some of the software is very bad (which is good), and underneath it all is an army of people: software developers, sales and marketing people, distributors, support…

And, as new research from SophosLabs reminds us, where people go, human interest and human failings go too.

The research in question concerns Baldr, an up-and-comer in the world of illegal software that SophosLabs has been tracking closely since January.

In simple terms, Baldr is a password stealer, although in reality it’s more of an indiscriminate thief with an interest in anything it can carry away. It would steal your watch if it could.

We know this thanks to our story’s first bunch of humans: the venerable researchers of SophosLabs, whose job it is to roll up their sleeves and figure out what malware does and how to stop it, after their algorithms have taken things as far as they can.

In the case of Baldr, the sleeve rolling started early. In an attempt to frustrate malware analysts, it hides its secrets under layers of obfuscation, so the SophosLabs boffins had to flay it by hand.

Baldr employs an excessive number of obfuscation layers (at last count, 9) that thwart static code analysis. Can we hack it? (Yes we can!)

What their analysis reveals is a rapacious voyeur.

If you’re unlucky enough to get it on your system, the malware will grab anything that looks like it might contain useful or valuable data.

It begins by creating a profile of your system: grabbing a boatload of information about the computer it’s on, such as the CPU model, operating system, system language, screen resolution, installed programs, hat size and favourite colour (OK, it doesn’t capture the last two, but it would if it could).

Then it ransacks your web browsers, relieving them of saved credentials, autocomplete information, credit card information, cookies, the domains you’ve visited and your browsing history.

After that it hoovers up any FTP logins it finds, and then it steals credentials from your computer’s instant messaging clients and VPNs.

If you’ve got any cryptocurrency lying around, it knows how to plunder it from a range of different wallets. And then it takes a screenshot of your desktop, because – why not?

It stuffs all the data into an encrypted file and POSTs it via HTTP to a C2 (Command and Control) server before deleting itself in an effort to cover its tracks.

As if that wasn’t all bad enough, Baldr can also be used to download other malware from its C2 server.

Baldr’s malware relationship status is: Complicated. For example, we recently observed ransomware loading Baldr onto a victim’s machine … We’ve also logged instances of Arkei or Megumin dropping Baldr during an infection, and Baldr dropping Megumin.

Our knowledge of Baldr and its C2 infrastructure was assisted, unintentionally, by our story’s second bunch of humans: Baldr’s customers.

Demonstrating that cloud data leaks are a game that anyone can play, their sloppy server admin skills gave SophosLabs a grandstand view of the malware’s back end.

Some of Baldr’s customers were careless with their operational security, and left the C2 package accessible in an open directory on the C2 server, so we downloaded a few to take a closer look.

With a copy of the C2 code, the folks in the labs were able to learn more about the malware authors’ strategy and motivations. They also learned something about some of the Baldr developers’ coding skills, finding sufficient security failings to conclude that the admin console is “vulnerable to a number of attacks.”

And just in case anyone is labouring under the illusion that there’s honour among thieves, SophosLabs aren’t the only ones to have noticed those security failings.

We observed Baldr C2 servers that had been repeatedly taken over with web shells by other threat actors, who have been investigating and taking advantage of these panels.

Further insight was provided by crooks who cut out the middle man and just pwned themselves.

In rare circumstances, the malware buyers also become victims of their own stealer: either by mistake or for testing purposes, they execute the malware sample on their own machine.

Sadly though, Baldr itself is no joke and there are enough Baldr operators who know what they’re doing to represent a significant threat. This, in spite of the efforts of the people around Baldr:

The makers of Baldr appear to have had a falling out with one of their larger distributors, and as this story went to press, the primary distributor appears to have stopped working with the Baldr developers. Based on the nature of this type of criminal enterprise, we suspect that Baldr will once again be offered for sale, and that the distribution issues are only temporary.

To find out more about Baldr, read Baldr vs The World.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/W2OuD9MrGs8/

Hollywood-Style Hacker Fight

Watch movies much? Here’s what happens when two hackers try to outhack each other.

Created by Door Monster.

Article source: https://www.darkreading.com/edge/theedge/hollywood-style-hacker-fight/b/d-id/1335447?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Mimecast Rejected Over 67 Billion Emails. Here’s What It Learned

New research warns that security pros must guard against updates to older malware and more manipulative social-engineering techniques.

New research released today from Mimecast has found that threat actors are modifying old exploits and changing their attack techniques, linking to documents or landing pages hosted on well-known cloud platforms so that the URLs appear legitimate.

The report, “Mimecast’s Threat Intelligence Report, Black Hat Edition 2019,” draws on some 160 billion emails processed between April and June 2019. During this time, Mimecast rejected more than 67 billion of those emails and based its subsequent analysis on rejections classified as spam, opportunistic and targeted attacks, and impersonation detections.

“We have seen a marked increase in malware links,” says Josh Douglas, Mimecast’s vice president of threat intelligence. “We also found that nearly 30% of the impersonation attacks were targeted at the management and consulting sectors and biotechnology.”

Peter Firstbrook, a research vice president who covers security at Gartner, says he doesn’t worry too much about attachments and links. That’s because most vendors are good at detecting them, and most organizations have endpoint protection and secure Web gateways that serve to backstop email security, he explains. However, he says he’s very concerned about impersonation attacks, mainly business email compromises (BECs).

“The BEC-type attacks are the most interesting and difficult to detect,” Firstbrook says. “There is no payload or URL to detect. It’s typically just a person-to-person email from a legitimate account. Users trust email not knowing or understanding that email is not an authentication method. Companies are losing real money to these types of attacks, and few legacy solutions adequately protect them. Even newer solutions are still evolving their techniques to detect these threats.”

Mimecast’s Douglas says security managers should respond with the following: a layered security strategy, advanced targeted threat protection, and better user awareness training.

The Mimecast report also identifies professional education, which includes private educational companies, colleges, institutes, and training providers, as the most targeted industry. Mimecast attributes this to the tension between security and the openness inherent to institutions of higher learning. Attackers prey on students who may not have the highest security awareness, and because numerous institutions conduct federal government research, attackers are also after intellectual property and classified research.

Other highly targeted sectors include software and software-as-a-service (SaaS), which were hit with a number of attacks during this past quarter using Adwind and QRat. Similar to Adwind, QRat is a Trojan that targets Java-based platforms and uses Java Archive (.JAR) attachments in the malicious emails. IT resellers were also hit with a large number of Adwind attacks during the quarter, as well as a mixture of other Trojan downloaders.

The report also found Emotet to be another of the more active campaigns. What started as a banking Trojan in 2014 has evolved and now appears to download secondary malware. This may be because the threat actors behind Emotet have adapted it into a packing and delivery service for other threat actors, the report states, basically using it as a downloader-as-a-service for other malware.


Steve Zurier has more than 30 years of journalism and publishing experience, most of the last 24 of which were spent covering networking and security technology. Steve is based in Columbia, Md.

Article source: https://www.darkreading.com/threat-intelligence/mimecast-rejected-over-67-billion-emails-heres-what-it-learned/d/d-id/1335443?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Securing DevOps Is About People and Culture

Preconceived notions and divisions make building security into the software development life cycle an uphill battle for many organizations.

Security teams have long had the reputation of being “out of process” — that they add requirements, complicate processes, and disrupt DevOps.

“The reality is that building security into the development process — be it agile, DevOps, DevSecOps, or a mix thereof — remains a significant challenge for many reasons, beginning with the group relationship,” says Matt Keil, director of product marketing at Cequence Security. “In many organizations, the teams rarely interact, and the group reputation precedes the meeting. Security is ‘Dr. No’ and app dev is ‘rogue, ignoring security.'”

These preconceived notions and divisions make building security into the software development life cycle an uphill battle for many organizations. Perhaps the solution to securing DevOps lies in reframing how we look at the problem.

Is Automation the Answer?
Trying to put different labels on security is antithetical to the goal of complementing an existing DevOps process. “When you integrate and use security as part of that automation associated with [the] DevOps pipeline and constant integration orchestration, it works much better,” says Matt Rose, global director of application security strategy at Checkmarx.

Many dev teams have been told to “shift left” or “shift right,” but a good DevOps process is an infinite loop that is constantly moving. In other words: continuous integration (CI), he says. “Automation at CI is the key aspect — the conductor of the symphony orchestra,” Rose says.

Automating at every stage of the process isn’t really difficult, according to Rose, who points to four disciplines of DevOps: development, continuous integration (CI), continuous delivery/continuous deployment (CD), and production.

“You look at those building blocks, and there are different security technologies that slot themselves within each of those disciplines,” Rose says. “It’s about understanding where to integrate, but people make it much more difficult by putting different labels on security. When you look to architect a DevSecOps, you basically insert the right technologies at the right points in the DevOps process.”

Where Does ‘Sec’ Fit In?
Ever since development and operations came together to unify responsibility and accountability, different parts of the business have tried to create their own iteration of DevOps. And that is part of the problem, says Kelly Shortridge, VP of product strategy at Capsule8.

“I think we need to discuss a DevOps-centric approach to security rather than trying to figure out a security-centric approach to DevOps,” she says. “SecDevOps is a vendor-driven term that gives a false sense of comfort for security teams so that it seems they are actually doing something to fit into the DevOps world.”

Fundamentally, DevOps is a mindset, according to Shortridge. It’s a collaborative approach for optimizing the software delivery and performance life cycle. Security-centric DevOps tends to focus on the automation component, but Shortridge says the way security tries to integrate into DevOps isn’t working out too well.

“When security teams in various organizations automate some tasks, they boast that they are DevOps-friendly or DevOps-enabled, and I think that is misguided,” she says.

The biggest step an organization can take toward bringing security into the DevOps world is actually ensuring that their priorities align with the business priorities, Shortridge says. 

“Security has to shift to thinking about how it can improve software delivery performance and how it can start unifying that responsibility and accountability,” she explains. “It takes a lot more work than what I’ve seen commonly, which is supporting the old threat models and controls with new technologies and trying to steamroll the DevOps team by trying to shove all the security testing into the software development life cycle.”

Changing the DevOps Narrative
The very suggestion that security needs to be tacked onto or integrated into the DevOps process implies that DevOps does not think about security, which Mozilla security engineering manager Julien Vehent says is not the case.

While a culture of “go fast and break things” has been attributed to DevOps, it was a symptom of startup companies that didn’t have a lot of risk, he explains. Large organizations that want to stay competitive are focused on speed, but they usually don’t want to break things.

“These companies have adopted DevOps but have also made sure to integrate security in their [quality assurance] and testing processes to make sure that they can go fast without breaking things,” Vehent says.

Securing DevOps is about more than technology, though. It is also about people. Marrying security and DevOps requires a change in culture, a breaking down of barriers, and dispelling the myths of misguided beliefs that have led to finger-pointing in the past. So how do organizations change the culture? In the case of Malwarebytes, it started with changing the name.

“At Malwarebytes, we did not have good SecDevOps until I created a group within engineering that had the specific charter of ensuring the ‘ops’ and ‘sec’ part of SecDevOps was owned and done properly,” says Malwarebytes senior vice president of engineering Mark Patton. “We named the team ‘Site Reliability Engineering,’ and they were the bellwether team that changed the culture of engineering from within.”

The Malwarebytes SRE team then worked backward, guided by the mantra of “100% uptime, 0% breaches, optimize cost.” That mantra allowed the team to slowly push good practice into the engineering team.

“They reshaped the way our environments were created and deployed, reworked our deployments to use the most economical AWS services, showed our developers how to architect cloud systems to optimize security and cost, and then added security monitoring that immediately flags issues to the proper people,” Patton says.

The process was not without its challenges, including finding talent and convincing the teams. In the end, Patton says, the new processes and the feedback they generated brought even the most reluctant engineers on board.

Leave Your Ego at the Door
Both security and DevOps teams must unite in the common goal of deploying apps and features securely and quickly, experts say.

“Security teams need to be willing to loosen the security change control reins, allowing automation to play a larger role in their process,” Cequence Security’s Keil says. “Security teams also need to look outside of their existing toolset to ensure they have the right mix for their current and future application architectures.”

The shift to DevOps and security working more closely together “promotes teams to move from viewing security inclusion as a blocker to instead viewing security as an enabler to getting their code into production,” adds Jonathan Deveaux, head of enterprise data protection at comforte AG. “SecDevOps is an excellent way for developers to play a vital role in the overall security program for enterprises and businesses wanting to ensure data security is a critical part of their organization, especially in the development phase.”


Kacy Zurkus is a cybersecurity and InfoSec freelance writer as well as a content producer for Reed Exhibition’s security portfolio. Zurkus is a regular contributor to Security Boulevard and IBM’s Security Intelligence. She has also contributed to several publications, …

Article source: https://www.darkreading.com/edge/theedge/securing-devops-is-about-people-and-culture/b/d-id/1335396?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Security & the Infinite Capacity to Rationalize

To improve the security posture of our organizations, we must open our eyes to rationalization and put an end to it with logic. Here’s how.

As humans, we have a tendency to justify and rationalize our actions. We all do so, whether or not we see what we’re doing, and whether or not we do so willingly. The novelist Paul S. Kemp is quoted as saying that “the human mind has infinite capacity to rationalize.” Former tobacco lobbyist Victor Crawford, who died of lung cancer at 63, justified his personal and professional choices in a 1994 article in The New York Times Magazine entitled “How Do Tobacco Executives Live With Themselves?” this way:

In a way, I think I got my just desserts, because, in my heart, I knew better. But I rationalized and denied, because the money was so good and because I could always rationalize it. That’s how you make a living, by rationalizing that black is not black, it’s white, it’s green, it’s yellow.

What does rationalization have to do with security? First, we are security professionals, but we are, of course, also human. As humans, we rationalize just like anyone else would, which can have a negative influence on our decision-making, weaken our security posture, and introduce additional risk to the business. One way to improve our security postures is to stop rationalizing and to start using logic instead.

Let’s consider four preventable signs of rationalization in a security organization:

Sign 1: “We have to do something.” When a high-profile incident occurs or there is a newsworthy buzz about the world of security, it often comes with a lot of questions from executives, customers, and other stakeholders. While it’s clear that something needs to be done, it’s not always clear what that something is. If a security organization evaluates the issue at hand, the risk it brings to the organization, and how that risk can be remediated, the team can arrive at a logical conclusion as to what actions are necessary. If, however, the security organization finds itself saying “well, we have to do something,” that’s usually an indication that rationalization is occurring. It’s often a sign that a knee-jerk response is being put in place rather than the right response.

Sign 2: “We can’t do that.” It may very well be the case that whatever is being proposed can’t actually be done. But before accepting that conclusion, you need to ask: “Why not?” If there isn’t a clear, concise, logical answer to that simple question, take it as a sign of rationalization.

Perhaps completing a particular task will be difficult, will require significant effort, or will ruffle some feathers in the organization. Regardless, if a security organization isn’t honest with itself about what it can and cannot do, it may discount or dismiss ideas that could go a long way toward helping it improve.

Sign 3: “Our security program is quite mature.” When I hear a statement like this, I want to see metrics that prove it. If there aren’t any readily available, that’s the first sign that rationalization is at play. This may sound a bit harsh, but half of security programs are at or below the median maturity! That’s the way a median works: it’s right in the middle. It may very well be the case that a given security organization is ahead of its peers and excelling. Those security teams generally have extremely well-defined processes and procedures, along with metrics to continuously monitor and improve progress and performance. In other words, if a security program is mature, there should be numbers to prove it. If there aren’t, logic isn’t likely in the mix.

Sign 4: “That’s not how we do things here.” There are policies, rules, guidelines, practices, and procedures that make sense and are in place for good reason. However, not all of them make sense all of the time. If something isn’t producing the required or expected results, it’s time to change the way that something is done. I don’t mean simply changing for change’s sake. I mean real change that improves outcomes and results based on logic and reason. That requires putting the rationalization aside.

Rationalization is lazy — the easy way out. Logic requires more effort. But the extra investment of identifying and eradicating rationalization allows a security organization to open doors that lead the way toward improvement. An improved security posture begins by opening our eyes to rationalization and putting an end to it with logic.


Josh (Twitter: @ananalytical) is an experienced information security leader who works with enterprises to mature and improve their enterprise security programs. Previously, Josh served as VP, CTO – Emerging Technologies at FireEye and as Chief Security Officer for …

Article source: https://www.darkreading.com/threat-intelligence/security-and-the-infinite-capacity-to-rationalize-/a/d-id/1335400?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

PIN the blame on us, says Monzo in mondo security blunder: Bank card codes stored in log files as plain text

Trendy online-only Brit bank Monzo is telling hundreds of thousands of its customers to pick a new PIN – after it discovered it was storing their codes as plain-text in log files.

As a result, 480,000 folks, a fifth of the bank’s customers, now have to go to a cash machine and reset their PINs.

The bank said the numbers, normally tightly secured with extremely limited access, had accidentally been kept in an encrypted-at-rest log file. The contents of those logs were, however, accessible to roughly 100 Monzo engineers who would normally have neither the clearance nor any need to see customer PINs.

The PINs were logged for punters who had used the “card number reminder” and “cancel a standing order” features.

To hear Monzo tell it, the misconfigured logs, along with the PINs, were discovered on Friday evening. By Saturday morning, the UK bank updated its mobile app so that no new PINs were sent to the log collector. On Monday, the last of the logged data had been deleted.
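The root cause, sensitive values flowing into structured log events, is a depressingly common one. Purely as an illustration, and not Monzo’s actual fix, a client-side scrubber along these lines (field names assumed) keeps such values from ever reaching a log collector:

// Hypothetical log-scrubbing sketch, not Monzo's actual code: mask fields
// that must never leave the device before events reach the log collector.
const SENSITIVE_KEYS = new Set(["pin", "password", "cardNumber", "cvv"]);

function scrubLogEvent(event: Record<string, unknown>): Record<string, unknown> {
  const clean: Record<string, unknown> = {};
  for (const [key, value] of Object.entries(event)) {
    clean[key] = SENSITIVE_KEYS.has(key) ? "[REDACTED]" : value;
  }
  return clean;
}

// Example: the PIN is masked before the event is shipped off the device.
console.log(scrubLogEvent({ action: "card_number_reminder", pin: "1234" }));
// -> { action: "card_number_reminder", pin: "[REDACTED]" }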

“No one outside Monzo had access to these PINs,” Monzo said in its attempt to reassure customers.

“We’ve checked all the accounts that have been affected by this bug thoroughly, and confirmed the information hasn’t been used to commit fraud.”


Monzo says anyone whose PIN was exposed in the logs will be given a message instructing them to change their codes. To do that, the customer will need to go to a cash machine (app-happy Monzo has no brick-and-mortar branches) and select a new number via the PIN Services menu.

While Monzo maintains that nobody outside of the bank was able to see the codes, it would not be a bad idea to keep an eye on your account activity in case anything suspicious is spotted.

Additionally, everyone with an account at the bank is being advised to update their Android and iOS apps to make sure they have the latest versions, via which PINs are no longer fed into the log files.

The blunder is a setback for the upstart UK banking service, but fortunately for Monzo it happened to come along in the wake of a much bigger banking privacy foul-up. With Capital One now facing lawsuits and the possibility of Congressional hearings for its mishandling of records on 106 million people, a few mishandled PINs won’t get much play in the news cycle. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/08/06/monzo_pins_reset/

It’s 2019 – and you can completely pwn a Qualcomm-powered Android over the air

Black Hat It is possible to thoroughly hijack a nearby vulnerable Qualcomm-based Android phone, tablet, or similar gadget, via Wi-Fi, we learned on Monday. This likely affects millions of Android devices.

Specifically, the following two security holes, dubbed QualPwn and found by Tencent’s Blade Team, can be leveraged one after the other to potentially take over a handheld:

  • CVE-2019-10540: This buffer-overflow flaw is present in Qualcomm’s Wi-Fi controller firmware. It can be exploited by broadcasting maliciously crafted packets of data over the air so that, when they are received by at-risk devices, arbitrary code included in the packets is executed by the controller.

    This injected code runs within the context of the Wi-Fi controller, and can subsequently take over the adjoining cellular broadband modem. Thus, CVE-2019-10540 could be exploited by nearby miscreants over the air to silently squirt spyware into your phone to snoop on its wireless communications.

    There is also, we spotted, a related CVE-2019-10539 buffer-overflow vulnerability in the Wi-Fi firmware that is not referenced by Tencent and not part of the QualPwn coupling.

  • CVE-2019-10538: This vulnerability can be exploited by malicious code running within the Wi-Fi controller to overwrite parts of the Linux kernel running the device’s main Android operating system, paving the way for a full device compromise.

    Essentially, CVE-2019-10538 lies in Qualcomm’s Linux kernel driver for Android. The Wi-Fi firmware is allowed to dictate the amount of data to be passed from the controller to the kernel, when the kernel should really check to make sure it isn’t being tricked into overwriting critical parts of its memory. Without these checks, a compromised controller can run roughshod over the core of the Android operating system.
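The bug class here is a missing bounds check on an untrusted length field. As a rough sketch of the idea, written in TypeScript for readability rather than the driver’s actual C, the kernel-side receiver has to validate what the firmware claims before copying:

// Illustrative sketch of the CVE-2019-10538 bug class, not Qualcomm's code:
// never trust a length declared by a (possibly compromised) peer.
function copyFromFirmware(kernelBuf: Uint8Array, packet: Uint8Array): void {
  // First two bytes: payload length claimed by the untrusted Wi-Fi firmware.
  const claimed = (packet[0] << 8) | packet[1];

  // The missing check: without it, a C driver would memcpy() past the end of
  // kernelBuf and let the firmware overwrite adjacent kernel memory.
  if (claimed > packet.length - 2 || claimed > kernelBuf.length) {
    throw new RangeError("firmware-declared length exceeds buffer bounds");
  }
  kernelBuf.set(packet.subarray(2, 2 + claimed), 0);
}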

Thus, it is possible for a miscreant to join a nearby wireless network, seek out a vulnerable Qualcomm-powered Android device on the same Wi-Fi network, and send malicious packets over the air to the victim to exploit CVE-2019-10540. Next, the hacker can compromise the cellular modem and spy on it, and/or exploit CVE-2019-10538 to take over the whole operating system at the kernel level to snoop on the owner’s every activity and move.

Both bugs are confirmed by Tencent to exist in Google Pixel 2 and 3 devices, and anything using a Qualcomm Snapdragon 835 or 845. Meanwhile, Qualy, in its own advisory released on Monday, revealed many more of its chips – which are used in hundreds of millions of Android devices – are at risk, all the way up to its top-of-the-line Snapdragon 855. Basically, if your phone or tablet uses a recent Qualcomm chipset, it’s probably at risk.


The good news is that all the bugs have been patched by Qualcomm. CVE-2019-10538 lies within Qualy’s open-source Linux kernel driver, and its fix is available from Google. CVE-2019-10539 and CVE-2019-10540 are patched in Qualcomm’s closed-source Wi-Fi controller firmware, which was distributed to device makers in June after Tencent privately alerted the chip designer in April.

Now for the bad news. When exactly these fixes will filter down to actual Android users is not clear: if you’re using a supported Google-branded device, you should be able to pick up the updates as part of this month’s security patch batch. If not, you’re at the mercy of your device maker, and possibly cellular operator, to test, approve, and distribute the updates to punters.

Full details on the bugs and how they can be exploited are not public, and no exploits have been spotted in the wild. There is more good news: there are various security hurdles to clear within the Linux kernel and the Wi-Fi firmware, such as stack cookies and non-executable data areas, before exploitation is successful. In other words, it is non-trivial to exploit QualPwn, but not impossible.

Tencent’s Peter Pi and NCC Group consultant Xiling Gong plan to describe the pair of programming blunders during talks at the Black Hat and DEF CON hacking conferences this week in Las Vegas.

But wait, there’s more

Also out this week from Google are more security fixes for various parts of Android. The worst can be exploited by maliciously crafted media messages to take over a device.

Also, as for devices with Broadcom-based Bluetooth electronics: it’s possible to pwn the gizmos over the air via malicious data packets, which seems pretty bad and worthy of a story on its own.

Here’s a swift summary of the bugs:

  • CVE-2019-2120 in Android runtime “could enable a local attacker to bypass user interaction requirements in order to gain access to additional permissions.”
  • CVE-2019-2121, CVE-2019-2122, and CVE-2019-2125 in Framework, with the “most severe vulnerability in this section could enable a local malicious application to execute arbitrary code within the context of a privileged process.”
  • CVE-2019-2126, CVE-2019-2128, CVE-2019-2127, and CVE-2019-2129 in Media Framework, with the “most severe vulnerability in this section could enable a remote attacker using a specially crafted file to execute arbitrary code within the context of an unprivileged process.”
  • CVE-2019-2130 to CVE-2019-2137 in System, with “most severe vulnerability in this section could enable a remote attacker using a specially crafted PAC file to execute arbitrary code within the context of a privileged process.”
  • CVE-2019-11516 in Broadcom’s firmware that “could enable a remote attacker using a specially crafted transmission to execute arbitrary code within the context of a privileged process.”

Again, if you’re using an officially supported Google-branded device, you should be getting these updates over the air soon if not already. If you’re not, then, well, look for updates soon from your manufacturer and/or cellular network provider, or hope they can be installed automatically via Google Play services if they are not too low level. ®

PS: Google is adding support for Arm’s memory-tagging security feature to Android.


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/08/06/qualcomm_android_patches/

Need to automatically and securely verify a download is legit? You bet rget this new tool

Brandon Philips, a member of the technical staff at Red Hat, has created a software tool called rget for Linux, macOS, and Windows, to make it easier to determine whether downloaded files can be trusted.

The command-line application is intended as an alternative to wget, a widely used tool for fetching files that has been around for more than two decades. In a phone interview with The Register on Monday, Philips said he hopes that rget sees enough adoption that its functionality gets incorporated into wget and other software distribution mechanisms like Docker and npm.

The advantage of rget over wget is that the former provides a way to automatically and securely verify the integrity of downloaded files so that users can be confident that stuff they’ve just fetched has not been tampered with since its publication on the internet.

Specifically, rget can fetch a file from a given URL, and check a SHA-256 hash of that file’s contents against the official hash entry for that URL in a public cryptographic log. If the hash of the downloaded file does not match the expected hash in the log, that indicates the download has been altered by someone in an unauthorized way, and an alarm is raised.

When someone publishes a file on the internet that they wish to be verified by rget, they have to add the file’s hash and URL to the public log so that, in future, rget can verify the legitimacy of downloaded copies of said file from the given URL.

Many developers are familiar with SHA-256 hashes, those lengthy alphanumeric strings that are often listed alongside download links on web pages but may not be replicated elsewhere. It’s this lack of an authoritative public record of digests that rget aims to address. The rget tool therefore automates the process of publishing file digests and distributing them to multiple parties, which makes it easy to assess file integrity and to build tools that raise automatic alerts about unauthorized file changes.
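For comparison, the manual check that rget automates might be sketched like this (a TypeScript sketch for Node 18 or later, assuming you already hold a trusted digest, say from a published SHA256SUMS file; the function name is illustrative):

// Sketch of the verification step rget automates: fetch a file and compare
// its SHA-256 digest against a trusted, separately published value.
import { createHash } from "node:crypto";

async function verifyDownload(url: string, expectedSha256: string): Promise<Buffer> {
  const res = await fetch(url); // global fetch requires Node 18+
  if (!res.ok) throw new Error(`download failed: HTTP ${res.status}`);
  const body = Buffer.from(await res.arrayBuffer());
  const actual = createHash("sha256").update(body).digest("hex");
  if (actual !== expectedSha256.toLowerCase()) {
    throw new Error(`digest mismatch: got ${actual}, expected ${expectedSha256}`);
  }
  return body; // only write to disk after the check passes
}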

Running rget might look something like this:

rget https://github.com/philips/releases-test/archive/v1.0.zip

Releasing software using rget would involve two commands:

rget github publish-release-sums https://github.com/merklecounty/rget/releases/tag/v0.0.6
rget submit https://github.com/merklecounty/rget/releases/download/v0.0.6/SHA256SUMS

One of the problems with hacked libraries is that it may take time for word to reach affected individuals. With rget, the hope is to normalize the publication of cryptographic digests with every set of published files and to provide a distributed record of digests for automatic auditing.

The project arose from work Philips and others did on etcd, a distributed, reliable key-value store often used with Kubernetes and other applications. Initially, Philips explained, doing cryptographic signatures for etcd releases was easy because all the contributors were in-house at CoreOS (later acquired by Red Hat, which in turn was acquired by IBM).

But as more people began working on that project, the etcd team had to confront the difficulty of managing cryptographic keys across a geographically distributed group of people.

“We didn’t have a good solution and turns out no project has a good solution to that,” said Philips. “At the end of it, I started to look at other solutions because key custody is difficult.”

On services like GitHub, npm, and elsewhere, said Philips, software is secured using usernames and passwords. But in recent years, the shortcomings of this approach have become apparent.

For example, in 2016, Linux Mint 17.3 Cinnamon edition was compromised with a backdoor. More recently, malicious code was found in the PureScript installer distributed through npm. And there have been other attacks on developer-oriented resources because such software often gets used widely enough to make it attractive to miscreants.

Philips said the security community is concerned about this because there may be hundreds of thousands of people or more relying on code kept safe only by a password.

Certificate Authorities have tried to make online certificates more trustworthy through the Certificate Transparency project, which provides a way to audit the issuance and maintenance of TLS/SSL certificates.

“Mozilla had created a design document for auditing software releases based on the Certificate Transparency project,” Philips said. “No one had implemented it so I went off and did it.”

The software is currently in alpha stage but Philips said he hopes to see 20 or so large GitHub projects testing rget by the end of the year.

“It would be awesome if Kubernetes, before it runs a container, checked the container digest,” he said. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/08/06/rget_wget_replacement/

Googlers hate it! This one weird trick lets websites dodge Chrome 76’s defenses, detect you’re in Incognito mode

A week ago, Google released Chrome 76, which included a change intended to prevent websites from detecting when browser users have activated Incognito mode.

Unfortunately, the web giant’s fix opened another hole elsewhere. It enabled a timing attack that can be used to infer when people are using Incognito mode.

On Sunday, developer Jesse Li described a novel method to detect when Chrome users have activated Incognito mode using Chrome’s FileSystem API: it is possible to benchmark the speed at which files can be written to disk using this software interface.

The technique is similar to one proposed last month by security researcher Vikas Mishra. He found that the browser’s Quota Management API, for managing temporary and persistent storage, can be used to infer the presence or absence of Incognito mode.
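Mishra’s quota observation is simple enough to sketch in a page script; the 120MB threshold below is an assumption drawn from his write-up rather than a documented constant:

// Rough sketch of the quota heuristic, not Mishra's exact code: Chrome's
// Incognito mode reports a much smaller temporary-storage quota.
async function looksLikeIncognito(): Promise<boolean> {
  const { quota } = await navigator.storage.estimate();
  // Assumed threshold: normal-mode quotas are typically far larger.
  return quota !== undefined && quota < 120 * 1024 * 1024;
}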

Incognito mode in Chrome sounds as if it keeps users anonymous online. But it doesn’t. It simply prevents browsing activity from being stored in the History log and it erases local HTTP cookies and site data from memory when the Incognito session ends (rather than writing the data to local storage). Its purpose is to prevent other people using the same browser on the same device from being able to look at an in-browser record of past browsing sessions.

Web publishers with paywalls dislike Incognito mode because it prevents the setting of cookies to limit article consumption among non-paying visitors. To fight such freeloading, some paywalled websites include code that detects whether Chrome users have access to the FileSystem API – it used to be disabled when Incognito mode was active.

To eliminate this inconsistency, Google engineers in March last year proposed a plan to make the FileSystem API available when Incognito mode is active. The change debuted behind a flag in Chrome 74 and was turned on by default in Chrome 76. But Incognito mode can still be detected.


What Li found was that the FileSystem API performs differently when Incognito mode is active. By conducting a series of write speed benchmarks, Li demonstrated that normal write operations are more irregular and take about three to four times longer than write operations when Incognito mode is active. The source code is available on GitHub.
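Conceptually, the probe looks something like the following sketch, a condensed illustration rather than Li’s published code, using Chrome’s prefixed webkitRequestFileSystem entry point:

// Condensed sketch of the timing probe: repeatedly write a small payload via
// the FileSystem API and record how long each write takes. Disk-backed
// (normal-mode) writes tend to be slower and noisier than in-memory
// (Incognito) writes.
function timeOneWrite(fs: any, i: number, payload: Blob): Promise<number> {
  return new Promise((resolve, reject) => {
    fs.root.getFile(`probe-${i}`, { create: true }, (entry: any) => {
      entry.createWriter((writer: any) => {
        const t0 = performance.now();
        writer.onwriteend = () => resolve(performance.now() - t0);
        writer.onerror = reject;
        writer.write(payload);
      }, reject);
    }, reject);
  });
}

async function benchmarkWrites(runs = 50): Promise<number[]> {
  const payload = new Blob([new Uint8Array(64 * 1024)]); // 64 KB per write
  const fs = await new Promise<any>((resolve, reject) =>
    (window as any).webkitRequestFileSystem(0 /* TEMPORARY */, 10 * 1024 * 1024, resolve, reject)
  );
  const timings: number[] = [];
  for (let i = 0; i < runs; i++) timings.push(await timeOneWrite(fs, i, payload));
  return timings; // slow, irregular timings suggest normal (disk-backed) mode
}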

The technique is slower and less reliable than simply checking whether the FileSystem API is available: it takes tens of seconds to conduct the measurements, and different hardware configurations affect the timing data. But Li contends the issue is difficult to patch because Incognito mode stores data in memory while normal mode stores data on disk.

“The only way to prevent this attack is for both Incognito mode and normal mode to use the same storage medium, so that the API runs at the same speed regardless,” Li wrote.

In Google’s design document, Jochen Eisinger, director of engineering for Chrome, suggested timing attacks could be addressed by keeping only metadata in memory and encrypting files to disk rather than storing both metadata and files in memory when Incognito mode is active.

Google did not respond to a request for comment about whether it intends to explore this alternative approach to prevent timing and storage-based inferences about Incognito mode.

Li, however, is skeptical that a different strategy would lead to improved privacy. “While it’s resistant to our attacks, it leaves behind metadata: even if the data itself cannot be decrypted, its mere existence provides evidence of incognito usage, and leaks when the user last used incognito mode and the approximate size of the data they wrote to disk,” Li’s post claims.

According to Eisinger, Google intends to deprecate and eventually remove the FileSystem API because it hasn’t been adopted by other browser vendors and appears to be used mainly to detect Incognito mode. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/08/05/chrome_incognito_mode_fix_falls_flat/

F-B-Yikes! FBI bod allegedly hid spy camera under desk to snap coworker’s upskirt pics

An FBI contractor has pleaded not guilty to charges that he installed a camera under a coworker’s desk to satisfy his “voyeur” fetish.

Joshua Green was working as a contractor in the J Edgar Hoover building – aka the FBI headquarters in Washington DC – last month. On July 26, one of his colleagues, who had just returned to work from maternity leave, was at her desk when she shifted in her chair and knocked a camera installed under the table onto the floor, NBC Washington reports.

According to court documents, Green retrieved the camera, and was stopped and quizzed by a coworker. He was then taken into custody by the Feds and police. Prosecutors claim that during interrogation he admitted to planting the camera, though he denied getting any pictures because the spy gear apparently wasn’t working properly.


The cops claim Green told them he had a “voyeurism fetish,” enjoyed taking photographs of young women, and had viewed images of several underage girls on his home computer.

“The suspect advised that there may be images of people that he has taken unconsented pictures of stored on his personal computer, and that he uses personal computers to look at images (both clothed and nude) of girls under the age of 18,” the court documents state.

The woman who bumped into the camera confirmed she hadn’t given her consent for any images to be taken. She also had chosen to wear skirts to the office after returning from leave, and was concerned that the camera may have recorded her.

Green has now been charged with voyeurism, and on Monday pleaded not guilty. He faces a tough trial: if these crimes were committed, they were carried out in one of the most heavily monitored law enforcement buildings in the country, and the Feds are unlikely to take a potential crime against one of their own lightly. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/08/05/fbi_camera_upskirt/