
Google launches new Chrome protection from bad URLs

Google on Tuesday launched two new security features to protect Chrome users from deceptive sites: an extension that offers an easy way to report suspicious sites, and a new warning to flag sites with deceptive URLs.

Emily Schechter, Chrome Product Manager, said in a post on Google’s security blog that Safe Browsing, which has been protecting Chrome users from phishing attacks for over 10 years, now helps protect more than four billion devices across multiple browsers and apps by showing warnings before people visit dangerous sites or download dangerous files.

The new extension, called the Suspicious Site Reporter, is going to help even more, she said, as it gives users an easy way to report suspicious sites to Google Safe Browsing.

Safe Browsing works by automatically analyzing the sites that Google knows about through Google Search’s web crawlers; the more dangerous or deceptive sites it knows about, the more users it can protect, Schechter said.

Users who install the extension, which is available now in the Chrome Web Store, will see an icon when they’re on a potentially dangerous site, along with a list of reasons the site is considered suspicious. In the example Google provided, the reasons are that the domain uses uncommon characters (which could be a sign of typosquatting), that the site isn’t listed in the top 5,000 sites, and that the user hasn’t visited the site in the past three months.
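
Google didn’t publish the extension’s internals, but the flavour of those checks is easy to sketch. Here’s a minimal, illustrative Python version; the top-sites list and browsing history are hypothetical stand-ins, not the extension’s actual data sources:

```python
# Illustrative heuristics only, loosely modelled on the reasons Google describes.
TOP_SITES = {"google.com", "facebook.com", "amazon.com"}   # imagine the top 5,000 here
RECENT_VISITS = {"google.com", "news.example.com"}         # last 3 months of history

def suspicion_reasons(domain: str) -> list:
    reasons = []
    # Non-ASCII labels or punycode ("xn--") can signal lookalike characters.
    if not domain.isascii() or domain.startswith("xn--"):
        reasons.append("domain uses uncommon characters")
    if domain not in TOP_SITES:
        reasons.append("site isn't in the top 5,000 sites")
    if domain not in RECENT_VISITS:
        reasons.append("you haven't visited this site in the past 3 months")
    return reasons

print(suspicion_reasons("paypa1.com"))
```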

Clicking on “Send Report” allows users to report unsafe sites to Safe Browsing for further evaluation.

Do it for team “Everybody,” Schechter said:

If the site is added to Safe Browsing’s lists, you’ll not only protect Chrome users, but users of other browsers and across the entire web.

Protection from fumble fingers

The second new security feature for Chrome that Google announced is a new warning to protect users from sites with deceptive URLs. Heaven knows we fat-fingered typists need help on this front: It’s all too easy to quickly type a URL you use every day, whether it’s Google or Facebook or Amazon, and in your haste, you accidentally swap, add, or delete a single letter and hit enter.

Maybe you’ll get a 404 message… if you’re lucky. Otherwise, you could land on a spoofed version of the page you were trying to reach.

Registering common misspellings of popular websites to catch users unawares is known as typosquatting, and it’s exactly what it sounds like: cybercrooks scoop up frequently misspelled domain names, knowing that sooner or later, some innocent users will get stuck in their flytrap.

A while back, Naked Security’s Paul Ducklin misspelled Apple, Facebook, Google, Microsoft, Twitter, and Sophos in 2,249 ways to see what would happen – basically, he let a computer mistype URLs across the web to see what it uncovered. He found everything from outright fake pages to adult content and contests designed to capture personal information.

But however carefully we try to type, misspellings and mistypings are bound to happen. That’s why it’s nice to hear that Google is going to help out with this new security feature, which will warn users away from sites whose URLs look an awful lot like legitimate domains but have something else entirely in mind.

Like, for example, throwing pop-ups and ads into unsuspecting users’ faces; trying to sell them IT and hosting services, interesting domain names, or fake tech support; or tricking them into giving away personal or financial information – say, by offering a free product if they pay for shipping, thereby capturing their payment card data.

When the new warning appears, clicking “Continue” will whisk you back to safety.

Schechter:

This new warning works by comparing the URL of the page you’re currently on to URLs of pages you’ve recently visited. If the URL looks similar, and might cause you to be confused or deceived, we’ll show a warning that helps you get back to safety.
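
Google hasn’t detailed the matching logic, but a crude approximation of “looks similar to a page you’ve recently visited” is plain string similarity against your history. A minimal sketch, with a hypothetical visited-domains list:

```python
from difflib import SequenceMatcher

VISITED = ["google.com", "facebook.com", "amazon.com"]  # hypothetical browsing history

def lookalike_of(domain, threshold=0.85):
    # Flag domains that are very close to, but not exactly, a visited domain.
    for known in VISITED:
        if domain != known and SequenceMatcher(None, domain, known).ratio() >= threshold:
            return known
    return None

print(lookalike_of("goggle.com"))  # -> "google.com"
```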

This one feels obvious, doesn’t it? It’s hard to believe that we’ve spent years telling people to be careful as they type, instead of having some way to automagically check whether a URL is almost-but-not-quite a popular, safe domain – in other words, some stagnant sinkhole lying nearby.

Nicely done, Chrome. Readers, if you use the new tools, feel free to tell us how you like them in the comments section below.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/BqmWOsnr0Cs/

Update Firefox now! Zero-day found in the wild

Mozilla has fixed a critical zero-day bug in the latest point releases of the Firefox web browser. The security flaw allows attackers to run their own code by exploiting the browser with malicious JavaScript, and it is already being used against Firefox users in the wild.

The bug affects both Firefox and its enterprise counterpart, Extended Support Release (ESR). According to Mozilla’s advisory:

A type confusion vulnerability can occur when manipulating JavaScript objects due to issues in Array.pop.

Programmers use JavaScript’s array object to hold a collection of data items; pop is a method that removes and returns the last element of an array.

A type confusion vulnerability happens when a program doesn’t check the type of a data item that is passed to it. It might assume it’s getting a number, for example, when it actually gets a string. If it doesn’t check, then it can mishandle the data item, potentially destabilising its code.
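
In a memory-safe scripting language, that kind of mix-up merely produces wrong answers; in a browser engine’s native code, misjudging a value’s type means misinterpreting raw memory, which is what makes such bugs exploitable. A loose Python illustration of the “expected a number, got a string” failure mode:

```python
def double(value):
    # Assumes `value` is a number; nothing checks that it actually is.
    return value + value

print(double(21))    # 42, as intended
print(double("21"))  # "2121": the same code silently mishandles the data
```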

In this case, the effect is catastrophic, the advisory warned:

This can allow for an exploitable crash. We are aware of targeted attacks in the wild abusing this flaw.

The vulnerability was discovered by Samuel Groß of Google Project Zero and is tracked as CVE-2019-11707. The Department of Homeland Security also published an alert about the flaw.

Mozilla has fixed the flaw in Firefox version 67.0.3, and in Firefox ESR version 60.7.1. Because people are already exploiting the bug, it’s important that you update to the latest version now.

Firefox checks for updates and installs them automatically, but if you’re worried, you can force a manual check: select Help > About Firefox, and the browser will fetch and install any available update. When it has finished, restart the browser.

Users of the Tor Browser (which is based on Firefox) should also update to version 8.5.2, which the Tor Project released on Wednesday. The Android version isn’t available yet, though. The Tor team said:

As part of our team is currently traveling to an event, we are unable to access our Android signing token, therefore the Android release is not yet available. We expect to be able to publish the Android release this weekend.

In the meantime, Android users should switch to the Safer or Safest security levels, the Tor team advised. Do that by selecting Security Settings in the menu to the right of the URL bar.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/fj3qcBlYHxs/

Shut the barn door: UK data watchdog tells MPs mass slurping by firms is a huge risk to privacy

Regulators and campaign groups have warned a UK Parliamentary inquiry that the increasing collection, use and storage of data by corporations poses a serious risk to privacy and security.

The Human Rights Committee hearing into the right to privacy and the “digital revolution” follows last year’s scandal in which the details of 87 million Facebook users were shared with Cambridge Analytica without their authorisation.

In its submission, the Information Commissioner’s Office said: “The mass collection and aggregation of data, particularly by companies with data-driven business models, underpins the risks to individuals’ privacy at a very fundamental level.

“Businesses have always collected data on customers and users, but the rapid development of technology, particularly online, has allowed this collection and aggregation to be done on an industrial scale. It has reached the point where data collection is not simply a means to a business end, but the end in itself. Data has become the commodity.”

In April, the ICO fined commercial pregnancy and parenting “club” Bounty £400,000 for illegally sharing personal information belonging to more than 14 million people.

The company shared approximately 34.4 million records between June 2017 and April 2018 with credit reference and marketing agencies, including Acxiom, Equifax, Indicia and Sky.

Steve Wood, deputy commissioner at the ICO, told the committee the ICO was actively “calling out” private companies sharing data in this way. He said in the next few years legal precedents are likely to be set on a European level against private companies amassing data.

Orla Lynskey, associate professor of law at the London School of Economics, said many companies ask consumers to consent to unnecessary data collection in order to access free services. She cited the example of online supermarket Ocado recently asking permission to access her photos in order to use its app.

“That is a common feature across applications… I would in no way single out Ocado – it is systemic.”

Indeed, prime ministerial hopeful Matt Hancock released an app last year, which also requested access to users’ photos.

Not surprisingly, a lot of the submissions found folk don’t understand what happens to their data and therefore do not give meaningful consent when using online services. According to digital charity Doteveryone, 62 per cent of people are unaware that social media companies make money by selling data to third parties.

Privacy campaigners have argued that consumers should have the choice to opt into data tracking.

Fears were also expressed that companies can build profiles around sensitive information such as consumers’ political and religious views, socio-economic status, sexual orientation and other details of their family lives.

In its submission Privacy International said: “Companies routinely derive data from other data, such as determining how often someone calls their mother to calculate their credit-worthiness. As a result, potentially sensitive data can be inferred from seemingly mundane data, such as future health risks.” ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/06/20/mass_data_slurping_huge_risk_to_privacy_mps_told/

The Hunt for Vulnerabilities

A road map for improving the update process will help reduce the risks from vulnerabilities.

In 2018, 16,515 new common vulnerabilities and exposures (CVEs) were published. By November of last year, more than 300 vulnerabilities per week were being reported, and we’re on pace for an even bigger 2019. That means updates and patching must be seen as security imperatives.

But keeping every OS, application, and browser version across every machine and device configured exactly right all of the time is a huge, seemingly impossible job. To even get close, enterprises need strategies that make it easier to find, prioritize, fix, and report on vulnerabilities in ways that make sense for their business and existing resources.

To help, let’s lay out a road map for improving the update process required to reduce the security risks posed by vulnerabilities.

Change the Culture
Instead of viewing updates and patching as tedious chores to be done eventually but not urgently, employees need to understand the role vulnerabilities play in company security and how managing them fits into the larger security strategy. This mindset should extend beyond the IT department to every employee.

The Center for Internet Security (CIS) recommends gap- or risk-based training, in which IT staff identify where the bulk of security issues lie (people sharing passwords, updating their own machines, or putting sensitive data on a USB drive that could easily be lost or mishandled) and target training at the biggest challenges. This helps employees understand important practices and why they matter, and gives them relevant, real-world guidance. It should be a partnership in which all employees feel supported, so that cooperation happens when it is vital, even if that means rebooting an employee’s machine in the middle of a project to patch a critical issue.

To be effective, security awareness training also needs to be more than a one-and-done exercise during onboarding. Employees are so bombarded with new information related to their specific job functions that security is unlikely to stay top-of-mind. For the culture to shift, training needs to be ongoing. It doesn’t have to be overwhelming or threatening; it can be as simple as a few minutes in an all-hands meeting, a quarterly email of best practices, or a biannual seminar.

Utilize Standards
In addition to getting employees on board with basic practices, teams have to actually find existing vulnerabilities. A number of open standards help identify the ever-expanding list of vulnerabilities, as well as the proper configurations to guard against them. The Security Content Automation Protocol (SCAP) is one of the most common: it provides a framework of specifications that support automated configuration, vulnerability and patch checking, compliance, and measurement, and it is highly useful for definitions of common exposures and for determining which situations apply to your environment. Other standards are useful for establishing a configuration baseline as well: CIS (mentioned earlier) provides guidance, and the Security Technical Implementation Guides (STIGs) released by the Defense Information Systems Agency are also quite useful.

Once you establish a baseline, the CVE database and the National Vulnerability Database (NVD), which pull from a wide range of sources, can assist in identifying vulnerabilities. Microsoft also publishes its own authoritative security updates. But a quick look at these databases will spark fear in the heart of anyone charged with vulnerability management, given the complexity and sheer volume of the vulnerabilities involved.
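
Individual records can also be pulled programmatically. Here’s a minimal Python sketch that fetches one CVE from the NVD’s public REST API (this assumes the v2.0 endpoint and response shape from NIST’s published documentation; verify both before relying on them):

```python
import requests

# Fetch a single CVE record from the NVD's public API.
resp = requests.get(
    "https://services.nvd.nist.gov/rest/json/cves/2.0",
    params={"cveId": "CVE-2019-11707"},
    timeout=30,
)
resp.raise_for_status()

for vuln in resp.json().get("vulnerabilities", []):
    cve = vuln["cve"]
    # Print the CVE ID and its first description string.
    print(cve["id"], "-", cve["descriptions"][0]["value"])
```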

Seek Automated Solutions
Automated vulnerability management solutions have emerged to help. These solutions pull from the respective databases to identify and analyze the vulnerabilities affecting your endpoints. Existing automated products can be slow and can interfere with network performance, which has not won them a legion of fans. But with advances in technology, a new generation of vulnerability management solutions is poised to accelerate detection dramatically and expand the number of vulnerabilities they can search for, without hurting performance. As a result, scans don’t need to wait until the end of the day or the weekend, and remediation can occur much, much faster than the industry average of 38 days.

If you have the option of adding an automated vulnerability management solution to your arsenal, be sure to do your research to find a product that fits your needs. No automated solution will get you to 100% detection, but the prospect of reaching 80% to 90% detection in a fraction of the time should have team members rejoicing.

The Process Is Just Beginning …
Now that you’ve found vulnerabilities, the job is just getting started. You still have to figure out how to assess and prioritize, remediate, and report on what you’ve found. As you can see, today’s world of vulnerability management is anything but simple; however, there is an opportunity to turn the tide by paying attention and addressing the little things that become big problems. Doing so will help keep your company as secure as possible.


Jim Souders is CEO of Adaptiva. A global business executive with more than 20 years’ experience, Jim excels at leading teams in creating differentiated software solutions, penetrating markets, achieving revenue goals, and P&L management. Prior to Adaptiva, Jim led high-growth … View Full Bio

Article source: https://www.darkreading.com/vulnerabilities---threats/the-hunt-for-vulnerabilities-/a/d-id/1334976?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Cybersecurity Accountability Spread Thin in the C-Suite

While cybersecurity discussions have permeated board meetings, the democratization of accountability has a long way to go.

A spate of recent surveys suggests that the philosophy that “cybersecurity is everyone’s responsibility” is gaining steam in the C-suite at most large organizations. But digging into the numbers, and keeping in mind perennially abysmal breach statistics, it’s clear that while awareness has broadened across the boardroom, accountability and action are still spread pretty thin.

A report released this week by Radware shows promising signs that cybersecurity is increasingly coming up in board talks and is near-universally viewed as the entire C-suite’s responsibility to enable. Conducted among 260 C-suite executives worldwide, the study shows that more than 70% of organizations touch on cybersecurity as a discussion item at every board meeting. Meanwhile, 98% of members across the C-suite say they have some management responsibility for cybersecurity.

This jibes with another study released earlier this month by KPMG that shows CEOs are increasingly paying attention to cybersecurity risks as a part of their overall technology risk profile. In 2018, according to KPMG, just 15% of US CEOs agreed that strong cybersecurity is critical to engender trust with key stakeholders; that percentage shot up to 72% of CEOs this year. 

“CEOs are no longer looking at cyber-risk as a separate topic. More and more they have it embedded into their overall change programs and are beginning to make strategic decisions with cyber-risk in mind,” says Tony Buffomante, global co-leader of cybersecurity services at KPMG. “It is no longer viewed as a standalone solution.” 

That sounds good at the surface level, but other recently surfaced statistics offer a grounding counterbalance. A global survey of C-suite executives released last week by Nominet indicates these top executives have some serious gaps in cybersecurity knowledge, with around 71% admitting they don’t know enough about the main threats their organizations face.

That squares with a survey of CISOs that Nominet conducted earlier this year, which found that security knowledge and expertise on boards and among C-levels is still dangerously low. Approximately 70% of security executives agree that at least one cybersecurity specialist should sit on the board for it to exercise appropriate due diligence on these issues. Yet fewer than 6% of CISOs believe their boards and executive management know enough to truly understand the nuances and implications of the cybersecurity issues CISOs bring to them.

“The lack of cybersecurity expertise on boards underscores the disconnect between CISOs and the rest of the organizational leadership team,” said Bradley Schaufenbuel, CISO and VP at Paylocity, in that report. “It is difficult to expect the proper level of governance and oversight with such an inherent absence of understanding of the risk at that level.” 

More troubling about this assessment from the security professionals is that many CEOs consider themselves experts in security matters: the Radware study shows that 82% of CEOs rank themselves as having a “high” level of knowledge about information security. The disparity between how CISOs rate corner-office cybersecurity expertise and how CEOs assess themselves likely indicates a false sense of security. And that is reflected in low levels of buy-in and acceptance of advice from security staff: only 36% of CISOs say senior management regularly takes their advice, and just 46% of the broader C-suite admits to taking advice from security employees.

The results show that even though on paper everyone is “responsible” for security, in practice too few decision-makers have the expertise and knowledge to develop and execute security strategy. The resulting disconnect means cybersecurity incidents are reported to the board and C-suite at only 40% of businesses today, according to the Nominet report.


Ericka Chickowski specializes in coverage of information technology and business innovation. She has focused on information security for the better part of a decade and regularly writes about the security industry as a contributor to Dark Reading.  View Full Bio

Article source: https://www.darkreading.com/risk/cybersecurity-accountability-spread-thin-in-the-c-suite/d/d-id/1335015?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

7 2019 Security Venture Fund Deals You Should Know

2019 has, so far, been a busy year for venture capitalists in the security industry. Here are seven funding rounds that stand out for the technologies and market trends they represent.

Venture capitalists look for the “big win,” not for this week, but for the future. And they stake millions of dollars on their ability to find the winning combination of team and technology that will turn a very good idea into a very profitable company.

2019 has been an active year for venture funds as investors place their bets on a wide range of security and privacy technologies. From artificial intelligence to zero trust security, startup companies are looking for ways to slow attackers, prevent damage, and increase security while keeping costs low and performance high.

Dark Reading has covered many of the significant venture security events in the first half of 2019, but as this half comes to a close it can be valuable to look back and see which technologies professional analysts and investors believe will be important in coming years.

One of the notable characteristics of the major venture announcements in 2019 is the lack of a single technology theme. The companies featured in these seven deals have products for cloud applications and IoT, analytics and encryption. The only common thread among the seven is their focus on security.

It’s important to note that these are not the only security industry venture capital deals that have been concluded in the first half of 2019. These seven have made the list because they represent broader trends, demonstrate something important about the shape of the market, or show just how much confidence the venture funds have in particular technology directions. And for CISOs, security managers, and team members looking forward toward next-generation security products, these seven provide an important road map for the future.

Which of these technologies do you think will be most important in the coming five years? Do you think any of these will have a truly profound impact on the way we secure enterprise IT (or OT) networks? Let us know where you’ll place your bets in the comment section, below.


Curtis Franklin Jr. is Senior Editor at Dark Reading. In this role he focuses on product and technology coverage for the publication. In addition he works on audio and video programming for Dark Reading and contributes to activities at Interop ITX, Black Hat, INsecurity, and … View Full Bio

Article source: https://www.darkreading.com/analytics/7-2019-security-venture-fund-deals-you-should-know/d/d-id/1335001?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Machine Learning Boosts Defenses, But Security Pros Worry Over Attack Potential

As defenders increasingly use machine learning to remove spam, catch fraud, and block malware, concerns persist that attackers will find ways to use AI technology to their advantage.

Machine learning continues to be widely pursued by cybersecurity companies as a way to bolster defenses and speed response. 

Machine learning, for example, has helped companies such as security firm Malwarebytes improve their ability to detect attacks on consumer systems. In the first five months of 2019, about 5% of the 94 million malicious programs detected by Malwarebytes’ endpoint protection software came from its machine-learning powered anomaly-detection system, according to the company. 

Such systems, and artificial intelligence (AI) technologies in general, will be a significant component of every company’s cyberdefense, says Adam Kujawa, director of Malwarebytes’ research labs.

“The future of AI for defenses goes beyond just detecting malware, but also will be used for things like finding network intrusions or just noticing that something weird is going on in your network,” he says. “The reality is that good AI will not only identify that it’s weird, but [it] also will let you know how it fits into the bigger scheme.”

Yet, while Malwarebytes joins other cybersecurity firms as a proponent of machine learning and the promise of AI as a defensive measure, the company also warns that automated and intelligent systems can tip the balance in favor of the attacker. Initially, attackers will likely incorporate machine learning into backend systems to create more custom and widespread attacks, but they will eventually focus on ways to attack other AI systems as well.

Malwarebytes is not alone in that assessment, and it’s not the first to issue a warning, as it did in a report released on June 19. From adversarial attacks on machine-learning systems to deepfakes, a range of techniques that generally fall under the AI moniker are worrying security experts.

In 2018, IBM created a proof-of-concept attack, DeepLocker, that conceals itself and its intentions until it reaches a specific target, raising the possibility of malware that infects millions of systems without taking any action until it triggers on a set of conditions.

“The shift to machine learning and AI is the next major progression in IT,” Marc Ph. Stoecklin, principal researcher and manager for cognitive cybersecurity intelligence at IBM, wrote in a post last year. “However, cybercriminals are also studying AI to use it to their advantage — and weaponize it.”

The first problem for both attackers and defenders is building stable AI technology. Machine-learning algorithms require good data to train into reliable systems, and researchers and bad actors alike have found ways to pollute training data sets and thereby corrupt the resulting systems.

In 2016, for example, Microsoft launched a chatbot, Tay, on Twitter that could learn from messages and tweets, saying, “the more you talk the smarter Tay gets.” Within 24 hours of going online, a coordinated effort by some users resulted in Tay responding to tweets with racist responses.

The incident “shows how you can train — or mistrain — AI to work in effective ways,” Kujawa says.

Polluting the data sets collected by cybersecurity firms could similarly create unexpected behavior and make their detection systems perform poorly.

A number of AI researchers have already used such attacks to undermine machine-learning algorithms. A group including researchers from Pennsylvania State University, Google, the University of Wisconsin, and the US Army Research Lab used its own AI attacker to craft images that, fed to other machine-learning systems, train the targeted systems to incorrectly identify images.

“Adversarial examples thus enable adversaries to manipulate system behaviors,” the researchers stated in the paper. “Potential attacks include attempts to control the behavior of vehicles, have spam content identified as legitimate content, or have malware identified as legitimate software.”
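
The core trick behind many such attacks is surprisingly small. Here’s a minimal sketch of the fast gradient sign method (FGSM) against a toy logistic-regression model – the textbook version of the idea, not the specific attack from the paper quoted above:

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=20)      # weights of a toy, already-trained linear classifier
x = 0.3 * w                  # an input the model classifies confidently as class 1
y = 1.0                      # the true label

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Gradient of the logistic loss with respect to the INPUT (not the weights):
# stepping along its sign increases the loss, pushing the model toward error.
grad_x = (sigmoid(w @ x) - y) * w

eps = 0.5                            # per-feature perturbation budget
x_adv = x + eps * np.sign(grad_x)    # the fast gradient sign method

print("clean confidence:      ", sigmoid(w @ x))
print("adversarial confidence:", sigmoid(w @ x_adv))  # noticeably lower
```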

While Malwarebytes’ Kujawa cannot point to a current instance of in-the-wild malware that uses machine-learning or AI techniques, he expects to see examples soon. Rather than malware that incorporates neural networks or other AI technology, initial attempts at fusing malware with AI will likely focus on the backend: the command-and-control server, he says.

“I think we are going to see a bot that is deployed on an endpoint somewhere, communicating with the command-and-control server, [which] has the AI, has the technology that is being used to identify targets, what’s going on, gives commands, and basically acts as an operator,” Kujawa says.

Companies should expect attacks to become more targeted in the future as attackers increasingly use AI techniques. Similar to the way that advertisers track potential interested users, attackers will track the population to better target their intrusions and malware, he says.

“These things could create their own victim profiles internally,” he says. “A dossier on each target can be created by an AI very quickly.”

Related Content

 

Veteran technology journalist of more than 20 years. Former research engineer. Written for more than two dozen publications, including CNET News.com, Dark Reading, MIT’s Technology Review, Popular Science, and Wired News. Five awards for journalism, including Best Deadline … View Full Bio

Article source: https://www.darkreading.com/machine-learning-boosts-defenses-but-security-pros-worry-over-attack-potential/d/d-id/1335017?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

If Uncle Sam could quit using insecure .zip files to swap info across the ‘net, that would be great, says Silicon Ron Wyden

Influential US Senator Ron Wyden (D-OR) is not happy about Uncle Sam’s employees using insecure .zip files and other archive formats to electronically transfer information.

The Oregon Democrat today sent a letter [PDF] to Walter Copan, director of America’s National Institute of Standards and Technology (NIST), asking that the standards body put together a guidance document for government workers on alternatives to .zip archiving tools.

“I write to ask that NIST create and publish guidance describing how individuals and organizations can safely share sensitive documents with others over the internet,” Silicon Ron urged. “Government agencies routinely share and receive sensitive data through insecure methods – such as emailing .zip files – because employees are not provided the tools and training to do so safely.”

As Wyden points out, data security experts have long considered the encryption algorithms used by stock .zip archiving tools, including those built into some editions of Microsoft Windows and Apple macOS, to be next to useless: they are usually too weak and can be easily cracked. Thus, creating password-protected .zip files to send government and other sensitive documents over the ‘net is considered unwise because the underlying algorithms used are probably insufficient, unless the sender goes out of their way to use software that employs stronger encryption.
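
Stronger options exist today. As one illustration, here’s a minimal Python sketch that writes a .zip with AES-256-encrypted entries using the third-party pyzipper library (an assumption on my part: it is one of several tools supporting WinZip-style AES entries, and the result is still only as strong as the passphrase):

```python
import pyzipper  # third-party: pip install pyzipper

# Create a .zip whose entries are AES-256 encrypted rather than protected
# with the 1990s-era legacy ZipCrypto stream cipher.
with pyzipper.AESZipFile(
    "report.zip", "w",
    compression=pyzipper.ZIP_LZMA,
    encryption=pyzipper.WZ_AES,
) as zf:
    zf.setpassword(b"use-a-long-random-passphrase")
    zf.writestr("report.txt", b"sensitive contents")

# Reading it back requires the same passphrase.
with pyzipper.AESZipFile("report.zip") as zf:
    zf.setpassword(b"use-a-long-random-passphrase")
    print(zf.read("report.txt"))
```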


For instance, back in 2005, eggheads devised a simple method to crack encrypted password-protected .zip archives created by Windows XP. The weak cipher used, and other since-cracked encryption methods, are still employed by many .zip archiving tools today.

And this is assuming the archives are password protected at all.

When government employees use these insecure tools to create .zip archives, Wyden argues, they are potentially putting sensitive information at risk of decryption and theft, and possibly creating a national security hazard should the messages be intercepted or the scrambled compressed archives be stolen.

The senator is not alone, either. Security experts agree that agency workers should not be using .zip archive tools for moving government documents.

“We cryptographers are arguing over PGP key sizes,” noted Associate Professor Matthew Green, a cryptography expert at Johns Hopkins University in Baltimore. “Meanwhile government employees are emailing each other documents encrypted with a cipher that was handily broken in the 1990s. This is one of those areas (like legacy SMS) where we’ve somehow gotten stuck with the least common denominator.

“There’s a huge opportunity for smart people in this field to come up with something much better.”

It appears Wyden wants NIST to do just that.

“The government must ensure that federal workers have the tools and training they need to safely share sensitive data,” he wrote. “To address this problem, I ask that NIST create and publish an easy-to-understand guide describing the best way for individuals and organizations to securely share sensitive data over the internet.” ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/06/19/ron_wyden_nist_zip_files/

Using Oracle WebLogic? Put down your coffee, drop out of Discord, grab this patch right now: Vuln under attack

Oracle has issued an emergency critical update to address a remote code execution vulnerability in its WebLogic Server component for Fusion Middleware – a flaw miscreants are exploiting in the wild to hijack systems.

The programming blunder, designated CVE-2019-2729, is present in WebLogic Server versions 10.3.6.0.0, 12.1.3.0.0, and 12.2.1.3.0. The vulnerability itself is caused by a deserialization bug in the XMLDecoder for WebLogic Server Web Services.

When exploited, the flaw lets a remote attacker execute malicious code on the targeted machine via an HTTP request, without any credentials or authorization – a nightmare scenario for a server platform, especially one facing the public internet.
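
WebLogic’s flaw sits in Java’s XMLDecoder, but the underlying class of bug – deserializing attacker-controlled input that can describe objects and method calls – is easy to demonstrate in any language. Here’s a deliberately harmless Python analogy using pickle (not WebLogic’s actual mechanism):

```python
import pickle

class Innocuous:
    # On unpickling, pickle calls the function returned here.
    # A real attacker would substitute something far worse than print().
    def __reduce__(self):
        return (print, ("code ran during deserialization!",))

blob = pickle.dumps(Innocuous())

# The "server" side: deserializing untrusted bytes triggers the embedded
# call, with no authentication or further interaction required.
pickle.loads(blob)
```

That one `pickle.loads` on untrusted bytes is the whole attack surface, which is why an unauthenticated HTTP endpoint that feeds request bodies into a deserializer is about as bad as server bugs get.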

Making the fix for this flaw even more urgent is the report from US-CERT that working exploits for the vulnerability have already been spotted being wielded by scumbags in the wild.

Oracle recommends that admins test and install the update as soon as possible.

In posting the patch, Oracle credited a crop of 11 security researchers for spotting and reporting the vulnerability and exploits: Zhiyi Zhang from Codesafe Team of Legendsec at Qi’anxin Group, Zhao Chang of Venustech ADLab, Yuxuan Chen, Ye Zhipeng of Qianxin Yunying Labs, WenHui Wang of State Grid, Sukaralin, orich1 of CUIT D0g3 Secure Team, Lucifaer, Foren Lim, Fangrun Li of Creditease Security Team, and Badcode of Knownsec 404 Team.

This patch comes just two months after Oracle fixed a similar deserialization flaw in WebLogic Server, designated CVE-2019-2725. Like today’s release, that bug allows for remote code execution on vulnerable servers and was likewise considered a critical, patch-ASAP issue. Oracle did not say whether that vulnerability is also under active exploit.

Developers who have yet to install one or both of the fixes would be well-advised to read and follow Oracle’s advisories. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/06/19/oracle_weblogic_emergency/

Google takes the PIS out of advertising: New algo securely analyzes shared encrypted data sets without leaking contents

Google on Wednesday released source code for a project called Private Join and Compute that allows two parties to analyze and compare shared sets of data without revealing the contents of each set to the other party.

This is useful if you want to see how your private encrypted data set of, say, ad-clicks-to-sales conversion rates, correlates to someone else’s encrypted conversion rate data set without disclosing the actual numbers to either side.

This particular technique is a type of secure multiparty computation that builds upon a cryptographic protocol called Private Set-Intersection (PSI). Google employs this approach in a Chrome extension called Password Checkup that lets users test logins and passwords against a dataset of compromised credentials without revealing the query to the internet goliath.

Private Join and Compute, also known as Private Intersection-Sum (PIS), takes PSI further by hiding the data that represents the intersection of the two data sets and revealing only the results of calculations based on the data.

The technique is described in a research paper, “On Deploying Secure Computing Commercially: Private Intersection-Sum Protocols and their Business Applications,” penned by nine Google researchers: Mihaela Ion, Ben Kreuter, Ahmet Erhan Nergiz, Sarvar Patel, Mariana Raykova, Shobhit Saxena, Karn Seth, David Shanahan, and Moti Yung.

The paper describes how PIS can be computed using three cryptographic protocols: Random Oblivious Transfer, encrypted Bloom filters, and Pohlig–Hellman double masking.
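
The double-masking idea is compact enough to sketch. Below is a toy Python version of the commutative-masking core; the group size is deliberately undersized and the oblivious-transfer and sum-computation parts are omitted, so treat it as illustration only:

```python
import hashlib
import secrets

# Toy parameters: real deployments use a ~2048-bit group or an elliptic curve.
P = 2**127 - 1  # Mersenne prime, illustration only; NOT cryptographically sized

def mask(item, exponent):
    # Hash the identifier into the group, then raise it to a secret exponent.
    digest = int.from_bytes(hashlib.sha256(item.encode()).digest(), "big")
    return pow(digest % P, exponent, P)

a = secrets.randbelow(P - 3) + 2   # Alice's secret exponent
b = secrets.randbelow(P - 3) + 2   # Bob's secret exponent

alice = {"user1", "user2", "user3"}
bob = {"user2", "user3", "user4"}

# Each party masks its own identifiers and sends them across; the other
# party masks them AGAIN with its own exponent. Exponentiation commutes:
# (H(x)^a)^b == (H(x)^b)^a, so double-masked values match on the intersection.
alice_double = {pow(mask(x, a), b, P) for x in alice}
bob_double = {pow(mask(x, b), a, P) for x in bob}

print("intersection size:", len(alice_double & bob_double))  # -> 2
```

The full Private Intersection-Sum protocol additionally attaches homomorphically encrypted values to each masked identifier, so that only the sum over the intersection, not the intersection itself, is revealed.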

Practical

“Private Intersection-Sum is not an arbitrary question, but rather arose naturally and was concretely defined based on a given central business need: computing aggregate conversion rate (or effectiveness) of advertising campaigns,” Google’s researchers explain in their paper. “This problem has both great practical value and important privacy considerations, and represents a type of analysis that occurs surprisingly commonly.”

As an example, Google researchers describe a scenario in which a city wants to know whether the cost of operating weekend train service is offset by increased revenues at local businesses. The city’s rider data set and the point-of-sale data set from merchants can be processed using Private Join and Compute in a way that allows the city to determine the total number of train riders who made a purchase at a local store without revealing any identifying information.


Google’s researchers argue that reconciling organizations’ hunger for data mining with rising interest in privacy requires secure computing protocols. “Indeed, the consideration given to privacy by users and governments around the world is growing rapidly,” they observe.

In an email to The Register, Mike Rosulek, assistant professor of computer science at Oregon State University in the US, explained that PSI can replace the status quo, in which Google and another company draft a legal agreement promising to share data to measure ad campaign effectiveness, generate aggregate results, and then dispose of each other’s source data sets under contractual duress.

These PSI techniques let companies do this without the legal ritual. “With PSI there is no way to violate the ‘agreement’ because the cryptography literally prevents you from learning more than you are allowed,” he said.

For those appearing in one of these data sets – an individual who saw a Google ad or bought an advertised product – PSI-sum computation offers a privacy proposition similar to that of the contract scenario, said Rosulek.

“Imagine a ghost appears to Sergey Brin in a dream and says ‘people who saw this advertisement spent collectively $824,8952 at Company X!'” he said. “If you feel like this ghastly vision is not a significant violation of your personal privacy, then you should be comfortable with PSI-sum, since it releases exactly the same information about you into the world.”

Rosulek suggests the greatest benefit of this technology accrues to companies that would have otherwise foregone analytics altogether for fear of privacy problems.

While Google developed its technology as a privacy preserving way to attribute aggregate ad conversions, the web giant says it hopes PIS can advance research into public policy, diversity and inclusion, healthcare and vehicle safety by making secure computing more widely accessible.

At the moment, however, the code is not quite secure enough. The PIS security model envisions “honest-but-curious adversaries” and, as the GitHub repo notes, “If a participant deviates from the protocol, it is possible they could learn more than the prescribed information.” What’s more, the protocol neither ensures that parties supply legitimate inputs nor prevents arbitrary ones. And there may be PIS leakage.

“For example, if an identifier has a very unique associated integer value, then it may be easy to detect if that identifier was in the intersection simply by looking at the intersection-sum,” the GitHub repo cautions.

The code isn’t officially supported by Google and comes with no guarantees. ®

Speaking of encryption… MongoDB Server 4.2 RC, unveiled at MongoDB World 2019 this week, includes a feature called client-side field level encryption. This allows clients to “selectively encrypt individual document fields, each optionally secured with its own key and decrypted seamlessly on the client,” according to the software’s maker.

This ensures data is encrypted by a client before it is sent to the database to store, and decrypted by the client when it is fetched, providing end-to-end encryption. Whoever is hosting the MongoDB database cannot decipher the data, therefore, because only the client, ideally, has the necessary keys.

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/06/19/google_pis_encryption/