STE WILLIAMS

Is Chrome really secretly stalking you across Google sites using per-install ID numbers? We reveal the truth

Analysis Google is potentially facing a massive privacy and GDPR row over Chrome sending per-installation ID numbers to the mothership.

On Tuesday, Arnaud Granal, a software developer involved with a Chromium-based browser called Kiwi, challenged a Google engineer in a GitHub Issues post about the privacy implications of request header data that gets transmitted by Chrome. Granal called it a unique identifier and suggested it can be used, by Google at least, to track people across the web.

He and others argue this violates Europe’s General Data Protection Regulation, because the identifier could be considered to be personally identifiable data.

Google did not respond to a request for comment, but its description of the header suggests it would argue otherwise.

When a browser wishes to fetch a web page from a server, it sends an HTTP request for that page, a request that contains a set of headers, which are key-value pairs separated by colons. These headers describe data relevant to the request. For example, sending the header accept: text/html tells the server what media types the browser will accept.
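As a concrete illustration (the hostname and user-agent string below are placeholders, not real values), the raw text of such a request might look like this, with each header on its own line:

```python
# Illustrative only: a minimal HTTP/1.1 request showing headers as
# colon-separated key-value pairs. Hostname and UA string are made up.
request = (
    "GET /index.html HTTP/1.1\r\n"
    "Host: www.example.com\r\n"
    "Accept: text/html\r\n"
    "User-Agent: ExampleBrowser/1.0\r\n"
    "\r\n"
)

# Split the header block back into key-value pairs.
headers = dict(
    line.split(": ", 1)
    for line in request.split("\r\n")[1:]
    if ": " in line
)
print(headers["Accept"])  # text/html
```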

For years, since 2012 at least, Chrome has sent a header called X-client-data, formerly known as X-chrome-variations, to keep track of the field trials of in-development features active in a given browser. Google activates these randomly when the browser is first installed. Active trials are visible if you type chrome://version/ into Chrome’s address bar. Under the label Variations, you’re likely to see a long list of hexadecimal numbers similar to 202c099d-377be55a.

Referenced on line 32 of this Chromium source code file, the X-client-data header sends Google a list of field trials available to the Chrome user.

“This Chrome-Variations header (X-client-data) will not contain any personally identifiable information, and will only describe the state of the installation of Chrome itself, including active variations, as well as server-side experiments that may affect the installation,” Google explains in a paper describing Chrome capabilities.

Google suggests the number of active variations for a given installation – if usage statistics and crash reports are disabled – is determined by a random seed between 0 and 7,999, which represents roughly 13 bits of entropy.

Less entropy means browser fingerprinting becomes more difficult, and more entropy means the opposite. But usage statistics and crash reports are on by default, so most Chrome users operate under high entropy for this particular data point.
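The arithmetic behind that figure is easy to check: a uniformly random seed with 8,000 possible values carries log2(8000) bits of identifying information, just under 13.

```python
import math

# 8,000 equally likely seed values carry just under 13 bits of
# identifying information.
seed_values = 8000
print(round(math.log2(seed_values), 2))  # 12.97
```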

“If stats are on, then the ID is called ‘High entropy ID’ in the source-code, and ‘determined by your IP address, operating system, Chrome version and other parameters,’ and sticks to your installation,” explained Granal in an email to The Register.

For example, if you visit YouTube using Chrome, the header might include a string like this:

X-client-data: CIS2yQEIprbJAZjBtskBCKmdygEI8J/KAQjLrsoBCL2wygEI97TKAQiVtcoBCO21ygEYq6TKARjWscoB

“With that long ID, hard to believe it’s only 8,000 possibilities,” observed Granal.

Chrome users can see this for themselves by opening up the browser’s Developer Tools, selecting the Network tab and loading a Google property like YouTube or visiting https://ad.doubleclick.net/test. In the right-hand Developer Tools pane, various headers sent during the page load request should be visible, including X-client-data.

“When you install Google Chrome, your installation gets assigned a random number between 0 and 7999 and this number is mixed with a number given by Google’s servers (“seed”), depending on your country, your IP address, and other criteria that Google decides (it could be a random number between 0 and 10 billion as well, we’d never know),” explained Granal.

“This identifier is stored on your computer, and sent every time your Google Chrome communicates with Google *including* (and that makes a huge difference) DoubleClick services (ad targeting).”

According to Granal, this identifier is sent to, and can only be read by, youtube.com, google.com, doubleclick.net, googleadservices.com, and other Google-owned domains – except when in Incognito mode.

This issue has come up before. It was discussed in 2018. But it’s relevant again because Google is in the midst of a broad revision of its web technologies, including its browser code, its extension platform, and web specifications to close privacy and security gaps while retaining the ability to deliver targeted ads.

One of the stated goals of Google’s revisions is to reduce the effectiveness of browser fingerprinting – creating a unique identifier for internet users based on the technical capabilities of their browser. In fact, the Issues thread where Granal weighed in was about Google’s plan to make the text string sent in the User-Agent header more generic (less entropy), so it’s less useful for fingerprinting.

There’s been some resistance among marketers to losing the ability to track people through fingerprinting. The GitHub discussion includes individuals affiliated with ad tech firms who worry that losing data for tracking will make it harder to police ad fraud and will magnify Google’s data advantage.

In an email to The Register, Augustine Fou, a cybersecurity and ad fraud researcher who advises companies about online marketing, dismissed the idea that less fingerprinting means more ad fraud.

“The UA string was entirely useless in detecting fraud since the beginning because any bot worth its salt can copy and paste a legit UA string and pass that to any detection tech to get by it,” he said. “So losing the UA string will not increase fraud, unless of course you assumed UA strings were useful to detect bots, which is not true.”

But the existence of the X-client-data identifier, even if it’s only readable by Google, makes it clear that Google’s privacy push is a defense against third parties, not against Google itself.

Lukasz Olejnik, a computer scientist, independent privacy researcher, and adviser, said in an email to The Register that while this feature has been around for a while, and is probably meant to help track technical problems, it raises potential issues.


“The ID is rather non-transparent, and its management by the user is far from easy,” Olejnik said. “I would imagine that most users have no idea about this ID, what it does and when it is in use. A potentially problematic issue seems to be that the persistent ID is not reset when the user is clearing browser data. In this sense, it is a fingerprint.”

“The risk in general is bounded by the fact that this ID is apparently only sent to sites controlled by a single organization,” he added, referring to Google. “It is then up to the receiving party to make sure that processing of this data is done rightly, so either that users know about it, or that it is impossible to use the ID to single out individuals.”

Fou observes that Google has users logged into a variety of services like Chrome, Gmail, Google Maps, Google Docs, and Android devices, to name a few, so it can already track you that way.

“So you can see having User-Agent strings on a damn browser is less than irrelevant to Google, because it can still ID everyone it wants (and it has Google Analytics, DoubleClick, Adsense, reCaptcha and other code on pretty much every site that matters),” he said. “So anyone who visits any site, Google can set its own first-party cookie to identify them.”

There may also be a security vulnerability here. Granal points out that the Chromium source code checks for a preset list of Google domain names rather than fully qualified domains, so a malicious individual could buy a domain like youtube.vg and set up a website there to collect X-client-data header information, at least until the take-down notice arrives. ®
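The pitfall Granal describes can be illustrated with a deliberately naive allowlist check. This is a hypothetical sketch, not Chromium's actual logic: matching on the registered name while ignoring the top-level domain lets a lookalike such as youtube.vg slip through.

```python
from urllib.parse import urlsplit

# Hypothetical, deliberately flawed check: it matches the second-level
# label only, ignoring the TLD. Chromium's real check is more involved;
# this merely illustrates the lookalike-domain risk.
GOOGLE_NAMES = {"google", "youtube", "doubleclick", "googleadservices"}

def naive_is_google_property(url):
    host = urlsplit(url).hostname or ""
    labels = host.split(".")
    return len(labels) >= 2 and labels[-2] in GOOGLE_NAMES

print(naive_is_google_property("https://www.youtube.com/"))  # True
print(naive_is_google_property("https://youtube.vg/"))       # True (lookalike slips through)
print(naive_is_google_property("https://example.com/"))      # False
```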


Article source: https://go.theregister.co.uk/feed/www.theregister.co.uk/2020/02/05/google_chrome_id_numbers/

Google Takeout a bit too true to its name after potentially 1000s of private videos shared with complete strangers

A bug in Google’s Photos software caused potentially 100,000 or more netizens to have their personal videos exposed to complete strangers last Thanksgiving.

The Chocolate Factory this week began notifying punters that a bug in its data-archiving tool Takeout was to blame for some accounts having their private videos shared with total strangers.

“Unfortunately, during this time some videos in Google Photos were incorrectly exported to unrelated users’ archives,” Google told folks in an email. “One or more videos in your Google Photos account was affected by this issue.”

The Mountain View ads slinger claimed the issue only impacted a small portion of users, about 0.01 per cent. But when you operate at the scale of Google, with more than one billion Photos users, that is still in the range of a hundred thousand or more people who had their private videos sent out.


Takeout is a download service the Chocolate Factory offers to users who want to take their business elsewhere. When exporting their data, it appears that on occasion some customers were presented with video footage from other users.

“We are notifying people about a bug that may have affected users who used Google Takeout to export their Google Photos content between November 21 and November 25,” Google said in a statement to El Reg.

“These users may have received either an incomplete archive, or videos — not photos — that were not theirs. We fixed the underlying issue and have conducted an in-depth analysis to help prevent this from ever happening again. We are very sorry this happened.”

Google could not say what measures it would take for those whose images were inadvertently shared by the glitch. ®


Article source: https://go.theregister.co.uk/feed/www.theregister.co.uk/2020/02/05/google_takeout_leak/

Ransomware Attacks: Why It Should Be Illegal to Pay the Ransom

For cities, states and towns, paying up is short-sighted and only makes the problem worse.

When it comes to ransomware attacks on municipalities, paying hackers isn’t the right solution. First, there’s no guarantee hackers will return sensitive data. Second, there’s no guarantee cybercriminals won’t leverage and monetize the data anyway, returned or not. To effectively fight back, we need to make ransomware payments illegal, and develop a strong industry of cyber professionals, a digital army of sorts, to proactively increase security awareness and data protection.

Ransomware attacks on municipal governments, from large cities to small towns, have been crippling their IT operations nationwide, disrupting civilian lives and costing millions of dollars. Cybercriminals use malicious software, delivered as an email attachment or link, to infect the network and lock email, data and other critical files until a ransom is paid. These evolving and sophisticated attacks are damaging and costly. They shut down day-to-day operations, cause chaos, and result in financial losses from downtime, ransom payments, recovery costs, and other unbudgeted and unanticipated expenses.

While ransomware has been around for about 20 years, its popularity has been growing rapidly as of late, especially when it comes to attacks on governments. As of August 2019, more than 70 state and local governments had been hit with ransomware that year alone. Local, county and state governments have all been targets, including schools, libraries, courts, and other municipal entities.

In 2019, some smaller government entities paid ransoms, including two town governments and one county government. In Florida, Lake City paid roughly $500,000 (42 Bitcoin) and Riviera Beach paid about $600,000 (65 Bitcoin) after trying and failing to recover their data. In Indiana, La Porte County paid $130,000 to recover its data.

So far, none of the major cities attacked in 2019 have paid a ransom, including Baltimore, which spent $18 million to recover from an attack. Unfortunately, Baltimore has been the victim of two ransomware attacks. In response to these attacks, Baltimore did something different from other cities, including Atlanta and Albany, NY, which have also fallen prey to advanced attacks recently. According to an October article in the Baltimore Sun, the city bought $20 million in cyber liability insurance to cover any additional disruptions to city networks over the next year. The first plan, for $10 million in liability coverage from Chubb Insurance, will cost $500,103 in premiums. The second, for $10 million in excess coverage, will be provided by AXA XL Insurance for $335,000.

Ransom payments fuel the efforts of cybercriminals. Hackers use that money to become more capable, commit more crimes, and expand their operations, feeding the dark-web economy.

Organizations that pay the ransom are also at a higher risk for additional attacks. It’s a winning situation for the hacker when the ransom is paid, so they are likely to target the same organization and individuals over and over again to get additional payments. Hackers purposely target the valuable personal records held by the government and other organizations, such as legal records, financial data, and construction applications, as well as assets critical to the day-to-day functions, such as database files, audit logs, and more. As long as the opportunity for payout remains, they will continue to target these organizations.

No organization, whether it’s a municipal government or a private company, should lose sight of the fact that insurance isn’t a replacement for trying to prevent attacks in the first place. Insurance is meaningless when it comes to solving the problem; it just helps pay the bill. It’s also likely to increase the amount of ransom, especially in cases where the amount of cyber liability insurance coverage has been made public.

After a ransomware payment, and the potential reclamation of your data, hackers still have the information and will try to leverage and monetize it. That’s why organizations handling the personal information of consumers — such as credit card information, Social Security numbers, and addresses — shouldn’t be allowed to pay ransoms. It should be illegal to fund the bad actors, since paying up is ultimately the sale of personal and sensitive information, albeit an unwilling exchange.

Government leaders and executives should be held accountable for the safety of the data. There’s a lack of interest and competence when it comes to defending data, yet our private information and our digital identities must be protected.

Defending Against Ransomware Attacks
Government organizations at all levels need preventative and defensive strategies in place, along with disaster and recovery capabilities. The rapidly evolving email threat environment requires advanced inbound and outbound security techniques that go beyond the traditional gateway. Government security professionals must work on closing the technical and human gaps, to maximize security and minimize the risk of falling victim to sophisticated ransomware attacks.

There are a number of solutions to help defend against ransomware attacks (Editor’s note: The author’s company is one of a number of companies that offer some of these services):

  • Spam Filters/Phishing-Detection Systems
    Spam filters, phishing-detection systems, and related security software can help block potentially threatening messages and attachments.
  • Advanced Firewall
    If a user opens a malicious attachment or clicks a link to a drive-by download, an advanced network firewall provides a chance to stop the attack by flagging the executable as it tries to pass through.
  • Malware Detection
    For emails with malicious attachments, static and dynamic analysis can detect indicators that the document is trying to download and run an executable file.
  • User-Awareness Training
    Make phishing simulation part of security awareness training.
  • Backup
    If an attack happens, cloud backup can get your systems restored quickly.
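As a toy illustration of the static-analysis bullet above (a sketch under assumed file formats, not any vendor's actual detection logic), a scanner can flag attachments whose leading magic bytes say "executable" regardless of what the filename claims:

```python
# Hypothetical static check: compare an attachment's magic bytes against
# its claimed extension. A mismatch (e.g. invoice.pdf that starts with the
# MZ executable header) is a classic dropper red flag.
MAGIC = {
    b"MZ": "exe",            # Windows PE executable
    b"%PDF": "pdf",
    b"PK\x03\x04": "zip",    # also modern Office documents
}

def sniff(data: bytes) -> str:
    """Identify a file type from its leading magic bytes."""
    for magic, kind in MAGIC.items():
        if data.startswith(magic):
            return kind
    return "unknown"

def suspicious(filename: str, data: bytes) -> bool:
    """Flag files whose content is executable but whose name claims otherwise."""
    claimed = filename.rsplit(".", 1)[-1].lower()
    return sniff(data) == "exe" and claimed != "exe"

print(suspicious("invoice.pdf", b"MZ\x90\x00"))   # True
print(suspicious("report.pdf", b"%PDF-1.7 ..."))  # False
```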

Instead of paying ransoms, we need to build awareness and empower a workforce to help us digitally defend ourselves. This is an opportunity for America to lead the way in cyber protection and to build a strong industry of cybersecurity leaders by creating a variety of new jobs and opportunities to help us protect the data and build a stronger infrastructure.

Cybercriminals are going to keep launching attacks. More talent, skills, and training are needed to protect our governments, businesses, and individual citizens. It’s time to think about cybersecurity in a new way.

Related Content:

Check out The Edge, Dark Reading’s new section for features, threat data, and in-depth perspectives. Today’s top story: “AppSec Concerns Drove 61% of Businesses to Change Applications.”

Fleming Shi serves as Chief Technology Officer at Barracuda Networks. Fleming joined Barracuda in 2004 as the founding engineer for the company’s web security product offerings, helping to create the first version of Barracuda’s message archiving product and paving the way … View Full Bio

Article source: https://www.darkreading.com/risk/ransomware-attacks-why-it-should-be-illegal-to-pay-the-ransom/a/d-id/1336905?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Microsoft DART Finds Web Shell Threat on the Rise

Various APT groups are successfully using Web shell attacks on a more frequent basis.

An investigation into the breach of a customer’s Web server by Microsoft’s Detection and Response Team (DART) found a Web shell attack that had succeeded in moving through most of the MITRE ATT&CK matrix before being remediated.

The Web shell was part of an attack that placed files in numerous directories on the Web server, gaining persistence and beginning to spread laterally in the infrastructure before it was discovered, DART notes. DART also says it is seeing Web shells used more frequently by APT groups, including Zinc, Krypton, and Gallium. And the threat is growing: “Every month, Microsoft Defender Advanced Threat Protection (ATP) detects an average of 77,000 web shell and related artifacts on an average of 46,000 distinct machines,” DART says.
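Web shells are often just small script files dropped into a served directory. A crude indicator scan (an illustrative sketch only, not DART's or Defender ATP's method; the patterns below are assumptions drawn from commonly reported shell traits) looks for command-execution and obfuscation strings in server-side files:

```python
import re

# Hypothetical indicator patterns of the kind seen in PHP/ASPX web shells;
# real detection combines file telemetry, behavior, and reputation signals.
INDICATORS = [
    rb"eval\s*\(",            # evaluating attacker-supplied code
    rb"base64_decode\s*\(",   # obfuscated payloads
    rb"ProcessStartInfo",     # spawning processes from an ASPX page
    rb"cmd\.exe\s+/c",        # direct shell invocation
]

def looks_like_web_shell(content: bytes) -> bool:
    """Return True if any indicator pattern appears in the file content."""
    return any(re.search(pattern, content) for pattern in INDICATORS)

sample = b'<%@ Page Language="C#" %><% var p = new ProcessStartInfo("cmd.exe"); %>'
print(looks_like_web_shell(sample))             # True
print(looks_like_web_shell(b"<h1>hello</h1>"))  # False
```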

Read more here.


Dark Reading’s Quick Hits delivers a brief synopsis and summary of the significance of breaking news events. For more information from the original source of the news item, please follow the link provided in this article. View Full Bio

Article source: https://www.darkreading.com/attacks-breaches/microsoft-dart-finds-web-shell-threat-on-the-rise/d/d-id/1336966?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

SharePoint Bug Proves Popular Weapon for Nation-State Attacks

Thousands of servers could be exposed to SharePoint vulnerability CVE-2019-0604, recently used in cyberattacks against Middle East government targets.

Researchers have detected multiple instances of cyberattackers using SharePoint vulnerability CVE-2019-0604 to target government organizations in the Middle East. These mark the latest cases of adversaries exploiting the flaw, which was recently used to breach the United Nations.

CVE-2019-0604 exists when SharePoint fails to check the source markup of an application package. Attackers could exploit this by uploading a specially crafted SharePoint application package to an affected version of the software. If successful, they could run arbitrary code in the context of both the SharePoint application pool and the SharePoint server farm account.

Microsoft released a patch for the vulnerability in February 2019 and later updated its fix in April. Shortly after, reports surfaced indicating the remote code execution flaw was under active attack. A series of incidents used the China Chopper web shell to gain entry into a target; evidence shows attackers used the web shell to gain network access at several organizations.

New findings from Palo Alto Networks’ Unit 42 suggest the vulnerability is still popular among attackers. In September 2019, researchers detected unknown threat actors exploiting the flaw to install several web shells on the website of a Middle East government organization. One of these was AntSword, a web shell freely available on GitHub that resembles China Chopper.

Attackers used these web shells to move laterally across the network to access other systems, explains cyber threat intelligence analyst Robert Falcone in a blog post on the findings. They employed a custom Mimikatz variant to dump credentials from memory and Impacket’s atexec tool to use dumped credentials to run commands on other systems throughout the network.

Later in September, Unit 42 saw this same Mimikatz variant uploaded to a web shell hosted at another government organization in a second Middle East country. This variant is unique, Falcone writes, as it has an allegedly custom loader application written in .NET. Because of this, researchers believe the same group is behind the breaches at both government organizations.

This isn’t the first time Unit 42 has seen CVE-2019-0604 used against government targets in the Middle East. In April 2019, researchers saw the Emissary Panda threat group exploiting this flaw to install web shells on SharePoint servers at government organizations in two Middle Eastern countries, both different from the nations targeted in the January attacks. There are no strong ties linking the two aside from a common vulnerability, similar tool set, and government victims.

“The exploitation of this vulnerability is not unique to Emissary Panda, as multiple threat groups are using this vulnerability to exploit SharePoint servers to gain initial access to targeted networks,” Falcone writes. There is a possibility of overlap in the use of AntSword, as Emissary Panda used China Chopper and the two are “incredibly similar,” he explains, but researchers don’t currently believe the attackers behind the April 2019 attacks leveraged AntSword.

CVE-2019-0604 appeared in a recent attack against the United Nations during which intruders compromised servers at UN offices in Geneva and Vienna. Attackers accessed Active Directories, likely compromising human resources and network data. It’s unclear exactly which files were stolen in the breach. One UN IT official estimates some 400GB of files were downloaded.

In early January 2020, Unit 42 researchers used Shodan to search for Internet-accessible servers running versions of SharePoint exposed to CVE-2019-0604. Their findings showed 28,881 servers advertised a vulnerable version of the software. They did not check each server to verify its exposure, so it’s possible many public-facing servers are not exposed or have been patched.

“Regardless, the sheer number of servers and publicly available exploit code suggests that CVE-2019-0604 is still a major attack vector,” Falcone writes.


Kelly Sheridan is the Staff Editor at Dark Reading, where she focuses on cybersecurity news and analysis. She is a business technology journalist who previously reported for InformationWeek, where she covered Microsoft, and Insurance Technology, where she covered financial … View Full Bio

Article source: https://www.darkreading.com/cloud/sharepoint-bug-proves-popular-weapon-for-nation-state-attacks/d/d-id/1336967?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

8 of the 10 Most Exploited Bugs Last Year Involved Microsoft Products

Six of them were the same as from the previous year, according to new Recorded Future analysis.

For the third year in a row, cybercriminals exploited vulnerabilities in Microsoft products far more than security flaws in any other technology, new data for 2019 shows.

Eight out of the 10 most exploited vulnerabilities in 2019 in fact impacted Microsoft products. The other two—including the most exploited flaw—involved Adobe Flash Player, the previous top attacker favorite, according to analysis by Recorded Future.

As it has done for the past several years, Recorded Future analyzed data gathered from vulnerability databases and other sources to try to identify the vulnerabilities that were most used in phishing attacks, exploit kits, and remote access Trojans.

The threat intelligence firm considered data on some 12,000 vulnerabilities that were reported and rated through the Common Vulnerabilities and Exposure (CVE) system last year. Vulnerabilities related to nation-state exploits were specifically excluded from the list because such flaws are not typically offered for sale or even mentioned much on underground forums, according to Recorded Future.

The 2019 analysis showed a continued—and unsurprising—preference among cybercriminals for flaws impacting Microsoft software.

The most exploited vulnerability in 2019 was CVE-2018-15982, a so-called use-after-free issue impacting Adobe Flash Player 31.0.0.153 and earlier, and 31.0.0.108 and earlier. Exploits for the remote code execution flaw were distributed widely through at least ten exploit kits including RIG, Grandsoft, UnderMiner, and two newcomers, Capesand and Spelevo. But this vulnerability, and another use-after-free issue impacting multiple Adobe Flash Player versions (CVE-2018-4878), were the only ones in Recorded Future’s top 10 list unrelated to Microsoft.

Four of the remaining eight vulnerabilities in Recorded Future’s top 10 most exploited list impacted Internet Explorer. One of them, CVE-2018-8174, a remote code execution flaw in the Windows VBScript engine, was the second-most abused flaw in 2019 and the most exploited issue in 2018. Exploits for the flaw were distributed through multiple exploit kits including RIG, Fallout, Spelevo, and Capesand.

Troublingly, as many as six of the vulnerabilities in this year’s list were present in the 2018 top 10 as well. One of them, a critical remote code execution flaw in Microsoft Office/WordPad (CVE-2017-0199), has been on the list for three years. In fact, only one security vulnerability in Recorded Future’s 2019 top 10 list was disclosed the same year: CVE-2019-0752, a scripting engine memory corruption vulnerability in Internet Explorer 10 and 11.

“The number of repeated vulnerabilities is significant because it reveals the long-term viability of certain vulnerabilities,” says Kathleen Kuczma, sales engineer at Recorded Future. Vulnerabilities that are easy to exploit or impact a common technology are often incorporated into exploit kits and sold on criminal underground forums, she notes.

CVE-2017-0199, for instance, continues to be heavily exploited because it impacted multiple Microsoft products, specifically Microsoft Office 2007-2016, Windows Server 2008, and Windows 7 and 8. “The number of products impacted coupled with its inclusion in multiple exploit kits makes it a viable vulnerability to continue to exploit,” Kuczma says.

Another reason criminals continue exploiting certain vulnerabilities is simply that they work. Organizations can take a long time to address known vulnerabilities, even when the flaws are being actively exploited or distributed through exploit kits. Common reasons for delays in patching include concern over downtime and operational disruption, and worry that patches will not work or will break applications. Other reasons include a lack of visibility and an inability to identify potentially vulnerable systems on a network.

Patching Challenges

“Many in security, primarily those that don’t work on blue teams for large organizations, like to look through rose-tinted glasses,” says Brian Martin, vice president of vulnerability intelligence at Risk Based Security. “The unfortunate reality is that patching all of the systems in a large organization is brutal.”

It’s not uncommon at all for penetration testers to discover systems on a target network that the hiring organization was not even aware of, he says.

Recorded Future’s analysis showed a continuing decline in the use and availability of new exploit kits. At one time, exploit kits were extremely popular because they gave even cybercriminals with relatively little skill the ability to execute sophisticated attacks. In 2016, Recorded Future counted at least 62 new exploit kits in underground markets. In 2019, there were just four new entrants.

The decline, which numerous security vendors and researchers have reported over the last two years or more, is primarily the result of multiple successful law enforcement actions against the groups selling exploit kits.

The 2016 arrests of dozens of individuals in Russia behind the Angler exploit kit operation are just one example, Kuczma says. Another factor is the relative scarcity of zero-day flaws, which exploit kits primarily relied on to be successful. “With less zero-days, companies are better able to shore up their defenses against potential exploit kit usage,” she notes.

Harrison Van Riper, strategy and research analyst at Digital Shadows, says another factor is the planned end-of-life of Adobe Flash this year. Adobe Flash used to be an extremely common attack vector and was therefore popular among exploit kit makers. But with the technology scheduled for termination this year and modern browsers no longer running Flash automatically, interest in exploit kits has dwindled.

Lists like those from Recorded Future can help organizations identify the biggest immediate threats so remedial action can be prioritized. According to Recorded Future, less than 1% of all disclosed vulnerabilities are immediately weaponized. So by having information on the ones that are being actively exploited, organizations can gain a better understanding of the specific issues impacting their technology stack.

“Vulnerabilities that are being actively exploited should be considered priorities for patching,” Van Riper says. “Keeping up to date with newly disclosed vulnerabilities and exploits can also help with prioritizing patch processes.”
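That prioritization rule is simple enough to express directly: sort known-exploited flaws ahead of everything else, then by severity. In this sketch the first and second CVE IDs come from the article, the third is a placeholder, and the scores and exploited flags are illustrative rather than authoritative ratings.

```python
# Toy patch queue: actively exploited CVEs first, then by severity score.
# Scores and "exploited" flags are illustrative placeholders.
vulns = [
    {"cve": "CVE-2019-0752",  "cvss": 7.5, "exploited": True},
    {"cve": "CVE-2020-XXXX",  "cvss": 8.8, "exploited": False},
    {"cve": "CVE-2018-15982", "cvss": 9.8, "exploited": True},
]

# Sort key: exploited entries first (False sorts before True, so negate),
# then descending CVSS within each group.
queue = sorted(vulns, key=lambda v: (not v["exploited"], -v["cvss"]))
for v in queue:
    print(v["cve"])
```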

Organizations need to realize that known vulnerabilities are often actively exploited before a CVE number is assigned to them, says Martin from Risk Based Security.

According to the company, the CVE and National Vulnerability Database (NVD) system often does not include many security vulnerabilities that researchers discover and disclose in various ways. In a report last year, Risk Based Security warned that organizations relying solely on the CVE/NVD system are likely missing information on nearly one-third of all disclosed vulnerabilities. Risk Based Security has said its researchers found 5,970 more vulnerabilities last year than were reported in the NVD. Of those, over 18% had a severity rating between 9 and 10.

“While such a list is interesting and helpful, a more interesting nuance for the list would be to note the vulnerabilities but show a ‘time to CVE assignment’ metric,” he says. This can help determine how long a security bug took to go from first recorded exploitation to CVE assignment and public disclosure.

For organizations, the key takeaway is to pay attention to patching. “Vulnerability management has become a major priority recently, given the proliferation of attacks that rely on exploits that have existing patches,” says Rui Lopes, engineering and technical support director at Panda Security. “A rock-solid process for assessing and deploying patches should be the bedrock of every organization’s vulnerability management plan.”

Related Content:

Check out The Edge, Dark Reading’s new section for features, threat data, and in-depth perspectives. Today’s top story: “C-Level Studying for the CISSP.”

Jai Vijayan is a seasoned technology reporter with over 20 years of experience in IT trade journalism. He was most recently a Senior Editor at Computerworld, where he covered information security and data privacy issues for the publication. Over the course of his 20-year … View Full Bio

Article source: https://www.darkreading.com/vulnerabilities---threats/8-of-the-10-most-exploited-bugs-last-year-involved-microsoft-products/d/d-id/1336968?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Companies Pursue Zero Trust, but Implementers Are Hesitant

Almost three-quarters of enterprises plan to have a zero-trust access model by the end of the year, but nearly half of cybersecurity professionals lack the knowledge to implement the right technologies, experts say.

Worried about protecting data, the likelihood of breaches, and the rise of insecure endpoint and Internet of Things (IoT) devices, companies are looking to technologies and security models that focus on continuous authentication, experts say.

On February 4, survey firm Cybersecurity Insiders published its “Zero Trust Progress Report,” finding that two-thirds of surveyed cybersecurity professionals would like to continuously authenticate users and devices and force them to earn trust through verification, two foundational tenets of the zero-trust model of security. Yet while the average cybersecurity professional is confident he or she can apply the zero-trust model in their environment, a third of respondents had little confidence, and 6% were not confident at all, the report found.

Other studies have found a similar conclusion: The concept of a zero-trust architecture, now a decade old, appears ready to go mainstream, but cybersecurity professionals remain uncomfortable with its implementation, says Jeff Pollard, vice president and principal analyst with Forrester Research, the analyst firm that coined the model in 2010.

“Zero trust is one of those initiatives that is being driven from the top-down perspective,” he says. “Previous models, security architectures — were very practitioner-driven. They were very organic and grew over time. … But because zero trust is a different model and a different approach, it is going to take time for all the practitioners out there to become ultimately familiar with what this looks like from an operations standpoint.”

The zero-trust concept evolved as a reaction to the disappearance of the network perimeter, as personal smartphones and other devices became widely used by employees at the office and as more workers did their jobs remotely. While old models of network security assigned trust based on location — anyone in the office was often trusted by default — zero-trust models focus on users and context. 
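The shift from location-based trust to user-and-context-based trust can be sketched as a toy access-decision function. All fields and rules below are invented for illustration; real zero-trust policy engines weigh far richer signals:

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user_authenticated: bool
    mfa_passed: bool
    device_compliant: bool     # e.g. patched, encrypted, managed
    resource_sensitivity: str  # "low" or "high"

def allow(req: AccessRequest) -> bool:
    """Grant access from user, device, and context -- never from
    network location. An on-premises request with no verification is
    denied exactly like a remote one."""
    if not (req.user_authenticated and req.device_compliant):
        return False
    if req.resource_sensitivity == "high" and not req.mfa_passed:
        return False
    return True

print(allow(AccessRequest(True, False, True, "high")))  # False: no MFA
print(allow(AccessRequest(True, True, True, "high")))   # True
```

Note there is no "is the caller on the corporate LAN" branch anywhere: that absence is the model's defining feature.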

Those components also create the biggest challenges for companies, according to the survey, which was sponsored by network security firm Pulse Secure. Most companies (62%) have to worry about over-privileged employees accessing applications as well as whether partners (55%) are only accessing the resources assigned to them. About half of respondents (49%) are worried about vulnerable mobile and rogue devices in their networks.

“Digital transformation is ushering in an increase in malware attacks, IoT exposures, and data breaches, and this is because it’s easier to phish users on mobile devices and take advantage of poorly maintained Internet-connected devices,” Scott Gordon, a spokesman for Pulse Secure, said in a statement. “As a result, orchestrating endpoint visibility, authentication, and security enforcement controls are paramount to achieve a zero-trust posture.”

The result is that companies have to move their entire infrastructure to the new model to realize the full benefits of a zero-trust approach — one of the reasons that the process has taken so long, says Forrester’s Pollard.

“They cannot take what they have done in the past, and forklift it over to the new architecture — taking an existing infrastructure and porting it over,” he says. “There is just so much technical debt in the old environment. Instead, we recommend taking a more thoughtful approach.”

Security practitioners should first focus on applying the zero-trust approach to cloud services, which are often new projects that do not carry much security debt. With the move, companies can also find new ways of accomplishing zero trust, such as security-as-a-service (SaaS) models.

The hesitation on the part of companies surveyed by Cybersecurity Insiders is understandable, says Holger Schulze, founder and CEO of the firm.

“Some organizations are hesitant to implement zero trust as SaaS because they might have legacy applications that will either delay, or prevent, cloud deployment,” he said in a statement. “Others might have greater data protection obligations, where they are averse to having controls and other sensitive information leaving their premises, or they have a material investment in their data center infrastructure that meets their needs.”

Done right, zero trust should not be any more expensive than the perimeter-focused security that most companies use today, says John Kindervag, field chief technology officer for network-security firm Palo Alto Networks and the person credited with formalizing the zero-trust model.

“Zero trust is not more costly than what is being done today — in fact, we typically see significant savings in capital expenditures, because often multiple technologies are collapsed into a single one or legacy technology is not needed in a zero-trust environment,” he says. “We also see significant savings in operational expenditures, because smaller teams can effectively operate zero-trust environments.”

Finally, companies need to focus on educating, not just the practitioners, but the users as well, says Forrester’s Pollard. New tools and systems are necessary, but the user is essential, he says.

“Make sure that you understand that the user is at the epicenter of the zero-trust model,” he says.


Veteran technology journalist of more than 20 years. Former research engineer. Written for more than two dozen publications, including CNET News.com, Dark Reading, MIT’s Technology Review, Popular Science, and Wired News. Five awards for journalism, including Best Deadline … View Full Bio

Article source: https://www.darkreading.com/operations/identity-and-access-management/companies-pursue-zero-trust-but-implementers-are-hesitant/d/d-id/1336969?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

School’s out as ransomware attack downs IT systems at Scotland’s Dundee and Angus College

A further education college in east Scotland has been struck by what its principal described as a cyber “bomb” in an apparent ransomware attack so bad that students have been told to stay away and reset passwords en masse.

Dundee and Angus College told students not to turn up after the ransomware seemingly downed the entire institution’s IT systems.

The outage began late last week and has been ongoing ever since, though the latest internal update to students and staff says services are finally being restored, we have been told by sources.

An update from the college posted on its website said:

To enable us to continue this work, no classes will run on Tuesday 4 February 2020. This includes classes for college students, school pupils, evening classes and also means student interviews will not take place.

However, all students are required to reset their passwords before they can access College systems such as MyLearning.

Dundee and Angus College has about 5,000 registered students. Its campuses are in Dundee itself and a few miles further along Scotland’s east coast.

A student affected by the attack told The Register: “We cannot access any college systems… the intranet that contains learning resources for all course materials has been offline since Thursday.”

Our source added: “I got in on Friday but couldn’t get any material out [from college servers]. Loads of my class are worried in case the Graded Unit has been lost, with no way to directly contact college or IT bods… loads of classmates are in panic mode in case work has been compromised or vanished.”

The Graded Unit, we are told, can directly affect students’ chances of getting into university.

Principal Grant Ritchie described to local newspaper The Courier what sounds remarkably like a drive-by ransomware attack that began around 3am on 30 January.

“It stopped our system and now we have to get it back,” he said, adding: “It’s a mischief thing, it wasn’t necessarily targeted at us. It’s one of these bombs that are just sitting waiting to go off.”

The infection appears to be worse than college admins initially thought, with a promise of being back up and running by Monday having passed without IT systems being fully restored.

The impact of the attack is “mass panic worse than the coronavirus”, added our source, “because nobody knows anything”.

We have asked Dundee and Angus College some questions and will update this article if we hear back. ®

Sponsored:
Detecting cyber attacks as a small to medium business

Article source: https://go.theregister.co.uk/feed/www.theregister.co.uk/2020/02/04/dundee_angus_college_ransomware/

Oh buoy. Rich yacht bods’ job agency leaves 17,000 sailors’ details exposed in AWS bucket

A private yacht crew recruitment agency has left an AWS bucket containing the CVs, passports and even some drug test results for up to 17,000 people exposed to world+dog, according to reports.

Crew Concierge – a Bath-based jobs firm that targets “high net worth individuals”, yacht captains and management companies searching for seafarers to crew private yachts – left an AWS S3 bucket open to anyone and everyone for around 11 months starting in February 2019.

British news site Verdict reported that 17,379 seafarers’ CVs were exposed, along with thousands of ENG1 medical certificates and passport scans.

A total of 90,000 files were exposed, it was said, including sample menus from chefs hoping to fill a billet aboard some oligarch’s floating gin palace.

In a statement to Verdict, Crew Concierge director Sara Duncan blamed “the team of developers we had hired” for the bucket being left open, saying she had trusted the devs to “do a competent job” of securing “personal and sensitive personal information relating to our registered crew”.

The breach has been reported to the Information Commissioner’s Office, as required by the Data Protection Act 2018.

Duncan continued, saying: “It appears likely that the individual or individuals responsible have developed advanced tools designed specifically to identify AWS customers and whether or not they have [a] misconfigured instance that may leave it open to malicious attack.”

Such so-called “advanced tools” include the search engine Gray Hat Warfare, which does for AWS buckets what Shodan does for IoT devices carelessly and inappropriately left accessible by the public.
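Finding an open bucket in fact requires no advanced tooling at all: a single unauthenticated HTTP request is enough, which is why search engines like Gray Hat Warfare can index them. A minimal sketch using only the Python standard library (the function name and target are illustrative; point it only at buckets you own):

```python
import urllib.request
import urllib.error

def bucket_is_publicly_listable(bucket_name: str) -> bool:
    """Return True if an anonymous GET of the bucket URL returns a
    listing. S3 answers an unauthenticated list request with an XML
    <ListBucketResult> document when the bucket allows public listing,
    and an error (403/404) otherwise."""
    url = f"https://{bucket_name}.s3.amazonaws.com/"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return b"<ListBucketResult" in resp.read()
    except (urllib.error.HTTPError, urllib.error.URLError):
        return False

# Audit your own buckets before someone else does:
# print(bucket_is_publicly_listable("my-company-backups"))  # hypothetical name
```

That this check fits in a dozen lines of stdlib code is the strongest rebuttal to the “advanced tools” framing in the statement above.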

A few weeks ago Britain’s Royal Yachting Association (RYA) ‘fessed up to a breach of its member database circa 2015. The two incidents are not thought to be linked, in particular because the RYA identified malicious access to the database in question whereas Crew Concierge left the door to its digital stables wide open.

The Register has asked Crew Concierge for comment. ®


Article source: https://go.theregister.co.uk/feed/www.theregister.co.uk/2020/02/04/crew_and_concierge_data_breach/

What WON’T Happen in Cybersecurity in 2020

Predictions are a dime a dozen. Here are six trends that you won’t be hearing about anytime soon.

In many cultures, a new year is seen as a symbol of hope, of new beginnings. A chance to refresh, reset, learn from mistakes, and power ahead with a whole lot of energy and amazing plans. But with more than 5 billion sensitive data records stolen in 2019, I figured it would be more accurate to predict what won’t be happening in cybersecurity in 2020. So I sat down with my Secure Code Warrior co-founder, Matias Madou, and came up with the following six trends:

SQL Injections Eradicated from All Software
Sadly, we’ve been waiting for this day for more than 20 years, and we’ll keep waiting for at least one more. It’s the vulnerability that, like a cockroach, has survived every tactic thus far to eradicate it for good. Ironically, the remedy has been known for pretty much the same length of time. Yet the prioritization of security best practice at every stage of software development (especially right from the beginning) remains too low, and certainly inadequate in light of the vast increase of code production since the discovery of SQL injection.
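The remedy alluded to above, parameterized queries, has indeed been known for as long as the bug itself. A minimal Python/sqlite3 illustration (the table, data, and payload are invented for the example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'user')")

attacker_input = "' OR '1'='1"  # the classic injection payload

# Vulnerable: string concatenation lets the payload rewrite the query,
# so the WHERE clause becomes name = '' OR '1'='1' and every row leaks.
vulnerable = conn.execute(
    "SELECT name FROM users WHERE name = '" + attacker_input + "'"
).fetchall()

# Safe: a parameterized query treats the payload as an inert string
# literal, so it matches nothing.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (attacker_input,)
).fetchall()

print(vulnerable)  # both rows
print(safe)        # empty list
```

The fix is a one-character placeholder; the persistence of the bug is a prioritization failure, not a knowledge gap.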

Developers and AppSec Becoming Best Friends
Ah, developers and the AppSec team. Will they ever get along, or are they destined to a life of rivalry like Rocky vs. Apollo? The short answer is yes, they can get along, but right now their priorities are often very different.

When they meet in a project environment, they’re on opposite pages and clash right at the final hurdle, when AppSec specialists are poring through a developer’s code. The developer has built beautiful, functional features (which is his or her top priority) that are torn apart if security vulnerabilities are discovered. The AppSec specialist has, in effect, called the baby ugly and forced the developer to go back and fix any bugs, often delaying deployment.

In our current state, this won’t be fixed until both teams work toward a common goal, which is the creation of secure software. This is not going to happen as a default process in 2020, but with the advent of the DevSecOps movement, developers are starting to recognize the need to upskill in security and work to a higher standard that includes security objectives from the beginning.

An Oversupply of Security Professionals
In 2020, 2025, 2030 … it’s almost guaranteed that we will be short-staffed globally when it comes to security expertise. According to a report from (ISC)2, there are around 2.93 million cybersecurity positions currently unfilled. This is almost certainly going to get worse before it gets better, and there is no hidden security army waiting to march to our rescue this year.

In the immediate future, our best chance to address the skills shortage is to make security an organizational priority and upskill our existing workforce, which means empowering developers with the training and tools to code securely and creating a companywide security culture. Most current AppSec teams are probably fighting against well-known, old security bugs (see the first point above). If we ensure they don’t have to spend precious time and effort fixing these common issues, they will have more bandwidth to focus on tough security problems such as APIs and building tools that fit development pipelines.

Production of Less Code     
The world is being digitized at a staggering rate, and societal demand is not going to waver. There are approximately 111 billion lines of code written each year, and this number will only grow larger and more terrifying for already-stretched AppSec teams.

A Reduction in Stolen Data Records
More code means more vulnerabilities, and this presents more opportunities for attackers to find a way to steal data. At least 5.3 billion records were stolen worldwide in 2019, and defense against attackers is still a bit of a desperate, reactive scramble. This number may not double in 2020, but I think it will get close.

According to Statista research, there has been an upward trend in breaches and number of stolen records in the US, with a huge peak in 2017. The number of attacks trended downward in 2018, perhaps due to tougher security measures, but the number of records obtained was the highest it has ever been. Going forward, cyberattacks will become increasingly sophisticated and high-volume, and they’re not going away anytime soon.

Developers Demand Longer, More Frequent Video-Based Security Training
If there is one thing developers love, it’s watching hours upon hours of computer-based training videos. In fact, such is the demand for this captivating content that Netflix will announce a whole new subcategory dedicated to generic security training videos.

Er, nope. Not now, not in 2020, not ever. For developers, the introduction to security is often by way of workplace compliance training. Secure coding is rarely part of their tertiary education, and on-the-job training can be the very first encounter with software security. And, unsurprisingly, they often don’t like it.

For developers to take security seriously — and for training to be useful — it has to be relevant, engaging, and contextual to their jobs. One-off compliance training — or an endless stream of dull videos — is not the way to a developer’s heart, and it’s not going to reduce vulnerabilities.

If you want developers to have any chance of becoming a security-aware defensive force against common vulnerabilities, get them working with real code examples — the kind they would come across in their day-to-day tasks. Make the learning bite-sized, easy to build upon, and incentivize it with a sense of fun. For a security culture to thrive, it must be positive, engaging, and develop real skills and solutions across the organization.

Zero Deaths from a Cybersecurity-Related Incident
This is clearly no laughing matter. I’ve said many times that the world simply won’t care about cybersecurity until people start dying from a cyberattack-related incident. The problem is that this has already happened, and it went largely unnoticed.

Cyberattacks against US hospitals have been linked to a rise in heart attack deaths in 2019. Of course, the attackers did not cause lethal cardiac events in patients, but their ransomware attacks on hospital systems and equipment slowed treatment times for critical care. This study from the University of Central Florida analyzed 3,000 hospitals, 311 of which had experienced a data breach. In those that were affected by a security incident, healthcare workers took an average of 2.7 minutes longer to give suspected heart attack victims an ECG, likely due to procedural changes, newly implemented security measures, and IT support issues taking up more time than they did previously. Identifying and treating a heart attack is a race against time, and those hospitals saw an additional 36 deaths per 10,000 heart attacks per year on average.

Fewer Stock Images of “Hooded Hackers”
If you type “hacker” into an image search, you will inevitably uncover thousands of images of a hooded, faceless figure typing away at a laptop, or a similar figure in a Guy Fawkes mask. This stereotyped image of a hacker is getting really tired, and makes everyone look like a bad guy. There are plenty of security good guys and girls, and the negative connotations around the hacker image do everyone a disservice.

Do I see this changing in 2020? Probably not, but it’s nice to dream. For now, it’s important to remember that security doesn’t have to be scary.

Related Content:

Check out The Edge, Dark Reading’s new section for features, threat data, and in-depth perspectives. Today’s top story: “7 Steps to IoT Security in 2020.”

Pieter Danhieux is a globally recognised security expert, with over 12 years experience as a security consultant and 8 years as a Principal Instructor for SANS teaching offensive techniques on how to target and assess organisations, systems and individuals for security … View Full Bio

Article source: https://www.darkreading.com/risk/what-wont-happen-in-cybersecurity-in-2020/a/d-id/1336927?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple