STE WILLIAMS

DNS Firewalls Could Save Companies Billions

New analysis shows widespread DNS protection could save organizations as much as $200 billion in losses every year.

DNS protection could prevent approximately one-third of the total losses due to cybercrime – which translates into billions of dollars potentially saved.

According to “The Economic Value of DNS Security,” a new report published by the Global Cyber Alliance (GCA), DNS firewalls could annually prevent between $19 billion and $37 billion in losses in the US and between $150 billion and $200 billion in losses globally. GCA used data about cybercrime losses from the Council of Economic Advisers and the Center for Strategic and International Studies as the basis for its estimates of how much DNS protection, such as a DNS firewall, could save the economy.

“The benefit from using a DNS firewall or protective DNS so exceeds the cost that it’s something everyone should look at,” says Philip Reitinger, GCA president and CEO. In many cases, he says, the DNS protection service or DNS firewall will be available at no cost to purchase or license.

But could the benefit, however large, be offset by the difficulty of deploying or managing the protection? Not likely. “In most cases, it will be extremely easy to do. There’s no new software here,” Reitinger says. When it comes to protecting endpoints, it could be as simple as changing the address used for DNS resolution in the computer’s network settings. For some companies, adoption will be only slightly more difficult.
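To illustrate how little machinery is involved in pointing a machine at a protective resolver, the sketch below hand-builds a minimal DNS query packet and sends it to whichever resolver you choose. The resolver IP and domain shown in the comment are placeholders for illustration, not values from the report:

```python
import socket
import struct

def build_dns_query(domain: str, query_id: int = 0x1234) -> bytes:
    """Build a minimal DNS A-record query packet (per RFC 1035)."""
    # Header: ID, flags (recursion desired), 1 question, 0 answer/authority/additional
    header = struct.pack(">HHHHHH", query_id, 0x0100, 1, 0, 0, 0)
    # QNAME: each label is length-prefixed, terminated by a zero byte
    qname = b"".join(bytes([len(p)]) + p.encode() for p in domain.split("."))
    question = qname + b"\x00" + struct.pack(">HH", 1, 1)  # QTYPE=A, QCLASS=IN
    return header + question

def resolve_via(resolver_ip: str, domain: str, timeout: float = 2.0) -> bytes:
    """Send the query to a specific resolver, e.g. a protective DNS service."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(timeout)
        s.sendto(build_dns_query(domain), (resolver_ip, 53))
        return s.recvfrom(512)[0]

# Example (placeholder values):
# raw_answer = resolve_via("9.9.9.9", "example.com")
```

A protective resolver answers the same wire format as any other; the only change on the client is which IP the query goes to, which is why adoption can be as simple as editing network settings.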

The only real difficulty, Reitinger says, comes if the firewall begins generating false positives, blocking traffic to destinations that serve a legitimate business purpose. Should that happen, firewall rules will need to be manually overridden. “If you see people trying to go out to various services, you get to write the rules that allow or block the destination in spite of the firewall,” he says.

One legitimate point of concern is the data on DNS traffic that the protection provider might collect, Reitinger adds. Knowing about an organization’s traffic patterns provides a great deal of information about the organization itself, he says. In this case, asking serious questions of the provider before signing a contract or changing a resolution server address can prevent privacy concerns in the future.


Curtis Franklin Jr. is Senior Editor at Dark Reading. In this role he focuses on product and technology coverage for the publication. In addition he works on audio and video programming for Dark Reading and contributes to activities at Interop ITX, Black Hat, INsecurity, and …

Article source: https://www.darkreading.com/network-and-perimeter-security/dns-firewalls-could-save-companies-billions/d/d-id/1334965?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

No Telegram today, protestors: Chinese boxes DDoS chat app amid Hong Kong protest

Chat app Telegram has reportedly been DDoS’d, with its downtime coinciding with protests in Hong Kong against repressive new Chinese laws.

The traffic crapflood resulted in the app, which is advertised as being “privacy-focused”, going offline to users “in the Americas”, according to the firm, as well as unspecified “other countries”. Telegram claims to have around 200 million users and said the outage lasted for around an hour.

The timing of the attack, last night, came as Hong Kong residents staged large-scale protests against a Chinese extradition law being pushed through the territory’s legislature.

A century of British colonial rule left Hong Kong with laws and customs rooted in the democratic tradition, in stark contrast to the Chinese mainland. Locals are determined to maintain these in the face of authoritarian communist China, which took over the former colony in 1997 after Britain’s century-long lease ended.

In a tweet, Telegram founder Pavel Durov explicitly attributed the DDoS to an army of devices in China.

Telegram’s group chat feature allows groups of up to 200,000 members to be created, making it a useful way of sending messages to large numbers of followers. Inevitably, governments have seen it as a direct threat: Russia targeted the app last year through both legal and illegal means, while Iran, Iraq and others have even diverted internet traffic in an effort to scoop up information about its users and their messages.

Reg readers may remember that Durov is on some kind of hunger strike in protest against his own lack of business inspiration, or something. ®

* Chinese Democracy

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/06/13/telegram_ddos_hong_kong_protests/

Hacking these medical pumps is as easy as copying a booby-trapped file over the network

Two security vulnerabilities in medical workstations can be exploited by scumbags to hijack the devices and connected infusion pumps, potentially causing harm to patients, the US government revealed today.

The flaws, CVE-2019-10959, rated critical (specifically, 10 out of 10 in severity), and CVE-2019-10962, rated medium (7.5), were identified by infosec biz CyberMDX. The bugs affect certain versions of Becton Dickinson’s Alaris Gateway Workstation (AGW), which provides power and network connectivity to infusion and syringe pumps. The equipment is not sold in America, though it is used across Europe and Asia.

The US Department of Homeland Security’s Industrial Control Systems Cyber Emergency Response Team (ICS-CERT) has issued an advisory, ICSMA-19-164-01, detailing the flaws. AGW devices running the latest firmware, versions 1.3.2 and 1.6.1, are not affected; earlier iterations, however, are.

For the critical flaw, that includes: 1.1.3 Build 10, 1.1.3 MR Build 11, 1.2 Build 15, 1.3.0 Build 14, and 1.3.1 Build 13. For the medium flaw, affected versions include: 1.0.13, 1.1.3 Build 10, 1.1.3 MR Build 11, 1.1.5, and 1.1.6.

Beyond AGW hardware running older firmware, several other Alaris devices – GS, GH, CC and TIVA – running software version 2.3.6, released in 2006, are also affected.

An attacker successfully exploiting the critical flaw could remotely install malicious firmware, thereby disabling the workstation or altering its function.


To do so, the attacker would first need access to the hospital network. Given that hospitals and healthcare organizations run out-of-date operating systems and software, and are routinely ransacked by ransomware, this shouldn’t be too much of a stretch.

Next, the intruder crafts a Windows Cabinet file (CAB), an archive format used for storing data related to Microsoft Windows drivers and system files, that is booby-trapped with malicious executables.

Here’s the heart of the vulnerability: it is possible to update an AGW’s firmware over the network without any special privileges or authentication; you just have to copy across a CAB file using Windows SMB. That means the hacker can upload their malicious .CAB to a vulnerable workstation, powered by Windows CE, and the archive will be unpacked by the AGW on its file system, overwriting its executables with the intruder’s malware or spyware.

Recommended mitigations include blocking the SMB protocol, segregating the VLAN network, and taking steps to limit who has access to the hospital network.
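One way to sanity-check the first of those mitigations is to probe whether the SMB port is even reachable from a given network segment. The following is a minimal sketch, not anything from the advisory, and the commented host addresses are hypothetical:

```python
import socket

def smb_reachable(host: str, port: int = 445, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to the SMB port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Connection refused, timed out, or blocked by a firewall
        return False

# Example: after blocking SMB at the network boundary, confirm the
# workstations are no longer reachable (hypothetical addresses):
# for host in ["10.0.20.5", "10.0.20.6"]:
#     print(host, smb_reachable(host))
```

If a probe like this still succeeds from a segment that shouldn't have access, the firewall or VLAN segmentation isn't doing its job.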

In an advisory on its website, device maker Becton Dickinson said, “BD has assessed the change in scope to this vulnerability for clinical impact and concluded that although the probability of remotely exploiting the vulnerability to the Workstation and then creating a custom, executable code that impacts the delivery of a patient’s IV infusion is theoretically possible, the probability of patient harm is unlikely to occur due to the sequence of events that must occur in a specific order by a highly trained attacker.”

The other, less serious flaw could allow an attacker who knows the IP address of the device to access information through its browser interface, including monitoring data, event logs, the user guide, and configuration settings.

This browser interface issue can be mitigated through the installation of firmware versions 1.3.2 or 1.6.1. Limiting and segmenting network access are also advisable.

In an emailed statement, Elad Luz, Head of Research at CyberMDX, stressed the need for everyone involved with medical devices – device makers, hospitals, and technology companies – to commit to cybersecurity in order to ensure patient safety. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/06/13/medical_workstation_vulnerabilities/

The Rise of ‘Purple Teaming’

The next generation of penetration testing represents a more collaborative approach to the old-fashioned Red Team vs. Blue Team model.

In 1992, the film Sneakers introduced the term “Red Team” into popular culture as actors Robert Redford, Sidney Poitier, Dan Aykroyd, David Strathairn, and River Phoenix portrayed a team of security experts who hire themselves out to organizations to test their security systems by attempting to hack them.

This was a revolutionary concept at the time — the term “penetration test” didn’t even exist yet, and the idea of a friendly security team trying to break through a company’s defenses wasn’t exactly commonplace. Today, penetration testing is an important part of any cybersecurity system, and both internal and external Red Teams play a critical role in that process.

But they don’t do it alone. Organizations often employ “Blue Teams,” referring to the internal security team tasked with defending against both real and simulated attacks. If this raises your curiosity about whether and how closely Red Teams and Blue Teams collaborate in security testing, then you’ve pinpointed the fast-rising cybersecurity trend of “Purple Teaming.”

What Makes Purple Teaming Different?
For years, organizations have run penetration tests similarly: The Red Team launches an attack in isolation to exploit the network and provide feedback. The Blue Team typically knows only that an evaluation is in progress and is tasked to defend the network as if an actual attack were underway. 

The most important distinction between Purple Teaming and standard Red Teaming is that the methods of attack and defense are all predetermined. Instead of attacking the network and delivering a post-evaluation summary of findings, the Red Team identifies a control, tests ways to attack or bypass it, and coordinates with the Blue Team in ways that either serve to improve the control or defeat the bypass. Often the teams will sit side by side to collaborate and truly understand outcomes.

The result is that teams are no longer limited to identifying vulnerabilities and working from their initial assumptions. Instead, they are testing controls in real time and simulating the type of approach that intruders are likely to use in an actual attack. This shifts the testing from passive to active. Instead of working to outwit each other, the teams can apply the most aggressive attack environments and conduct more complex “what-if” scenarios through which security controls and processes can be understood more comprehensively and fixed before a compromise.

How Deception Technology Adds Value to Penetration Testing
Part of what makes Red Teaming and Purple Teaming so valuable is they provide insight into the specific tactics and approaches that attackers might use. Organizations can enhance this visibility by incorporating deception technology into the testing program. The first benefit comes from detecting attackers early by enticing them to engage with decoys or deception lures. The second comes from gathering full indicators of compromise (IOCs) and tactics, techniques, and procedures (TTPs) into lateral movement activity. This significantly enhances visibility into how and when attackers circumvent security controls, enriching the information that typically results from these exercises.

Cyber deceptions deploy traps and lures on the network without interfering with daily operations. A basic deployment can easily be completed in under a day, providing the Blue Team an additional detection mechanism that blends in with the operational environment. This creates more opportunities to detect when the Red Team bypasses a defensive control, forcing team members to be more deliberate with their actions and making simulated attack scenarios more realistic. It also offers a truer test of the resiliency of the organization’s security stack and the processes it has in place to respond to an incident.

The rise of Purple Teaming has changed the way many organizations conduct their penetration tests by providing a more collaborative approach to old-fashioned Red Team vs. Blue Team methodology. The increased deployment of deception technology in cybersecurity stacks has further augmented the capabilities of both the Red and Blue teams by allowing them to adopt a more authentic approach to the exercises.


Joseph Salazar is a veteran information security professional, with both military and civilian experience. He began his career in information technology in 1995 and transitioned into information security in 1997. He is a retired Major from the US Army …

Article source: https://www.darkreading.com/threat-intelligence/the-rise-of-purple-teaming/a/d-id/1334909?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Google Adds Two-Factor Authentication For Its Apps on iOS

Android-based two-factor authentication now works for Google applications on iPad and iPhone.

Google has now extended its Android-based two-factor authentication to Google applications on iOS devices such as iPads and iPhones.

The mechanism uses a certificate built into Android that responds to a challenge issued by a specific, enterprise-enabled website, authenticating both the user and the website during login. When someone signs in to a Google app or a Google Cloud account, a challenge screen appears on their Android phone, asking them to verify that it is really them logging into the account.

The company first introduced the Android-based 2FA mechanism in May for login to Google and Google Cloud services from ChromeOS, Windows 10, and MacOS devices. Now, it can also be used to protect login attempts from iPads and iPhones.

Google’s authentication is based on FIDO (Fast ID Online) security keys, which leverage public-key cryptography to verify both the user and login page URL, making it more difficult for an attacker who has stolen user credentials to abuse them.
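The phishing resistance comes from binding the response to the login page's origin, not just the challenge. The sketch below illustrates that idea only; real FIDO security keys use asymmetric key pairs and the WebAuthn/U2F message formats, whereas this simplified stand-in uses an HMAC over a shared secret purely to keep the example dependency-free:

```python
import hashlib
import hmac
import secrets

def make_challenge() -> bytes:
    """Server generates a fresh random challenge for each login attempt."""
    return secrets.token_bytes(32)

def sign_assertion(key: bytes, challenge: bytes, origin: str) -> bytes:
    """Authenticator response covering both the challenge and the page origin.
    (HMAC stands in for a FIDO public-key signature in this sketch.)"""
    return hmac.new(key, challenge + origin.encode(), hashlib.sha256).digest()

def verify_assertion(key: bytes, challenge: bytes, origin: str, sig: bytes) -> bool:
    """A response produced for one origin will not verify for another."""
    return hmac.compare_digest(sign_assertion(key, challenge, origin), sig)
```

Because the origin is part of the signed material, credentials phished on a look-alike site fail verification on the real one, which is the property that makes stolen passwords alone insufficient.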

Among the differences between the desktop environments and iOS is the way the authentication appears to the user. To authenticate from a desktop, the user must launch a Chrome browser window to begin the login and initiate the 2FA process. In the iOS implementation, the Google Smart Lock app provides authentication for Google and Google Cloud applications.


Curtis Franklin Jr. is Senior Editor at Dark Reading. In this role he focuses on product and technology coverage for the publication. In addition he works on audio and video programming for Dark Reading and contributes to activities at Interop ITX, Black Hat, INsecurity, and …

Article source: https://www.darkreading.com/cloud/google-adds-two-factor-authentication-for-its-apps-on-ios/d/d-id/1334958?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

7 Truths About BEC Scams

Business email compromise attacks are growing in prevalence and creativity. Here’s a look at how they work, the latest stats, and some recent horror stories.

Last summer, the US Federal Bureau of Investigation (FBI) sounded a loud alarm for organizations about the growing danger of business email compromise (BEC) scams. At that time, the FBI said BEC fraud had cost organizations worldwide $12 billion in losses since 2013.

Since then, the threat has continued to grow more dire. Security industry researchers have shown BEC scams are increasing in scope and complexity as attackers perfect their attack playbooks to target an increasing number of victims around the globe.

Here, Dark Reading takes a look at how BEC scams work, the latest statistics on BEC prevalence, and some recent BEC horror stories that should help security professionals and users prepare themselves for this growing class of fraud.

 

Ericka Chickowski specializes in coverage of information technology and business innovation. She has focused on information security for the better part of a decade and regularly writes about the security industry as a contributor to Dark Reading.

Article source: https://www.darkreading.com/threat-intelligence/7-truths-about-bec-scams/d/d-id/1334961?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

The CISO’s Drive to Consolidation

Cutting back on the number of security tools you’re using can save money and leave you safer. Here’s how to get started.

Industry reports vary, but experts estimate that the modern CISO uses somewhere between 55 and 75 discrete security products. Vendors are often guilty of overpromising and underdelivering — the reality rarely lives up to the marketing. This puts CISOs in an ironic situation: often, the tool they bought to make their lives easier ends up causing more headaches.

This is an endemic issue, but what do you do when you have too many tools that integrate poorly, require different expertise, and provide plenty of data but no overall view of the security risk level? Consolidation sounds attractive. After all, what CISO wouldn’t want to reduce clutter, cut costs, and simplify procedures — but where to start?

Begin with Data Quality
CISOs know there is no perfect solution for security. Clearly, multiple security solutions are needed to cover the necessary controls. However, CISOs should strive to maximize the value of each investment and reduce the number of tools. To cut through the noise coming from those tools (specifically, the ones that identify vulnerabilities and control failures), a great place to start is by increasing confidence that the data coming out of them is complete and accurate.

By taking measures to ensure that the data is accurate, CISOs can drive remediation more efficiently and know what to fix first to get the greatest ROI on their security investments. It also opens the door to automated analytics and reduces the need to manually work through multiple reporting processes for different tools.

Approaching Consolidation
A key reason that CISOs have too many tools is that they have continued to buy tools and rarely decommission any; this results in overlapping functionality but doesn’t always close all gaps in coverage.

We need to consolidate/reduce the number of security tools we use, and we need to establish discipline around the process of adding new security solutions. This is not as simple as going through each of the tools and deciding if it adds value or if its function is or can be provided by another tool. Instead, we need to determine which security tools are needed by using two core fundamentals: Each security tool should align with a significant risk in the security framework, and each tool implemented should reduce risk to the company, be able to measure the reduction in risk, and be capable of sustaining that risk reduction.

Aligning with a Security Framework
By developing a security framework based on National Institute of Standards and Technology (NIST) guidance or some other standard, and then selecting a set of security controls for each category of security, you can build a comprehensive view of your security landscape. From that view, we can take each significant area of security and begin to develop systems and processes that achieve those controls.

Only after developing these processes do we begin to select tools that help implement and control the processes. Each tool should fulfill a specific need in the security controls framework. For example, let’s take the area of system vulnerability management. We shouldn’t start picking our tool to scan our systems until we understand all of the controls that manage the process to patch our systems on a timely and complete basis. We should only select the appropriate tool(s) once we understand what it or they must achieve.

How to Approach Consolidation
The objective of having security systems is to lower the risk of an event that negatively affects the company (e.g., financial, reputational, or regulatory risk). We must keep this in mind when designing processes and selecting security tools. As we implement security processes and tools, we should ensure that the end solution does the following:

  • Covers the entire intended landscape across the company. For example, if we scan only 70% of the environment for system vulnerabilities, we may not adequately reduce risk to the company.
  • Provides sufficient information to act. For example, if we select a system vulnerability scanner and it provides great detail on the vulnerability and inherent risk but does not provide context to the importance to the company or context as to the owner of the system, then the tool/system is not providing sufficient information to reduce the risk sufficiently.
  • Sustains the control, meaning it should automate the control and monitoring processes. Otherwise, the risk will grow again after expending efforts and monies to remediate.

To further refine the approach to security tools, we also need to address risk. All systems and tools do not provide the same level of risk reduction. By focusing on those security domains that carry the highest risk, one can prioritize the selection and implementation of security tools.

By taking this risk-based, end-to-end, and sustainable approach to implementing security processes (and their related tools), we can begin to permanently solve areas of security that have historically persisted despite all the tools and money thrown at them.

Ultimately, with enhanced data quality and automation plus the consolidation of tools, CISOs can confidently enhance their company’s cyber-risk posture.


Nik Whitfield is the founder and CEO at Panaseer. He founded the company with the mission to make organizations cybersecurity risk-intelligent. His team created the Panaseer Platform to automate the breadth and depth of visibility required to take control of …

Article source: https://www.darkreading.com/threat-intelligence/the-cisos-drive-to-consolidation-/a/d-id/1334910?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Congress Gives ‘Hack Back’ Legislation Another Try

Officials reintroduce a bill that would let businesses monitor attacker behavior and target intruders on corporate networks.

“Hacking back,” the controversial practice by which organizations target intruders on their own networks, has reappeared in a bill poised to be the subject of arguments in Washington.

Rep. Tom Graves, R-Ga., is today reintroducing a bill that would let businesses monitor for, locate, and potentially target cyberattackers. This isn’t the first time Graves has attempted to make “hacking back” legal, CyberScoop reports. The practice had previously been found to violate the Computer Fraud and Abuse Act (CFAA), which prohibits computer access without authorization.

So why try again? Graves, who says businesses are already targeting intruders, points to a lack of rules around the practice. If the bipartisan bill is passed, he hopes businesses will share intelligence on cyberattacks with the government. The bill does not currently enforce this.

While the US Cyber Command has recently been given the go-ahead for more offensive cyber operations, there are myriad reasons security experts think “hacking back” is a bad idea. For starters, cybercriminals often take several steps to disguise their identities; as a result, it’s difficult to determine who was actually behind an attack. Federal government officials, and many security researchers, don’t think companies will be able to ascertain who targeted them.

Read more details here.

Dark Reading’s Quick Hits delivers a brief synopsis and summary of the significance of breaking news events. For more information from the original source of the news item, please follow the link provided in this article.

Article source: https://www.darkreading.com/perimeter/congress-gives-hack-back-legislation-another-try/d/d-id/1334963?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Microsoft’s battle with SandboxEscaper zero days turns into grim Groundhog Day

Last August, a security researcher using the pseudonym SandboxEscaper tweeted news of proof-of-concept code targeting an unpatched security vulnerability in Windows 7 and 10.

Later identified as CVE-2018-8440, the issue was a weakness in Windows Task Scheduler’s Advanced Local Procedure Call (ALPC) function and was fixed by Microsoft just over two weeks later in its September 2018 monthly update after it had been exploited for several days.

A few weeks later and SandboxEscaper was back with a second Windows zero day proof-of-concept (patched in December 2018 as CVE-2018-8584), followed by a third in time for Christmas 2018 (CVE-2019-0863, eventually exploited but not patched until May 2019).

Ironically, it wasn’t the unpatched flaw disclosures that got SandboxEscaper’s Twitter and GitHub accounts suspended, but a quickly deleted December 2018 death threat against US president Donald Trump, which drew the attention of the FBI.

But far from silencing SandboxEscaper, if anything this seems to have provoked even more disclosures that Microsoft has been scrambling to fix each time they are dropped.

SandboxEscaper currently takes credit for 21 vulnerability disclosures dating back to 2015, which must make it hard to keep up, not least for SandboxEscaper. As the anonymous researcher says:

I drop so much of my stuff and can’t be bothered to keep track of it all.

Moving target

Tell that to Microsoft, which in this month’s Windows updates found itself fixing three zero-day disclosures (CVE-2019-1069, CVE-2019-1053, and CVE-2019-0973) released by SandboxEscaper in May 2019 alone.

But it was CVE-2019-0841, patched in April 2019, that proved to be Microsoft’s biggest challenge – what started as “a bug” turned into a saga, as SandboxEscaper revealed successive bypasses for Microsoft patches.

First came a hole dubbed CVE-2019-0841-BYPASS, which was patched this week as CVE-2019-1064.

Then came a bypass of the patch for the bypass of the patch for the original vulnerability.

Patches for patches are rare; patches for patches for patches are rarer still, so when Microsoft fixes this latest hole (possibly in the July 2019 Patch Tuesday update), it will surely be hoping that it really has put the issue to bed.

Why is SandboxEscaper devoting so much effort to releasing vulnerabilities in a clearly irresponsible way?

The consensus is that the researcher is either embittered or troubled.

In 2018, SandboxEscaper reportedly expressed a desire to sell flaws for $60,000 in now-deleted GitHub posts, before appearing to admit to giving exploits to “people who hate the US.”

Except, of course, vulnerabilities don’t work in a neat, surgical way – for all SandboxEscaper knows, their exploits could end up being used to attack anyone, including countries unfriendly to the US.

Releasing flaws that have yet to be patched hurts everyone.

Naked Security’s analysis of June’s Windows Patch Tuesday can be found here.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/B0UY9vBzD2w/

Vim devs fix system-pwning text editor bug

Diehard text editor users everywhere breathed a sigh of relief this week as the open source community fixed a bug in one of the most venerable *nix programs: Vim.

The battle between Vim’s predecessor Vi and rival text editor Emacs dates back to 1976 when they were both released. Since then, they have been quasi-religions in a holy hacker war.

Richard Stallman, who wrote Emacs, famously announced the Church of Emacs, with himself as its saint. Heathens can opt for the Cult of Vi, although Vim (Vi Improved), released in 1991 with new features, has become the de facto program people mean when referencing Vi.

Advocates of each snipe at the other, but just so long as you want God-like control over your text editor compared to modern mass market word processors, both will serve you well and it’s all in good fun.

This week, though, a researcher found a dangerous flaw in Vim. Armin Razmjou (@rawsec) discovered a high-severity bug in the text editor that could let a remote attacker break out of the editor’s sandbox and execute arbitrary code on the host.

The attack exploits a vulnerability in a Vim feature called modelines, which lets you set variables specific to a file. As long as these statements are in the first few lines, Vim interprets them as instructions. They might tell Vim to display the file with a text width of 60 characters, for example. Or maybe you want to expand tabs to spaces to avoid another geek’s ire.
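For instance, a file might open with a modeline like the one below (a made-up example, not one from Razmjou's writeup); when the file is loaded, Vim applies the listed options:

```vim
// vim: set textwidth=60 expandtab :
```

The `vim:` marker can follow any leading text, which is how a modeline hides comfortably inside an ordinary source-code comment.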

Vim is a powerful text editor that includes lots of scripting commands. The modelines feature is careful about what it executes, running many commands in a sandbox so that a malicious text file can’t alter the system. Razmjou discovered a way to make those commands run outside the sandbox.

His attack uses the :source! command followed by a filename to bypass the sandbox. It makes Vim read the file and execute commands as though the user had typed them manually.

An attacker using this technique could get a victim’s computer to do anything simply by persuading them to open the file. Razmjou proves it with two proof-of-concept programs.

The first PoC just runs a uname -a shell command that reports basic information about the OS kernel. The second is more sophisticated, opening a reverse shell on the victim’s computer (a reverse shell connects to the attacker’s machine). This code also rewrites the file when opened and hides the modeline to cover its tracks.

The bug also affects Neovim, a refactored version of Vim with more functionality. The attacker has to do a little more work to own a system via Neovim, though, because this program doesn’t allow the execute command that runs the file contents. Instead, attackers can exploit the assert_fails() or nvim_input() commands, both of which take inputs that can carry a payload.

Luckily, Vim worshippers everywhere can rejoice, because the bug has been cast out. Razmjou told the maintainers of both projects about the bug on 22 May 2019. The Vim community patched it the next day, with Neovim’s community patching it a week after notification. The bug has the code CVE-2019-12735, and NIST gives it a vulnerability rating of 8.6 (high) under CVSS 3.0.

Some versions of Linux, including Debian, ship with a custom vimrc (the file that holds the application’s settings) that turns off modelines altogether, and those are safe, Razmjou says. Alternatively, you can use the securemodelines plugin as a replacement for modelines, or disable modelineexpr in Vim to block expressions in modeline statements.
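As a sketch, those mitigations translate into a few lines of vimrc. The option names below match what the article mentions; note that modelineexpr only exists in Vim builds recent enough to include the fix for this family of issues:

```vim
" Option 1: disable modeline processing entirely
set nomodeline

" Option 2: keep modelines but forbid expression
" evaluation inside them (newer Vim 8.1 builds)
set modeline
set nomodelineexpr
```

Either setting removes the attack surface at the cost of losing some or all per-file convenience settings.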

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/ATyCeDxRtO8/