In late December, Microsoft researchers responding to publicly posted attack code that exploited a vulnerability in the FTP service of IIS told users it wasn’t much of a threat because the worst it probably could do was crash the application.
Thanks at least in part to security mitigations added to recent operating systems, attackers targeting the heap-overrun flaw had no way to control data that got overwritten in memory, IIS Security Program Manager Nazim Lala blogged. It was another victory for Microsoft’s defense-in-depth approach to code development, which aims to make exploitation harder by adding multiple security layers.
However, it turned out that wasn’t the case. White-hat hackers Chris Valasek and Ryan Smith of security firm Accuvant Labs soon posted screenshots showing they had no trouble accessing parts of memory in the targeted machine that the protection – known as heap exploitation mitigation – should have made off limits. With that hurdle cleared, they had shown the IIS zero-day bug was much more serious than Microsoft’s initial analysis had let on.
“The point was proven that you could actually start to execute code, as opposed to them saying: ‘Don’t worry about it. It can only crash your server’,” Valasek, who is a senior research scientist for Accuvant, told The Register.
Until now, their technique for bypassing the heap protection had been a mystery outside a small circle of researchers. On Saturday, Valasek and Smith, the latter Accuvant’s chief research scientist, shared their secret at the Infiltrate security conference in Miami Beach.
Heap-exploitation mitigation made its Microsoft debut in Service Pack 2 of Windows XP, and has since been refined in later OSes. It works by detecting memory that’s been corrupted by heap overflows, and then terminating the underlying process. The technology was a significant advance for Microsoft. Practically overnight, an entire class of vulnerabilities that once allowed attackers to take full control of the targeted operating system was wiped out.
Running on the newer operating systems, the same exploits could do nothing more than crash the buggy application.
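Conceptually, the mitigation boils down to sanity-checking heap metadata before trusting it. The toy Python sketch below is not Microsoft’s implementation — the class names and XOR checksum scheme are invented purely for illustration — but it shows the general pattern: a per-heap secret is mixed into each chunk header, and a mismatch at reuse time terminates the process rather than letting corrupted metadata steer the allocator.

```python
# Toy illustration (NOT Microsoft's actual code) of heap-corruption
# detection: each chunk header carries a checksum of its metadata,
# keyed with a per-heap secret; the allocator re-verifies it before
# reusing the chunk and kills the process on mismatch.

class Chunk:
    def __init__(self, size, cookie):
        self.size = size
        # checksum mixes the per-heap secret with the metadata
        self.checksum = size ^ cookie

class ToyHeap:
    def __init__(self, cookie=0x5A):
        self.cookie = cookie      # per-heap secret, unknown to the attacker
        self.free_list = []

    def free(self, chunk):
        self.free_list.append(chunk)

    def allocate(self, size):
        for chunk in self.free_list:
            if chunk.size == size:
                # validate the header before trusting it; an overflow
                # that rewrote chunk.size fails this check
                if chunk.checksum != (chunk.size ^ self.cookie):
                    raise SystemExit("heap corruption detected: terminating")
                self.free_list.remove(chunk)
                return chunk
        return Chunk(size, self.cookie)

heap = ToyHeap()
c = heap.allocate(32)
heap.free(c)
c.size = 64                 # simulate an overflow smashing the header
try:
    heap.allocate(64)       # reuse attempt trips the check
except SystemExit as reason:
    print(reason)
```

A real Windows heap validates more fields than this, but the net effect is what the article describes: an overflow that rewrites a header crashes the process instead of handing the attacker control over what gets written where.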
Valasek and Smith were able to bypass the mitigation because Microsoft’s reworked heap design also included a new feature known as LFH, or low fragmentation heap, which aims to improve speed and performance by providing a new way to point applications to free locations of memory. And for reasons that remain unclear, the new feature didn’t make use of the heap-exploitation mitigations.
“They opened up a new path for attackers, so it was great for attackers but bad for the end user,” Smith said. “The back door is locked, so we go in the front door.”
The LFH isn’t turned on by default, and enabling it often takes considerable work on the attacker’s part. In the case of December’s IIS vulnerability, the researchers turned it on by invoking several FTP commands in a particular way. With that out of the way, they had no trouble controlling the memory locations on the targeted machine.
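The activation behavior they abused can be sketched in miniature. In the toy Python model below, the threshold constant, class names, and bookkeeping are all invented for illustration (the real values and structures are Windows heap internals): once enough same-size requests pile up, a size bucket is handed off to an LFH-style front end — the path that, as described above, lacked the corruption checks.

```python
# Toy sketch of LFH-style activation (illustrative only; the real
# threshold and data structures are Windows heap internals).

ACTIVATION_THRESHOLD = 16   # hypothetical count of same-size requests

class ToyFrontEnd:
    """Stands in for the low fragmentation heap: fast, bucketed by
    size, and (per the article) outside the mitigation's coverage."""
    def allocate(self, size):
        return ("LFH", size)

class ToyBackEnd:
    """Stands in for the default, mitigation-protected allocator."""
    def allocate(self, size):
        return ("back-end", size)

class ToyAllocator:
    def __init__(self):
        self.counts = {}            # same-size request counters
        self.lfh_buckets = set()    # buckets switched to the front end
        self.front = ToyFrontEnd()
        self.back = ToyBackEnd()

    def allocate(self, size):
        if size in self.lfh_buckets:
            return self.front.allocate(size)
        self.counts[size] = self.counts.get(size, 0) + 1
        if self.counts[size] >= ACTIVATION_THRESHOLD:
            self.lfh_buckets.add(size)   # future requests take the LFH path
        return self.back.allocate(size)

alloc = ToyAllocator()
# An attacker massaging the heap (e.g. with repeated protocol commands
# that each trigger a same-size allocation) drives a bucket over the
# threshold, moving subsequent allocations onto the unchecked path:
paths = [alloc.allocate(48)[0] for _ in range(20)]
print(paths[0], "->", paths[-1])
```

This is why, as the researchers note, the technique demands knowledge of the specific application: the attacker has to find inputs — here, the FTP commands — that produce the right allocation pattern before the bypass is even available.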
Valasek and Smith are quick to point out that bypassing the mitigations requires considerably more effort and skill on the part of the attacker. Five or 10 years ago, it was frequently possible for exploit developers to recycle huge amounts of code when writing a new script. That’s not the case now.
“Unlike other exploitation techniques of the past, you need to know more about the underlying operating system and the application that’s being run to figure out how to enable [LFH] and how to use it to your advantage,” Valasek said. “You can’t blindly go about your business.”
The talk is the latest reminder of the spy-versus-spy nature of security work, in which new protections developed by whitehats are constantly defeated by blackhats, forcing whitehats to come up with still newer protections. Researchers have similarly figured out ways to bypass other security mitigations, with techniques such as “JIT-spraying” for address space layout randomization and return oriented programming for data-execution prevention.
Still, the researchers said the mitigations are an essential part of software development – as long as engineers recognize their inherent limitations and don’t become complacent.
“As long as the mitigations are there to protect the end user and not to protect the company from having to patch, then they’re a good thing because it does make the job harder,” Smith said. “It’s a way to buy time.”