If the name’s not on the whitelist it can’t come in
The poor old corporate endpoint has taken a bit of a battering in the last few years. Malware is more widespread and complex than ever, and it is easy to get infected simply by visiting legitimate sites that have been hacked.
Now that the internet has become such a dangerous neighbourhood, are malware blacklists enough to keep the nasties out? Or are whitelists a better way to go?
Most anti-malware tools use blacklists. They scan system and application files and compare them against a list of signatures matching known malicious files.
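At its simplest, this signature comparison is a file hash looked up in a set of known-bad hashes. The sketch below illustrates the idea in Python; the hash set here is a stand-in (real products ship millions of signatures and use far richer matching than whole-file hashes).

```python
import hashlib
from pathlib import Path

# Illustrative "blacklist": SHA-256 hashes of known-malicious files.
# The single entry below is just the hash of an empty file, used as a placeholder.
BLACKLIST = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def sha256_of(path: Path) -> str:
    """Hash a file in chunks so large binaries don't exhaust memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def scan(paths):
    """Return the files whose hash matches a known-bad signature."""
    return [p for p in paths if sha256_of(p) in BLACKLIST]
```

The weakness the article goes on to describe falls straight out of this model: change one byte of the malware and the hash no longer matches.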
But there are a couple of problems with blacklists. Increasingly sophisticated malware-generation tools mean the number of variants of any given strain has grown exponentially.
Drive-by download sites will often custom-bake a file with a unique hash for each visitor, making it difficult for anti-malware tools to spot it.
Suspicious behaviour
Catching these files is not impossible, thanks to techniques such as behavioural analysis. Scanning what the file does, rather than simply what it looks like, is a useful way to spot malicious activity.
However, this can generate its own problems. Occasionally, system files can look as though they are doing malicious things when they are simply doing their job. McAfee suffered that problem in April last year – and again in October.
An alternative is whitelisting. If the malware base is growing all the time, then maybe the right question to ask is not what you shouldn’t let in but what you should.
A whitelist contains only the files that you are willing to allow to be installed on your organisation’s computers.
Elusive prey
The disadvantage of a blacklist is that by the time malware is documented and added to a list, it is already out there in the wild. Some organisations will be infected before a blacklist is updated.
With rapidly propagating malware, waiting until a blacklist is updated can be disastrous. But it is also problematic for highly targeted software exploiting zero-day vulnerabilities.
Some of the most effective attacks have been targeted at high-value machines as part of long-term, highly orchestrated “advanced persistent threat” attacks.
A whitelist, by contrast, can be made to fail safe. If a piece of software tries to install but is not on the whitelist, it is simply blocked.
That would seem to provide adequate protection against all malware, albeit at the cost of some inconvenience to the user. The question for the organisation then becomes whether the trade-off is worth it.
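The default-deny model described above can be sketched in a few lines. The approved-hash set here is hypothetical; in practice it would be populated from sources such as the NSRL or a vendor feed.

```python
import hashlib
from pathlib import Path

# Hypothetical whitelist: SHA-256 hashes of binaries the organisation has
# approved. The single entry is the hash of the bytes b"hello", as a placeholder.
APPROVED_HASHES = {
    "2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824",
}

def allow_install(path: Path) -> bool:
    """Default-deny: permit installation only if the file's hash is approved."""
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    return digest in APPROVED_HASHES
```

Note that the logic never consults a list of bad files at all: anything unknown, malicious or not, is refused, which is exactly where the user-inconvenience trade-off comes from.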
Let the right one in
The extent of the inconvenience depends on the size of the organisation’s whitelist and its internal disciplines. A relatively large whitelist has a better chance of allowing through legitimate software.
The National Institute of Standards and Technology maintains the National Software Reference Library (NSRL), a collection of hashes for known, legitimate software releases.
The SANS Institute’s Internet Storm Center has created a list of hashes using the NSRL as a base, adding a searchable web front end to make it usable as a whitelist.
Lumension also provides a whitelist of application data, collected from sources that include vendors and its customers’ own application scanning processes.
One potential drawback of a whitelist is that a legitimate file could be compromised and made to perform malicious acts. If there is an inherent vulnerability in the file, it may not even be necessary to change the hash.
The other challenge is keeping a whitelist up to date. How often do you need to update machines, and how often are you ensuring that the whitelist is also updated?
“Whitelists are good if you’re not going to be updating a machine,” says Gunter Ollmann, vice-president of research at security firm Damballa, adding that a Windows 7 installation comprises hundreds of thousands of files.
“If you’re not patching or putting out automatic updates, then whitelisting can serve as an ideal baseline,” he says.
In reality, of course, we all update our machines, so keeping the whitelist current is vital if you are to avoid blocking legitimate files.
If you can integrate patch management with your whitelist, you can automate those updates to lessen the burden of maintaining a whitelist over time. Trust engines allow you to make decisions based on publisher, file location, updaters and so on. They also facilitate whitelist management.
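A trust engine of the kind described above replaces per-file hash entries with broader rules. The sketch below is illustrative only: the attribute names, trusted publishers, updater process and directory are all hypothetical, not any vendor's actual API.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class FileInfo:
    path: str
    publisher: Optional[str]   # code-signing publisher, if the file is signed
    written_by: Optional[str]  # name of the process that created the file

# Hypothetical trust rules, stated once instead of per file.
TRUSTED_PUBLISHERS = {"Example Corp"}      # trust anything they sign
TRUSTED_UPDATERS = {"patch_agent.exe"}     # trust files laid down by the patch tool
TRUSTED_DIRS = ("/opt/approved/",)         # trust a locked-down install location

def is_trusted(f: FileInfo) -> bool:
    """Allow a file if any trust rule matches; otherwise fall back to deny."""
    if f.publisher in TRUSTED_PUBLISHERS:
        return True
    if f.written_by in TRUSTED_UPDATERS:
        return True
    return f.path.startswith(TRUSTED_DIRS)
```

The updater rule is what lets patch management and whitelisting mesh: files written by the patching process are admitted automatically, so routine updates never leave the whitelist stale.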
Ebony and ivory
Using both whitelisting and blacklisting together may create a more effective means of endpoint protection, say experts.
Overlaying a whitelist on a blacklist can provide another layer of verification and help to avoid some of the false positives that occasionally befall blacklists.
The whitelist could also help to filter out other unwanted software, such as “greylisted” applications that are technically legitimate but frowned upon by company policy.
“When the culture of the firm is ‘anything goes’, it’s hard to say turn the taps off,” says Eldon Sprickerhoff, co-founder of security consulting and services firm eSentire.
Deployment of a whitelist should begin with a profiling phase, he advises.
“We work out what’s going on right now, and then we gradually close down the scary things,” he says.
Deploying a whitelist alongside a malware blacklist could also be a useful rationalisation exercise. By taking a long, hard look at what really needs to be running on PC endpoints, organisations might be able to simplify their application portfolio, freeing up some support and maintenance budget. ®
Article source: http://go.theregister.com/feed/www.theregister.co.uk/2011/10/14/network_security/