
Two telemetry projects should mean better testing and fewer false alarms

In the course of one afternoon at last week’s Virus Bulletin conference in Berlin, two major cross-industry telemetry projects were presented which, it’s hoped, should improve the quality of anti-malware products.

The first is designed to raise the standard of anti-malware testing, which in turn should encourage better products; the second aims to reduce the chances of products misidentifying clean files as malware.

Real time threats

First up, and closest to my own specialty, was a presentation on behalf of the Anti-Malware Testing Standards Organisation (AMTSO), given by AMTSO CTO Righard Zwienenberg of Eset and my colleague on the AMTSO board of directors, Thomas Wegele of Avira.

The subject of their talk was a new system called the Real Time Threat List (RTTL).

For many years there has been an industry standard system of listing a base subset of the threat landscape, known as the WildList. This has been used as the basis for a range of testing systems, including Virus Bulletin’s own VB100 certification.

Since its founding in 1993 the WildList has seen some gradual evolution and improvements, but has been criticised for being slow to adapt to the faster pace and diverse range of attack techniques of modern cybercrime.

The RTTL aims to provide an alternative to the WildList which offers a much more accurate and up-to-the-minute picture of the latest and most important threats.

It operates as a community telemetry-gathering and sharing system – registered data providers (mostly the anti-malware vendor companies) submit information on what they are seeing, including details such as how many of their customers have been hit by a given threat, how long they think it’s been around, which geographical regions it’s appeared in, and much more besides.
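
To make the shape of that data a little more concrete, here is a minimal sketch in Python of what a single telemetry report might contain. The ThreatReport structure and its field names are purely illustrative assumptions on my part, not the actual RTTL schema or submission API.

```python
# Hypothetical sketch of an RTTL-style telemetry report.
# Field names and the record layout are illustrative assumptions,
# not the real RTTL schema.
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class ThreatReport:
    sha256: str                                   # hash identifying the sample
    family: str                                   # vendor's name for the threat
    hits: int                                     # customers seen affected
    first_seen: datetime                          # when the vendor first observed it
    regions: list = field(default_factory=list)   # ISO country codes where it appeared

report = ThreatReport(
    sha256="e3b0c44298fc1c149afbf4c8996fb924...",  # truncated placeholder hash
    family="Trojan.Example",
    hits=1240,
    first_seen=datetime(2013, 9, 28),
    regions=["DE", "GB", "US"],
)
# A registered data provider would submit records like this to the shared list.
```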

Testers can then query the list on their own terms, pulling out data to best suit their style of testing. Some might suggest that relying on prevalence information provided by vendors biases tests in their favour, but this seems to be a necessary evil – they are the only people with the raw information this sort of system can be based on.

The flexibility of the system also allows testers to make use of it in different ways, to either mitigate or leverage the vendors’ own access to the information.

For example, in the case of a certification scheme like the VB100, or those offered by other labs such as the ICSA, a basic list of the most significant threats over a given time period can easily be generated.
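
As a rough illustration of that kind of query, the sketch below picks out the most widely reported threats first seen within a chosen window. It assumes the hypothetical ThreatReport records from the earlier sketch; nothing here reflects the real RTTL query interface.

```python
# Hypothetical query against an RTTL-style dataset: select the most
# prevalent threats reported in a given window, as a certification-style
# baseline set might be built.
from datetime import datetime

def top_threats(reports, start, end, limit=10):
    """Return the `limit` most-reported threats first seen within [start, end]."""
    in_window = [r for r in reports if start <= r.first_seen <= end]
    return sorted(in_window, key=lambda r: r.hits, reverse=True)[:limit]

# e.g. baseline = top_threats(all_reports, datetime(2013, 9, 1), datetime(2013, 9, 30))
```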

This kind of baseline test expects any decent solution to provide full and reliable coverage of all these major items at all times, and as we’ve seen over the years with the WildList such expectations are not always justified.

Despite most vendors having access to the WildList data, there have been many instances of missed or mis-classified samples in public tests, even from the most reliable of vendors.

So, a test with what appears to be a fairly easy target can give a good indication of which vendors are managing to keep up with the pace and targeting the most important areas, and which are falling behind. More flexible data should allow us to fine-tune such tests to provide a more accurate picture of who’s doing well.

There are other ways of making use of the RTTL data too, for example in tests which aim to measure the other end of the scale, looking for samples which are very rare, perhaps highly targeted to a particular sector or organisation and unlikely to have been seen by vendors until their job has been done.

In such a test, the tester could throw the samples they manage to turn up against the products under test, then later on look them up in the RTTL system to find out if they were indeed as new, rare and specialised as they were thought to be. Their results could then be derived only from the examples which best fit their intended design for the test.
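
A hypothetical version of that post-test check might look something like this: results are kept only for samples which the telemetry later confirms were both rare and recent at the time of the test. The lookup function and the thresholds are assumptions for illustration only.

```python
# Hypothetical post-test filter: keep only results for samples the shared
# telemetry later confirms were rare and new when the test was run.
from datetime import timedelta

def confirm_targeted(results, lookup, test_date, max_hits=50, max_age_days=7):
    """results: {sha256: detected_bool}; lookup(sha256) -> ThreatReport or None."""
    confirmed = {}
    for sha256, detected in results.items():
        report = lookup(sha256)
        if report is None:
            continue  # no telemetry at all: cannot verify, so exclude
        is_rare = report.hits <= max_hits
        is_new = (test_date - report.first_seen) <= timedelta(days=max_age_days)
        if is_rare and is_new:
            confirmed[sha256] = detected
    return confirmed
```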

As the system will include records on malicious URLs as well as files, it will allow tests to more closely approximate real-world use cases, covering all the layers of protection in modern solutions, while still using a standard and repeatable sample selection process.

The RTTL is currently at a late beta stage, and we hope to see its influence coming online early in the new year.

Clean file metadata

The second talk was on behalf of the IEEE Industry Connections Security Group (ICSG) malware research group, and was given by IEEE-ICSG members Igor Muttik (McAfee) and Mark Kennedy (Symantec). Their topic was another data-sharing initiative, this time covering clean files rather than malicious ones.

False positives have always been a problem for anti-malware solutions. With the explosive growth in the quantities of malware being produced, new techniques have had to be adopted to cover the glut.

Ever more aggressive heuristic and generic detection methods are, of course, more likely to cause false alarms, while automated systems that add detections based on signals such as an item being flagged by several other products can cause snowball effects, spreading false positives from product to product.

Cloud-based reputation systems can also cause unnecessary alarm by alerting on items due to rarity or newness.

The IEEE-ICSG clean file metadata sharing system (CMX) is designed to help address these issues. Data will be fed into the system by legitimate software developers, providing details of every file they produce. This can be used to help ensure their files are not detected by anti-malware products, even if they are brand new or have only the smallest numbers of users.
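
Purely as an illustration, a clean-file metadata record from a publisher might carry fields along these lines; the names here are my own assumptions, not the actual IEEE-ICSG CMX schema.

```python
# Hypothetical sketch of a CMX-style clean-file metadata record, as a
# software publisher might submit it. Field names are illustrative
# assumptions, not the real CMX format.
from dataclasses import dataclass

@dataclass
class CleanFileRecord:
    sha256: str          # hash of the released binary
    file_name: str       # e.g. "setup.exe"
    product: str         # product the file ships with
    version: str         # release version
    publisher: str       # legal name of the software developer
    signed: bool         # whether the file carries a digital signature

record = CleanFileRecord(
    sha256="9f86d081884c7d659a2feaa0c55ad015...",  # truncated placeholder hash
    file_name="example_installer.exe",
    product="Example Suite",
    version="4.2.0",
    publisher="Example Software Ltd",
    signed=True,
)
```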

This will help the vendors populate whitelists, mainly cloud-based, and will also help guide the building of clean sample sets used in quality assurance (QA).

Any good anti-malware QA process should include running over as much known-clean stuff as possible, to spot false alarms in new detection algorithms. While the CMX system does not plan to include actual copies of files (mainly for copyright reasons), it will at least provide enough information to show QA teams where their sourcing of samples is falling behind.
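
As a sketch of how such metadata could feed a QA false-positive check, the fragment below simply compares a new engine’s detections against a set of known-clean file hashes; a real QA run would of course scan the clean files themselves, and the function here is a hypothetical illustration.

```python
# Hypothetical QA check: flag any detection whose hash the clean-file
# metadata feed says belongs to a legitimate publisher.
def find_false_positive_candidates(detections, clean_hashes):
    """detections: {sha256: detection_name}; clean_hashes: set of known-clean sha256s."""
    return {h: name for h, name in detections.items() if h in clean_hashes}

# e.g. suspects = find_false_positive_candidates(new_engine_hits, cmx_hashes)
```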

It also simplifies the process of liaison between anti-malware firms and software developers, by providing a simple conduit for communications.

At present, each AV vendor has to build a relationship with all the major software producers, and any software developer who has a problem with their wares being flagged by AV needs to find someone who can help them out (a lot of them approach me for introductions).

The CMX system should make this all much easier, meaning not only fewer false positives to start with but also swifter resolution of any issues which do emerge.

This will make everybody happy – the anti-malware firms will suffer less embarrassment from false positive incidents, software makers will get fewer complaints from their customers, and end users will be less likely to have their business interrupted unnecessarily.

It’s great to see how much collaboration there can be between the technical people at companies which are on the face of it in tough competition. We all need to work together to put up the strongest defence possible against the tidal wave of threats.


