Threat Intelligence Blog

The recent report from Cyveillance regarding AV detection lag times has sparked some interesting responses, and we welcome the discussion around the ever-increasing threats on the Internet. Specifically, Randy Abrams raises several interesting critiques about the methodology used in our report. The first weakness (in his view) is that ESET sees a lot more malware than we do at Cyveillance. This point may be true, though, in fairness, the paper was very clear that the "threats" covered are not "all malware." They are the Web-borne malware that Cyveillance predominantly sees being distributed and installed (without user consent, i.e., by exploit or drive-by install) in real time as we visit live, infected, and malicious Web pages on any given day. Of those threats, the results are accurate. While this may be less malware in a day than all the samples ESET analyzes, it is representative of the kinds of Web-delivered threats that users encounter as they surf the Web, click on links, and download content on that given day. We find it by emulating real user Web-surfing behavior.
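Conceptually, that collection approach can be sketched as follows. This is a minimal, illustrative outline, not Cyveillance's actual crawler: the URL feed, the sandboxing, and the simple before/after check for unsolicited downloads are all assumptions made for the example.

```python
"""
Illustrative sketch: visit a suspect URL the way a user would, then flag any
files that appeared in the download directory without a click (a crude stand-in
for detecting a drive-by install). Hypothetical setup, not Cyveillance's method.
"""
import time
from pathlib import Path

from selenium import webdriver  # pip install selenium; requires a Chrome driver


def visit_and_check(url: str, download_dir: Path, dwell_seconds: int = 15) -> list[str]:
    """Visit a URL like a real user and report files that arrived unsolicited."""
    download_dir.mkdir(parents=True, exist_ok=True)
    before = {p.name for p in download_dir.iterdir()}

    options = webdriver.ChromeOptions()
    options.add_argument("--headless=new")
    options.add_experimental_option(
        "prefs", {"download.default_directory": str(download_dir)}
    )
    driver = webdriver.Chrome(options=options)
    try:
        driver.get(url)            # load the page, let scripts and redirects run
        time.sleep(dwell_seconds)  # dwell on the page as a user might
    finally:
        driver.quit()

    after = {p.name for p in download_dir.iterdir()}
    return sorted(after - before)  # anything new arrived without user consent


if __name__ == "__main__":
    # Run only against URLs you control, or inside an isolated, disposable VM.
    suspects = visit_and_check("http://example.com", Path("/tmp/driveby_downloads"))
    print("Unsolicited files:", suspects or "none")
```

In a real pipeline, any file flagged this way would then be submitted to the AV products under test, which is where the lag-time measurement discussed below begins.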

The second, and much more important, point Randy raises as a flaw is that our methodology relies on the leading brands in the industry to say what is and isn't malware. (I should note this is an admirable criticism to raise, since, at least within the specific lens used in our study, Randy's company fared the best and may very well be the leading brand in the industry.) Still, it is a methodological choice we had to make. We made it partly for objectivity, and partly because if we relied solely on our own analysis of "what it does," we would expect the industry response to be a chorus of "but what does Cyveillance know about analyzing malware? They're not an anti-virus company!"

One key point we feel many readers of the paper may have missed is that this study was intended to illustrate detection lag times by the leading AV companies. If you read the paper thoroughly, you will see that the lag-time stats in Figure 3 show how long each vendor took to recognize the things that vendor itself eventually identified as malware. In other words, the final chart displays the lag time between when we were infected with a piece of malware in the wild – by a nasty Web page, malicious Tweet, PPC link, or whatever – and when that vendor eventually recognized the threat.
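To make the measurement concrete, the lag time described above is simply the elapsed time between when a sample was first encountered in the wild and when a given vendor first flagged it, counted only over samples that vendor eventually detected. The sketch below computes such per-vendor statistics; the record format, field names, and data are invented for illustration and are not the dataset behind Figure 3.

```python
"""
Minimal sketch of the lag-time calculation: for each vendor, measure the gap
between when a sample was first seen in the wild and when that vendor first
detected it. Field names and sample data are hypothetical.
"""
from datetime import datetime
from statistics import mean


def detection_lags(samples: list[dict], vendor: str) -> list[float]:
    """Return lag times in days, only for samples this vendor eventually detected."""
    lags = []
    for s in samples:
        detected_at = s["detections"].get(vendor)  # None if the vendor never flagged it
        if detected_at is None:
            continue  # only count things the vendor itself called malware
        lags.append((detected_at - s["first_seen"]).total_seconds() / 86400)
    return lags


if __name__ == "__main__":
    # Hypothetical records: when we were infected vs. when each vendor caught up.
    samples = [
        {
            "first_seen": datetime(2010, 7, 1),
            "detections": {"VendorA": datetime(2010, 7, 3),
                           "VendorB": datetime(2010, 7, 20)},
        },
        {
            "first_seen": datetime(2010, 7, 2),
            "detections": {"VendorA": datetime(2010, 7, 10)},  # VendorB never flagged it
        },
    ]
    for vendor in ("VendorA", "VendorB"):
        lags = detection_lags(samples, vendor)
        print(f"{vendor}: mean lag {mean(lags):.1f} days over {len(lags)} samples")
```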

Randy’s complaint appears to be: “you shouldn’t call something malware and penalize me for having missed it just because my competitors call it malware and I don’t. Maybe I’m right and they’re wrong.” This is a fair comment. However, the central point of this study is, “we’re not comparing you to the other guy. For the things you yourself said are malware, you didn’t say so until X days or weeks after I got infected with them.” That was the point of the study.

Regardless of the difference of opinion about the methodology used, as mentioned in the article, the conclusions in the report are on target – you can’t rely solely on signature-based protection against today’s Internet threats. This is validated time and again by our corporate customers, who use these same leading security programs and who spend significant resources cleaning and re-imaging company machines that are constantly being infected by the many threats that pass right through them.
