Message-ID: <411B6430.10741.A9BEFB9A@localhost>
From: nick at virus-l.demon.co.uk (Nick FitzGerald)
Subject: National Database of Variants with Fixes - non-vendor specific
John Hall wrote:
<<snip>>
> Going even further off-topic (par for the course for FD), does
> anyone have any ideas how they might create such a trojan (there
> seems to be no mention of self-replication in any of the articles)
> that could be recognized and ignored by AV software, ...
Simple -- most (probably all) decent virus detection engines have
mechanisms to prevent them detecting specific files or groups of files.
This is necessary because much virus detection uses _far_ from precise
detection mechanisms, yet it is _necessary_ to have very low false
positive (FP) rates. Thus, you will often find that some "dirty trick"
can be used to very efficiently detect a specific, otherwise difficult
to detect piece of malware, most variants of a particular malware
family, or even much (previously unknown) malware with a particular
functionality. Sadly, such "tricks" will often also detect a small
handful of known legitimate, "normal" programs. Depending on scale
factors -- the efficiency savings of retaining the "trick" detection
versus the cost of discarding it -- the developer will decide whether
to add exclusion data for the known FPs.
Such FP exclusion functionality is effectively a mechanism to tell the
scanner "do not report anything for these files", even though they will
be (at least partially) scanned. Thus, these mechanisms can be used to
identify an arbitrary program for which, regardless of what the scan
engine reports having found, detection reporting will be suppressed.
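A minimal sketch of how such exclusion data might sit in front of
detection reporting (Python, with invented names throughout -- real
engines ship this as binary definition data, not code like this):

```python
import hashlib

# Hypothetical exclusion data shipped with the scanner: SHA-256 hashes
# of known legitimate files that "trick" detections would otherwise
# flag. The hash below is simply sha256(b"test"), as a placeholder.
FP_EXCLUSIONS = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def sha256_of(path):
    """Hash the file contents, so exclusion depends on the exact bytes
    of the known-clean file, not on its name or location."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def report(path, detection):
    """Suppress reporting for excluded files, regardless of what the
    scan engine found; everything else is reported normally."""
    if sha256_of(path) in FP_EXCLUSIONS:
        return None  # the file was scanned, but nothing is reported
    return f"{path}: {detection}"
```

The point of keying on a content hash is that the exclusion only ever
matches a byte-for-byte copy of the known-clean file, which is what
makes this style of exclusion hard to abuse.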
_If_ an AV developer were to agree not to detect, say, the FBI's latest
spyware, they would most likely request a sample of it and then treat
that sample as if it were an FP report from the field -- not the kind
of FP that is fixed by tweaking a detection definition, but the kind
that is handled via exclusion.
> ... but prevent
> others from using the same methodology to shield their malware?
That is, of course, the trick. The exclusion mechanism has to be
robust enough to avoid trivial (and even modestly advanced) attempts to
fake it.
For example, in the dim, distant past some AV products used very lame
exclusion rules -- pretty easily worked out by the VX'ers -- as a
solution to another problem: how to prevent scanning your own program
files, so that your heuristic detection methods do not trigger on all
the nasty, low-level tricks in that code and report "scanner.exe is
probably a new virus". Generally, I think the industry has moved well
past such trivialities as putting magic values in "unused" .EXE header
fields, or depending on extremely odd (i.e. never to be generated by a
stock compiler/linker combination) code sequences at a .EXE's
entrypoint, etc, etc, as self-recognition methods and other exclusion
mechanisms. But you largely have to take that on faith, backed by the
observation that we haven't seen any showstopper examples of such
things going horribly wrong for a decade or so now (though that may
simply mean today's VX'ers are lamer than those of yesteryear...).
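To illustrate why those old self-recognition schemes were so weak, here
is a hypothetical sketch of a magic-value exclusion rule and the
trivial forgery it invites (the offset and magic value are invented for
illustration, not taken from any real product):

```python
import struct

# Hypothetical "lame" rule: the scanner skips any .EXE whose (normally
# unused) header word at a fixed offset equals a magic value. Both
# constants here are made up for the sake of the example.
MAGIC_OFFSET = 0x1C
MAGIC_VALUE = 0xBEEF

def is_excluded(data: bytes) -> bool:
    """Old-style self-recognition check on a DOS/.EXE image."""
    if len(data) < MAGIC_OFFSET + 2 or data[:2] != b"MZ":
        return False
    (word,) = struct.unpack_from("<H", data, MAGIC_OFFSET)
    return word == MAGIC_VALUE

def forge_exclusion(data: bytes) -> bytes:
    """What a VX'er who has worked out the rule can do: stamp the same
    magic value into an arbitrary file so the scanner skips it too."""
    patched = bytearray(data)
    struct.pack_into("<H", patched, MAGIC_OFFSET, MAGIC_VALUE)
    return bytes(patched)
```

Anyone who recovers the offset and value can apply the patch to
arbitrary code and inherit the exclusion, which is exactly the failure
mode content-hash exclusions avoid: a hash can only be satisfied by a
byte-for-byte copy of the whitelisted file.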
--
Nick FitzGerald
Computer Virus Consulting Ltd.
Ph/FAX: +64 3 3529854