lists.openwall.net - Open Source and information security mailing list archives
Date:	Mon, 18 Aug 2008 10:11:24 +1000
From:	"Peter Dolding" <oiaohm@...il.com>
To:	david@...g.hm
Cc:	"Theodore Tso" <tytso@....edu>,
	"Arjan van de Ven" <arjan@...radead.org>, rmeijer@...all.nl,
	"Alan Cox" <alan@...rguk.ukuu.org.uk>, capibara@...all.nl,
	"Eric Paris" <eparis@...hat.com>, "Rik van Riel" <riel@...hat.com>,
	davecb@....com, linux-security-module@...r.kernel.org,
	"Adrian Bunk" <bunk@...nel.org>,
	"Mihai Don??u" <mdontu@...defender.com>,
	linux-kernel@...r.kernel.org, malware-list@...ts.printk.net,
	"Pavel Machek" <pavel@...e.cz>
Subject: Re: [malware-list] [RFC 0/5] [TALPA] Intro to a linux interface for on access scanning

On Sun, Aug 17, 2008 at 6:58 PM,  <david@...g.hm> wrote:
> On Sun, 17 Aug 2008, Peter Dolding wrote:
>
>> On Sun, Aug 17, 2008 at 1:17 AM, Theodore Tso <tytso@....edu> wrote:
>>>
>>> On Sat, Aug 16, 2008 at 09:38:30PM +1000, Peter Dolding wrote:
>>>>
>> I am not saying that it has to be displayed in the normal VFS.  I
>> am saying: provide a way for the scanner/HIDS to see everything the
>> driver can.  Desktop users could find it useful to see what the
>> real permissions on the disk surface are when they are
>> transferring disks between systems.  A HIDS will find it useful to
>> confirm that nothing has been touched since the last scan.  White
>> list scanning finds it useful because it can be sure nothing was missed.
>
> unless you have a signed file of hashes of the filesystem, and you check
> that all the hashes are the same, you have no way of knowing if the
> filesystem was modified by any other system.

That is called a HIDS.  The network form even has central databases of
hashes of the applications that should be on the machine.  It is
tampering detection.
>
> you may be able to detect if OS Y mounted and modified it via normal rules
> of that OS, but you have no way to know that someone didn't plug the drive
> into an embedded system that spat raw writes out to the drive to just modify
> a single block of data.

Exactly why I am saying the lower level needs work.   Everything the
file system driver can process needs to go to the HIDS for the most
effective detection of tampering.  OK, not 100 percent, but the
closest to 100 percent you can get.   The two causes of failure are
hash collisions, which can happen either way, and data hidden outside
the driver's reach.   All executable data leading into the OS will in
time be covered by a TPM chip, so that will only leave non-accessible
data, which is not a threat to the current OS.
>
>> You mentioned the other reason why you want to be under the VFS.   As
>> you just said, every time you mount a file system you have to presume
>> that it is dirty.  What about remount?   Do we presume it is all
>> dirty just because the user changed an option on the filesystem?  Or
>> do we locate ourselves where a remount does not mean starting over
>> from scratch?   The inode location is wrong for maximum
>> effectiveness.   Even on snapshotting file systems, when you change
>> the snapshot displayed, not every file has changed.
>
> this is a policy decision that different people will answer differently. put
> the decision in userspace. if the user/tool thinks that these things require
> a re-scan then they can change the generation number and everything will get
> re-scanned. if not don't change it.
>
Without a clear path where user-space tools can tell that these are
the same files, they have no option but to mark the complete lot
dirty.

Their hands are tied; that is the issue while we stay only in the
inode and VFS system.   To untie those hands and allow the most
effective scanning, the black box of the file system driver has to be
opened.
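The generation-number policy David proposes above can be modeled in a few lines. This is a toy model of the policy decision, not any real kernel interface; the class and method names are invented for illustration:

```python
class ScanCache:
    """Model of the generation-number policy: a file counts as clean
    only if it was scanned under the current generation number."""

    def __init__(self):
        self.generation = 0   # bumped by userspace policy to force rescans
        self.scanned = {}     # path -> generation in effect at scan time

    def mark_scanned(self, path):
        self.scanned[path] = self.generation

    def needs_scan(self, path):
        # Never scanned, or scanned under a stale generation.
        return self.scanned.get(path) != self.generation

    def invalidate_all(self):
        """Policy hook: e.g. after an untrusted mount or remount, bump
        the generation so every cached verdict goes stale at once,
        without touching per-file state."""
        self.generation += 1
```

This captures both sides of the disagreement: whether `invalidate_all` fires on remount is exactly the policy question being argued, and in David's design it lives in userspace.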

>
>> The logic that scanning will always be needed again, because
>> signatures need updates every few hours, is foolish.   Please note
>> that massive signature updates only apply to black list scanning
>> like anti-viruses.   If I am running white list scanning on those
>> disks, redoing it is not required unless the disk has changed or a
>> defect is found in the white list scanning system.  The cases where
>> a white list system needs updating are far more limited:  new file
>> formats, new software, newly approved parts, or a defect in the
>> scanner itself.  A virus/malware writer creating a new bit of
>> malware really does not count if the malware fails the white list.
>> Far less chasing.  100 percent coverage against unknown viruses is
>> possible if you are prepared to live with the limitations of a
>> white list.   There are quite a few places where the limitations of
>> a white list are not a major problem.
>
> the mechanism I outlined will work just fine for a whitelist scanner. the
> user can configure it as the first scanner in the stack and to trust its
> approval completely, and due to the stackable design, you can have things
> that fall through the whitelist examined by other software (or blocked, or
> the scanning software can move/delete/change permissions/etc, whatever you
> configure it to do)
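The stackable design David describes might look like this in miniature. The verdict names and scanner shapes are invented for illustration; a real implementation would hook file access in the kernel rather than take paths as strings:

```python
from enum import Enum

class Verdict(Enum):
    ALLOW = "allow"   # scanner approves; stop the chain
    DENY = "deny"     # scanner rejects; stop the chain
    PASS = "pass"     # no opinion; fall through to the next scanner

def run_stack(scanners, path):
    """Run scanners in order; the first definite verdict wins.
    Anything no scanner claims is denied by default (fail closed)."""
    for scan in scanners:
        verdict = scan(path)
        if verdict is not Verdict.PASS:
            return verdict
    return Verdict.DENY

def whitelist(trusted):
    """A whitelist scanner meant to sit first in the stack: trusted
    files are approved outright, everything else falls through to
    whatever blacklist or heuristic scanners come after it."""
    def scan(path):
        return Verdict.ALLOW if path in trusted else Verdict.PASS
    return scan
```

The key property is that the whitelist's ALLOW short-circuits the expensive scanners behind it, which is exactly the configuration described in the quoted paragraph.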
>
>> Anti-virus companies are going to have to lift their game and stop
>> just chasing viruses, because sooner or later the black list is
>> going to get so long that it cannot be processed quickly,
>> particularly with Linux still running on 1.5 GHz or smaller
>> machines.
>
> forget the speed of the machines, if you have a tens-of-TB array it will
> take several days to scan using the full IO bandwidth of the system (so even
> longer as a background task), you already can't afford to scan everything
> every update on every system.
>
> however, you may not need to. if a small enough set of files are accessed
> (and you are willing to pay the penalty on the first access of each file)
> you can configure your system to only do on-access scanning. or you can
> choose to do your updates less frequently (which may be appropriate for your
> environment)
>

You missed it: part of that was an answer to Ted saying that we should
give up on a perfect system because current AV tech fails, when there
is other tech out there that works.

In answer to the small-set-of-files idea.   The simple issue is that
the one-time cost of black list scanning gets longer and longer as the
black list itself gets longer and longer.   Sooner or later it is
going to exceed the amount of time people are prepared to wait for a
file to be approved, and it already exceeds the time taken to white
list scan a file by a large margin.    CPU speeds not expanding as
fast on Linux machines brings the black list problem on sooner.   A
lot of anti-virus black lists are embedding white list methods so they
can still operate inside the time window.   The wall is coming and it
is simply not avoidable; all they are currently doing is stopping
themselves from going splat into it.  White list methods will have to
become more dominant one day; there is no other path forward for
scanning content.

The most common reason to need to be sure disks are clean on a
different machine is after a mess, when anti-virus and protection tech
has let you down.   Backups could have been infected before restoring,
so you scan those backups to sort out which files you can salvage and
which backups predate the infection or breach.   These backups are of
course normally not scanned on the destination machine.   Missing
anything when scanning those backups is never acceptable.

By the way, for people who don't know the differences: TPM is hardware
support for a HIDS; it must know exactly the files it is protecting.
White list scanning covers a lot more than just HIDS.   A white list
scanner that knows the file formats themselves sorts files into:
unknown format; damaged, i.e. not to format, such as containing an
oversize buffer field and the like; containing unknown executable
parts; containing only executable parts known safe; and 100 percent
safe.  The first three are blocked by white list scanners; the last
two are approved.   Getting past a white list scanner is hard.   White
list scanning is the major reason we need all document formats used in
business to be documented, so they can be scanned white list style.
White list format checking does not fall prey to checksum collisions.
Also, when you have TBs and PBs of data, you don't want to be storing
damaged files or viruses.   Most black list scanners only point out
some viruses, so they are poor compared to what some forms of white
list scanning offer: files that are trustably clean and undamaged.
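The five-way classification above could be sketched as follows. The parser protocol and class names here are invented for illustration; a real white-list scanner would fully parse each supported format rather than use toy checks:

```python
import hashlib
from enum import Enum

class FileClass(Enum):
    UNKNOWN_FORMAT = 1      # blocked: no parser recognizes it
    DAMAGED = 2             # blocked: violates its own format
    UNKNOWN_EXECUTABLE = 3  # blocked: embedded code not on the list
    KNOWN_SAFE_EXEC = 4     # approved: embedded code all white-listed
    SAFE = 5                # approved: no executable content at all

BLOCKED = {FileClass.UNKNOWN_FORMAT, FileClass.DAMAGED,
           FileClass.UNKNOWN_EXECUTABLE}

def classify(data, parsers, approved_code):
    """parsers: list of parse functions; each returns a list of
    embedded code blobs if it recognizes the format, None if it does
    not, and raises ValueError if the data claims the format but
    violates it (e.g. an oversize length field)."""
    for parse in parsers:
        try:
            blobs = parse(data)
        except ValueError:
            return FileClass.DAMAGED
        if blobs is None:
            continue                      # not this parser's format
        if not blobs:
            return FileClass.SAFE         # no executable content
        hashes = {hashlib.sha256(b).hexdigest() for b in blobs}
        if hashes <= approved_code:
            return FileClass.KNOWN_SAFE_EXEC
        return FileClass.UNKNOWN_EXECUTABLE
    return FileClass.UNKNOWN_FORMAT
```

Note how this avoids whole-file checksum collisions as the deciding factor: a file is approved because it parses cleanly and contains nothing executable that is unknown, not merely because its hash matches something.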

Peter Dolding
