Message-ID: <200310300052.h9U0qY9X004281@turing-police.cc.vt.edu>
From: Valdis.Kletnieks at vt.edu (Valdis.Kletnieks@...edu)
Subject: System monitor scheme
On Wed, 29 Oct 2003 22:36:21 +0200, Caraciola said:
> That will open a big can of worms.... to start, the exe loader has to supply an
> image of the TEXT and CODE segments (x86), feed that to a function which
> fingerprints this (PoC with gnupg?), a daemon has to check every
> process/thread each ? second or so, housekeeping of the results... I think it
Once a second is nowhere near often enough. On a 2GHz Intel chip, you've just given
the attacker time to execute on the order of a billion instructions (even after the
hits you take for L1/L2 cache misses, pipeline stalls, and the like).
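Just to make the cost concrete, here's a rough sketch of that fingerprinting idea
(purely illustrative, Linux-specific, and checking only *itself* - a daemon poking
at other processes would need ptrace() or kernel help to read their memory, and the
FNV-1a hash here is just a stand-in for whatever gnupg would compute):

/* Rough sketch: fingerprint this process's own text mappings by walking
 * /proc/self/maps and hashing each executable region in place. */
#include <stdio.h>
#include <stdint.h>
#include <string.h>

static uint64_t fnv1a(const unsigned char *p, size_t len)
{
    uint64_t h = 0xcbf29ce484222325ULL;        /* FNV offset basis */
    while (len--) {
        h ^= *p++;
        h *= 0x100000001b3ULL;                 /* FNV prime */
    }
    return h;
}

int main(void)
{
    FILE *maps = fopen("/proc/self/maps", "r");
    char line[512];

    if (!maps)
        return 1;

    while (fgets(line, sizeof line, maps)) {
        unsigned long start, end;
        char perms[5];

        /* Each line looks like: start-end perms offset dev inode path */
        if (sscanf(line, "%lx-%lx %4s", &start, &end, perms) != 3)
            continue;
        if (strcmp(perms, "r-xp") != 0)        /* only executable mappings */
            continue;

        printf("%08lx-%08lx fingerprint %016llx\n", start, end,
               (unsigned long long)fnv1a((const unsigned char *)start,
                                         end - start));
    }
    fclose(maps);
    return 0;
}

Even this self-scan has to touch every executable page, so doing it for every
process on the box once a second turns into a lot of memory traffic all by itself.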
> will be costly in performance terms. And where do you start? It would have to
Yes, it would be very costly. That's why most hardware-based schemes take the
sandbox approach - if the process started in a legal state, and isn't allowed to
write to or execute naughty memory locations, it must still be in a legal state when
it loses its timeslice.
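You can see that invariant at work on any reasonably modern box. A toy sketch
(illustrative only, Linux/x86 assumed): the text segment is mapped without write
permission, so an attempt by the process to patch its own code simply faults:

/* Toy demo of the sandbox invariant: the text segment is mapped read+execute
 * only, so trying to patch our own code faults instead of silently working. */
#include <signal.h>
#include <stdio.h>
#include <unistd.h>

static void blocked(int sig)
{
    (void)sig;
    write(STDOUT_FILENO, "write to code page blocked by the MMU\n", 38);
    _exit(0);
}

static int still_legal(void) { return 1; }     /* the code we try to overwrite */

int main(void)
{
    signal(SIGSEGV, blocked);
    /* 0xc3 is an x86 RET; if this store worked, we'd have just rewritten
     * the program behind the OS's back. */
    *(volatile unsigned char *)(void *)still_legal = 0xc3;
    printf("text segment was writable?!  still_legal() = %d\n", still_legal());
    return 1;
}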
> be done in the OS itself, and should of course spread to the disk images of exes
> and so on. In the end you will need hardware to secure the machine itself
> (heard of TCPA?). The easiest way to achieve this would be a machine with
Quite correct. You really need hardware security even before the BIOS gets
control (otherwise a hacked BIOS can bypass everything). Similarly for the bootloader.
> separate memory for data and program, so the hardware guarantees there is no
> write to the code area after initial load.....
Separate I/D space is called a Harvard architecture, as opposed to the more familiar
single-space Von Neumann architecture.
It's actually pretty easy to simulate a Harvard architecture by setting the
code segment mapping (or whatever the equivalent on your CPU chipset is)
to only point at ROM. This is a very common design technique for embedded
controllers like the ones in a microwave, a car, or a home entertainment system - you
know that if there's a software crash, rebooting is trivial and guaranteed to make
the system come back in a known state.
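You can fake that ROM behaviour on a stock Linux/x86 box as well. A quick sketch
(the six-byte stub and the page size are purely illustrative): stage the code while
the mapping is still writable, then drop the write bit so the region acts like ROM
from that point on:

/* Sketch of "code only lives in read-only memory after the initial load":
 * stage a tiny x86 routine (mov eax, 42; ret) in a scratch mapping, then
 * seal the mapping read+execute so it behaves like ROM from then on. */
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void)
{
    static const unsigned char stub[] = { 0xb8, 0x2a, 0x00, 0x00, 0x00, 0xc3 };
    size_t len = 4096;

    /* "Initial load": the region is writable only while we copy the code in. */
    unsigned char *rom = mmap(NULL, len, PROT_READ | PROT_WRITE,
                              MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (rom == MAP_FAILED)
        return 1;
    memcpy(rom, stub, sizeof stub);

    /* From here on it's effectively ROM: the hardware refuses any write. */
    if (mprotect(rom, len, PROT_READ | PROT_EXEC) != 0)
        return 1;

    int (*fn)(void) = (int (*)(void))(void *)rom;
    printf("stub running out of 'ROM' returned %d\n", fn());
    return 0;
}

On a real embedded part you don't even need the mprotect() dance - the linker just
puts the code in flash, and nothing ever maps it writable.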
Of course, this may be fine for an embedded controller where you have exactly
one program to run and it almost never gets changed. On the other hand, it
truly sucks if you decide you're done with Microsoft Office and want to launch
a web browser (hint - you need some way to replace the programs, even though
the program store is supposed to be read-only...)
The usual compromise is to allow the trusted supervisor mode to read/write
all memory, but whenever user code is loaded, it's handed a read-only program
area and non-executable data areas.
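That compromise is also easy to audit from userland. Another small sketch (again
Linux-specific and purely illustrative) that walks the process's own mappings and
flags anything that is both writable and executable:

/* Sketch: audit the "usual compromise" from inside a process -- nothing
 * should be mapped both writable and executable at the same time. */
#include <stdio.h>
#include <string.h>

int main(void)
{
    FILE *maps = fopen("/proc/self/maps", "r");
    char line[512];
    int violations = 0;

    if (!maps)
        return 1;

    while (fgets(line, sizeof line, maps)) {
        char perms[5] = "";

        if (sscanf(line, "%*x-%*x %4s", perms) != 1)
            continue;
        /* Writable *and* executable means code could be patched and then run. */
        if (strchr(perms, 'w') && strchr(perms, 'x')) {
            printf("writable+executable mapping: %s", line);
            violations++;
        }
    }
    fclose(maps);

    if (violations)
        printf("%d writable+executable mappings found\n", violations);
    else
        printf("no W+X mappings: program areas read-only, data non-executable\n");
    return 0;
}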
> have fun thinking about the ins and outs of this ...
Oh, this is an area that's been very heavily thought out for a LONG time.
John von Neumann and the Princeton crew claimed that the fact that his architecture
allowed self-modifying code was a feature - Aiken and the Harvard crew
considered it a bug. Remember that the first good multitasking operating system
was still 10 years away at this point, and loading programs from paper tape was
high tech. Lots of systems were still using programming plugboards (yes, sticking in
diodes and resistors to create ones and zeros...).
If I have my history right, that makes it at least a 55-year-old idea.