Message-ID: <200507300339.j6U3dpH5009372@turing-police.cc.vt.edu>
Date: Sat Jul 30 04:40:07 2005
From: Valdis.Kletnieks at vt.edu (Valdis.Kletnieks@...edu)
Subject: Cisco IOS Shellcode Presentation
On Fri, 29 Jul 2005 16:28:31 -1000, Jason Coombs said:
> We're not talking about proving/disproving the result of computation
> here, we're talking about a simple logical step inserted prior to
> transmission of operating instructions and data to a turing machine.
> It does not invoke the Turing Halting Problem to ask the question
> "should the following opcode be sent to the CPU / should the opcode be
> read from memory and acted upon" ?
Actually, it does. Consider if the opcode is the one that moves the one
byte into an apparently innocuous location that eventually causes a program
malfunction. Remember the ntpd exploit? That started as a one-byte overlay ;)
LOAD R7,DATA
Move the contents of the storage location 'DATA' to R7. Should it do it?
It seems reasonable, right?
What if the *entire* code looks like:
        LOAD   R7,DATA
        LOAD   R3,FOO
        LOAD   R9,OTHER
        TEST   R9,23      ; Is it 23?
        BNE    AROUND     ; If not, go around
        DIVIDE R3,R7      ; If it was 23, divide R3 by R7..
AROUND  ...
Have a nice divide-by-zero, on the house, compliments of Alan Turing. You
certainly can't suggest that the DIVIDE do the checking - because that's the
operation that will finally detect the problem *ANYHOW*. So where do you want
to flag it? When R9 is loaded? When R3 is? When R7 is? When the program failed
to check for non-zero before it *stored* into DATA?
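The assembly sketch above can be rendered in Python (the function and variable names are mine, not part of the original example). Each statement is individually harmless; the fault exists only for one runtime combination of values, which is exactly what a per-opcode screen can't see:

```python
# Python rendering of the LOAD/TEST/BNE/DIVIDE sketch above.
def divide_maybe(data, foo, other):
    r7 = data            # LOAD R7,DATA
    r3 = foo             # LOAD R3,FOO
    r9 = other           # LOAD R9,OTHER
    if r9 == 23:         # TEST R9,23 / BNE AROUND
        return r3 // r7  # DIVIDE R3,R7 -- fails only if r7 == 0 *here*
    return r3            # AROUND ...

print(divide_maybe(4, 12, 23))  # 3: divides, divisor happens to be non-zero
print(divide_maybe(0, 12, 7))   # 12: r9 != 23, the divide never runs
# divide_maybe(0, 12, 23) raises ZeroDivisionError -- same opcodes, different data
```

No single line is the bug; you only find out when that exact data arrives at the DIVIDE.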
(And that one is only a few opcodes away - I once had the thrill of chasing
down a bug that didn't cause a problem on one system, and only caused a problem
on another after an intervening 6 million malloc() calls had allocated 200M
more heap. And even then, it was data dependent and only failed sometimes....)
But yeah - hardware can check for that, no problem... ;)
> The simplest solution is to duplicate the machine code, placing one copy
> in a protected storage and requiring the CPU to confirm that both the
> active/RAM-resident copy and the protected storage copy match before
> proceeding with computation.
Just store the program in a frikking *ROM*, and disallow execution of
opcodes from RAM. It's called a Harvard architecture.
It can still be buggy and exploitable (although a lot harder - you're
essentially restricted to return-to-libc style attacks).
> Turing has nothing to say on this subject because he never contemplated
> it, to the best of my knowledge. Turing never tried to defend against
> buffer overflows back in the 1930s, yet people invoke him as a sage
> unerring philosopher of our time. Why?
Actually, Turing didn't try to defend against buffer overflows because he was
busy working on much more subtle attacks.
Why do people invoke him as a sage? Because he pointed out the very basic issues
of "data as programs" and "programs as data" that cause us so many problems today.
For instance, if you understood what Turing was talking about, you'd have been
able to just *know* that Javascript was going to be a continual source of
security headaches (how many Javascript bugs were because somebody didn't keep
straight if something was "code" or "data"?).
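The code/data confusion is easy to demonstrate in miniature (a toy sketch in Python rather than Javascript; `render` and its arguments are hypothetical). The instant a string from the "data" side reaches an evaluator, it has silently become a program:

```python
# A toy "template engine" that evaluates strings as expressions.
def render(template, name):
    # Intended use: template is trusted code, name is untrusted data.
    return eval(template)

print(render("'hello, ' + name.upper()", "world"))  # hello, WORLD
# If untrusted input ever flows into 'template' instead of 'name',
# e.g. "__import__('os').system('...')", the "data" is now attacker
# code -- the exact category error Turing's work warns about.
```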
Or why Microsoft Word macros can be viruses, even though Word documents are
usually thought of as "data" - the problem is that the macro is allowed to be
introspective (and of course, a Word macro that *isn't* allowed to be introspective
is just *useless*.. ;)
But no, other than that, Turing didn't have a *clue* :)