From: pageexec@freemail.hu
Subject: Re: PointGuard: It's not the Size of the Buffer, it's the Address

Subject:  Re: PointGuard: It's not the Size of the Buffer, it's the Address
From:     Crispin Cowan <crispin@immunix.com>
Date:     2003-08-15 18:00:04

> Please address technical commentary to the paper (which addresses this 
> point) and not to the cute tag line.

Here we go then (all quotes are from your paper).

1. "This key is then never shared with any entity outside the process's
    address space. To obtain the key, the attacker would either have to
    already have permission to manipulate the process with debugging
    tools (e.g. ptrace) or would have to have already successfully
    perpetrated a buffer overflow attack against the process."

   "However, PointGuard never gives the attacker a look at the
    ciphertext."

   "Nor can the key be extracted by looking at ciphertext, because the
    ciphertext is never actually shared with anyone. To obtain a sample
    of ciphertext, the attacker would have to coerce the victim program
    into exposing internal pointer values to the attacker. Attackers
    seeking the privileges of the victim process normally do not have
    read access to the processes address space. Programs do not normally
    dump data structures containing pointers outside of their address
    space, because such pointers lose any meaning outside of the address
    space."

   "Thus we cannot identify any feasible means by which the attacker can
    obtain the PointGuard key."

You are wrong (and even self-contradicting) here: so-called information
leaking can happen without the attacker having to corrupt any pointers
([1], [2]). Also, section 3.4.3 contradicts the above claims.
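
A minimal sketch of such a leak (my own illustration, not taken from
your paper): a format string bug lets the attacker read words off the
stack - including pointer values, encrypted or not - without modifying
a single pointer:

    #include <stdio.h>

    /* the format string comes from the attacker; "%p %p %p %p" dumps
       stack words, i.e. leaks (possibly PG-encrypted) pointer values
       without corrupting anything */
    void log_message(const char *user)
    {
        printf(user);
    }

    int main(int argc, char **argv)
    {
        log_message(argc > 1 ? argv[1] : "%p %p %p %p\n");
        return 0;
    }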


2. In section 3.3.1 you talk about various implementation strategies.
   Earlier in section 3.2 you declare that:

   "PointGuard seeks to provide integrity for pointers, so that pointers
    cannot be modified in ways the programmer did not intend."

However, in the implementation part you talk about only those pointers
that are visible at the C language level, whereas we know all too well
that there is more than that (ELF GOT/PLT entries, the saved program
counter and frame pointer, etc). Because of this omission it appears
that PG does not protect these pointers at all, even though they have
been the primary targets of address space corruption bugs in the past.
Is this really the case, or is the paper missing something?

What really piqued my interest is that the PLT/GOT are not generated
by the compiler, hence the implementation you describe cannot possibly
handle them without changes to the dynamic linker - something you do
not mention at all. It would also be interesting to know how you can
handle the saved program counter and frame pointer just after the AST
level where, as far as i know, these entities do not even exist (and
hence cannot be manipulated/controlled there).
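
To make the problem concrete, here is a minimal sketch (my own, not
from your paper) of the classic case: the pointer being corrupted is
the saved return address, which never appears as a C-level lvalue, so
an instrumentation pass that only rewrites C-visible pointer loads and
stores never sees it:

    #include <string.h>

    void vulnerable(const char *input)
    {
        char buf[16];
        /* a long input overflows buf into the saved frame pointer and
           return address - neither of which exists in the AST, so
           neither is ever encrypted by a C-level transformation */
        strcpy(buf, input);
    }

    int main(int argc, char **argv)
    {
        if (argc > 1)
            vulnerable(argv[1]);
        return 0;
    }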


3. In section 3.4.1 you say about statically initialized data that:

   "[...] we modify the initialization code emitted by the compiler
    (stuff that runs before main()) to re-initialize statically
    initialized pointers with values encrypted with the current
    process's key."

Can you clarify what initialization code the compiler emits before
main()? As far as i know, on entry only the dynamic linker, library
initialization code and some statically linked-in object code (the
various crt*.o files and whatever they call in turn) get to run before
main() - none of this is emitted by the compiler, at least not for
each executable as you made it sound.

If your initializer code runs only after the dynamic linker's entry
point, then there will or might be pointers in static data that are
used before your code gets to run - those are obviously not covered by
your protection (i.e., PG does not provide 100% coverage).
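
A minimal sketch of the ordering hazard (assuming, since the paper does
not say, that the PG re-initialization runs as early startup code
alongside other initializers): any constructor that runs before it uses
the static pointer while it is still in plaintext, i.e. unprotected:

    #include <stdio.h>

    static const char *msg = "static data"; /* statically initialized
                                               pointer */

    /* gcc runs this before main(); if it runs before the PG
       re-initialization pass, 'msg' is used here while still
       unencrypted - or worse, a PG-compiled reader would 'decrypt'
       the plaintext value into garbage */
    __attribute__((constructor))
    static void early_init(void)
    {
        puts(msg);
    }

    int main(void)
    {
        puts(msg);
        return 0;
    }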


4. As mentioned above, section 3.4.3 admits that there are still ways to
   modify non-encrypted pointers in the current implementation (beyond
   the information leaking attacks i mentioned). To me it also means that
   not all pointer stores/loads are protected, only those visible at
   the C language level (refer to the problematic pointers pointed out in
   2). It also raises the question of what kind of performance impact PG
   will have once all these omissions are rectified (more on your
   performance evaluation below).


5. In section 3.4.4 you talk about mixed-mode code (PG vs. non-PG). You
   seem to be focused on marking function parameters for use by PG or
   non-PG code, but you do not mention what happens with pointers stored
   in data structures that are used by both kinds of code (a sketch of
   this hazard follows below). Do/can you mark such structure members
   with __std_ptr_mode_on__? Also, what happens with functions that take
   format strings and hence accept arguments of variable types (i.e.,
   pointers and non-pointers): do you parse such format strings and
   convert the pointer arguments accordingly, or do you turn off PG
   altogether for such code? What happens with system calls that take
   pointers? You mention in the paper that you have not created a PG
   version of glibc, so are all pointers passed to system calls
   unprotected? What happens with system calls that do not go through
   glibc (there are applications that do this)?
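
   A minimal sketch of the shared-structure problem (my own example;
   how PG actually handles it is exactly what is being asked): a
   PG-compiled application stores a function pointer into a structure
   that a non-PG consumer - here the kernel itself - reads verbatim:

       #include <signal.h>
       #include <string.h>
       #include <unistd.h>

       static void on_sigint(int sig)
       {
           (void)sig;
           write(1, "caught\n", 7);
       }

       int main(void)
       {
           struct sigaction sa;
           memset(&sa, 0, sizeof sa);
           /* under PG this store would presumably be encrypted... */
           sa.sa_handler = on_sigint;
           /* ...but the kernel reads the struct as-is and would then
              deliver SIGINT to the encrypted, i.e. wrong, address */
           sigaction(SIGINT, &sa, NULL);
           pause();
           return 0;
       }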

   In the same section you suddenly introduce the notion of 'hashed
   pointers' without explaining what they are and how PG uses them.
   Can you elaborate on this?

   Finally, i am wondering how you plan to implement pointer mode tracking
   in the compiler, or more precisely, why you have to do it in the compiler
   only and not at runtime (in the latter case you would have to extend the
   pointer representation and open a whole can of worms).


6. In section 5 you admit that you do not in fact have a PG protected
   glibc and hence heap pointers are not protected at all. This calls
   into question the seriousness of your security and performance
   testing (especially since you compare your results to mature
   solutions, which cannot be said of PG yet).


7. In section 5 you also say that

   "Unfortunately, it is not possible to use testing and
    experimentation to show non-bypassability: testing can
    only show bypassability. Non-bypassability must be
    established by inspection. Our argument is that:"

This is a welcome change of opinion, as you have finally realized that
your claims about StackGuard's experimentally 'proven' non-bypassability
are also bogus (more on this later). So let's take a look at your
arguments about PG's non-bypassability:

   "2. Usefully corrupting a pointer requires pointing it at a
    specific location."

This is false: the hijacked pointer may very well point to any member
of a set of suitable locations (e.g., any GOT entry that is used later,
any member of a linked list, etc), not one specific address.

   "3. Under PointGuard protection, a pointer cannot be corrupted
    to point to a specific location without knowing the secret key."

This is correct provided the implementation is bug-free - something
that cannot be verified until you actually release PG.

   "4. Learning the secret key requires either obtaining the secret
    key directly, or cryptanalysis against a sample pointer value."

These methods are called information leaking, as discussed above. The
term 'cryptanalysis' is bogus here, as it makes the attack sound like
an expensive operation, whereas all it takes is knowing the valid
pointer value (something an attacker can observe on a test system)
and xor'ing it against the leaked one.
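
A minimal sketch of this 'cryptanalysis' (all values hypothetical):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uintptr_t known  = 0x080484b0;  /* plaintext pointer value,
                                           observed on a test system */
        uintptr_t key    = 0xdeadbeef;  /* the victim's secret key */
        uintptr_t leaked = known ^ key; /* what the info leak exposes */

        /* a single xor of known plaintext against leaked ciphertext
           recovers the per-process key */
        printf("recovered key: %#lx\n",
               (unsigned long)(leaked ^ known));
        return 0;
    }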

   "5. Obtaining the secret key directly would require corrupting a
    pointer precisely, which begs the question (see Section 3.4.2)."

It depends. If the page holding the key directly follows another mapped
page (i.e., no unmapped gap separates them) and there is a buffer in the
preceding page(s) that can be leaked, then the leak may/will expose the
key as well - did you take countermeasures against this (you did not say
anything about it in the paper)?
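
A minimal sketch of such an adjacency leak (the fixed layout and the
4096-byte page size are assumptions for illustration only):

    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>

    int main(void)
    {
        /* two adjacent pages: a leakable buffer page immediately
           followed by the page holding the 'secret' key */
        char *pages = mmap(NULL, 8192, PROT_READ | PROT_WRITE,
                           MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (pages == MAP_FAILED)
            return 1;

        char *buf = pages;
        char *key = pages + 4096;
        memcpy(key, "KEY!", 5);
        memset(buf, 'A', 4096);  /* buffer full: no terminating NUL */

        /* a leak that prints the buffer as a C string runs off its
           end and dumps the start of the key page too */
        printf("%.4100s\n", buf);
        return 0;
    }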

   "6. Obtaining a sample of ciphertext (an encrypted pointer) would
    require either corrupting a pointer precisely (which begs the
    question) or a program that leaks pointer values (which is highly
    unusual)."

The latter claim ("highly unusual") is unsubstantiated - what is the
basis for it? Neither your paper nor anything you reference presents
research data on this. Also, papers have been published recently on
this very topic ([1] and [2]), so it seems we are just beginning to
see the real nature of information leaking (this has also been pointed
out in the PaX ASLR paper [3]).


8. In section 6 you present performance evaluation data. The fundamental
   problem with it is of course that PG has apparently not been finished
   yet (something you do not make clear there), therefore any claims about
   its impact are to be taken with a grain of salt.

   There are more problems here, however. First, you use the very latest
   member of the Intel CPU family (the P4-M), which is known to have
   special microarchitectural changes that provide better performance at
   the same clock speed than its predecessors. It would have been more
   interesting to see performance data on a variety of CPU families (P5,
   P2, P3 and the various AMD cores). If you do not have the necessary
   hardware at your disposal, i can offer to do some testing on a P3
   myself, just make your test binaries available.

   Second, gcc is known not to produce the best code for the P4, so your
   negative performance impact highlights the shortcomings of gcc rather
   than providing any meaningful data on PG's performance. These tests
   should be repeated once gcc has matured, or you should have run them
   on older CPU families where gcc is known to generate reasonably good
   code (there you would be more likely to measure PG's real impact and
   not compiler/CPU artifacts).

   Third, there is related work ([4] and [5], both of which predate PG
   by years and which you failed to reference) that appears to show a
   more realistic performance impact of function pointer encryption
   (something PG does not yet seem to do universally).


9. In section 7.1 you say that:

   "A developer can port an application to these safer dialects in a few
    hours or days, where as PointGuard was designed to allow a developer
    to compile & protect millions of lines of code in a few hours or day."

Since you admit earlier that PG requires programmer intervention (as it
is not possible to have a pure PG system right now), i doubt a programmer
can compile (port) millions of lines of code in a day.


10. In section 7.2 you claim that:

   "The main limitation is that this defense can be bypassed, because
    suitable attack payload code (effectively "exec(sh)")) is almost
    always resident in victim program address spaces, and so pointer
    corruption is all that is necessary for the determined attacker
    to succeed."

Where is this "exec(sh)" supposed to be 'almost always'? Can you
substantiate this claim? Also, it is not always enough to execute a shell
(i would even say it rarely is): you often have to change UIDs, break out
of a chroot, etc - code that is even less likely to be present in a given
program's address space.

Next, you make certain claims about PaX [6] (please observe the proper
capitalization) without providing any reference to our project - why?
What is worse, however, is that your claims are false and/or misleading.

PaX does not provide a mere non-executable heap as you state; it provides
proper separation between writable and executable pages (this is called
the NOEXEC feature [7]). Contrary to your claim, this feature is not
bypassable: you cannot introduce new executable code into the target's
address space (the special case mentioned in [8] is trivial to handle
with ACL systems or grsecurity's safe chroot() in a read-only directory).
If you still believe that there is a way to bypass NOEXEC then please let
me know, because it must be a bug i want to correct. Your claim about
StackGuard's non-bypassability stands in stark contrast to the several
papers on its failings ([9], [10]) and even your own advisories for
ImmunixOS ([11]).

You also fail to substantiate your claims about the performance of PaX.
My best guess is that you are referring to a very old and long outdated
paper, not the current implementation. For your information, NOEXEC has
no performance impact on alpha, i386 (when SEGMEXEC is used, which is
the default [12]), parisc, sparc and sparc64, and a small impact on ppc.
I am curious why you cited this information when you have already been
made aware of the current situation ([13]).


11. In section 7.3 you claim that:

   "PAX also incorporates ASLR (Address Space Layout Randomization) which
    can be viewed as the dual of PointGuard: rather than randomizing
    pointers, ASLR randomizes the location of key memory objects."

This is a false claim: ASLR does the exact same thing to pointers as PG.
Think about it: if you randomize all memory regions, then all pointers
into these regions are necessarily randomized as well (see the sketch at
the end of this point). There is a difference in the amount of
randomization, the number of differently randomized regions and the
classes of randomized pointers: on the first count PG is better (at
least at first sight; if you examine real-life exploit situations you
will realize that an attacker will likely need to guess pointers from
more than one region at once, hence the total randomness to be guessed
is at least 32 bits, more likely 40 or more), on the latter two PaX is.
You further claim that:

  "[...] there is residual risk of attackers exploiting adjacency and
   approximate memory location."

Why would PG be immune to these kinds of attacks? My understanding is that
this class of exploit techniques does not need to know memory addresses,
hence it will work against both ASLR and PG. You also claim that:

   "Sekar et al [3] have a new implementation of this concept that
    randomizes more elements of the address spacelayout, which may
    make it harder to bypass than PAX/ASLR."

This is misleading, because Address Obfuscation is vulnerable to the
exact same information leaking problem as ASLR or PG; otherwise an
attacker has to guess addresses (if he needs any, that is), and there
is no (deterministic) way around that.

The extra measures AO takes are meaningless because they introduce so
little randomization and/or can be bypassed by simply sending multiple
copies of the attack payload. You also failed to point out that AO does
not fully randomize the address space: it leaves the main executable and
the dynamic linker at fixed addresses, making them a nice target for
return-to-libc style attacks (this is an inherent flaw of their
technique, as they try to do everything in userland, whereas these two
files are mapped by the kernel itself and, without changes to the
dynamic linker, cannot be randomized from userland).
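
As for the sketch promised above: randomizing the regions necessarily
randomizes every pointer into them. Run this trivial program twice under
ASLR and the printed pointer values all change, which is precisely the
effect PG achieves by encryption:

    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        int stack_var;
        void *heap_ptr = malloc(16);

        /* under ASLR the stack, heap and library mappings move between
           runs, so the pointer values themselves are randomized */
        printf("stack: %p  heap: %p  libc data: %p\n",
               (void *)&stack_var, heap_ptr, (void *)stdin);

        free(heap_ptr);
        return 0;
    }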


12. Table 6 in section 7 contains some bogus information:

    - as discussed above, the performance impact of NOEXEC is 0%, not
      the 10-30% you quote.

    - you state that ASLR is 'probably' bypassable without explaining
      what that is supposed to mean, especially in contrast to AO and PG,
      which you say are 'maybe' bypassable - as i pointed out above,
      every randomization based method is vulnerable to information
      leaking, and in fact PG is the worst in this regard, as a leak of
      any single pointer reveals all the randomness, whereas under PaX
      or AO you would need leaks of pointers into different regions to
      learn it all.

    - StackGuard is bypassable, as has been shown numerous times in the
      past ([9], [10]).

    - the PG performance claim is meaningless because of the issues
      discussed above.


In conclusion, i have to ask how your paper managed to get through the
USENIX peer-review process with so many misleading, unsubstantiated or
false claims and omissions.


References:

[1] http://marc.theaimsgroup.com/?l=bugtraq&m=105941103709264&w=2
[2] http://phrack.org/show.php?p=61&a=6 section 4.4
[3] http://pageexec.virtualave.net/docs/aslr.txt
[4] http://www.eng.iastate.edu/abstracts/facultymenu.asp?PI=tyagi
    (Encoded Program Counter: Self-Protection from Buffer Overflow
     Attacks)
[5] http://link.springer.de/link/service/series/0558/bibs/2513/25130025.htm
[6] http://pageexec.virtualave.net/
[7] http://pageexec.virtualave.net/docs/noexec.txt
[8] http://pageexec.virtualave.net/docs/pax.txt
[9] http://www.phrack.org/show.php?p=56&a=5
[10] http://www1.corest.com/common/showdoc.php?idx=242&idxseccion=11
[11] http://marc.theaimsgroup.com/?l=stackguard&m=106042055816779&w=2
[12] http://pageexec.virtualave.net/docs/segmexec.txt
[13] http://marc.theaimsgroup.com/?l=stackguard&m=104812282808922&w=2
