Message-ID: <87ziy5q3cy.fsf@doppelsaurus.mobileactivedefense.com>
Date: Sun, 22 Nov 2015 18:46:21 +0000
From: Rainer Weikusat <rweikusat@...ileactivedefense.com>
To: Dmitry Vyukov <dvyukov@...gle.com>
Cc: Rainer Weikusat <rweikusat@...ileactivedefense.com>,
Jason Baron <jbaron@...mai.com>,
Al Viro <viro@...iv.linux.org.uk>,
David Miller <davem@...emloft.net>,
LKML <linux-kernel@...r.kernel.org>,
David Howells <dhowells@...hat.com>,
netdev <netdev@...r.kernel.org>,
syzkaller <syzkaller@...glegroups.com>,
Kostya Serebryany <kcc@...gle.com>,
Alexander Potapenko <glider@...gle.com>,
Sasha Levin <sasha.levin@...cle.com>,
Eric Dumazet <edumazet@...gle.com>
Subject: Re: Use-after-free in ppoll
Dmitry Vyukov <dvyukov@...gle.com> writes:
> On Sun, Nov 22, 2015 at 3:32 PM, Rainer Weikusat
> <rweikusat@...ileactivedefense.com> wrote:
>> Dmitry Vyukov <dvyukov@...gle.com> writes:
>>> Hello,
>>>
>>> On commit f2d10565b9bdbb722bd43e6e1a759eeddb9645c8 (Nov 20).
>>>
>>> The following program triggers use-after-free:
>>>
>>> // autogenerated by syzkaller (http://github.com/google/syzkaller)
>>> #include <syscall.h>
>>> #include <string.h>
>>> #include <stdint.h>
>>> #include <pthread.h>
>>>
>>> void *thread(void *p)
>>> {
>>>   syscall(SYS_write, (long)p, 0x2000278ful, 0x1ul, 0, 0, 0);
>>>   return 0;
>>> }
>>
>> [...]
>>
>>
>>>   long r1 = syscall(SYS_socketpair, 0x1ul, 0x3ul, 0x0ul,
>>
>> [...]
>>
>>>   long r5 = syscall(SYS_close, r2, 0, 0, 0, 0, 0);
>>>   pthread_t th;
>>>   pthread_create(&th, 0, thread, (void*)(long)r3);
>>
>> [...]
>>
>>>   long r21 = syscall(SYS_ppoll, 0x20000ffful, 0x3ul, 0x20000ffcul, 0x20000ffdul, 0x8ul, 0);
>>>   return 0;
>>> }
>>
>> That's one of the already known sequences for triggering this issue:
[...]
> I have not read the code. But I just want to point out that all 3
> reports are different. For example, in the first one, ppoll both frees
> the object and then accesses it. That is, it is not the write that
> frees the object.
The call trace is always the same:
[ 2672.994366] [<ffffffff812ca0fa>] __asan_load4+0x6a/0x70
[ 2672.994366] [<ffffffff81126832>] do_raw_spin_lock+0x22/0x220
[ 2672.994366] [<ffffffff821d6061>] _raw_spin_lock_irqsave+0x51/0x60
[ 2672.994366] [<ffffffff8110d748>] remove_wait_queue+0x18/0x80
[ 2672.994366] [<ffffffff812fddab>] poll_freewait+0x7b/0x130
[ 2672.994366] [<ffffffff8130063c>] do_sys_poll+0x4dc/0x860
[ 2672.994366] [<ffffffff81300eb9>] SyS_ppoll+0x1a9/0x310
And if you look at the poll implementation, the important part is this
(fs/select.c, do_sys_poll):

	fdcount = do_poll(nfds, head, &table, end_time);
	poll_freewait(&table);
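
The second of these calls, poll_freewait, walks every wait queue entry
recorded during the first. For reference, it looks roughly like this in
kernels of this era (paraphrased, not a verbatim quote):

void poll_freewait(struct poll_wqueues *pwq)
{
	struct poll_table_page *p = pwq->table;
	int i;

	/* entries stored inline in the poll_wqueues ... */
	for (i = 0; i < pwq->inline_index; i++)
		free_poll_entry(pwq->inline_entries + i);
	/* ... and entries on overflow pages */
	while (p) {
		struct poll_table_entry *entry;
		struct poll_table_page *old;

		entry = p->entry;
		do {
			entry--;
			free_poll_entry(entry);
		} while (entry > p->entries);
		old = p;
		p = p->next;
		free_page((unsigned long)old);
	}
}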
do_poll calls the poll routines of the polled file descriptors, and
these enqueue entries onto wait queues via the poll table's wait
callback. For poll, that callback is the __pollwait routine in
fs/select.c:
static void __pollwait(struct file *filp, wait_queue_head_t *wait_address,
		       poll_table *p)
{
	struct poll_wqueues *pwq = container_of(p, struct poll_wqueues, pt);
	struct poll_table_entry *entry = poll_get_entry(pwq);

	if (!entry)
		return;
	entry->filp = get_file(filp);
	entry->wait_address = wait_address;
	entry->key = p->_key;
	init_waitqueue_func_entry(&entry->wait, pollwake);
	entry->wait.private = pwq;
	add_wait_queue(wait_address, &entry->wait);
}
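
(A file's ->poll method doesn't call __pollwait directly: it goes
through the poll_wait helper from include/linux/poll.h, which, roughly
as of these kernels, is just

static inline void poll_wait(struct file *filp,
			     wait_queue_head_t *wait_address, poll_table *p)
{
	if (p && p->_qproc && wait_address)
		p->_qproc(filp, wait_address, p);
}

with _qproc pointing to __pollwait, as set up by poll_initwait.)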
Because of the close, this routine gets called with the peer_wait
wait_queue_head of the closed socket of the socket pair, that is, of
the peer of the socket being polled, as wait_address argument (see the
unix_dgram_poll excerpt below). And poll_freewait calls free_poll_entry
for every entry on the poll table:
static void free_poll_entry(struct poll_table_entry *entry)
{
	remove_wait_queue(entry->wait_address, &entry->wait);
	fput(entry->filp);
}
By the time free_poll_entry runs, however, the wait_address points to
freed memory: after the corresponding file descriptor was closed, the
only thing keeping the socket it belonged to alive was the reference
held by the other socket, and unix_dgram_sendmsg dropped that reference
upon detecting a dead peer.
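
Condensed, the generated program boils down to something like the
following sketch (my reconstruction, not the verbatim reproducer; note
that the type argument 0x3 above is SOCK_RAW, which unix_create treats
as SOCK_DGRAM):

#include <sys/socket.h>
#include <pthread.h>
#include <poll.h>
#include <unistd.h>

static int send_fd;

static void *writer(void *unused)
{
	char c = 0;

	/* unix_dgram_sendmsg finds the peer SOCK_DEAD and drops the
	 * last reference keeping it alive */
	write(send_fd, &c, 1);
	return NULL;
}

int main(void)
{
	int sv[2];
	struct pollfd pfd;
	pthread_t th;

	socketpair(AF_UNIX, SOCK_DGRAM, 0, sv);
	send_fd = sv[1];

	close(sv[0]);	/* the peer is now only kept alive by sv[1] */
	pthread_create(&th, NULL, writer, NULL);

	pfd.fd = sv[1];
	pfd.events = POLLOUT;
	/* unix_dgram_poll enqueues on the dead peer's peer_wait queue;
	 * if the write wins the race, poll_freewait afterwards calls
	 * remove_wait_queue on freed memory */
	poll(&pfd, 1, 10);

	pthread_join(th, NULL);
	return 0;
}

In practice, this needs to run in a loop to hit the race window.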