Message-ID: <469C86EB.8090501@sw.ru>
Date: Tue, 17 Jul 2007 13:07:55 +0400
From: Kirill Korotaev <dev@...ru>
To: Andrew Morton <akpm@...ux-foundation.org>
CC: Pavel Emelianov <xemul@...nvz.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
devel@...nvz.org
Subject: Re: [Devel] Re: [PATCH] Fix user struct leakage with locked IPC shm segment
Andrew Morton wrote:
> On Mon, 16 Jul 2007 16:24:12 +0400
> Pavel Emelianov <xemul@...nvz.org> wrote:
>
>
>>When a user locks an IPC shmem segment with the SHM_LOCK ctl and the
>>segment is already locked, the shmem_lock() function returns 0.
>>After this the subsequent code leaks the existing user struct:
>
>
> I'm curious. For the past few months, people@...nvz.org have discovered
> (and fixed) an ongoing stream of obscure but serious and quite
> long-standing bugs.
thanks a lot :@)
> How are you discovering these bugs?
Not sure what to answer :) Just trying to do our best.
Pavel chased this particular bug for about three months, after a single
uid leak in a container was detected by beancounters' kernel memory accounting...
>>== ipc/shm.c: sys_shmctl() ==
>> ...
>> err = shmem_lock(shp->shm_file, 1, user);
>> if (!err) {
>>         shp->shm_perm.mode |= SHM_LOCKED;
>>         shp->mlock_user = user;
>> }
>> ...
>>==
>>
>>Other results of this are:
>>1. the new shp->mlock_user is not get-ed (no reference is taken), so it will
>>   point to freed memory when the task dies.
>
>
> That sounds fairly serious - can this lead to memory corruption and crashes?
Yes, it can. According to Pavel, when the shmem segment is destroyed it
puts the mlock_user pointer, which can already be stale by then.
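
For the record, a minimal sketch of one way to close the leak in the snippet
quoted above (an illustration of the idea only, not necessarily the patch that
was merged; shp, user and shmem_lock() are the same names as in the quoted
sys_shmctl() code):

== ipc/shm.c: sys_shmctl(), SHM_LOCK path (sketch) ==
 ...
 err = shmem_lock(shp->shm_file, 1, user);
 /* take ownership only if the segment was not locked before;
  * otherwise the already charged mlock_user would be leaked
  * and then silently overwritten */
 if (!err && !(shp->shm_perm.mode & SHM_LOCKED)) {
         shp->shm_perm.mode |= SHM_LOCKED;
         shp->mlock_user = user;
 }
 ...
==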
Kirill