Message-ID: <47F10DF7.5010702@colorfullife.com>
Date: Mon, 31 Mar 2008 18:14:47 +0200
From: Manfred Spraul <manfred@...orfullife.com>
To: Pavel Emelyanov <xemul@...nvz.org>
CC: Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Andrew Morton <akpm@...ux-foundation.org>
Subject: Re: [RFC, PATCH] fix SEM_UNDO with namespaces
Pavel Emelyanov wrote:
> Manfred Spraul wrote:
>
>> Hi,
>>
>> the attached patch should fix the combination of CLONE_NEWIPC with
>> shared sysv undo structures (the common case, just
>> sys_unshare(CLONE_NEWIPC)):
>> lookup_undo() now locates the undo array based on both semid and the
>> namespace pointer.
>>
>
> If you start using any IPC object and then call unshare with CLONE_NEWIPC,
> then it's your problem, not the kernel's.
>
The result is kernel memory corruption, and kernel memory corruption
is always the kernel's problem.
The code assumed that a semaphore id is globally unique. With
namespaces, that is no longer true.
If two semaphore arrays exist with the same id but different sizes,
then semops cause memory corruption: the undo structure contains one
adjustment element for each semaphore in the array it was allocated
for, so a semop on the larger array writes past the end of the
allocation.
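Roughly, the fix makes the namespace pointer part of the lookup key.
A minimal sketch, not the actual patch (the ns member and the field
names are illustrative):

static struct sem_undo *lookup_undo(struct sem_undo_list *ulp,
				    struct ipc_namespace *ns, int semid)
{
	struct sem_undo *un;

	/* Walk the per-process undo list and match on both the
	 * semaphore id and the ipc namespace, so an undo structure
	 * allocated for an array in one namespace is never reused
	 * for a same-id array in another namespace.
	 */
	for (un = ulp->proc_list; un; un = un->proc_next)
		if (un->semid == semid && un->ns == ns)
			return un;
	return NULL;
}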
> I agree that we should probably destroy this one when the task calls
> unshare, but trying to keep this list relevant is useless.
>
A very tricky question: assume a process with two threads. The undo
structure is shared between them, as required by the Open Group
standard. Now one thread calls unshare(CLONE_NEWIPC). What should
happen? We cannot destroy the undo structure; the other thread might
still be interested in it.
If we allow sys_unshare() with CLONE_NEWIPC but without CLONE_SYSVSEM
for multithreaded processes, then we must handle this case.
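For illustration, the scenario is roughly the following (userspace
sketch, error handling omitted; whether the unshare() call should be
accepted at all is exactly the open question):

#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>
#include <sys/ipc.h>
#include <sys/sem.h>
#include <unistd.h>

static int semid;

static void *thread_a(void *arg)
{
	/* Creates an undo entry on the undo list that is shared with
	 * thread B; the entry must survive until the process exits. */
	struct sembuf op = { .sem_num = 0, .sem_op = 1,
			     .sem_flg = SEM_UNDO };
	semop(semid, &op, 1);
	sleep(1);
	return NULL;
}

static void *thread_b(void *arg)
{
	/* Leaves the IPC namespace, but still shares the sysv undo
	 * list with thread A - the old undo entry cannot simply be
	 * thrown away. */
	if (unshare(CLONE_NEWIPC) == -1)
		perror("unshare");
	sleep(1);
	return NULL;
}

int main(void)
{
	pthread_t a, b;

	semid = semget(IPC_PRIVATE, 1, IPC_CREAT | 0600);
	pthread_create(&a, NULL, thread_a, NULL);
	pthread_create(&b, NULL, thread_b, NULL);
	pthread_join(a, NULL);
	pthread_join(b, NULL);
	return 0;
}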
--
Manfred