Message-ID: <67bb03e5-f79c-1815-e2bf-949c67047418@colorfullife.com>
Date: Mon, 8 Nov 2021 19:34:34 +0100
From: Manfred Spraul <manfred@...orfullife.com>
To: "Eric W. Biederman" <ebiederm@...ssion.com>
Cc: Alexander Mikhalitsyn <alexander.mikhalitsyn@...tuozzo.com>,
linux-kernel@...r.kernel.org,
Andrew Morton <akpm@...ux-foundation.org>,
Davidlohr Bueso <dave@...olabs.net>,
Greg KH <gregkh@...uxfoundation.org>,
Andrei Vagin <avagin@...il.com>,
Pavel Tikhomirov <ptikhomirov@...tuozzo.com>,
Vasily Averin <vvs@...tuozzo.com>,
Alexander Mikhalitsyn <alexander@...alicyn.com>,
stable@...r.kernel.org
Subject: Re: [RFC] shm: extend forced shm destroy to support objects from
several IPC nses (simplified)
Hi Eric,
On 11/7/21 20:51, Eric W. Biederman wrote:
> Manfred Spraul <manfred@...orfullife.com> writes:
>
>>
>>> +
>>> + /* Guarantee shp lives after task_lock is dropped */
>>> + ipc_getref(&shp->shm_perm);
>>> +
>> task_lock() doesn't help: As soon as shm_creator is set to NULL,
>> IPC_RMID won't acquire task_lock() anymore.
>>
>> Thus shp can disappear before we arrive at this ipc_getref.
>>
>> [Yes, I think I have introduced this bug. ]
>>
>> Corrected version attached.
>>
>>
[...]
>> + /* 2) unlink */
>> + list_del_init(&shp->shm_clist);
>> +
[...]
>> + /*
>> + * 5) get a reference to the namespace.
>> + * The refcount could be already 0. If it is 0, then
>> + * the shm objects will be freed by free_ipc_work().
>> + */
>> + ns = get_ipc_ns_not_zero(ns);
> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> Isn't this increment also too late? Doesn't this need to move up
> by ipc_rcu_getref while shp is still on the list?
Yes, thanks.
Updated patch attached.
> Assuming the code is running in parallel with shm_exit_ns: after removal
> from shm_clist, shm_destroy can run to completion, shm_exit_ns can run
> to completion, and the ipc namespace can be freed.
>
> Eric
--
Manfred
View attachment "0001-shm-extend-forced-shm-destroy-to-support-objects-fro.patch" of type "text/x-patch" (12950 bytes)