Message-ID: <f4f08bc4-bdf8-cfd7-7453-e512ff0bbc67@suse.com>
Date: Mon, 10 May 2021 12:26:22 +0200
From: Varad Gautam <varad.gautam@...e.com>
To: Davidlohr Bueso <dbueso@...e.de>,
Manfred Spraul <manfred@...orfullife.com>
CC: linux-kernel@...r.kernel.org,
Matthias von Faber <matthias.vonfaber@...-tech.de>,
stable@...r.kernel.org,
Christian Brauner <christian.brauner@...ntu.com>,
Oleg Nesterov <oleg@...hat.com>,
"Eric W. Biederman" <ebiederm@...ssion.com>,
Andrew Morton <akpm@...ux-foundation.org>,
James Morris <jamorris@...ux.microsoft.com>,
Serge Hallyn <serge@...lyn.com>
Subject: Re: [PATCH v3] ipc/mqueue: Avoid relying on a stack reference past
its expiry
On 5/10/21 3:10 AM, Davidlohr Bueso wrote:
> On 2021-05-08 12:23, Manfred Spraul wrote:
>> Hi Varad,
>>
>> On 5/7/21 3:38 PM, Varad Gautam wrote:
>>> @@ -1005,11 +1022,9 @@ static inline void __pipelined_op(struct wake_q_head *wake_q,
>>> struct ext_wait_queue *this)
>>> {
>>> list_del(&this->list);
>>> - get_task_struct(this->task);
>>> -
>>> + wake_q_add(wake_q, this->task);
>>> /* see MQ_BARRIER for purpose/pairing */
>>> smp_store_release(&this->state, STATE_READY);
>>> - wake_q_add_safe(wake_q, this->task);
>>> }
>>> /* pipelined_send() - send a message directly to the task waiting in
>>
>> First, I was too fast: I had assumed that wake_q_add() before
>> smp_store_release() would be a potential lost wakeup.
>
> Yeah you need wake_up_q() to actually wake anything up.
>
>>
>> As __pipelined_op() is called within spin_lock(&info->lock), and as
>> wq_sleep() will reread this->state after acquiring
>> spin_lock(&info->lock), I do not see a bug anymore.
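
For reference, the waiter side that makes this safe looks roughly as follows; this is a
simplified sketch of wq_sleep(), reconstructed from memory rather than quoted verbatim
from ipc/mqueue.c:

	/* Simplified sketch of the waiter loop in wq_sleep(): */
	for (;;) {
		/* no barrier needed, we still hold info->lock here */
		__set_current_state(TASK_INTERRUPTIBLE);

		spin_unlock(&info->lock);
		schedule_hrtimeout_range_clock(timeout, 0, HRTIMER_MODE_ABS,
					       CLOCK_REALTIME);

		if (READ_ONCE(ewp->state) == STATE_READY) {
			/* see MQ_BARRIER for purpose/pairing */
			smp_acquire__after_ctrl_dep();
			return 0;
		}

		spin_lock(&info->lock);
		/*
		 * Holding info->lock again: a concurrent __pipelined_op()
		 * has either completed (we observe STATE_READY here, no
		 * extra barrier needed) or has not started yet and still
		 * sees us on the wait list.
		 */
		if (READ_ONCE(ewp->state) == STATE_READY) {
			spin_unlock(&info->lock);
			return 0;
		}
		/* ... signal and timeout handling, otherwise loop ... */
	}
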
>
> Right, and when I proposed this version of the fix I was mostly focusing on STATE_READY
> being set as the last operation, but the fact of the matter is we had moved to the
> wake_q_add_safe() version for two reasons:
>
> (1) Ensuring that ->state = STATE_READY is set only after the reference count has been
> taken, to avoid racing with exit. In mqueue's original use of wake_q we were relying on
> the barrier implied by the wake_q_add() call to avoid reordering of the state store, but
> that turned out to be insufficient, hence the explicit smp_store_release().
>
> (2) Preventing a potential lost wakeup when the blocked task is already queued for wakeup
> by another task (the failed cmpxchg case in wake_q_add): that is why we need to set the
> return condition (->state = STATE_READY) before adding the task to the wake_q (see the
> sketch below).
>
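
To make both points concrete: the ordering that addresses them is the one the code had
before this hunk (and which v1 keeps). Simplified sketch, based on the removed lines in
the diff above rather than a full quote of __pipelined_op():

	list_del(&this->list);

	/* (1) take a reference so a racing exit cannot free the task */
	get_task_struct(this->task);

	/* (2) publish the wakeup condition before queueing the task;
	 * see MQ_BARRIER for purpose/pairing */
	smp_store_release(&this->state, STATE_READY);

	/* consumes the reference taken above, even if the task was
	 * already queued by another waker */
	wake_q_add_safe(wake_q, this->task);
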
> But I'm not seeing how race (2) can happen in mqueue. The race was always theoretical to
> begin with, with the exception of rwsems [1], where the wakee task could actually end up in
> the waker's wake_q without blocking.
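
The queueing side that makes race (2) theoretically possible is sketched below, roughly
from memory of kernel/sched/core.c (not verbatim); in mqueue the scenario is excluded by
info->lock, as noted above:

	/* Rough sketch of __wake_q_add(): */
	static bool __wake_q_add(struct wake_q_head *head, struct task_struct *task)
	{
		struct wake_q_node *node = &task->wake_q;

		/*
		 * If ->wake_q is already non-NULL, the task has been queued
		 * by someone else; that someone delivers the wakeup and the
		 * current caller must not count on a second one.
		 */
		smp_mb__before_atomic();
		if (cmpxchg_relaxed(&node->next, NULL, WAKE_Q_TAIL))
			return false;

		*head->lastp = node;
		head->lastp = &node->next;
		return true;
	}

	/*
	 * Theoretical lost wakeup if the condition is set only after queueing:
	 *   waker A: wake_q_add(task)   - task queued by A
	 *   waker B: wake_q_add(task)   - cmpxchg fails, B relies on A's wakeup
	 *   waker A: wake_up_q()        - task wakes, B's condition not yet set,
	 *                                 task goes back to sleep
	 *   waker B: sets condition     - too late, nobody wakes the task again
	 */
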
>
> So all in all I now agree that we should keep the order of how we currently have things,
> just to be on the safer side, if nothing else.
>
Considering that moving the wake_q_add() call in v2/v3 has the potential to cause lost
wakeups, as has shown up in other cases, I would argue for merging the approach from v1,
i.e. first factoring out the race [1], as the path of least surprises. I will resurrect
it for a v4.
[1] https://lore.kernel.org/lkml/20210504155534.17270-1-varad.gautam@suse.com/
Thanks,
Varad
> [1] https://lore.kernel.org/lkml/1543495830-2644-1-git-send-email-xieyongji@baidu.com
>
> Thanks,
> Davidlohr
>
--
SUSE Software Solutions Germany GmbH
Maxfeldstr. 5
90409 Nürnberg
Germany
HRB 36809, AG Nürnberg
Geschäftsführer: Felix Imendörffer