Message-ID: <20170810044005.GA2907@andrea>
Date: Thu, 10 Aug 2017 06:40:05 +0200
From: Andrea Parri <parri.andrea@...il.com>
To: Prateek Sood <prsood@...eaurora.org>, peterz@...radead.org,
mingo@...hat.com
Cc: sramana@...eaurora.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] osq_lock: fix osq_lock queue corruption
On Mon, Jul 31, 2017 at 10:54:50PM +0530, Prateek Sood wrote:
> Fix ordering of link creation between node->prev and prev->next in
> osq_lock(). Consider a case in which the optimistic spin queue is
> CPU6->CPU2, with CPU6 having acquired the lock.
>
> tail
> v
> ,-. <- ,-.
> |6| |2|
> `-' -> `-'
>
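To make the diagrams easier to follow: each box below is a per-CPU
struct optimistic_spin_node, the arrows are its ->next/->prev links,
and "tail" is the encoded CPU number stored in the lock itself.
Roughly (trimmed from include/linux/osq_lock.h):

        struct optimistic_spin_node {
                struct optimistic_spin_node *next, *prev;
                int locked;     /* 1 if lock acquired */
                int cpu;        /* encoded CPU # + 1 value */
        };

        struct optimistic_spin_queue {
                atomic_t tail;  /* encoded CPU # of the tail node,
                                 * or OSQ_UNLOCKED_VAL if empty */
        };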
> At this point, if CPU0 comes in to acquire the osq_lock, it will update
> the tail to point to CPU0.
>
> CPU2 CPU0
> ----------------------------------
>
> tail
> v
> ,-. <- ,-. ,-.
> |6| |2| |0|
> `-' -> `-' `-'
>
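For reference, the enqueue path in question is roughly the following
(simplified from kernel/locking/osq_lock.c; the uncontended fast path
and the spin loop are elided). The two stores marked A and B are the
ones the patch below orders:

        old  = atomic_xchg(&lock->tail, curr); /* become the new tail        */
        ...
        prev = decode_cpu(old);
        node->prev = prev;                     /* store A: set our ->prev    */
        WRITE_ONCE(prev->next, node);          /* store B: let prev find us  */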
> After the tail update, if CPU2 starts to unqueue itself from the
> optimistic spin queue, it will find the updated tail pointing to CPU0 and
> will set CPU2's node->next to NULL in osq_wait_next().
>
> unqueue-A
>
> tail
> v
> ,-. <- ,-. ,-.
> |6| |2| |0|
> `-' `-' `-'
>
> unqueue-B
>
> ->tail != curr && !node->next
>
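For context, osq_wait_next() is essentially the loop below (heavily
trimmed; acquire/relaxed variants and comments elided). CPU2 sits in it
because the tail now encodes CPU0 rather than CPU2 and, until CPU0's
prev->next store becomes visible, CPU2's own node->next is still NULL:

        /* wait until we have a successor, or until the tail is ours again */
        for (;;) {
                if (atomic_read(&lock->tail) == curr &&
                    atomic_cmpxchg(&lock->tail, curr, old) == curr)
                        break;                  /* we were last: no successor */

                if (node->next) {
                        next = xchg(&node->next, NULL); /* claim the successor */
                        if (next)
                                break;
                }

                cpu_relax();
        }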
> If the two stores in CPU0's path, node->prev = prev and
> WRITE_ONCE(prev->next, node), are reordered, then prev->next (prev here
> being CPU2's node) is updated to point to CPU0's node:
>
> tail
> v
> ,-. <- ,-. ,-.
> |6| |2| |0|
> `-' -> `-' -> `-'
>
> osq_wait_next()
> node->next <- 0
> xchg(node->next, NULL)
>
> tail
> v
> ,-. <- ,-. ,-.
> |6| |2| |0|
> `-' `-' `-'
>
> unqueue-C
>
> At this point, if the next instruction in CPU2's path,
> WRITE_ONCE(next->prev, prev);
> is committed before CPU0's node->prev = prev update, then
> CPU0's node->prev will point to CPU6's node.
>
> tail
> V----------. v
> ,-. <- ,-. ,-.
> |6| |2| |0|
> `-' `-' `-'
> `----------^
>
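The WRITE_ONCE() quoted above sits at the end of the unqueue slow path,
which is roughly (again trimmed from osq_lock()):

        next = osq_wait_next(lock, node, prev); /* step B: got CPU0's node     */
        if (!next)
                return false;

        WRITE_ONCE(next->prev, prev);           /* step C: CPU0's ->prev = CPU6 */
        WRITE_ONCE(prev->next, next);           /*         CPU6's ->next = CPU0 */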
> If CPU0's delayed node->prev = prev store is now committed, CPU0's
> node->prev changes back to CPU2's node. CPU2's node->next is currently
> NULL,
>
> tail
> v
> ,-. <- ,-. <- ,-.
> |6| |2| |0|
> `-' `-' `-'
> `----------^
>
> so if CPU0 gets into the unqueue path of osq_lock() it will keep
> spinning in an infinite loop, as the condition prev->next == node will
> never be true.
>
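Step A of that later unqueue attempt by CPU0 is where the corruption
finally bites; that step is roughly:

        /*
         * Step A: undo our prev->next assignment, re-reading ->prev as a
         * concurrent unqueue may hand us a new predecessor.
         */
        for (;;) {
                if (prev->next == node &&
                    cmpxchg(&prev->next, node, NULL) == node)
                        break;

                if (smp_load_acquire(&node->locked))
                        return true;            /* got the lock after all */

                cpu_relax();
                prev = READ_ONCE(node->prev);
        }

With CPU0's node->prev stuck on CPU2, whose ->next is NULL and stays
NULL, prev->next == node never holds and nothing ever rewrites
node->prev, hence the infinite spin described above.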
> Signed-off-by: Prateek Sood <prsood@...eaurora.org>
> ---
> kernel/locking/osq_lock.c | 13 +++++++++++++
> 1 file changed, 13 insertions(+)
>
> diff --git a/kernel/locking/osq_lock.c b/kernel/locking/osq_lock.c
> index a316794..9f4afa3 100644
> --- a/kernel/locking/osq_lock.c
> +++ b/kernel/locking/osq_lock.c
> @@ -109,6 +109,19 @@ bool osq_lock(struct optimistic_spin_queue *lock)
>
> prev = decode_cpu(old);
> node->prev = prev;
> +
> + /*
> + * osq_lock()                      unqueue
> + *
> + * node->prev = prev                osq_wait_next()
> + * WMB                              MB
> + * prev->next = node                next->prev = prev //unqueue-C
> + *
> + * Here 'node->prev' and 'next->prev' are the same variable and we need
> + * to ensure these stores happen in-order to avoid corrupting the list.
> + */
The pattern/behavior of interest remains somewhat implicit in this snippet
(for example, as you described above, a load "reading from" the store to
prev->next is implicit in that osq_wait_next()); however, I was unable to
come up with an alternative solution without complicating the comment.
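To spell the pattern out here (informally; node0, node2 and node6 are
just my names for the optimistic_spin_node of CPU0, CPU2 and CPU6):

  On CPU0, enqueueing behind CPU2:

        node0->prev = node2;                    /* S1 */
        smp_wmb();
        WRITE_ONCE(node2->next, node0);         /* S2 */

  On CPU2, unqueueing (steps B/C), with the xchg() in osq_wait_next()
  reading from S2:

        next = xchg(&node2->next, NULL);        /* next == node0 */
        WRITE_ONCE(next->prev, node6);          /* S3, same location as S1 */

  S1 and S3 store to the same location (node0->prev); the smp_wmb(),
  together with the xchg() reading from S2, ensures that S1 cannot be
  propagated after S3 and overwrite it.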
Reviewed-by: Andrea Parri <parri.andrea@...il.com>

  Andrea
> + smp_wmb();
> +
> WRITE_ONCE(prev->next, node);
>
> /*
> --
> Qualcomm India Private Limited, on behalf of Qualcomm Innovation Center, Inc.,
> is a member of Code Aurora Forum, a Linux Foundation Collaborative Project.
>