Message-ID: <20150213210913.GZ4166@linux.vnet.ibm.com>
Date: Fri, 13 Feb 2015 13:09:14 -0800
From: "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
To: Oleg Nesterov <oleg@...hat.com>
Cc: Peter Zijlstra <peterz@...radead.org>,
linux-kernel@...r.kernel.org, dave@...olabs.net,
waiman.long@...com, raghavendra.kt@...ux.vnet.ibm.com
Subject: Re: [PATCH] sched/completion: completion_done() should serialize
with complete()
On Thu, Feb 12, 2015 at 08:59:13PM +0100, Oleg Nesterov wrote:
> Commit de30ec47302c ("Remove unnecessary ->wait.lock serialization when
> reading completion state") was not correct: without the lock/unlock pair,
> code like stop_machine_from_inactive_cpu()'s
>
> 	while (!completion_done())
> 		cpu_relax();
>
> can return before complete() finishes its spin_unlock(), which still
> writes to this memory. Hence the spin_unlock_wait().
>
> While at it, change try_wait_for_completion() to use READ_ONCE().
>
> Reported-by: "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
> Reported-by: Davidlohr Bueso <dave@...olabs.net>
> Signed-off-by: Oleg Nesterov <oleg@...hat.com>
So I am having some difficulty reproducing the original problem, but
the patch passes rcutorture testing. So...
Tested-by: Paul E. McKenney <paulmck@...ux.vnet.ibm.com>
> --- x/kernel/sched/completion.c
> +++ x/kernel/sched/completion.c
> @@ -274,7 +274,7 @@ bool try_wait_for_completion(struct comp
> * first without taking the lock so we can
> * return early in the blocking case.
> */
> - if (!ACCESS_ONCE(x->done))
> + if (!READ_ONCE(x->done))
> return 0;
>
> spin_lock_irqsave(&x->wait.lock, flags);
> @@ -297,6 +297,11 @@ EXPORT_SYMBOL(try_wait_for_completion);
> */
> bool completion_done(struct completion *x)
> {
> - return !!ACCESS_ONCE(x->done);
> + if (!READ_ONCE(x->done))
> + return false;
> +
> + smp_rmb();
> + spin_unlock_wait(&x->wait.lock);
> + return true;
> }
> EXPORT_SYMBOL(completion_done);
>