Message-ID: <20170816112235.3acc59f2@gandalf.local.home>
Date: Wed, 16 Aug 2017 11:22:35 -0400
From: Steven Rostedt <rostedt@...dmis.org>
To: "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
Cc: linux-kernel@...r.kernel.org, mingo@...nel.org,
jiangshanlai@...il.com, dipankar@...ibm.com,
akpm@...ux-foundation.org, mathieu.desnoyers@...icios.com,
josh@...htriplett.org, tglx@...utronix.de, peterz@...radead.org,
dhowells@...hat.com, edumazet@...gle.com, fweisbec@...il.com,
oleg@...hat.com, Ingo Molnar <mingo@...hat.com>,
Will Deacon <will.deacon@....com>,
Alan Stern <stern@...land.harvard.edu>,
Andrea Parri <parri.andrea@...il.com>,
Linus Torvalds <torvalds@...ux-foundation.org>
Subject: Re: [PATCH v5 tip/core/rcu 4/9] completion: Replace
spin_unlock_wait() with lock/unlock pair
On Tue, 15 Aug 2017 09:16:29 -0700
"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com> wrote:
> There is no agreed-upon definition of spin_unlock_wait()'s semantics,
> and it appears that all callers could do just as well with a lock/unlock
> pair. This commit therefore replaces the spin_unlock_wait() call in
> completion_done() with spin_lock() followed immediately by spin_unlock().
> This should be safe from a performance perspective because the lock
> will be held only while the wakeup happens, which is really quick.
>
> Signed-off-by: Paul E. McKenney <paulmck@...ux.vnet.ibm.com>
> Cc: Ingo Molnar <mingo@...hat.com>
> Cc: Peter Zijlstra <peterz@...radead.org>
> Cc: Will Deacon <will.deacon@....com>
> Cc: Alan Stern <stern@...land.harvard.edu>
> Cc: Andrea Parri <parri.andrea@...il.com>
> Cc: Linus Torvalds <torvalds@...ux-foundation.org>
> [ paulmck: Updated to use irqsave based on 0day Test Robot feedback. ]
>
> diff --git a/kernel/sched/completion.c b/kernel/sched/completion.c
> index 13fc5ae9bf2f..c9524d2d9316 100644
> --- a/kernel/sched/completion.c
> +++ b/kernel/sched/completion.c
> @@ -300,6 +300,8 @@ EXPORT_SYMBOL(try_wait_for_completion);
> */
> bool completion_done(struct completion *x)
> {
> + unsigned long flags;
> +
> if (!READ_ONCE(x->done))
> return false;
>
> @@ -307,14 +309,9 @@ bool completion_done(struct completion *x)
> * If ->done, we need to wait for complete() to release ->wait.lock
> * otherwise we can end up freeing the completion before complete()
> * is done referencing it.
> - *
> - * The RMB pairs with complete()'s RELEASE of ->wait.lock and orders
> - * the loads of ->done and ->wait.lock such that we cannot observe
> - * the lock before complete() acquires it while observing the ->done
> - * after it's acquired the lock.
> */
> - smp_rmb();
> - spin_unlock_wait(&x->wait.lock);
> + spin_lock_irqsave(&x->wait.lock, flags);
> + spin_unlock_irqrestore(&x->wait.lock, flags);
> return true;
> }
> EXPORT_SYMBOL(completion_done);
For this patch:
Reviewed-by: Steven Rostedt (VMware) <rostedt@...dmis.org>
But I was looking at this function, and it is a little worrisome: its
documentation says it should return false if there are waiters and true
otherwise. But it can also return false when there are no waiters,
namely when complete() has not been called yet (or when a waiter has
already consumed the completion).
Basically we have:
wait_for_completion() {
	while (!done)
		wait();
	done--;
}

complete() {
	done++;
	wake_up_waiters();
}
Thus, completion_done() only returns true if a complete() has happened
and a wait_for_completion() has not yet consumed it. It does not return
true just because there are no waiters; if the complete() has not yet
occurred, it still returns false.
I looked at a couple of use cases, and this does not appear to be an
issue in practice, but the documentation of completion_done() does not
exactly match the implementation. Should that be addressed?
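To make the mismatch concrete, here is a minimal sketch (the variable
name is just for illustration) of the case the documentation does not
cover:

	DECLARE_COMPLETION_ONSTACK(c);

	/*
	 * No complete() yet and nobody waiting: the documentation
	 * implies "no waiters" means true, but ->done is still zero.
	 */
	WARN_ON(completion_done(&c));		/* returns false */

	complete(&c);
	WARN_ON(!completion_done(&c));		/* returns true */

	wait_for_completion(&c);		/* consumes ->done */
	WARN_ON(completion_done(&c));		/* false again, no waiters */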
Also, if complete_all() is called, then reinit_completion() must be
called before that completion can be reused. reinit_completion() has a
comment stating this, but there is no comment by complete_all() saying
so, which is where it really belongs. I'll send a patch to fix this
one.
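For reference, a minimal sketch of the reuse pattern in question (the
completion name is hypothetical):

	static DECLARE_COMPLETION(setup_done);

	/* Release every current waiter at once. */
	complete_all(&setup_done);

	/*
	 * Before this completion can be waited on again, it must be
	 * reinitialized; otherwise ->done remains set and subsequent
	 * wait_for_completion() calls return immediately.
	 */
	reinit_completion(&setup_done);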
-- Steve