Message-ID: <20170817082616.t34xbzbpdxd2lye2@gmail.com>
Date: Thu, 17 Aug 2017 10:26:16 +0200
From: Ingo Molnar <mingo@...nel.org>
To: "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
Cc: linux-kernel@...r.kernel.org, jiangshanlai@...il.com,
dipankar@...ibm.com, akpm@...ux-foundation.org,
mathieu.desnoyers@...icios.com, josh@...htriplett.org,
tglx@...utronix.de, peterz@...radead.org, rostedt@...dmis.org,
dhowells@...hat.com, edumazet@...gle.com, fweisbec@...il.com,
oleg@...hat.com, Ingo Molnar <mingo@...hat.com>,
Will Deacon <will.deacon@....com>,
Alan Stern <stern@...land.harvard.edu>,
Andrea Parri <parri.andrea@...il.com>,
Linus Torvalds <torvalds@...ux-foundation.org>
Subject: Re: [PATCH v5 tip/core/rcu 4/9] completion: Replace
spin_unlock_wait() with lock/unlock pair
* Paul E. McKenney <paulmck@...ux.vnet.ibm.com> wrote:
> There is no agreed-upon definition of spin_unlock_wait()'s semantics,
> and it appears that all callers could do just as well with a lock/unlock
> pair. This commit therefore replaces the spin_unlock_wait() call in
> completion_done() with spin_lock() followed immediately by spin_unlock().
> This should be safe from a performance perspective because the lock
> will be held only briefly, since the wakeup happens really quickly.
>
> Signed-off-by: Paul E. McKenney <paulmck@...ux.vnet.ibm.com>
> Cc: Ingo Molnar <mingo@...hat.com>
> Cc: Peter Zijlstra <peterz@...radead.org>
> Cc: Will Deacon <will.deacon@....com>
> Cc: Alan Stern <stern@...land.harvard.edu>
> Cc: Andrea Parri <parri.andrea@...il.com>
> Cc: Linus Torvalds <torvalds@...ux-foundation.org>
> [ paulmck: Updated to use irqsave based on 0day Test Robot feedback. ]
>
> diff --git a/kernel/sched/completion.c b/kernel/sched/completion.c
> index 13fc5ae9bf2f..c9524d2d9316 100644
> --- a/kernel/sched/completion.c
> +++ b/kernel/sched/completion.c
> @@ -300,6 +300,8 @@ EXPORT_SYMBOL(try_wait_for_completion);
> */
> bool completion_done(struct completion *x)
> {
> + unsigned long flags;
> +
> if (!READ_ONCE(x->done))
> return false;
>
> @@ -307,14 +309,9 @@ bool completion_done(struct completion *x)
> * If ->done, we need to wait for complete() to release ->wait.lock
> * otherwise we can end up freeing the completion before complete()
> * is done referencing it.
> - *
> - * The RMB pairs with complete()'s RELEASE of ->wait.lock and orders
> - * the loads of ->done and ->wait.lock such that we cannot observe
> - * the lock before complete() acquires it while observing the ->done
> - * after it's acquired the lock.
> */
> - smp_rmb();
> - spin_unlock_wait(&x->wait.lock);
> + spin_lock_irqsave(&x->wait.lock, flags);
> + spin_unlock_irqrestore(&x->wait.lock, flags);
> return true;
> }
> EXPORT_SYMBOL(completion_done);
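
For readers following the thread, the pattern at work here is an empty
critical section: taking and immediately releasing ->wait.lock guarantees
that any complete() still inside that lock has finished before
completion_done() returns true, so the caller may safely free the
completion. Below is a minimal user-space sketch of the same idea, with
pthread mutexes standing in for kernel spinlocks; fake_completion and the
other names are illustrative only, not kernel APIs, and there is no IRQ
analogue in pthreads, which is why the real patch needs the _irqsave
variant while the sketch does not.

/*
 * Sketch only: an empty lock/unlock pair waits for any in-flight
 * critical section to drain before the object is declared safe
 * to free.  pthreads stand in for spinlocks.
 */
#include <pthread.h>
#include <stdbool.h>

struct fake_completion {
	pthread_mutex_t lock;	/* stands in for ->wait.lock */
	bool done;		/* stands in for ->done */
};

/* Set ->done under the lock, as complete() does. */
static void fake_complete(struct fake_completion *x)
{
	pthread_mutex_lock(&x->lock);
	x->done = true;
	/* ... wake up any waiters here ... */
	pthread_mutex_unlock(&x->lock);
}

/*
 * If ->done is set, a completer may still hold the lock.  The
 * empty lock/unlock pair below flushes it out before we report
 * the completion as done.
 */
static bool fake_completion_done(struct fake_completion *x)
{
	/* GCC/Clang builtin used as a stand-in for READ_ONCE(). */
	if (!__atomic_load_n(&x->done, __ATOMIC_RELAXED))
		return false;

	pthread_mutex_lock(&x->lock);	/* empty critical section ... */
	pthread_mutex_unlock(&x->lock);	/* ... drains in-flight completers */
	return true;
}

int main(void)
{
	struct fake_completion c = { PTHREAD_MUTEX_INITIALIZER, false };

	fake_complete(&c);
	return fake_completion_done(&c) ? 0 : 1;
}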
I'm fine with this patch - as long as there are no performance regression
reports (which I suspect there won't be).
Would you like to carry this in the RCU tree, due to other changes depending on
this change - or can I pick this up into the scheduler tree?
Thanks,
Ingo