Message-ID: <20150216165156.GA20089@redhat.com>
Date: Mon, 16 Feb 2015 17:51:56 +0100
From: Oleg Nesterov <oleg@...hat.com>
To: Peter Zijlstra <peterz@...radead.org>
Cc: "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
linux-kernel@...r.kernel.org, dave@...olabs.net,
waiman.long@...com, raghavendra.kt@...ux.vnet.ibm.com,
Nicholas Mc Guire <der.herr@...r.at>,
Linus Torvalds <torvalds@...ux-foundation.org>,
mingo@...nel.org
Subject: Re: [PATCH] sched/completion: completion_done() should serialize
with complete()
On 02/16, Peter Zijlstra wrote:
>
> On Thu, Feb 12, 2015 at 08:59:13PM +0100, Oleg Nesterov wrote:
> > Commit de30ec47302c "Remove unnecessary ->wait.lock serialization when
> > reading completion state" was not correct: without lock/unlock, code like
> > that in stop_machine_from_inactive_cpu()
> >
> >	while (!completion_done())
> >		cpu_relax();
> >
> > can return before complete() finishes its spin_unlock() which writes to
> > this memory. Hence the spin_unlock_wait() in this patch.
> >
> > While at it, change try_wait_for_completion() to use READ_ONCE().
>
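To spell the race out: the lockless check can observe ->done != 0 and let the
caller free or reuse the completion while complete() is still inside its
spin_unlock_irqrestore(), which writes to that same memory. A rough sketch of
the two sides (trimmed from memory, not the exact kernel code):

	/* complete() side, kernel/sched/completion.c */
	void complete(struct completion *x)
	{
		unsigned long flags;

		spin_lock_irqsave(&x->wait.lock, flags);
		x->done++;
		__wake_up_locked(&x->wait, TASK_NORMAL, 1);
		/* still writes to *x after ->done is visible: */
		spin_unlock_irqrestore(&x->wait.lock, flags);
	}

	/* waiter side, as in stop_machine_from_inactive_cpu(),
	 * where the completion lives on the caller's stack */
	while (!completion_done(&done))		/* only reads ->done */
		cpu_relax();
	/* 'done' can now go out of scope / be reused, even though the
	 * spin_unlock_irqrestore() above may not have finished yet */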
> So I share Davidlohr's concern
Ah. I forgot to reply to Davidlohr's email. Sorry.
> if we should not simply revert that
> change; but given we've now gone over it in detail I suppose we should just
> keep the optimized version.
Yes, I was going to say that of course I won't argue if we simply revert
that commit. As he rightly pointed out, the lockless check doesn't make much
sense performance-wise.
However, this code needs a comment to explain why we can't simply check
->done and return; the unlock_wait() is more documentation than optimization.
But,
> I did add a comment to your patch; and queued the below for
> sched/urgent.
Thanks!
Now this logic is actually documented ;) unlock_wait() alone could confuse
the reader too.
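For the archives, the serialized check now has roughly this shape (a sketch
from memory; the exact code and the comment wording are in the queued patch):

	bool completion_done(struct completion *x)
	{
		if (!READ_ONCE(x->done))
			return false;

		/*
		 * ->done is set, but we still have to wait for complete()
		 * to drop ->wait.lock, otherwise the caller can free or
		 * reuse the completion while complete() is still writing
		 * to it via spin_unlock().
		 */
		spin_unlock_wait(&x->wait.lock);
		return true;
	}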
Oleg.