Message-ID: <20150626161415.GY3717@linux.vnet.ibm.com>
Date: Fri, 26 Jun 2015 09:14:28 -0700
From: "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
To: Peter Zijlstra <peterz@...radead.org>
Cc: Oleg Nesterov <oleg@...hat.com>, tj@...nel.org, mingo@...hat.com,
linux-kernel@...r.kernel.org, der.herr@...r.at, dave@...olabs.net,
riel@...hat.com, viro@...IV.linux.org.uk,
torvalds@...ux-foundation.org
Subject: Re: [RFC][PATCH 12/13] stop_machine: Remove lglock
On Fri, Jun 26, 2015 at 02:32:07PM +0200, Peter Zijlstra wrote:
> On Thu, Jun 25, 2015 at 07:51:46AM -0700, Paul E. McKenney wrote:
> > > So please humour me and explain how all this is far more complicated ;-)
> >
> > Yeah, I do need to get RCU design/implementation documentation put together.
> >
> > In the meantime, RCU's normal grace-period machinery is designed to be
> > quite loosely coupled. The idea is that almost all actions occur locally,
> > reducing contention and cache thrashing. But an expedited grace period
> > needs tight coupling in order to be able to complete quickly. Making
> > something that switches between loose and tight coupling in short order
> > is not at all simple.
>
> But expedited just means faster, we never promised that
> sync_rcu_expedited is the absolute fastest primitive ever.
Which is good, because given that it is doing something to each and
every CPU, it most assuredly won't in any way resemble the absolute
fastest primitive ever. ;-)
> So I really should go read the RCU code I suppose, but I don't get
> what's wrong with starting a forced quiescent state, then doing the
> stop_work spray, where each work will run the regular RCU tick thing to
> push it forwards.
>
> From my feeble memories, what I remember is that the last cpu to
> complete a GP on a leaf node will push the completion up to the next
> level, until at last we've reached the root of your tree and we can
> complete the GP globally.
That is true: the task that notices the last required quiescent state
will push the report up the tree and see that the grace period has ended.
If that task is not the grace-period kthread, it will then awaken
the grace-period kthread.
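Roughly, that report path has the shape of the toy userspace sketch
below. The names (qs_node, qsmask, grpmask, report_qs) are invented
for illustration and the real rcu_node code is considerably richer,
but the "whoever clears the last bit at the root wakes the kthread"
structure is the same.

#include <pthread.h>
#include <stdbool.h>

struct qs_node {
        pthread_mutex_t lock;
        unsigned long qsmask;           /* children/CPUs still owing a QS */
        unsigned long grpmask;          /* this node's bit in its parent */
        struct qs_node *parent;         /* NULL at the root */
};

static pthread_mutex_t gp_kthread_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t gp_kthread_wq = PTHREAD_COND_INITIALIZER;
static bool gp_completed;

/* Clear our bit; if we were the last at this level, go up one level.
 * The task clearing the last bit at the root ends the grace period. */
static void report_qs(struct qs_node *np, unsigned long mask)
{
        while (np) {
                pthread_mutex_lock(&np->lock);
                np->qsmask &= ~mask;
                if (np->qsmask) {               /* others still pending */
                        pthread_mutex_unlock(&np->lock);
                        return;
                }
                mask = np->grpmask;             /* our bit, one level up */
                pthread_mutex_unlock(&np->lock);
                np = np->parent;
        }
        pthread_mutex_lock(&gp_kthread_lock);   /* awaken the GP kthread */
        gp_completed = true;
        pthread_cond_signal(&gp_kthread_wq);
        pthread_mutex_unlock(&gp_kthread_lock);
}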
> To me it just makes more sense to have a single RCU state machine. With
> expedited we'll push it as fast as we can, but no faster.
Suppose that someone invokes synchronize_sched_expedited(), but there
is no normal grace period in flight. Then each CPU will note its own
quiescent state, but when it later goes to push that quiescent state up
the tree, it will see that there is no grace period in effect, and will
therefore not bother.
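In toy userspace form, with invented names (gp_in_progress, cpu_data)
standing in for the real grace-period sequence checks, the per-CPU
side is doing something like:

#include <stdatomic.h>
#include <stdbool.h>

static atomic_bool gp_in_progress;      /* model of "a GP is in flight" */

struct cpu_data {
        bool qs_pending;        /* this CPU owes a quiescent state */
        bool qs_passed;         /* ...and has since passed through one */
};

/* Model of the per-CPU tick-time check. */
static void maybe_report_qs(struct cpu_data *cdp)
{
        if (!cdp->qs_passed)
                return;
        if (!atomic_load(&gp_in_progress)) {
                /* Nothing to report against, so quietly forget it:
                 * the expedited caller never hears back. */
                cdp->qs_pending = false;
                return;
        }
        /* ...otherwise push the report up the tree... */
}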
OK, we could have synchronize_sched_expedited() tell the grace-period
kthread to start a grace period if one was not already in progress.
But that still isn't good enough, because the grace-period kthread will
take some time to initialize the new grace period, and if we hammer all
the CPUs before the initialization is complete, the resulting quiescent
states cannot be counted against the new grace period. (The reason for
this is that there is some delay between the actual quiescent state
and the time that it is reported, so we have to be very careful not
to incorrectly report a quiescent state from an earlier grace period
against the current grace period.)
OK, the grace-period kthread could tell synchronize_sched_expedited()
when it has finished initializing the grace period, though this is
starting to get a bit on the Rube Goldberg side. But this -still- is
not good enough, because even though the grace-period kthread has fully
initialized the new grace period, the individual CPUs are unaware of it.
And they will therefore continue to ignore any quiescent state that they
encounter, because they cannot prove that it actually happened after
the start of the current grace period.
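Both of those problems come down to the same bookkeeping rule: a
quiescent state counts only if the CPU had already noticed the current
grace period before that quiescent state happened. A minimal userspace
model of that rule, with invented names (global_gp_num, gp_seen):

#include <stdatomic.h>
#include <stdbool.h>

static atomic_ulong global_gp_num;      /* bumped once a new GP is initialized */

struct cpu_data {
        unsigned long gp_seen;          /* last GP this CPU has noticed */
        bool qs_passed;                 /* QS seen since gp_seen was recorded */
};

/* Each CPU first notices any new grace period... */
static void note_new_gp(struct cpu_data *cdp)
{
        unsigned long gp = atomic_load(&global_gp_num);

        if (cdp->gp_seen != gp) {
                cdp->gp_seen = gp;
                cdp->qs_passed = false; /* old QS must not count for new GP */
        }
}

/* ...and only then can a quiescent state be counted against it. */
static bool qs_counts(struct cpu_data *cdp)
{
        return cdp->qs_passed &&
               cdp->gp_seen == atomic_load(&global_gp_num);
}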
OK, we could have some sort of indication when all CPUs become aware
of the new grace period by having them atomically manipulate a global
counter. Presumably we have some flag indicating when this is and is
not needed so that we avoid the killer memory contention in the common
case where it is not needed. But this -still- isn't good enough, because
idle CPUs never will become aware of the new grace period -- by design,
as they are supposed to be able to sleep through an arbitrary number of
grace periods.
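As a userspace cartoon of that handshake (expedited_wants_ack and
cpus_acked are invented names; assume each CPU passes through here at
most once per grace period):

#include <stdatomic.h>
#include <stdbool.h>

#define NR_CPUS 64                      /* model only */

static atomic_bool expedited_wants_ack; /* flag to spare the common case */
static atomic_int cpus_acked;

/* Per-CPU path, run when the CPU notices the new grace period. */
static void maybe_ack_new_gp(void)
{
        if (!atomic_load(&expedited_wants_ack))
                return;                 /* common case: no global contention */
        atomic_fetch_add(&cpus_acked, 1);
        /* An idle CPU never gets here, by design, so the expedited
         * caller waiting on cpus_acked could wait forever. */
}

/* Expedited caller polls (or sleeps) until everyone has acknowledged. */
static bool all_cpus_aware(void)
{
        return atomic_load(&cpus_acked) == NR_CPUS;
}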
OK, so we could have some sort of indication when all non-idle CPUs
become aware of the new grace period. But there could be races where
an idle CPU suddenly becomes non-idle just after it was reported that
all non-idle CPUs were aware of the grace period. This would result
in a hang, because this newly non-idle CPU might not have noticed
the new grace period at the time that synchronize_sched_expedited()
hammers it, which would mean that this newly non-idle CPU would refuse
to report the resulting quiescent state.
OK, so the grace-period kthread could track and report the set of CPUs
that had ever been idle since synchronize_sched_expedited() contacted it.
But holy overhead Batman!!!
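Concretely, even the cheapest version of that tracking puts a shared
atomic update on every idle entry while an expedited request is
pending, which is exactly the overhead in question. A rough userspace
model with invented names (and at most 64 CPUs so that a single mask
suffices):

#include <stdatomic.h>

#define NR_CPUS 64                      /* model only */

static atomic_bool expedited_pending;
static atomic_ulong been_idle_mask;     /* CPUs idle since the request */

/* Idle-entry path: now every idle transition may touch shared state. */
static void idle_enter(int cpu)
{
        if (atomic_load(&expedited_pending))
                atomic_fetch_or(&been_idle_mask, 1UL << cpu);
        /* ...normal idle entry... */
}

/* Expedited caller: CPUs in the mask were idle and thus quiescent;
 * everyone else still has to be poked and has to report back. */
static unsigned long cpus_known_quiescent(void)
{
        return atomic_load(&been_idle_mask);
}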
And that is just one of the possible interactions with the grace-period
kthread. It might be in the middle of setting up a new grace period.
It might be in the middle of cleaning up after the last grace period.
It might be waiting for a grace period to complete, and the last quiescent
state was just reported, but hasn't propagated all the way up yet. All
of these would need to be handled correctly, and a number of them would
be as messy as the above scenario. Some might be even more messy.
I feel like there is a much easier way, but cannot yet articulate it.
I came across a couple of complications and a blind alley with it thus
far, but it still looks promising. I expect to be able to generate
actual code for it within a few days, but right now it is just weird
abstract shapes in my head. (Sorry, if I knew how to describe them,
I could just write the code! When I do write the code, it will probably
seem obvious and trivial, that being the usual outcome...)
Thanx, Paul