Message-ID: <20150629075645.GD19282@twins.programming.kicks-ass.net>
Date: Mon, 29 Jun 2015 09:56:46 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
Cc: Oleg Nesterov <oleg@...hat.com>, tj@...nel.org, mingo@...hat.com,
linux-kernel@...r.kernel.org, der.herr@...r.at, dave@...olabs.net,
riel@...hat.com, viro@...IV.linux.org.uk,
torvalds@...ux-foundation.org
Subject: Re: [RFC][PATCH 12/13] stop_machine: Remove lglock
On Fri, Jun 26, 2015 at 09:14:28AM -0700, Paul E. McKenney wrote:
> > To me it just makes more sense to have a single RCU state machine. With
> > expedited we'll push it as fast as we can, but no faster.
>
> Suppose that someone invokes synchronize_sched_expedited(), but there
> is no normal grace period in flight. Then each CPU will note its own
> quiescent state, but when it later might have tried to push it up the
> tree, it will see that there is no grace period in effect, and will
> therefore not bother.
Right, I did mention the force grace period machinery to make sure we
start one before poking :-)
> OK, we could have synchronize_sched_expedited() tell the grace-period
> kthread to start a grace period if one was not already in progress.
I had indeed forgotten that got farmed out to the kthread; on which, my
poor desktop seems to have spent ~140 minutes of its (most recent)
existence poking RCU things.
7 root 20 0 0 0 0 S 0.0 0.0 56:34.66 rcu_sched
8 root 20 0 0 0 0 S 0.0 0.0 20:58.19 rcuos/0
9 root 20 0 0 0 0 S 0.0 0.0 18:50.75 rcuos/1
10 root 20 0 0 0 0 S 0.0 0.0 18:30.62 rcuos/2
11 root 20 0 0 0 0 S 0.0 0.0 17:33.24 rcuos/3
12 root 20 0 0 0 0 S 0.0 0.0 2:43.54 rcuos/4
13 root 20 0 0 0 0 S 0.0 0.0 3:00.31 rcuos/5
14 root 20 0 0 0 0 S 0.0 0.0 3:09.27 rcuos/6
15 root 20 0 0 0 0 S 0.0 0.0 2:52.98 rcuos/7
Which is almost as much time as my konsole:
2853 peterz 20 0 586240 103664 41848 S 1.0 0.3 147:39.50 konsole
Which seems somewhat excessive. But who knows.
> OK, the grace-period kthread could tell synchronize_sched_expedited()
> when it has finished initializing the grace period, though this is
> starting to get a bit on the Rube Goldberg side. But this -still- is
> not good enough, because even though the grace-period kthread has fully
> initialized the new grace period, the individual CPUs are unaware of it.
Right, so over the weekend -- I had postponed reading this rather long
email for I was knackered -- I had figured that because we trickle the
GP completion up, you probably trickle the GP start down in a similar
way, and there might be 'interesting' things there.
> And they will therefore continue to ignore any quiescent state that they
> encounter, because they cannot prove that it actually happened after
> the start of the current grace period.
Right, badness :-)
Although here I'll once again go ahead and say something ignorant: how
come that's a problem? Surely if we know the kthread has finished
starting a GP, any one CPU issuing a full memory barrier (as would be
implied by switching to the stop worker) must then indeed observe that
global state, due to that transitivity thing.
That is, I'm having a wee bit of bother seeing how you'd need the
manipulation of global variables you allude to below.
> But this -still- isn't good enough, because
> idle CPUs never will become aware of the new grace period -- by design,
> as they are supposed to be able to sleep through an arbitrary number of
> grace periods.
Yes, I'm sure. Waking up seems like a serializing experience, though;
but I suppose that's not good enough if we wake up right before we
force-start the GP.
> I feel like there is a much easier way, but cannot yet articulate it.
> I came across a couple of complications and a blind alley with it thus
> far, but it still looks promising. I expect to be able to generate
> actual code for it within a few days, but right now it is just weird
> abstract shapes in my head. (Sorry, if I knew how to describe them,
> I could just write the code! When I do write the code, it will probably
> seem obvious and trivial, that being the usual outcome...)
Hehe, glad to have been of help :-)