Message-ID: <20150626123207.GZ19282@twins.programming.kicks-ass.net>
Date: Fri, 26 Jun 2015 14:32:07 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
Cc: Oleg Nesterov <oleg@...hat.com>, tj@...nel.org, mingo@...hat.com,
linux-kernel@...r.kernel.org, der.herr@...r.at, dave@...olabs.net,
riel@...hat.com, viro@...IV.linux.org.uk,
torvalds@...ux-foundation.org
Subject: Re: [RFC][PATCH 12/13] stop_machine: Remove lglock
On Thu, Jun 25, 2015 at 07:51:46AM -0700, Paul E. McKenney wrote:
> > So please humour me and explain how all this is far more complicated ;-)
>
> Yeah, I do need to get RCU design/implementation documentation put together.
>
> In the meantime, RCU's normal grace-period machinery is designed to be
> quite loosely coupled. The idea is that almost all actions occur locally,
> reducing contention and cache thrashing. But an expedited grace period
> needs tight coupling in order to be able to complete quickly. Making
> something that switches between loose and tight coupling in short order
> is not at all simple.
But expedited just means faster; we never promised that
sync_rcu_expedited is the absolute fastest primitive ever.
So I suppose I really should go read the RCU code, but I don't see
what's wrong with starting a forced quiescent state, then doing the
stop_work spray, where each work runs the regular RCU tick machinery to
push the grace period forward.
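
Very roughly the shape I have in mind, as a toy userspace sketch (not
actual kernel code; NR_CPUS, start_forced_gp() and report_qs() are made
up for illustration): the work spray decrements a pending count and the
last CPU to report completes the expedited GP.

/* Toy model: forced GP start, then one "work" per CPU reporting a QS;
 * the last reporter completes the grace period. */
#include <stdatomic.h>
#include <stdio.h>

#define NR_CPUS 4

static atomic_int cpus_pending;   /* CPUs that still owe a quiescent state */
static atomic_bool gp_completed;  /* set by the last CPU to report */

static void start_forced_gp(void)
{
        atomic_store(&gp_completed, 0);
        atomic_store(&cpus_pending, NR_CPUS);
}

/* What each sprayed stop_work would do: report this CPU's QS. */
static void report_qs(int cpu)
{
        if (atomic_fetch_sub(&cpus_pending, 1) == 1) {
                atomic_store(&gp_completed, 1);
                printf("cpu %d completed the expedited GP\n", cpu);
        }
}

int main(void)
{
        start_forced_gp();
        for (int cpu = 0; cpu < NR_CPUS; cpu++)  /* stand-in for the spray */
                report_qs(cpu);
        return atomic_load(&gp_completed) ? 0 : 1;
}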
From my feeble memories, what I remember is that the last cpu to
complete a GP on a leaf node will push the completion up to the next
level, until at last we've reached the root of your tree and we can
complete the GP globally.
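
To make that concrete, a hedged toy sketch of that propagation (not the
rcu_node code; struct node and report_up() are made up, and the real
thing obviously needs locking): each node counts outstanding reporters,
the last reporter at a leaf pushes up one level, and reaching the root
completes the GP globally.

/* Toy model: last reporter at each level propagates completion upward. */
#include <stdio.h>

struct node {
        struct node *parent;
        int pending;            /* children (or CPUs) yet to report */
};

static void report_up(struct node *n)
{
        while (n && --n->pending == 0) {
                if (!n->parent) {
                        printf("root reached, GP complete\n");
                        return;
                }
                n = n->parent;  /* last reporter pushes to the next level */
        }
}

int main(void)
{
        struct node root  = { .parent = NULL,  .pending = 2 };
        struct node leaf0 = { .parent = &root, .pending = 2 };
        struct node leaf1 = { .parent = &root, .pending = 2 };

        /* four CPUs, two per leaf, report their quiescent states */
        report_up(&leaf0); report_up(&leaf0);
        report_up(&leaf1); report_up(&leaf1);
        return 0;
}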
To me it just makes more sense to have a single RCU state machine. With
expedited we'll push it as fast as we can, but no faster.