Message-Id: <20170419153703.GQ3956@linux.vnet.ibm.com>
Date: Wed, 19 Apr 2017 08:37:03 -0700
From: "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
To: Peter Zijlstra <peterz@...radead.org>
Cc: linux-kernel@...r.kernel.org, mingo@...nel.org,
jiangshanlai@...il.com, dipankar@...ibm.com,
akpm@...ux-foundation.org, mathieu.desnoyers@...icios.com,
josh@...htriplett.org, tglx@...utronix.de, rostedt@...dmis.org,
dhowells@...hat.com, edumazet@...gle.com, dvhart@...ux.intel.com,
fweisbec@...il.com, oleg@...hat.com, bobby.prani@...il.com,
marc.zyngier@....com
Subject: Re: [PATCH v2 tip/core/rcu 0/13] Miscellaneous fixes for 4.12
On Wed, Apr 19, 2017 at 03:15:53PM +0200, Peter Zijlstra wrote:
> On Wed, Apr 19, 2017 at 06:02:45AM -0700, Paul E. McKenney wrote:
> > On Wed, Apr 19, 2017 at 01:28:45PM +0200, Peter Zijlstra wrote:
> > >
> > > So the thing Maz complained about is because KVM assumes
> > > synchronize_srcu() is 'free' when there is no srcu_read_lock() activity.
> > > This series 'breaks' that.
> > >
> > > I've not looked hard enough at the new SRCU to see if its possible to
> > > re-instate that feature.
> >
> > And with the fix I gave Maz, the parallelized version is near enough
> > to being free as well. It was just a stupid bug on my part: I forgot
> > to check for expedited when scheduling callbacks.
>
> Right, although for the old SRCU it was true for !expedited as well.
Which is all good fun until someone does a call_srcu() on each and
every munmap() syscall. ;-)
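
To spell out the distinction for the archives: synchronize_srcu() blocks
the caller for a full SRCU grace period, whereas call_srcu() just queues
a callback and returns immediately, which is what a per-munmap() user
would need.  A minimal sketch with made-up names (my_srcu, my_obj), not
taken from any real caller:

#include <linux/kernel.h>
#include <linux/slab.h>
#include <linux/srcu.h>

DEFINE_STATIC_SRCU(my_srcu);

struct my_obj {
        struct rcu_head rh;
        /* ... payload ... */
};

/* Synchronous removal: sleeps until all current SRCU readers finish. */
static void my_obj_remove_sync(struct my_obj *obj)
{
        /* ... unpublish obj so that new readers cannot find it ... */
        synchronize_srcu(&my_srcu);
        kfree(obj);
}

static void my_obj_free_cb(struct rcu_head *rh)
{
        kfree(container_of(rh, struct my_obj, rh));
}

/* Asynchronous removal: returns immediately, freeing is deferred. */
static void my_obj_remove_async(struct my_obj *obj)
{
        /* ... unpublish obj so that new readers cannot find it ... */
        call_srcu(&my_srcu, &obj->rh, my_obj_free_cb);
}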
> Just turns out the KVM memslots crud already uses
> synchronize_srcu_expedited().
>
> <rant>without a friggin' comment; hate @expedited</rant>
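(For the record, the pattern at issue is an updater that publishes a new
copy of a read-mostly structure and then waits for pre-existing SRCU
readers before freeing the old copy.  Below is a rough sketch with
made-up names, not the actual KVM memslot code, including the sort of
comment being asked for:)

#include <linux/slab.h>
#include <linux/srcu.h>

DEFINE_STATIC_SRCU(map_srcu);

struct my_map;                          /* hypothetical read-mostly structure */
static struct my_map __rcu *cur_map;

/* Reader side: cheap, may run concurrently with updates. */
static void my_map_read(void)
{
        struct my_map *map;
        int idx;

        idx = srcu_read_lock(&map_srcu);
        map = srcu_dereference(cur_map, &map_srcu);
        /* ... look things up in map ... */
        srcu_read_unlock(&map_srcu, idx);
}

/* Updater side: swap in a new copy, wait out the old readers. */
static void my_map_update(struct my_map *new_map)
{
        struct my_map *old_map = rcu_dereference_protected(cur_map, 1);

        rcu_assign_pointer(cur_map, new_map);
        /*
         * Expedited because this sits on a latency-sensitive update
         * path; the plain synchronize_srcu() would also be correct,
         * just slower.
         */
        synchronize_srcu_expedited(&map_srcu);
        kfree(old_map);
}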
And I won't even try to defend the old try_stop_cpus()-based expedited
algorithm in today's context, even if it did seem to be a good idea
at the time.  That said, back then the expectation was that expedited
grace periods would be used only for very rare boot-time configuration
changes, at which point who cares?  But there are a lot more expedited
use cases these days, so the implementation had to change.
But the current code is much better housebroken. ;-)
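
Coming back to the bug mentioned above, the fix boils down to checking
for a requested expedited grace period when scheduling callback
processing and skipping the usual delay in that case.  A rough sketch of
the idea, with illustrative names rather than the actual srcutree.c
code:

#include <linux/compiler.h>
#include <linux/workqueue.h>

/* Hypothetical interval (in jiffies) between callback scans. */
#define MY_SRCU_INTERVAL        1

struct my_srcu_state {
        struct delayed_work work;       /* processes queued callbacks */
        bool exp_gp_requested;          /* expedited GP pending? */
};

/*
 * Schedule callback processing.  The fix amounts to checking for a
 * pending expedited grace period and, if one has been requested,
 * dropping the usual delay so the callbacks (and thus the expedited
 * grace period) complete promptly.
 */
static void my_srcu_schedule_cbs(struct my_srcu_state *sp)
{
        unsigned long delay = READ_ONCE(sp->exp_gp_requested) ?
                              0 : MY_SRCU_INTERVAL;

        queue_delayed_work(system_wq, &sp->work, delay);
}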
Thanx, Paul