Message-ID: <20150624160851.GF3717@linux.vnet.ibm.com>
Date: Wed, 24 Jun 2015 09:09:04 -0700
From: "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
To: Peter Zijlstra <peterz@...radead.org>
Cc: Oleg Nesterov <oleg@...hat.com>, tj@...nel.org, mingo@...hat.com,
linux-kernel@...r.kernel.org, der.herr@...r.at, dave@...olabs.net,
riel@...hat.com, viro@...IV.linux.org.uk,
torvalds@...ux-foundation.org
Subject: Re: [RFC][PATCH 12/13] stop_machine: Remove lglock
On Wed, Jun 24, 2015 at 05:40:10PM +0200, Peter Zijlstra wrote:
> On Wed, Jun 24, 2015 at 08:27:19AM -0700, Paul E. McKenney wrote:
> > > The thing is, if we're stalled on a stop_one_cpu() call, the sync_rcu()
> > > is equally stalled. The sync_rcu() cannot wait more efficient than we're
> > > already waiting either.
> >
> > Ah, but synchronize_rcu() doesn't force waiting on more than one extra
> > grace period. With strictly queued mutex, you can end up waiting on
> > several.
>
> But you could fix that by replacing/augmenting the expedited ticket with
> gpnum/completed as used in get_state_synchronize_rcu()/cond_synchronize_rcu().
Yes, good point, that would be a way of speeding up the existing polling
loop in the case where the polling loop took longer than a normal
grace period.  It might also be a way to speed up the new "polling"
regime, but I am still beating up the counters.  ;-)
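Just to make sure we are talking about the same thing, here is an
untested sketch of that approach; "exp_mutex" and "exp_wait_sketch()"
are just stand-ins for whatever provides the strict queuing and the
expedited path, not the real code:

#include <linux/mutex.h>
#include <linux/rcupdate.h>

static DEFINE_MUTEX(exp_mutex);		/* placeholder for the queued mutex */

static void exp_wait_sketch(void)
{
	unsigned long gp_snap;

	gp_snap = get_state_synchronize_rcu();	/* snapshot ->gpnum/->completed */
	mutex_lock(&exp_mutex);			/* possibly a long, queued wait */
	cond_synchronize_rcu(gp_snap);		/* no-op if a GP already elapsed */
	/* ...otherwise do the expedited work here... */
	mutex_unlock(&exp_mutex);
}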
But if the mutex serializes everything unconditionally, then you have
already potentially waited for several grace periods' worth of time
before you get a chance to check the ticket, so the check doesn't help.
Or am I missing something subtle here?
It looks like I do need to use smp_call_function_single() and your
resched_cpu() because calling stop_one_cpu() sequentially is about
twice as slow as try_stop_cpus() in rcutorture runs of up to 16 CPUs.
But either way, your point about not stopping all the CPUs does hold.
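For reference, the sequential variant I was timing is roughly as
follows (simplified and untested; error handling, idle-CPU filtering,
and the other CPU-hotplug details are omitted):

#include <linux/cpu.h>
#include <linux/stop_machine.h>

static int synchronize_sched_expedited_cpu_stop(void *data)
{
	/* Reaching the stopper thread implies a context switch, hence
	 * a quiescent state; the barrier orders prior accesses. */
	smp_mb();
	return 0;
}

static void exp_stop_cpus_sequentially(void)
{
	int cpu;

	get_online_cpus();
	for_each_online_cpu(cpu)
		stop_one_cpu(cpu, synchronize_sched_expedited_cpu_stop, NULL);
	put_online_cpus();
}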
Thanx, Paul