Message-ID: <20150623143935.GI3892@linux.vnet.ibm.com>
Date: Tue, 23 Jun 2015 07:39:35 -0700
From: "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
To: Peter Zijlstra <peterz@...radead.org>
Cc: Oleg Nesterov <oleg@...hat.com>, tj@...nel.org, mingo@...hat.com,
linux-kernel@...r.kernel.org, der.herr@...r.at, dave@...olabs.net,
riel@...hat.com, viro@...IV.linux.org.uk,
torvalds@...ux-foundation.org
Subject: Re: [RFC][PATCH 12/13] stop_machine: Remove lglock
On Tue, Jun 23, 2015 at 12:55:48PM +0200, Peter Zijlstra wrote:
> On Tue, Jun 23, 2015 at 12:09:32PM +0200, Peter Zijlstra wrote:
> > We can of course slap a percpu-rwsem in, but I wonder if there's
> > anything smarter we can do here.
>
> Urgh, we cannot use percpu-rwsem here, because that would require
> percpu_down_write_trylock(), and I'm not sure we can get around the
> sync_sched() for that.
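
For reference, the percpu-rwsem API under discussion looks roughly like
the sketch below (illustrative names, not actual kernel code); the
write side is the part that implies a scheduler grace period, which is
why the percpu_down_write_trylock() mentioned above would be hard to
provide:

#include <linux/init.h>
#include <linux/percpu-rwsem.h>

static struct percpu_rw_semaphore demo_rwsem;	/* illustrative name */

static int __init demo_rwsem_init(void)
{
	return percpu_init_rwsem(&demo_rwsem);
}

/* Readers: cheap per-CPU bookkeeping in the common case. */
static void demo_reader(void)
{
	percpu_down_read(&demo_rwsem);
	/* ... read-side critical section ... */
	percpu_up_read(&demo_rwsem);
}

/* Writer: must wait for a grace period so that readers switch to the
 * slow path, which is what makes a nonblocking trylock awkward. */
static void demo_writer(void)
{
	percpu_down_write(&demo_rwsem);
	/* ... exclusive section ... */
	percpu_up_write(&demo_rwsem);
}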
>
> Now try_stop_cpus(), which requires the down_write_trylock() is used to
> implement synchronize_sched_expedited().
>
> Using sync_sched() to implement sync_sched_expedited would make me
> happy, but it does somewhat defeat the purpose.
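
To make the dependency concrete: at the time, the expedited grace
period was driven by try_stop_cpus(), whose trylock-like behaviour is
exactly what a percpu-rwsem replacement would need to mirror.  A much
simplified sketch (the callback and function names are made up for
illustration; this is not the actual kernel implementation):

#include <asm/barrier.h>
#include <linux/cpumask.h>
#include <linux/sched.h>
#include <linux/stop_machine.h>

/* Running this on a CPU via the stop machinery forces that CPU through
 * a context switch, i.e. a sched quiescent state. */
static int expedited_cpu_stop(void *unused)
{
	smp_mb();	/* order prior accesses against the grace period */
	return 0;
}

/* Trylock-style expedited grace period: if the stop machinery is busy
 * (try_stop_cpus() returns -EAGAIN), retry rather than block. */
static void demo_sync_sched_expedited(void)
{
	while (try_stop_cpus(cpu_online_mask,
			     expedited_cpu_stop, NULL) == -EAGAIN)
		cpu_relax();
}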
>
>
>
> Also, I think _expedited is used too eagerly, look at this:
>
> +void dm_sync_table(struct mapped_device *md)
> +{
> + synchronize_srcu(&md->io_barrier);
> + synchronize_rcu_expedited();
> +}
>
> sync_srcu() is slow already, why then bother with an
> sync_rcu_expedited() :/
Actually, this code was added in 2013, after the new variant of
synchronize_srcu() went in.  Last I checked, that variant is reasonably
fast in the common case (no readers and not too many concurrent
synchronize_srcu() calls on the same srcu_struct), especially on systems
with a small number of CPUs, courtesy of srcu_read_lock()'s and
srcu_read_unlock()'s read-side memory barriers.

So synchronize_rcu() really would be expected to have quite a bit higher
latency than synchronize_srcu(), which is presumably why the expedited
variant was used here.
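
To illustrate the asymmetry: SRCU puts its memory barriers on the read
side, so with no readers the update side has little to wait for.  A
minimal usage sketch (illustrative names, unrelated to the dm code
quoted above):

#include <linux/rcupdate.h>
#include <linux/slab.h>
#include <linux/srcu.h>

struct demo_data {
	int value;
};

DEFINE_STATIC_SRCU(demo_srcu);
static struct demo_data __rcu *demo_ptr;

/* Reader: the ordering cost lives here, in srcu_read_lock() and
 * srcu_read_unlock(), keeping the update side cheap when idle. */
static int demo_read(void)
{
	struct demo_data *p;
	int idx, val = -1;

	idx = srcu_read_lock(&demo_srcu);
	p = srcu_dereference(demo_ptr, &demo_srcu);
	if (p)
		val = p->value;
	srcu_read_unlock(&demo_srcu, idx);
	return val;
}

/* Updater (assumed single-threaded in this sketch): publish the new
 * version, wait for pre-existing readers, then free the old one. */
static void demo_update(int new_value)
{
	struct demo_data *newp, *oldp;

	newp = kmalloc(sizeof(*newp), GFP_KERNEL);
	if (!newp)
		return;
	newp->value = new_value;

	oldp = rcu_dereference_protected(demo_ptr, 1);
	rcu_assign_pointer(demo_ptr, newp);
	synchronize_srcu(&demo_srcu);	/* fast when there are no readers */
	kfree(oldp);
}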
Thanx, Paul