Message-Id: <20180626181937.GG3593@linux.vnet.ibm.com>
Date: Tue, 26 Jun 2018 11:19:37 -0700
From: "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
To: Peter Zijlstra <peterz@...radead.org>
Cc: linux-kernel@...r.kernel.org, mingo@...nel.org,
jiangshanlai@...il.com, dipankar@...ibm.com,
akpm@...ux-foundation.org, mathieu.desnoyers@...icios.com,
josh@...htriplett.org, tglx@...utronix.de, rostedt@...dmis.org,
dhowells@...hat.com, edumazet@...gle.com, fweisbec@...il.com,
oleg@...hat.com, joel@...lfernandes.org
Subject: Re: [PATCH tip/core/rcu 13/22] rcu: Fix grace-period hangs due to
race with CPU offline

On Tue, Jun 26, 2018 at 07:44:24PM +0200, Peter Zijlstra wrote:
> On Tue, Jun 26, 2018 at 10:10:39AM -0700, Paul E. McKenney wrote:
> > diff --git a/kernel/rcu/tree.h b/kernel/rcu/tree.h
> > index 3def94fc9c74..6683da6e4ecc 100644
> > --- a/kernel/rcu/tree.h
> > +++ b/kernel/rcu/tree.h
> > @@ -363,6 +363,10 @@ struct rcu_state {
> > const char *name; /* Name of structure. */
> > char abbr; /* Abbreviated name. */
> > struct list_head flavors; /* List of RCU flavors. */
> > +
> > + spinlock_t ofl_lock ____cacheline_internodealigned_in_smp;
>
> Are you really sure you didn't mean to use ____cacheline_aligned_in_smp
> ? This internode crap gives you full page alignment under certain rare
> configs.
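
For reference, the difference is roughly the following (a loose
paraphrase of the include/linux/cache.h definitions for CONFIG_SMP=y
kernels, not the verbatim source):

#define ____cacheline_aligned_in_smp \
	__attribute__((__aligned__(SMP_CACHE_BYTES)))

#define ____cacheline_internodealigned_in_smp \
	__attribute__((__aligned__(1 << (INTERNODE_CACHE_SHIFT))))

INTERNODE_CACHE_SHIFT normally falls back to L1_CACHE_SHIFT, but an
architecture may raise it; on x86 with CONFIG_X86_VSMP it is page
order, which is where the full page alignment you mention comes from.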

When I get done consolidating, there will only be one rcu_state structure
in the kernel.

On the other hand, the choice of ____cacheline_internodealigned_in_smp
was made a very long time ago, so this would not be a bad time to
discuss the pros and cons of a change.  There are six more of these in
kernel/rcu/tree.h: three in rcu_node, two in rcu_data, and another in
rcu_state.  The ones in rcu_node and especially in rcu_data (which is
per-CPU) would be quite a bit more painful from a memory-size viewpoint
than the two in rcu_state.

The initial reason for ____cacheline_internodealigned_in_smp was that
some of the fields can be accessed by random CPUs, while others are
used more locally, give or take our usual contention over the handling
of CPU numbers.  ;-)
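
To put a rough number on the rcu_data concern, here is a throwaway
userspace calculation (the structure size and CPU count are made up
for illustration; 64 bytes and 4096 bytes stand in for the usual x86
L1 line and the VSMP internode case):

/* Toy sketch, not kernel code: per-CPU cost of the two alignments. */
#include <stdio.h>

#define ALIGN_UP(x, a)	(((x) + (a) - 1) & ~((unsigned long)(a) - 1))

int main(void)
{
	unsigned long sz = 1000;	/* pretend sizeof(struct rcu_data) */
	unsigned long cpus = 256;	/* pretend NR_CPUS */

	printf("cacheline (64B) aligned:  %lu KB total\n",
	       ALIGN_UP(sz, 64) * cpus / 1024);
	printf("internode (4KB) aligned:  %lu KB total\n",
	       ALIGN_UP(sz, 4096) * cpus / 1024);
	return 0;
}

That comes out to 256 KB versus 1 MB for the same data, which is the
sort of difference that makes the per-CPU case the painful one.
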
Thanx, Paul