Message-Id: <20180626204033.GM3593@linux.vnet.ibm.com>
Date: Tue, 26 Jun 2018 13:40:33 -0700
From: "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
To: Peter Zijlstra <peterz@...radead.org>
Cc: linux-kernel@...r.kernel.org, mingo@...nel.org,
jiangshanlai@...il.com, dipankar@...ibm.com,
akpm@...ux-foundation.org, mathieu.desnoyers@...icios.com,
josh@...htriplett.org, tglx@...utronix.de, rostedt@...dmis.org,
dhowells@...hat.com, edumazet@...gle.com, fweisbec@...il.com,
oleg@...hat.com, joel@...lfernandes.org
Subject: Re: [PATCH tip/core/rcu 13/22] rcu: Fix grace-period hangs due to
race with CPU offline
On Tue, Jun 26, 2018 at 09:48:07PM +0200, Peter Zijlstra wrote:
> On Tue, Jun 26, 2018 at 11:19:37AM -0700, Paul E. McKenney wrote:
> > The initial reason for ____cacheline_internodealigned_in_smp was that
> > some of the fields can be accessed by random CPUs, while others are
> > used more locally, give or take our usual contention over the handling
> > of CPU numbers. ;-)
>
> So that whole internode thing only matters for the insane VSMP case,
> where they take node to mean a cluster node, not a NUMA node.
>
> VSMP is a networked shared-memory machine where, by necessity, the MESI
> protocol operates at page granularity.
>
> In general I tend to completely and utterly ignore that and let the VSMP
> people worry about things.
Sounds like I should queue a patch to replace them all with
____cacheline_aligned_in_smp.
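
For anyone following along: if memory serves, the two macros differ only
in the alignment constant.  Paraphrasing include/linux/cache.h from
memory (the real definitions carry additional #ifdef guards and
per-architecture overrides, so consult the tree for the authoritative
versions):

	#define ____cacheline_aligned_in_smp \
		__attribute__((__aligned__(SMP_CACHE_BYTES)))

	#define ____cacheline_internodealigned_in_smp \
		__attribute__((__aligned__(1 << (INTERNODE_CACHE_SHIFT))))

INTERNODE_CACHE_SHIFT matches L1_CACHE_SHIFT everywhere except under
CONFIG_X86_VSMP, where it is page-sized (a shift of 12, hence 4096-byte
alignment), which is exactly the page-granularity coherence Peter
describes above.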
Any objections?
(I don't consider this an emergency, so I would queue it for the merge
window following the next one.)
Thanx, Paul