Message-ID: <20140619205137.GK4904@linux.vnet.ibm.com>
Date: Thu, 19 Jun 2014 13:51:37 -0700
From: "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
To: Christoph Lameter <cl@...two.org>
Cc: Tejun Heo <tj@...nel.org>, David Howells <dhowells@...hat.com>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Oleg Nesterov <oleg@...hat.com>, linux-kernel@...r.kernel.org
Subject: Re: [PATCH RFC] percpu: add data dependency barrier in percpu
accessors and operations
On Thu, Jun 19, 2014 at 03:42:07PM -0500, Christoph Lameter wrote:
> On Tue, 17 Jun 2014, Paul E. McKenney wrote:
>
> > > Similar in swapon(). The percpu allocation is performed before access to
> > > the containing structure (via enable_swap_info).
> >
> > Those are indeed common use cases. However...
> >
> > There is code where one CPU writes to another CPU's per-CPU variables.
> > One example is RCU callback offloading, where a kernel thread (which
> > might be running anywhere) dequeues a given CPU's RCU callbacks and
> > processes them. The act of dequeuing requires write access to that
> > CPU's per-CPU rcu_data structure. And yes, atomic operations and memory
> > barriers are of course required to make this work.
>
> In that case special care needs to be taken to get this right. True.
>
> I typically avoid these scenarios by sending an IPI with a pointer to the
> data structure. The modification is then done by the CPU for which the
> per-cpu data is local.
>
> Maybe rewriting the code to avoid writing to other processors' percpu
> data would be the right approach?
Or just keep doing what I am doing. What exactly is the problem with it?
(Other than probably needing to clean up the cache alignment of some
of the per-CPU structures?)
Thanx, Paul
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/