Message-ID: <20110818144153.GA19920@redhat.com>
Date: Thu, 18 Aug 2011 16:41:53 +0200
From: Johannes Weiner <jweiner@...hat.com>
To: Valdis.Kletnieks@...edu
Cc: Greg Thelen <gthelen@...gle.com>,
Andrew Morton <akpm@...ux-foundation.org>,
linux-kernel@...r.kernel.org, linux-mm@...ck.org,
KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>,
Balbir Singh <bsingharora@...il.com>,
Daisuke Nishimura <nishimura@....nes.nec.co.jp>
Subject: Re: [PATCH] memcg: remove unneeded preempt_disable
On Thu, Aug 18, 2011 at 10:26:58AM -0400, Valdis.Kletnieks@...edu wrote:
> On Thu, 18 Aug 2011 11:38:00 +0200, Johannes Weiner said:
>
> > Note that on non-x86, these operations themselves actually disable and
> > reenable preemption each time, so you trade a pair of add and sub on
> > x86
> >
> > - preempt_disable()
> > __this_cpu_xxx()
> > __this_cpu_yyy()
> > - preempt_enable()
> >
> > with
> >
> > preempt_disable()
> > __this_cpu_xxx()
> > + preempt_enable()
> > + preempt_disable()
> > __this_cpu_yyy()
> > preempt_enable()
> >
> > everywhere else.
>
> That would be an unexpected race condition on non-x86, if you expected _xxx and
> _yyy to be done together without a preempt between them. Would take mere
> mortals forever to figure that one out. :)
That should be fine; we don't require the two counters to be perfectly
coherent with respect to each other, which is the justification for
this optimization in the first place.
But on non-x86, the read-modify-write that updates a single per-cpu
counter is itself made atomic by disabling preemption around it.
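
Roughly, the generic fallback works like this (a simplified sketch of
what the non-x86 this_cpu_add() path boils down to, not the exact
include/linux/percpu.h macros):

	/*
	 * Sketch of the generic this_cpu_add() fallback: the per-cpu
	 * read-modify-write is wrapped in its own preempt_disable()/
	 * preempt_enable() pair, so each individual update is atomic
	 * with respect to preemption, but nothing ties two consecutive
	 * updates together.
	 */
	#define this_cpu_add_sketch(pcp, val)		\
	do {						\
		preempt_disable();			\
		__this_cpu_add(pcp, val);		\
		preempt_enable();			\
	} while (0)

On x86 the same this_cpu_add() compiles to a single segment-prefixed
add instruction, which is why no preemption toggling is needed there.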