Message-ID: <Pine.LNX.4.64.0806091158020.30776@schroedinger.engr.sgi.com>
Date: Mon, 9 Jun 2008 12:00:36 -0700 (PDT)
From: Christoph Lameter <clameter@....com>
To: Rusty Russell <rusty@...tcorp.com.au>
cc: Mike Travis <travis@....com>,
Andrew Morton <akpm@...ux-foundation.org>,
linux-arch@...r.kernel.org, linux-kernel@...r.kernel.org,
David Miller <davem@...emloft.net>,
Eric Dumazet <dada1@...mosbay.com>,
Peter Zijlstra <peterz@...radead.org>
Subject: Re: [patch 04/41] cpu ops: Core piece for generic atomic per cpu
operations
On Fri, 6 Jun 2008, Rusty Russell wrote:
> > Also, the above cpu_local_wrap(...) adds:
> >
> > #define cpu_local_wrap(l) \
> > ({ \
> > 	preempt_disable(); \
> > 	(l); \
> > 	preempt_enable(); \
> > }) \
> >
> > ... and there isn't a non-preemption version that I can find.
>
> Yes, this should be fixed. I thought i386 had optimized versions pre-merge,
> but I was wrong (%gs for per-cpu came later, and no one cleaned up these naive
> versions). Did you want me to write them?
How can that be fixed? You have no atomic instruction that calculates the
per cpu address and performs the operation in one go. And as long as that
is the case you need to disable preemption. Otherwise you may increment
the per cpu variable of another processor, because the process was
rescheduled after the address was calculated but before the increment was
done.
> > One other distinction is CPU_INC increments an arbitrary sized variable
> > while local_inc requires a local_t variable. This may not make it usable
> > in all cases.
>
> You might be right, but note that local_t is 64 bit on 64-bit platforms. And
> speculation of possible use cases isn't a good reason to rip out working
> infrastructure :)
It's fundamentally broken because of the preemption issue. That is also
why local_t is rarely used.
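For comparison, roughly what the two interfaces look like at the use site
(the hits counter is made up for illustration; the CPU_INC spelling
follows the patch series under discussion and is not a mainline API):

	#include <asm/local.h>
	#include <linux/percpu.h>
	#include <linux/preempt.h>

	/* local_t today: the counter must be declared as local_t and
	 * touched only through the local_* accessors, and the address
	 * calculation still needs the preemption bracket. */
	static DEFINE_PER_CPU(local_t, hits);

	static void count_hit(void)
	{
		preempt_disable();
		local_inc(&__get_cpu_var(hits));
		preempt_enable();
	}

	/*
	 * CPU_INC as proposed in this series operates on an ordinary
	 * scalar of whatever width the variable has, e.g.:
	 *
	 *	static DEFINE_PER_CPU(unsigned long, hits);
	 *	...
	 *	CPU_INC(hits);
	 */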
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/