Message-ID: <4A2F40A3.5020602@kernel.org>
Date: Wed, 10 Jun 2009 14:12:03 +0900
From: Tejun Heo <tj@...nel.org>
To: cl@...ux-foundation.org
CC: linux-kernel@...r.kernel.org, David Howells <dhowells@...hat.com>,
Ingo Molnar <mingo@...e.hu>,
Rusty Russell <rusty@...tcorp.com.au>,
Eric Dumazet <dada1@...mosbay.com>, davem@...emloft.net
Subject: Re: [this_cpu_xx 01/11] Introduce this_cpu_ptr() and generic this_cpu_* operations
Hello,
cl@...ux-foundation.org wrote:
...
> The operations are guaranteed to be atomic vs preemption if they modify
> the scalar (unless they are prefixed by __ in which case they do not need
> to be). The calculation of the per cpu offset is also guaranteed to be atomic.
>
> this_cpu_read(scalar)
> this_cpu_write(scalar, value)
> this_cpu_add(scalar, value)
> this_cpu_sub(scalar, value)
> this_cpu_inc(scalar)
> this_cpu_dec(scalar)
> this_cpu_and(scalar, value)
> this_cpu_or(scalar, value)
> this_cpu_xor(scalar, value)
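
For reference, applied to a static per-cpu scalar these would be used
roughly as in the minimal sketch below (my_counter and update_counter
are hypothetical names, not taken from the patch):

	#include <linux/percpu.h>

	/* hypothetical static per-cpu counter */
	DEFINE_PER_CPU(int, my_counter);

	static int update_counter(void)
	{
		this_cpu_inc(my_counter);		/* preempt-safe increment */
		this_cpu_add(my_counter, 16);		/* preempt-safe add */
		return this_cpu_read(my_counter);	/* read this CPU's copy */
	}
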
Looks good to me. The only qualm I have is that I wish these macros
took a pointer instead of the symbol name directly. Currently that's
not possible because of the per_cpu__ prefix appending, but that should
go away with Rusty's patches, and then the same ops would be usable for
both static and dynamic per-cpu variables. One problem which may come
up with such a scheme is an arch+compiler combination that can't handle
the indirect dereference atomically. At any rate, it's a separate
issue and we can deal with it later.
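
For comparison, dynamic per-cpu data accessed through the new
this_cpu_ptr() would look roughly like the sketch below (my_stats and
the surrounding functions are made-up names); a pointer-taking variant
of the scalar ops would let the same calls cover both the static and
the dynamic case:

	#include <linux/percpu.h>
	#include <linux/errno.h>

	/* hypothetical per-cpu statistics structure */
	struct my_stats {
		unsigned long packets;
	};

	static struct my_stats *stats;	/* allocated with alloc_percpu() */

	static int my_stats_init(void)
	{
		stats = alloc_percpu(struct my_stats);
		return stats ? 0 : -ENOMEM;
	}

	static void my_stats_update(void)
	{
		/* pointer to this CPU's instance; caller deals with preemption */
		struct my_stats *s = this_cpu_ptr(stats);

		s->packets++;
	}
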
Thanks.
--
tejun