Message-Id: <200808261513.40586.nickpiggin@yahoo.com.au>
Date: Tue, 26 Aug 2008 15:13:40 +1000
From: Nick Piggin <nickpiggin@...oo.com.au>
To: Christoph Lameter <cl@...ux-foundation.org>
Cc: Peter Zijlstra <peterz@...radead.org>, paulmck@...ux.vnet.ibm.com,
Pekka Enberg <penberg@...helsinki.fi>,
Ingo Molnar <mingo@...e.hu>,
Jeremy Fitzhardinge <jeremy@...p.org>,
Andi Kleen <andi@...stfloor.org>,
"Pallipadi, Venkatesh" <venkatesh.pallipadi@...el.com>,
Suresh Siddha <suresh.b.siddha@...el.com>,
Jens Axboe <jens.axboe@...cle.com>,
Rusty Russell <rusty@...tcorp.com.au>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 2/2] smp_call_function: use rwlocks on queues rather than rcu
On Tuesday 26 August 2008 01:46, Christoph Lameter wrote:
> Peter Zijlstra wrote:
> > If we combine these two cases, and flip the counter as soon as we've
> > enqueued one callback, unless we're already waiting for a grace period
> > to end - which gives us a longer window to collect callbacks.
> >
> > And then the rcu_read_unlock() can do:
> >
> > if (dec_and_zero(my_counter) && my_index == dying)
> > raise_softirq(RCU)
> >
> > to fire off the callback stuff.
> >
> > /me ponders - there must be something wrong with that...
> >
> > Aaah, yes, the dec_and_zero is non-trivial because it's a distributed
> > counter. Bugger..
>
> Then let's make it per-cpu. If we get the cpu ops in, then dec_and_zero
> would be very cheap.
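
For concreteness, here is a minimal sketch of the two-index, per-cpu reader
counter scheme being discussed (all names below are made up for illustration;
this is not code from any posted patch, and it assumes read-side sections do
not migrate between CPUs):

	#include <linux/percpu.h>
	#include <linux/interrupt.h>
	#include <linux/preempt.h>
	#include <linux/compiler.h>

	struct rcu_sketch_ctr {
		long ctr[2];		/* readers per grace-period index */
	};
	static DEFINE_PER_CPU(struct rcu_sketch_ctr, rcu_sketch_ctr);
	static int rcu_cur;		/* index new readers join */
	static int rcu_dying = -1;	/* index we are waiting to drain */

	static inline int sketch_rcu_read_lock(void)
	{
		int idx;

		preempt_disable();		/* keep the reader on this CPU */
		idx = ACCESS_ONCE(rcu_cur);
		__get_cpu_var(rcu_sketch_ctr).ctr[idx]++;
		return idx;			/* caller remembers its index */
	}

	static inline void sketch_rcu_read_unlock(int idx)
	{
		/*
		 * The "distributed counter" problem: hitting zero here only
		 * means the *local* count drained.  The softirq still has to
		 * check every other CPU's counter before the old index can
		 * be declared dead.
		 */
		if (--__get_cpu_var(rcu_sketch_ctr).ctr[idx] == 0 &&
		    idx == ACCESS_ONCE(rcu_dying))
			raise_softirq(RCU_SOFTIRQ);
		preempt_enable();
	}
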
Let's be very careful before making rcu read locks costly. Any reduction
in grace periods would be great, but IMO RCU should not be used in cases
where performance depends on the freed data remaining in cache.
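
For comparison, on a !CONFIG_PREEMPT kernel the classic RCU read-side
primitives compile down to essentially nothing, which is the baseline any
counter-based scheme has to be measured against (rough sketch only):

	#include <linux/preempt.h>

	/*
	 * Roughly what classic RCU's read side amounts to without
	 * CONFIG_PREEMPT: no stores, no atomics, nothing to dirty a
	 * cacheline in the reader fast path.
	 */
	static inline void classic_rcu_read_lock(void)
	{
		preempt_disable();	/* no-op without CONFIG_PREEMPT */
	}

	static inline void classic_rcu_read_unlock(void)
	{
		preempt_enable();	/* likewise a no-op here */
	}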