Date:	Tue, 26 Aug 2008 16:07:00 +0200
From:	Peter Zijlstra <peterz@...radead.org>
To:	paulmck@...ux.vnet.ibm.com
Cc:	Christoph Lameter <cl@...ux-foundation.org>,
	Pekka Enberg <penberg@...helsinki.fi>,
	Ingo Molnar <mingo@...e.hu>,
	Jeremy Fitzhardinge <jeremy@...p.org>,
	Nick Piggin <nickpiggin@...oo.com.au>,
	Andi Kleen <andi@...stfloor.org>,
	"Pallipadi, Venkatesh" <venkatesh.pallipadi@...el.com>,
	Suresh Siddha <suresh.b.siddha@...el.com>,
	Jens Axboe <jens.axboe@...cle.com>,
	Rusty Russell <rusty@...tcorp.com.au>,
	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 2/2] smp_call_function: use rwlocks on queues rather
	than rcu

On Tue, 2008-08-26 at 06:43 -0700, Paul E. McKenney wrote:
> On Mon, Aug 25, 2008 at 05:51:32PM +0200, Peter Zijlstra wrote:
> > On Mon, 2008-08-25 at 10:46 -0500, Christoph Lameter wrote:
> > > Peter Zijlstra wrote:
> > > >
> > > > If we combine these two cases, and flip the counter as soon as we've
> > > > enqueued one callback, unless we're already waiting for a grace period
> > > > to end - which gives us a longer window to collect callbacks.
> > > > 
> > > > And then the rcu_read_unlock() can do:
> > > > 
> > > >   if (dec_and_zero(my_counter) && my_index == dying)
> > > >     raise_softirq(RCU)
> > > > 
> > > > to fire off the callback stuff.
> > > > 
> > > > /me ponders - there must be something wrong with that...
> > > > 
> > > > Aaah, yes, the dec_and_zero is non-trivial due to the fact that it's a
> > > > distributed counter. Bugger..
> > > 
> > > Then let's make it per-cpu. If we get the cpu ops in, then dec_and_zero
> > > would be very cheap.
> > 
> > Hmm, perhaps that might work for classic RCU, as that disables
> > preemption and thus the counters should always be balanced.
> 
> Unless you use a pair of global counters (like QRCU), you will still
> need to check a large number of counters for zero.  I suppose that one
> approach would be to do something like QRCU, but with some smallish
> number of counter pairs, each of which is shared by a moderate group of
> CPUs.  For example, for 4,096 CPUs, use 64 pairs of counters, each
> shared by 64 CPUs.  My guess is that the rcu_read_lock() overhead would
> make this be a case of "Holy overhead, Batman!!!", but then again, I
> cannot claim to be an expert on 4,096-CPU machines.

Right - while the local count will be balanced and will always end up at
zero, you have to check the remote counts for zero as well.

But after a counter flip, the dying counter will only reach zero once
per cpu.

So each cpu gets to tickle a softirq once per cycle. That softirq can
then check all remote counters, and kick off the callback list when it
finds them all zero.

Of course, this scan is very expensive - O(n^2) at worst, with each cpu
triggering a full scan until finally the last cpu is done.

We could optimize this by keeping a cpumask of the cpus found to have
nonzero counts - those found to be zero will stay zero, so we won't
have to look at them again.

Another option is to make use of a scanning hierarchy.

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
