Date:	Wed, 06 Aug 2008 07:30:13 +0200
From:	Manfred Spraul <manfred@...orfullife.com>
To:	paulmck@...ux.vnet.ibm.com
CC:	linux-kernel@...r.kernel.org, mingo@...e.hu,
	akpm@...ux-foundation.org, oleg@...sign.ru, dipankar@...ibm.com,
	rostedt@...dmis.org, dvhltc@...ibm.com, niv@...ibm.com
Subject: Re: [PATCH tip/core/rcu] classic RCU locking and memory-barrier cleanups

Hi Paul,

Paul E. McKenney wrote:
> This patch is in preparation for moving to a hierarchical
> algorithm to allow the very large SMP machines -- requested by some
> people at OLS, and there seem to have been a few recent patches in the
> 4096-CPU direction as well.

I thought about hierarchical RCU, but I never found the time to 
implement it.
Do you have a concept in mind?

Right now, I'm trying to understand the current code first - and some of
it doesn't make much sense to me.

There are three per-cpu lists:
->nxt
->cur
->done.

Obviously, there must be a quiescent state between cur and done.
But why does the code require a quiescent state between nxt and cur?
I think that's superfluous. The only thing that is required is that all
cpus have moved their callbacks from nxt to cur. That doesn't need a
quiescent state; this operation could be done from hard interrupt
context as well.
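
Just to make sure we mean the same thing, here is a simplified sketch of
the three stages (the struct name and the helper are made up for this
mail, not the actual rcu_data layout from kernel/rcuclassic.c):

struct rcu_head {			/* as in <linux/rcupdate.h> */
	struct rcu_head *next;
	void (*func)(struct rcu_head *head);
};

struct rcu_cpu_lists {			/* made-up name, not struct rcu_data */
	struct rcu_head *nxt;	/* call_rcu() enqueues here */
	struct rcu_head *cur;	/* waiting for the current grace period */
	struct rcu_head *done;	/* grace period over, callbacks may run */
	long qlen;		/* number of callbacks queued on this cpu */
};

/*
 * The ->nxt to ->cur handover is a purely local pointer splice; nothing
 * in it depends on this cpu being in a quiescent state, so it could run
 * from hard interrupt context just as well:
 */
static void rcu_move_nxt_to_cur(struct rcu_cpu_lists *rdp)
{
	if (rdp->cur == NULL) {		/* previous batch already handed on */
		rdp->cur = rdp->nxt;
		rdp->nxt = NULL;
	}
}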

Thus I think this should work:

1) A callback is inserted into ->nxt.
2) As soon as too many objects are sitting in the ->nxt lists, a new rcu 
cycle is started.
3) As soon as a cpu sees that a new rcu cycle has started, it moves its
callbacks from ->nxt to ->cur. No checks for hard_irq_count & friends
are necessary. Especially: same rule for _bh and normal.
4) As soon as all cpus have moved their lists from ->nxt to ->cur, the 
real grace period is started.
5) As soon as all cpus have passed a quiescent state (i.e. now with
tests for hard_irq_count, different rules for _bh and normal), the list
is moved from ->cur to ->done. Once on ->done, the objects can be
destroyed by invoking the callbacks.
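
In code, the per-cpu side of steps 3) to 5) could look roughly like the
sketch below (continuing the structures from above; rcu_cycle_started(),
rcu_note_cpu_acked() and rcu_grace_period_done() are invented
placeholders for the global bookkeeping, not existing functions):

/* Placeholders for the global state machine, hand-waved here: */
extern int rcu_cycle_started(void);	/* step 2) decided to start a cycle */
extern void rcu_note_cpu_acked(void);	/* step 4): count cpus that moved ->nxt */
extern int rcu_grace_period_done(void);	/* step 5): all cpus passed a q.s. */

static void rcu_advance_cpu(struct rcu_cpu_lists *rdp)
{
	/*
	 * Step 3): a new cycle is visible -> splice ->nxt into ->cur.
	 * No quiescent state test, no hard_irq_count check; the same
	 * rule would apply to _bh and normal RCU.
	 */
	if (rcu_cycle_started() && rdp->cur == NULL && rdp->nxt != NULL) {
		rdp->cur = rdp->nxt;
		rdp->nxt = NULL;
		rcu_note_cpu_acked();	/* last cpu to ack starts the real grace period */
	}

	/*
	 * Step 5): only here do the quiescent state rules matter.  Assumes
	 * ->done was already drained by invoking its callbacks.
	 */
	if (rcu_grace_period_done() && rdp->cur != NULL && rdp->done == NULL) {
		rdp->done = rdp->cur;
		rdp->cur = NULL;
	}
}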

What do you think? Would that work? It doesn't make much sense that
step 3) tests for a quiescent state.

Step 2) could depend on memory pressure.
Steps 3) and 4) could be accelerated by force_quiescent_state() if the
memory pressure gets too high.
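
For steps 1) and 2), I am thinking of something along these lines (the
thresholds are made up, and force_quiescent_state() stands for whatever
mechanism pokes the other cpus - the real function takes arguments that
I am ignoring here):

#define RCU_START_BATCH_THRESH	 100	/* made up: start a new cycle beyond this */
#define RCU_FORCE_QS_THRESH	1000	/* made up: memory pressure, push harder */

extern void rcu_start_cycle(void);	/* placeholder: announce a new cycle (step 2) */
extern void force_quiescent_state(void);	/* signature simplified */

static void rcu_enqueue(struct rcu_cpu_lists *rdp, struct rcu_head *head)
{
	head->next = rdp->nxt;			/* step 1): insert into ->nxt */
	rdp->nxt = head;
	rdp->qlen++;

	if (rdp->qlen > RCU_START_BATCH_THRESH)
		rcu_start_cycle();		/* step 2): too many objects in ->nxt */
	if (rdp->qlen > RCU_FORCE_QS_THRESH)
		force_quiescent_state();	/* accelerate steps 3) and 4) */
}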

--
    Manfred

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
