Message-ID: <48A93D49.2000601@colorfullife.com>
Date:	Mon, 18 Aug 2008 11:13:45 +0200
From:	Manfred Spraul <manfred@...orfullife.com>
To:	paulmck@...ux.vnet.ibm.com
CC:	linux-kernel@...r.kernel.org, mingo@...e.hu,
	akpm@...ux-foundation.org, oleg@...sign.ru, dipankar@...ibm.com,
	rostedt@...dmis.org, dvhltc@...ibm.com, niv@...ibm.com
Subject: Re: [PATCH tip/core/rcu] classic RCU locking and memory-barrier cleanups

Paul E. McKenney wrote:
>
>> Right now, I'm trying to understand the current code first - and some of
>> it doesn't make much sense to me.
>>
>> There are three per-cpu lists:
>> ->nxt
>> ->cur
>> ->done.
>>
>> Obviously, there must be a quiescent state between cur and done.
>> But why does the code require a quiescent state between nxt and cur?
>> I think that's superfluous. The only thing that is required is that all cpus
>> have moved their callbacks from nxt to cur. That doesn't need a quiescent
>> state; this operation could be done in hard interrupt context as well.
>>     
>
> The deal is that we have to put incoming callbacks somewhere while
> the batch in ->cur waits for an RCU grace period.  That somewhere is
> ->nxt.  So to be painfully pedantic, the callbacks in ->nxt are not
> waiting for an RCU grace period.  Instead, they are waiting for the
> callbacks in ->cur to get out of the way.
>
>   
Ok, thanks.
If I understand the new code in tip/rcu correctly, you have rewritten 
that block anyway.
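
For reference, the per-cpu structure we are talking about, reduced to just
the three lists (a simplified sketch, not the exact struct rcu_data layout):

#include <linux/rcupdate.h>

struct rcu_cpu_lists {
	struct rcu_head *nxt;	/* incoming callbacks, parked while ->cur waits */
	struct rcu_head *cur;	/* batch waiting for the current grace period */
	struct rcu_head *done;	/* grace period over, safe to invoke */
};

/*
 * Advancement: nxt -> cur only requires that every cpu has moved its
 * list over (no quiescent state needed); cur -> done requires that
 * every cpu has passed through a quiescent state.
 */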

I'll try to implement my proposal - on paper, it looks far simpler than 
the current code.
On the one hand, a state machine that keeps track of a global state:
- collect the callbacks in a nxt list.
- wait for a quiescent state
- destroy the callbacks in the nxt list.
(actually, there will be 5 states, 2 additional for "start the next rcu 
cycle immediately")

On the other hand, a cpu bitmap that keeps track of the cpus that have
completed the work that must be done after a state change.
The last cpu to finish advances the global state.

The state machine could be seqlock-protected; the cpu bitmap could be
either hierarchical or flat, or just a nop for uniprocessor.
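
Roughly, something like this (just a sketch to show the structure; the
names, the rcu_next_state() helper and the flat bitmap behind a plain
spinlock are all made up for illustration, not the final code):

#include <linux/cpumask.h>
#include <linux/seqlock.h>
#include <linux/spinlock.h>

enum rcu_global_state {
	RCU_STATE_COLLECT,	/* cpus queue new callbacks on their nxt list */
	RCU_STATE_WAIT_QS,	/* wait until every cpu passed a quiescent state */
	RCU_STATE_DESTROY,	/* cpus invoke the callbacks from the nxt list */
	/* plus two more states for "start the next rcu cycle immediately" */
};

static DEFINE_SEQLOCK(rcu_state_lock);	/* readers can poll the state cheaply */
static enum rcu_global_state rcu_state;
static DEFINE_SPINLOCK(rcu_bitmap_lock);	/* flat variant; hierarchical later */
static cpumask_t rcu_cpus_pending;	/* cpus that still owe work for this state */

/*
 * Called by a cpu once it has done the work required by the current
 * global state.  The last cpu to report advances the state machine.
 */
static void rcu_cpu_completed(int cpu)
{
	int last;

	spin_lock(&rcu_bitmap_lock);
	cpu_clear(cpu, rcu_cpus_pending);
	last = cpus_empty(rcu_cpus_pending);
	if (last)
		rcu_cpus_pending = cpu_online_map;
	spin_unlock(&rcu_bitmap_lock);

	if (last) {
		write_seqlock(&rcu_state_lock);
		rcu_state = rcu_next_state(rcu_state);	/* hypothetical helper */
		write_sequnlock(&rcu_state_lock);
	}
}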

Do you have any statistics about rcu_check_callbacks? On my single-cpu
system, around 2/3 of the calls come from "normal" context, i.e. contexts
in which rcu_qsctr_inc() is called.

--
    Manfred
