Message-ID: <4911F872.8010400@colorfullife.com>
Date:	Wed, 05 Nov 2008 20:48:02 +0100
From:	Manfred Spraul <manfred@...orfullife.com>
To:	paulmck@...ux.vnet.ibm.com
CC:	linux-kernel@...r.kernel.org, cl@...ux-foundation.org,
	mingo@...e.hu, akpm@...ux-foundation.org, dipankar@...ibm.com,
	josht@...ux.vnet.ibm.com, schamp@....com, niv@...ibm.com,
	dvhltc@...ibm.com, ego@...ibm.com, laijs@...fujitsu.com,
	rostedt@...dmis.org, peterz@...radead.org, penberg@...helsinki.fi,
	andi@...stfloor.org, tglx@...utronix.de
Subject: Re: [PATCH, RFC] v7 scalable classic RCU implementation

Paul E. McKenney wrote:
>
>> Attached is a hack that I use right now for myself.
>> Btw - on my 4-cpu system, the average latency from call_rcu() to the
>> RCU callback is 4-5 milliseconds (CONFIG_HZ_1000).
>>     
>
> Hmmm...  I would expect that if you have some CPUs in dyntick idle mode.
> But if I run treercu on a CONFIG_HZ_250 8-CPU Power box, I see 2.5
> jiffies per grace period if CPUs are kept out of dyntick idle mode, and
> 4 jiffies per grace period if CPUs are allowed to enter dyntick idle mode.
>
> Alternatively, if you were testing with multiple concurrent
> synchronize_rcu() invocations, you can also see longer grace-period
> latencies due to the fact that a new synchronize_rcu() must wait for an
> earlier grace period to complete before starting a new one.
>   
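(As an aside: synchronize_rcu() is essentially just call_rcu() plus a
completion, so every caller necessarily waits out at least one full
grace period. A minimal sketch of that pattern - the struct and
function names below are mine, not the kernel's:)

#include <linux/rcupdate.h>
#include <linux/completion.h>
#include <linux/kernel.h>

/* Queue a callback that fires a completion, then block until the
 * callback has run, i.e. until a full grace period has elapsed.
 */
struct rcu_sync_wait {
	struct rcu_head head;
	struct completion done;
};

static void rcu_sync_wakeme(struct rcu_head *head)
{
	struct rcu_sync_wait *w =
		container_of(head, struct rcu_sync_wait, head);

	complete(&w->done);
}

static void synchronize_rcu_sketch(void)
{
	struct rcu_sync_wait w;

	init_completion(&w.done);
	call_rcu(&w.head, rcu_sync_wakeme);
	wait_for_completion(&w.done);
}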
That's why I decided to measure the real latency, from call_rcu() to
the final callback. It includes the delays from waiting for the
current grace period to complete, for the softirq to be scheduled,
and so on.
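Roughly what the measurement looks like - the names below are made
up, but the pattern is just a timestamp stored next to the rcu_head:

#include <linux/rcupdate.h>
#include <linux/ktime.h>
#include <linux/slab.h>

/* Hypothetical probe: stamp the enqueue time next to the rcu_head,
 * then report the elapsed time once the callback finally runs.
 */
struct rcu_lat_probe {
	struct rcu_head head;
	ktime_t queued;
};

static void rcu_lat_cb(struct rcu_head *head)
{
	struct rcu_lat_probe *p =
		container_of(head, struct rcu_lat_probe, head);
	s64 us = ktime_to_us(ktime_sub(ktime_get(), p->queued));

	printk(KERN_INFO "call_rcu -> callback: %lld us\n", (long long)us);
	kfree(p);
}

static void rcu_lat_sample(void)
{
	struct rcu_lat_probe *p = kmalloc(sizeof(*p), GFP_ATOMIC);

	if (!p)
		return;
	p->queued = ktime_get();
	call_rcu(&p->head, rcu_lat_cb);
}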
Probably one CPU was not in user space when the timer interrupt
arrived. I'll continue to investigate that. Unfortunately, my first
attempt failed: adding too many printk()s results in too much time
being spent within do_syslog(), and then the timer interrupt always
arrives on the spin_unlock_irqrestore() in do_syslog()....
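One workaround I might try next (made-up names again): record the
timestamps into a lock-free per-cpu buffer on the hot path and only
dump it after the run, so the measurement itself never goes anywhere
near do_syslog() or the console locks:

#include <linux/percpu.h>
#include <linux/ktime.h>

#define LAT_LOG_SIZE 256	/* power of two, so we can mask */

/* Hypothetical per-cpu event log: recording an event is one plain
 * store into a per-cpu ring, dumped later from process context.
 */
struct lat_log {
	unsigned int pos;
	u64 stamp[LAT_LOG_SIZE];
};

static DEFINE_PER_CPU(struct lat_log, lat_logs);

static void lat_log_event(void)
{
	struct lat_log *log = &get_cpu_var(lat_logs);

	log->stamp[log->pos++ & (LAT_LOG_SIZE - 1)] = ktime_to_ns(ktime_get());
	put_cpu_var(lat_logs);
}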

--
    Manfred

