Message-ID: <20081105212717.GA6692@linux.vnet.ibm.com>
Date:	Wed, 5 Nov 2008 13:27:17 -0800
From:	"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
To:	Manfred Spraul <manfred@...orfullife.com>
Cc:	linux-kernel@...r.kernel.org, cl@...ux-foundation.org,
	mingo@...e.hu, akpm@...ux-foundation.org, dipankar@...ibm.com,
	josht@...ux.vnet.ibm.com, schamp@....com, niv@...ibm.com,
	dvhltc@...ibm.com, ego@...ibm.com, laijs@...fujitsu.com,
	rostedt@...dmis.org, peterz@...radead.org, penberg@...helsinki.fi,
	andi@...stfloor.org, tglx@...utronix.de
Subject: Re: [PATCH, RFC] v7 scalable classic RCU implementation

On Wed, Nov 05, 2008 at 08:48:02PM +0100, Manfred Spraul wrote:
> Paul E. McKenney wrote:
>>
>>> Attached is a hack that I use right now for myself.
>>> Btw - on my 4-CPU system, the average latency from call_rcu() to the
>>> RCU callback is 4-5 milliseconds (CONFIG_HZ_1000).
>>
>> Hmmm...  I would expect that if you have some CPUs in dyntick idle mode.
>> But if I run treercu on a CONFIG_HZ_250 8-CPU Power box, I see 2.5
>> jiffies per grace period if CPUs are kept out of dyntick idle mode, and
>> 4 jiffies per grace period if CPUs are allowed to enter dyntick idle mode.
>>
>> Alternatively, if you were testing with multiple concurrent
>> synchronize_rcu() invocations, you can also see longer grace-period
>> latencies due to the fact that a new synchronize_rcu() must wait for an
>> earlier grace period to complete before starting a new one.
>>   
> That's the reason why I decided to measure the real latency, from 
> call_rcu() to the final callback. It includes the delays for waiting until 
> the current grace period completes, until the softirq is scheduled, etc.
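
For concreteness, a minimal sketch of such a call_rcu()-to-callback
measurement module (the rcu_lat_* names are illustrative, not Manfred's
actual hack, and the ktime helpers assumed here vary by kernel version):

#include <linux/module.h>
#include <linux/rcupdate.h>
#include <linux/ktime.h>
#include <linux/slab.h>

struct rcu_lat_sample {
	struct rcu_head head;
	ktime_t queued;		/* stamped when call_rcu() is invoked */
};

static void rcu_lat_cb(struct rcu_head *head)
{
	struct rcu_lat_sample *s =
		container_of(head, struct rcu_lat_sample, head);

	/* Covers the full grace-period wait plus softirq scheduling delay. */
	pr_info("call_rcu() -> callback: %lld us\n",
		ktime_us_delta(ktime_get(), s->queued));
	kfree(s);
}

static int __init rcu_lat_init(void)
{
	struct rcu_lat_sample *s = kmalloc(sizeof(*s), GFP_KERNEL);

	if (!s)
		return -ENOMEM;
	s->queued = ktime_get();
	call_rcu(&s->head, rcu_lat_cb);
	return 0;
}

static void __exit rcu_lat_exit(void)
{
	rcu_barrier();	/* flush our callback before the module text goes away */
}

module_init(rcu_lat_init);
module_exit(rcu_lat_exit);
MODULE_LICENSE("GPL");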

I believe that I get very close to the same effect by timing a call to
synchronize_rcu() in a kernel module.  Repeating measurements and
printing out cumulative statistics periodically reduces the Heisenberg
effect.
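
A minimal sketch of that approach (the sync_lat_* names and the kthread
framing are assumptions, not the actual test module):

#include <linux/module.h>
#include <linux/kthread.h>
#include <linux/rcupdate.h>
#include <linux/ktime.h>
#include <linux/err.h>

static struct task_struct *sync_lat_task;

static int sync_lat_thread(void *unused)
{
	s64 sum_us = 0, max_us = 0;
	unsigned long n = 0;

	while (!kthread_should_stop()) {
		ktime_t t0 = ktime_get();
		s64 us;

		synchronize_rcu();	/* blocks for a full grace period */
		us = ktime_us_delta(ktime_get(), t0);
		sum_us += us;
		if (us > max_us)
			max_us = us;

		/*
		 * Print cumulative statistics only occasionally, so that
		 * the printk path itself does not perturb the measurement.
		 */
		if (++n % 1000 == 0)
			pr_info("n=%lu avg=%lld us max=%lld us\n",
				n, sum_us / n, max_us);
	}
	return 0;
}

static int __init sync_lat_init(void)
{
	sync_lat_task = kthread_run(sync_lat_thread, NULL, "sync_lat");
	if (IS_ERR(sync_lat_task))
		return PTR_ERR(sync_lat_task);
	return 0;
}

static void __exit sync_lat_exit(void)
{
	kthread_stop(sync_lat_task);
}

module_init(sync_lat_init);
module_exit(sync_lat_exit);
MODULE_LICENSE("GPL");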

> Probably one CPU was not in user space when the timer interrupt arrived.
> I'll continue to investigate that. Unfortunately, my first attempt failed:
> adding too many printk()s results in too much time spent within do_syslog().
> And then the timer interrupt always arrives on the spin_unlock_irqrestore()
> in do_syslog()....

;-)

							Thanx, Paul
