Date:	Fri, 10 Jul 2009 15:43:07 +0200
From:	Ingo Molnar <mingo@...e.hu>
To:	Peter Zijlstra <a.p.zijlstra@...llo.nl>
Cc:	mitake@....info.waseda.ac.jp, andi@...stfloor.org,
	fweisbec@...il.com, acme@...hat.com, linux-kernel@...r.kernel.org
Subject: Re: [PATCH][RFC] Adding information of counts processes acquired
	how many spinlocks to schedstat


* Peter Zijlstra <a.p.zijlstra@...llo.nl> wrote:

> On Fri, 2009-07-10 at 21:45 +0900, mitake@....info.waseda.ac.jp wrote:
> > From: Andi Kleen <andi@...stfloor.org>
> > Subject: Re: [PATCH][RFC] Adding information of counts processes acquired how many spinlocks to schedstat
> > Date: Mon, 6 Jul 2009 13:54:51 +0200
> > 
> > Thank you for your replies, Peter and Andi.
> > 
> > > > Maybe re-use the LOCK_CONTENDED macros for this, but I'm not sure we
> > > > want to go there and put code like this on the lock hot-paths for !debug
> > > > kernels.
> > > 
> > > My concern was similar.
> > > 
> > > I suspect it would in theory be OK for the slow spinning path, but I am 
> > > somewhat concerned about the additional cache miss for checking
> > > the global flag even in this case. This could hurt when
> > > the kernel is running fully cache hot, in that the cache miss
> > > might be far more expensive than a short spin.
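
To make the concern concrete, the pattern being discussed is roughly the
following; this is a hypothetical sketch with made-up names, not code from
the posted patch. The global enable flag is tested on every acquisition, so
even with measurement disabled the fast path takes an extra load, possibly a
cache miss, on a line unrelated to the lock itself:

#include <linux/spinlock.h>
#include <linux/sched.h>

static int spinlock_stats_enabled;	/* hypothetical global flag */

static inline void measured_spin_lock(spinlock_t *lock)
{
	if (unlikely(spinlock_stats_enabled)) {
		/* account the acquisition for 'current' here */
	}
	spin_lock(lock);
}
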
> > 
> > Yes, there will certainly be some overhead.
> > But there is a radical way to deal with this: add a Kconfig option for
> > measuring spinlocks and wrap the code in spinlock.c in #ifdef.
> > That way, people who want to avoid the overhead can disable measurement of spinlocks completely.
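
Concretely, the compile-time variant could look roughly like this;
CONFIG_SPINLOCK_STATS and the per-task counter are hypothetical names used
only for the sketch:

/*
 * In Kconfig (hypothetical option name):
 *
 *	config SPINLOCK_STATS
 *		bool "Count spinlock acquisitions per task"
 *		depends on SCHEDSTATS
 */
#include <linux/spinlock.h>
#include <linux/sched.h>

static inline void counted_spin_lock(spinlock_t *lock)
{
	spin_lock(lock);		/* existing acquisition path */
#ifdef CONFIG_SPINLOCK_STATS
	current->nr_spinlocks++;	/* hypothetical per-task schedstat counter */
#endif
}

With the option disabled the accounting compiles away entirely, which avoids
the cache-miss worry, at the cost of needing a rebuild to turn measurement
on or off.
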
> > 
> > And there is another way to avoid the measurement overhead: 
> > make _spin_lock a function pointer variable. When you don't 
> > want to measure spinlocks, assign _spin_lock_raw(), which is 
> > equivalent to the current _spin_lock(). When you want to measure 
> > spinlocks, assign _spin_lock_perf(), which locks and measures. 
> > This would eliminate the cache miss problem you mentioned. I think 
> > this may also be useful for avoiding the recursion problem.
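
A minimal sketch of that indirection; only _spin_lock_raw()/_spin_lock_perf()
are names taken from the mail, the pointer variable, the entry point and the
accounting are illustrative only:

#include <linux/types.h>
#include <linux/spinlock.h>

static void _spin_lock_raw(spinlock_t *lock)
{
	spin_lock(lock);		/* behaves like today's _spin_lock() */
}

static void _spin_lock_perf(spinlock_t *lock)
{
	spin_lock(lock);
	/* account the acquisition for 'current' here */
}

/* the lock entry point becomes an indirect call through this pointer */
static void (*do_spin_lock)(spinlock_t *lock) = _spin_lock_raw;

static inline void my_spin_lock(spinlock_t *lock)
{
	do_spin_lock(lock);
}

/* enabling or disabling measurement just swaps the pointer */
static void spinlock_stats_enable(bool enable)
{
	do_spin_lock = enable ? _spin_lock_perf : _spin_lock_raw;
}

The flag load disappears from the fast path, but every acquisition now goes
through an indirect call, which has its own cost.
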
> 
> We already have that, it's called CONFIG_LOCKDEP && 
> CONFIG_EVENT_TRACING && CONFIG_EVENT_PROFILE; with those enabled 
> you get tracepoints on every lock acquire and lock release, and 
> perf can already use those as event sources.

Yes, that could be reused for this facility too.

	Ingo
