Date:	Thu, 2 Feb 2012 21:54:27 -0800
From:	"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
To:	Josh Triplett <josh@...htriplett.org>
Cc:	linux-kernel@...r.kernel.org, mingo@...e.hu, laijs@...fujitsu.com,
	dipankar@...ibm.com, akpm@...ux-foundation.org,
	mathieu.desnoyers@...ymtl.ca, niv@...ibm.com, tglx@...utronix.de,
	peterz@...radead.org, rostedt@...dmis.org, Valdis.Kletnieks@...edu,
	dhowells@...hat.com, eric.dumazet@...il.com, darren@...art.com,
	fweisbec@...il.com, patches@...aro.org,
	"Paul E. McKenney" <paul.mckenney@...aro.org>
Subject: Re: [PATCH RFC tip/core/rcu 14/41] rcu: Limit lazy-callback duration

On Thu, Feb 02, 2012 at 08:07:51PM -0800, Josh Triplett wrote:
> On Thu, Feb 02, 2012 at 09:13:42AM -0800, Paul E. McKenney wrote:
> > On Wed, Feb 01, 2012 at 06:03:56PM -0800, Josh Triplett wrote:
> > > On Wed, Feb 01, 2012 at 11:41:32AM -0800, Paul E. McKenney wrote:
> > > > Currently, a given CPU is permitted to remain in dyntick-idle mode
> > > > indefinitely if it has only lazy RCU callbacks queued.  This is vulnerable
> > > > to corner cases in NUMA systems, so limit the time to six seconds by
> > > > default.  (Currently controlled by a cpp macro.)
> > > 
> > > I wonder: should this scale with the number of callbacks, or do we not
> > > want to make estimates about memory usage based on that?
> > 
> > Interesting.  Which way would you scale it?  ;-)
> 
> Heh, I'd figured "don't wait too long if you have a giant pile of
> callbacks", but I can see how the other direction could make sense as
> well. :)

;-)
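
For concreteness, the six-second cap in the patch above amounts to a
compile-time constant compared against jiffies when deciding how long a
dyntick-idle CPU with only lazy callbacks may sleep.  A minimal sketch,
with names that are illustrative and may not match the patch:

#include <linux/jiffies.h>
#include <linux/kernel.h>

/* Illustrative only: six seconds, expressed in jiffies. */
#define RCU_IDLE_LAZY_GP_DELAY	(6 * HZ)

/*
 * Bound the dyntick-idle sleep of a CPU that has only lazy callbacks
 * queued, instead of letting it sleep indefinitely.
 */
static unsigned long rcu_idle_lazy_limit(unsigned long requested_sleep)
{
	return min(requested_sleep, (unsigned long)RCU_IDLE_LAZY_GP_DELAY);
}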

> > > Interestingly, with kfree_rcu, we actually know at callback queuing time
> > > *exactly* how much memory we'll get back by calling the callback, and we
> > > could sum up those numbers.
> > 
> > We can indeed calculate for kfree_rcu(), but we won't be able to for
> > call_rcu_lazy(), which is my current approach for cases where you cannot
> > use kfree_rcu() due to (for example) freeing up a linked structure.
> > A very large fraction of the call_rcu()s in the kernel could become
> > call_rcu_lazy().
> 
> So, doing anything other than freeing memory makes a callback non-lazy?
> Based on that, I'd find it at least somewhat surprising if any of the
> current callers of call_rcu (other than synchronize_rcu() and similar)
> had non-lazy callbacks.

Yep!  But the caller has to tell me.

Something like 90% of the call_rcu()s could be call_rcu_lazy(), but there
are a significant number that wake someone up, manipulate a reference
counter that someone else is paying attention to, etc.
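
To make that split concrete: a callback whose only effect is freeing
memory -- including a linked structure that kfree_rcu() cannot express --
would be a candidate for call_rcu_lazy(), while one that wakes a waiter
must stay call_rcu().  A sketch, assuming call_rcu_lazy() takes the same
arguments as call_rcu(); the interface is not final, so treat the names
as illustrative:

#include <linux/completion.h>
#include <linux/kernel.h>
#include <linux/rcupdate.h>
#include <linux/slab.h>
#include <linux/types.h>

struct foo {
	struct rcu_head rcu;
	void *extra;			/* separately allocated, so kfree_rcu() alone won't do */
	struct completion done;
};

/* Lazy candidate: nothing here matters until memory gets tight. */
static void foo_free_rcu(struct rcu_head *head)
{
	struct foo *fp = container_of(head, struct foo, rcu);

	kfree(fp->extra);
	kfree(fp);
}

/* Non-lazy: somebody is sleeping on this completion. */
static void foo_done_rcu(struct rcu_head *head)
{
	complete(&container_of(head, struct foo, rcu)->done);
}

static void foo_release(struct foo *fp, bool waiter_pending)
{
	if (waiter_pending)
		call_rcu(&fp->rcu, foo_done_rcu);	/* wakeup => non-lazy */
	else
		call_rcu_lazy(&fp->rcu, foo_free_rcu);	/* hypothetical API */
}

Only the caller knows whether anyone is waiting on the callback's side
effects, which is why the distinction cannot be inferred automatically.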

> > At some point in the future, it might make sense to tie into the
> > low-memory notifier, which could potentially allow the longer timeout
> > to be omitted.
> 
> Exactly the kind of thing that made me wonder about tracking the actual
> amount of memory to free.  Still seems like a potentially useful
> statistic to track on its own.

There is the qlen statistic in the debugfs tracing, tracked on a per-CPU
basis.  But unless it is kfree_rcu(), I have no way to tell how much
memory a given callback frees.
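
In principle the kfree_rcu() case could carry a byte count alongside
qlen, since the object size is knowable at queuing time.  A rough sketch
of such accounting -- purely hypothetical, nothing like this exists in
the tree:

#include <linux/percpu.h>
#include <linux/slab.h>

/* Hypothetical per-CPU total of bytes that queued kfree_rcu() calls will free. */
static DEFINE_PER_CPU(unsigned long, rcu_kfree_bytes);

/* Would be called where kfree_rcu() enqueues its callback... */
static void rcu_account_kfree(const void *obj)
{
	this_cpu_add(rcu_kfree_bytes, ksize(obj));
}

/* ...and again, with the sign flipped, where the callback is invoked. */
static void rcu_unaccount_kfree(const void *obj)
{
	this_cpu_sub(rcu_kfree_bytes, ksize(obj));
}

Summing those per-CPU counters would give the "how much memory would a
flush free" number, at least for the kfree_rcu() fraction of the
callbacks.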

> > My current guess is that the recent change allowing idle CPUs to
> > exhaust their callback lists will make this kind of fine-tuning
> > unnecessary, but we will see!
> 
> Good point; given that fix, idle CPUs should never need to wake up for
> callbacks at all.

Here is hoping!  ;-)
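
As for the low-memory notifier idea above, one existing hook that might
serve for prototyping is the OOM notifier chain: when it fires, stop
being lazy and push the queued callbacks through.  A sketch under that
assumption (rcu_flush_lazy_callbacks() is a made-up name for whatever
the flush operation would end up being):

#include <linux/init.h>
#include <linux/notifier.h>
#include <linux/oom.h>

/* Hypothetical: when memory is tight, stop deferring lazy callbacks. */
static int rcu_oom_notify(struct notifier_block *nb, unsigned long unused,
			  void *freed)
{
	rcu_flush_lazy_callbacks();	/* made-up name, see above */
	return NOTIFY_OK;
}

static struct notifier_block rcu_oom_nb = {
	.notifier_call = rcu_oom_notify,
};

static int __init rcu_register_oom_notifier(void)
{
	return register_oom_notifier(&rcu_oom_nb);
}
early_initcall(rcu_register_oom_notifier);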

							Thanx, Paul

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
