Date:	Mon, 4 Jan 2010 07:55:59 -0800
From:	"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
To:	Peter Zijlstra <peterz@...radead.org>
Cc:	KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	"linux-mm@...ck.org" <linux-mm@...ck.org>,
	"minchan.kim@...il.com" <minchan.kim@...il.com>,
	cl@...ux-foundation.org,
	"hugh.dickins" <hugh.dickins@...cali.co.uk>,
	Nick Piggin <nickpiggin@...oo.com.au>,
	Ingo Molnar <mingo@...e.hu>,
	Linus Torvalds <torvalds@...ux-foundation.org>
Subject: Re: [RFC PATCH] asynchronous page fault.

On Mon, Jan 04, 2010 at 08:53:23AM +0100, Peter Zijlstra wrote:
> On Sun, 2010-01-03 at 19:02 -0800, Paul E. McKenney wrote:
> > It would not be all that hard for me to make a call_srcu(), but...
> > 
> > 1.      How are you avoiding OOM by SRCU callback?  (I am sure you
> >         have this worked out, but I do have to ask!)
> 
> Well, I was thinking srcu to have this force quiescent state in
> call_srcu() much like you did for the preemptible rcu.

Ah, so the idea would be that you register a function with the srcu_struct
that is invoked when some readers are stuck for too long in their SRCU
read-side critical sections?  Presumably you also supply a time value for
"too long" as well.  Hmmm...  What do you do, cancel the corresponding
I/O or something?

This would not be hard once I get SRCU folded into the treercu
infrastructure.  However, at the moment, SRCU has no way of tracking
which tasks are holding things up.  So not 2.6.34 material, but definitely
doable longer term.

> Alternatively we could actively throttle the call_srcu() call when we've
> got too much pending work.

This could be done with the existing SRCU implementation, for example as
a call to a function you registered.  In theory, it would be possible
to make call_srcu() refuse to enqueue a callback when there were too
many pending, but that really badly violates the spirit of the call_rcu()
family of functions.

> > 2.      How many srcu_struct data structures are you envisioning?
> >         One globally?  One per process?  One per struct vma?
> >         (Not necessary to know this for call_srcu(), but will be needed
> >         as I work out how to make SRCU scale with large numbers of CPUs.)
> 
> For this patch in particular, one global one, covering all vmas.

Whew!!!  ;-)

Then it would still be feasible for the CPU-hotplug code to scan all
SRCU structures, if need be.  (The reason that SRCU gets away without
worrying about CPU hotplug now is that it doesn't keep track of which
tasks are in read-side critical sections.)

> One reason to keep the vma RCU domain separate from other RCU objects is
> that these VMA thingies can have rather long quiescent periods due to
> this sleep stuff. So mixing that in with other RCU users which have much
> better defined periods will just degrade everything bringing that OOM
> scenario much closer.

Fair enough!

							Thanx, Paul
