Message-ID: <20091210164136.GA6756@linux.vnet.ibm.com>
Date:	Thu, 10 Dec 2009 08:41:36 -0800
From:	"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
To:	"Eric W. Biederman" <ebiederm@...ssion.com>
Cc:	Oleg Nesterov <oleg@...hat.com>,
	Linus Torvalds <torvalds@...ux-foundation.org>,
	Thomas Gleixner <tglx@...utronix.de>,
	Peter Zijlstra <peterz@...radead.org>,
	Ingo Molnar <mingo@...e.hu>,
	Christoph Hellwig <hch@...radead.org>,
	Nick Piggin <npiggin@...e.de>,
	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>
Subject: Re: [rfc] "fair" rw spinlocks

On Thu, Dec 10, 2009 at 02:31:39AM -0800, Eric W. Biederman wrote:
> "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com> writes:
> 
> > My main concern would be "fork storms", where each CPU in a large
> > system is spawning children in a pgrp that some other CPU is attempting
> > to kill.  The CPUs spawning children might be able to keep ahead of
> > the single CPU, so that the pgrp is never completely killed.
> >
> > Enlisting the aid of the CPUs doing the spawning (e.g., by having them
> > consult a list of signals being sent) prevents this fork-storm scenario.
> 
> We almost have a worst-case bound.  We can have at most max_thread
> processes.  Unfortunately it appears we don't force an rcu grace
> period anywhere.  So it does appear theoretically possible to fork and
> exit on a bunch of other CPUs, indefinitely extending the rcu grace period.
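
To make the earlier "consult a list of signals being sent" idea
concrete, here is a minimal user-space sketch in C.  All names here are
invented for illustration, the check-then-add window would have to be
closed under a lock (tasklist_lock in real life), and the real fork and
signal-delivery paths are of course far more involved:

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical per-pgrp state; these names are not kernel interfaces. */
struct pgrp {
	atomic_int pending_sig;	/* group-wide signal in flight, 0 if none */
	atomic_int nr_members;	/* live members of the group */
};

/* Fork path: refuse to add a member while a group-wide signal is in
 * flight, so forkers cannot keep the group alive forever.  The
 * check-then-add window here would have to be closed under a lock. */
static bool pgrp_add_member(struct pgrp *pg)
{
	if (atomic_load(&pg->pending_sig))
		return false;	/* deliver the pending signal to the child instead */
	atomic_fetch_add(&pg->nr_members, 1);
	return true;
}

/* Kill path: publish the signal first, then sweep existing members. */
static void pgrp_kill(struct pgrp *pg, int sig)
{
	atomic_store(&pg->pending_sig, sig);
	/* ... then walk the member list and queue sig to each task ... */
}

int main(void)
{
	struct pgrp pg = { 0 };

	printf("fork before kill: %d\n", pgrp_add_member(&pg));	/* 1 */
	pgrp_kill(&pg, 9);
	printf("fork after kill:  %d\n", pgrp_add_member(&pg));	/* 0 */
	return 0;
}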

The RCU grace period will still complete in a timely fashion, at least
assuming that each RCU read-side critical section completes in a timely
fashion.  The old Classic implementations need only a context switch on
each CPU (which should happen at some point upon return to user space),
and the counter-based implementations (SRCU and preemptible RCU) use
pairs of counters to avoid waiting on new RCU read-side critical sections.

Either way, the RCU grace period waits only for the RCU read-side
critical sections that started before it did, not for any later RCU
read-side critical sections.
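
To make the counter-pair point concrete, here is a stripped-down
user-space sketch in the spirit of SRCU.  This is an illustration only:
it spins instead of blocking, and it ignores the window between a
reader sampling the epoch and incrementing its counter, which real SRCU
closes by flipping twice and summing per-CPU lock/unlock counts:

#include <stdatomic.h>

static atomic_int epoch;	/* selects which counter new readers bump */
static atomic_int readers[2];	/* active readers, one counter per epoch */

/* Enter a read-side critical section: join the current epoch and
 * remember it, so the exit path decrements the right counter. */
static int read_lock(void)
{
	int e = atomic_load(&epoch) & 1;

	atomic_fetch_add(&readers[e], 1);
	return e;
}

static void read_unlock(int e)
{
	atomic_fetch_sub(&readers[e], 1);
}

/* Grace period: flip the epoch, then wait only for readers of the old
 * epoch to drain.  Readers arriving after the flip bump the other
 * counter and are (correctly) not waited on, which is why later
 * read-side critical sections cannot extend the grace period. */
static void synchronize(void)
{
	int old = atomic_fetch_add(&epoch, 1) & 1;	/* pre-flip value */

	while (atomic_load(&readers[old]))
		;	/* a real implementation blocks instead of spinning */
}

int main(void)
{
	int e = read_lock();	/* reader joins epoch 0 */

	read_unlock(e);
	synchronize();		/* returns: no pre-flip readers remain */
	return 0;
}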

> Still, that is all inside tasklist_lock, which serializes all of those
> other CPUs.  So as long as the cost of queuing signals is less than the
> cost of adding processes to the task lists, we won't have a problem.

Agreed, as long as we continue to serialize task creation, we should be OK.
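
For completeness, the serialization argument in miniature, with a POSIX
rwlock standing in for tasklist_lock (fork really does take
tasklist_lock for writing and group signal delivery takes it for
reading, but the code below is only a sketch):

#include <pthread.h>

/* Stand-in for tasklist_lock: fork takes the write side to link a new
 * task, group signal delivery takes the read side to traverse.  Every
 * forker must get the write lock, so forkers serialize against each
 * other and against the sweep; the open question in this thread is
 * only whether one side can starve the other on an unfair rwlock. */
static pthread_rwlock_t tasklist = PTHREAD_RWLOCK_INITIALIZER;

static void fork_path_add_task(void)
{
	pthread_rwlock_wrlock(&tasklist);
	/* ... link the new task into the list ... */
	pthread_rwlock_unlock(&tasklist);
}

static void kill_pgrp_sweep(void)
{
	pthread_rwlock_rdlock(&tasklist);
	/* ... queue the signal to every member; queuing one signal is
	 * cheaper than creating one task, which is the cost comparison
	 * above ... */
	pthread_rwlock_unlock(&tasklist);
}

int main(void)
{
	fork_path_add_task();
	kill_pgrp_sweep();
	return 0;
}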

							Thanx, Paul
