Message-Id: <1226015568.2186.20.camel@bobble.smo.corp.google.com>
Date:	Thu, 06 Nov 2008 15:52:48 -0800
From:	Frank Mayhar <fmayhar@...gle.com>
To:	Peter Zijlstra <peterz@...radead.org>
Cc:	Christoph Lameter <cl@...ux-foundation.org>,
	Doug Chapman <doug.chapman@...com>, mingo@...e.hu,
	roland@...hat.com, adobriyan@...il.com, akpm@...ux-foundation.org,
	linux-kernel <linux-kernel@...r.kernel.org>
Subject: Re: regression introduced by - timers: fix itimer/many thread hang

On Thu, 2008-11-06 at 16:08 +0100, Peter Zijlstra wrote: 
> On Thu, 2008-11-06 at 09:03 -0600, Christoph Lameter wrote:
> > On Thu, 6 Nov 2008, Peter Zijlstra wrote:
> > 
> > > Also, you just introduced per-cpu allocations for each thread-group,
> > > while Christoph is reworking the per-cpu allocator, with one unfortunate
> > > side-effect - it's going to have a limited-size pool. Therefore this will
> > > limit the number of thread-groups we can have.
> > 
> > Patches exist that implement a dynamically growable percpu pool (using
> > virtual mappings though). If the cost of the additional complexity /
> > overhead is justifiable then we can make the percpu pool dynamically
> > extendable.
> 
> Right, but I don't think the patch under consideration will fly anyway,
> doing a for_each_possible_cpu() loop on every tick on all cpus isn't
> really healthy, even for moderately sized machines.
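
(For anyone following along, the per-tick pattern in question is roughly
the following; this is an illustrative sketch with placeholder names,
not the exact code from the patch.)

#include <linux/percpu.h>

struct tg_cputime {
	unsigned long long utime;		/* group-wide user time */
	unsigned long long stime;		/* group-wide system time */
	unsigned long long sum_exec_runtime;	/* group-wide sched runtime */
};

/* One slot per CPU, allocated per thread group with alloc_percpu(). */
static struct tg_cputime *tg_totals;

/* Summing the group totals touches every possible CPU's slot. */
static void tg_cputime_sum(struct tg_cputime *sum)
{
	int cpu;

	sum->utime = sum->stime = sum->sum_exec_runtime = 0;
	for_each_possible_cpu(cpu) {
		const struct tg_cputime *t = per_cpu_ptr(tg_totals, cpu);

		sum->utime += t->utime;
		sum->stime += t->stime;
		sum->sum_exec_runtime += t->sum_exec_runtime;
	}
}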

I personally think that you're overstating this.  First, the current
implementation walks all threads for each tick, which is simply not
scalable and results in soft lockups with large numbers of threads.
This patch fixes a real bug.  Second, this only happens "on every tick"
for processes that have more than one thread _and_ that use POSIX
interval timers.  Roland and I went to some effort to keep loops like
the one you're referring to out of the common paths.
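
The shape of the per-tick check is roughly this (again, a sketch only;
the two helpers are placeholders for the real checks and expiry work in
the posix-cpu-timers code):

#include <linux/sched.h>

/* Placeholder: the real check inspects the group's armed expiries. */
static int tg_has_armed_cpu_timers(struct signal_struct *sig)
{
	return 0;
}

/* Placeholder: this is where the per-CPU summation actually happens. */
static void run_expired_cpu_timers(struct task_struct *tsk)
{
}

static void tick_check_cpu_timers(struct task_struct *tsk)
{
	/* Single-threaded tasks never reach the per-CPU loop. */
	if (thread_group_empty(tsk))
		return;
	/* Neither do groups with no POSIX interval timer armed. */
	if (!tg_has_armed_cpu_timers(tsk->signal))
		return;

	run_expired_cpu_timers(tsk);
}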

In any event, while this particular implementation may not be optimal,
at least it's _right_.  Whatever happened to "make it right, then make
it fast"?
-- 
Frank Mayhar <fmayhar@...gle.com>
Google, Inc.

