Date:	Fri, 07 Nov 2008 10:10:48 -0800
From:	Frank Mayhar <fmayhar@...gle.com>
To:	Peter Zijlstra <peterz@...radead.org>
Cc:	Christoph Lameter <cl@...ux-foundation.org>,
	Doug Chapman <doug.chapman@...com>, mingo@...e.hu,
	roland@...hat.com, adobriyan@...il.com, akpm@...ux-foundation.org,
	linux-kernel <linux-kernel@...r.kernel.org>
Subject: Re: regression introduced by - timers: fix itimer/many thread hang

On Fri, 2008-11-07 at 11:29 +0100, Peter Zijlstra wrote: 
> (fwiw your email doesn't come across properly, evo refuses to display
> it; there's some mangling of headers which makes it think there's an
> attachment)

Strange, evolution is what I used.  It's what I'm using to write this.

> On Thu, 2008-11-06 at 15:52 -0800, Frank Mayhar wrote:
> Well, I'm not thinking you did it right ;-)

Well you would be wrong, then, wouldn't you? :-)

> While I agree that the linear loop is sub-optimal, it only really
> becomes a problem when you have hundreds or thousands of threads in
> your application, which I'll argue to be insane anyway.

You might argue that (and a few months ago I would have agreed with you)
but you would, I'm afraid, be wrong.  It very much depends on the
application.  We ran into the problem when we were running applications
with more than 4500 threads; there are good reasons to have lots-n-lots
of threads having to do with efficient use of resources, which I can't
go into at the moment.

> But with your new scheme it'll be a problem regardless of how many
> threads you have, as long as each running application has at least
> 2 threads (not uncommon these days).

If and only if the process on the CPU at the tick is using a POSIX
interval timer.
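
To make that gate concrete, it amounts to something like the following
(a simplified sketch in plain C; the structure and field names are made
up for illustration and are not the kernel's):

#include <stdbool.h>

struct task {
	bool group_has_cpu_timer;	/* armed POSIX CPU timer? */
};

/* Per-tick gate: extra work happens only when the task running on
 * this CPU belongs to a thread group with an armed CPU timer. */
static void tick_check(struct task *on_cpu)
{
	if (!on_cpu->group_has_cpu_timer)
		return;			/* common case: no per-tick cost */
	/* otherwise: charge this tick and test for expiry */
}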

Further, the former implementation had quadratic (or nearly so)
performance in the number of threads in a process, making it impossible
to predict how long the processing would take.  This implementation has
performance linear in the number of CPUs, a count that is fixed over
the life of the running kernel instance.
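
To illustrate the complexity argument, here is a toy model of the two
expiry-check strategies (ordinary userspace C, not the kernel
implementation; names and time units are illustrative):

#include <stdint.h>

struct cpu_acct {
	uint64_t utime;		/* user time, arbitrary units */
	uint64_t stime;		/* system time, arbitrary units */
};

/* Old scheme: the expiry check walks every thread in the group, so
 * each check costs O(nthreads). */
static uint64_t group_time_old(const struct cpu_acct *per_thread,
			       int nthreads)
{
	uint64_t total = 0;
	int i;

	for (i = 0; i < nthreads; i++)
		total += per_thread[i].utime + per_thread[i].stime;
	return total;
}

/* New scheme: each tick charges the local CPU's slot, and the expiry
 * check sums one slot per CPU, so each check costs O(ncpus) -- a
 * bound fixed for the life of the running kernel. */
static uint64_t group_time_new(const struct cpu_acct *per_cpu, int ncpus)
{
	uint64_t total = 0;
	int cpu;

	for (cpu = 0; cpu < ncpus; cpu++)
		total += per_cpu[cpu].utime + per_cpu[cpu].stime;
	return total;
}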

> Furthermore, the memory requirements for your solution now also scale
> with cpus instead of threads, again something not really appreciated.
> 
> Therefore I say your solution is worse than the one we had.

I'm sorry, I can't agree.  While I again admit that it may not be
optimal, it's certainly never going to hit a soft lockup until the
number of CPUs is far greater than 4096, and even then it's going to be
difficult to make it fail since the algorithm is more efficient.

The simple fact is that with the old implementation it is possible to
wedge the kernel simply by having a single process spawn a large number
of threads with an interval timer running.
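
A minimal reproducer in that spirit, assuming nothing beyond POSIX
threads and setitimer(2) (thread count, interval, and stack size are
arbitrary illustrative values):

#include <pthread.h>
#include <signal.h>
#include <stdio.h>
#include <sys/time.h>
#include <unistd.h>

#define NTHREADS 4500			/* the thread count cited above */

static void *spin(void *arg)
{
	(void)arg;
	for (;;)
		;			/* burn CPU so ITIMER_PROF keeps firing */
	return NULL;
}

static void on_prof(int sig)
{
	(void)sig;			/* just absorb SIGPROF */
}

int main(void)
{
	struct itimerval it = {
		.it_interval = { 0, 10000 },	/* 10ms period */
		.it_value    = { 0, 10000 },
	};
	pthread_attr_t attr;
	pthread_t tid;
	int i;

	signal(SIGPROF, on_prof);
	if (setitimer(ITIMER_PROF, &it, NULL) != 0) {
		perror("setitimer");
		return 1;
	}
	pthread_attr_init(&attr);
	pthread_attr_setstacksize(&attr, 64 * 1024);	/* keep VM use modest */
	for (i = 0; i < NTHREADS; i++) {
		if (pthread_create(&tid, &attr, spin, NULL) != 0) {
			perror("pthread_create");
			return 1;
		}
	}
	for (;;)
		pause();		/* main just waits while the spinners run */
}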

> You should optimize for the common case, and ensure the worst case
> doesn't suck. You did it backwards.

You're saying that having four or fewer CPUs isn't the common case?

Oh, and you keep calling it "my" solution.  I should really share credit
(or blame) with Roland McGrath, who collaborated with me to write this
thing.
-- 
Frank Mayhar <fmayhar@...gle.com>
Google, Inc.
