[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-ID: <469BF768.6040200@gmail.com>
Date: Tue, 17 Jul 2007 00:55:36 +0200
From: Rene Herman <rene.herman@...il.com>
To: Ray Lee <ray-lk@...rabbit.org>
CC: Bodo Eggert <7eggert@....de>, Matt Mackall <mpm@...enic.com>,
Jeremy Fitzhardinge <jeremy@...p.org>,
Jesper Juhl <jesper.juhl@...il.com>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
William Lee Irwin III <wli@...omorphy.com>,
David Chinner <dgc@....com>,
Arjan van de Ven <arjan@...radead.org>
Subject: Re: [PATCH][RFC] 4K stacks default, not a debug thing any more...?
On 07/17/2007 12:37 AM, Ray Lee wrote:
> On 7/16/07, Rene Herman <rene.herman@...il.com> wrote:
>> Seeing as how single-page stacks are much easier on the VM so that
>> creating those zillion threads should also be faster, at _some_
>> percentage we get to say "and now to hell with the rest".
>
> This is the core dispute here. Stated differently, I hope you never
> design a bridge that I have to drive over.
>
> Correctness first, optimization second. Introducing random and
> difficult to trace crashes upon an unsuspecting audience of sysadmins
> and users is not a viable option.
Quite. But unfortunately you didn't actually go into the bit on how given
seperate interrupt stacks, available stackspace might not actually _be_ less
after selecting CONFIG_4KSTCKS nor into Fedora and RHEL shipping it already.
> If at some point one of the pro-4k stacks crowd can prove that all
> code paths are safe
I'll do that the minute you prove the current shared 8K stacks are safe. Do
we have a deal?
> or introduce another viable alternative (such as Matt's idea for
> extending the stack dynamically), then removing the 8k stacks option
> makes sense.
I'm still waiting for larger soft-pages... does anyone in this thread have a
clue on their status?
Rene.
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/
Powered by blists - more mailing lists