Message-ID: <Pine.LNX.4.64.0710091138250.32162@schroedinger.engr.sgi.com>
Date: Tue, 9 Oct 2007 11:39:21 -0700 (PDT)
From: Christoph Lameter <clameter@....com>
To: Nick Piggin <nickpiggin@...oo.com.au>
cc: Rik van Riel <riel@...hat.com>, Andi Kleen <ak@...e.de>,
akpm@...ux-foundation.org, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, travis@....com
Subject: Re: [13/18] x86_64: Allow fallback for the stack
On Mon, 8 Oct 2007, Nick Piggin wrote:
> The tight memory restrictions on stack usage do not come about because
> of the difficulty in increasing the stack size :) It is because we want to
> keep stack sizes small!
>
> Increasing the stack size by 4K uses another 4MB of memory for every 1000
> threads you have, right?
>
> It would take a lot of good reasons to move away from the general direction
> we've been taking over the past years, namely that 4/8K stacks are a good
> idea for regular 32 and 64 bit builds.
We already use 32k stacks on IA64. So the memory argument fails there.
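
For scale, here is a minimal sketch of the arithmetic behind that memory
argument; the extra 4k per stack and the 1000-thread count are simply the
figures quoted above, not measurements:

/* Rough cost of growing every kernel stack by 4k, illustrative only. */
#include <stdio.h>

int main(void)
{
	unsigned long extra_per_stack = 4UL * 1024;	/* +4k per kernel stack */
	unsigned long threads = 1000;
	unsigned long total = extra_per_stack * threads;

	/* 1000 threads * 4k each comes to roughly 4MB of extra,
	   essentially unreclaimable kernel memory. */
	printf("%lu threads * %lu bytes = %lu bytes (~%.1f MB)\n",
	       threads, extra_per_stack, total, total / (1024.0 * 1024.0));
	return 0;
}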
> > I have some concerns about medium-sized NUMA systems (a few dozen nodes)
> > also running out of stack since more data is placed on the stack through
> > the policy layer and since we may end up with a couple of stacked
> > filesystems. Most of the current NUMA systems on x86_64 are basically
> > two nodes on one motherboard. The use of NUMA controls is likely
> > limited there and the complexity of the filesystems is also not high.
>
> The solution has until now always been to fix the problems so they don't
> use so much stack. Maybe a bigger stack is OK for you for 1024+ CPU
> systems, but I don't think you'd be able to make that assumption for most
> normal systems.
Yes, that is why I made the stack size configurable.
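
A minimal sketch of what "configurable" could mean here, assuming a
hypothetical CONFIG_STACK_ORDER Kconfig symbol (THREAD_ORDER and
THREAD_SIZE follow the existing x86_64 thread_info.h convention; this is
not the actual patch text):

/* Sketch: let the stack order come from Kconfig instead of being a
 * hard-coded constant. CONFIG_STACK_ORDER is an assumed symbol name. */
#ifdef CONFIG_STACK_ORDER
#define THREAD_ORDER	CONFIG_STACK_ORDER
#else
#define THREAD_ORDER	1			/* 8k stacks with 4k pages */
#endif
#define THREAD_SIZE	(PAGE_SIZE << THREAD_ORDER)

With 4k pages, an order of 3 would give the 32k stacks already used on
IA64 mentioned above.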