Message-Id: <20081117133137.616cf287.akpm@linux-foundation.org>
Date: Mon, 17 Nov 2008 13:31:37 -0800
From: Andrew Morton <akpm@...ux-foundation.org>
To: Linus Torvalds <torvalds@...ux-foundation.org>
Cc: rostedt@...dmis.org, linux-kernel@...r.kernel.org,
paulus@...ba.org, benh@...nel.crashing.org,
linuxppc-dev@...abs.org, mingo@...e.hu, tglx@...utronix.de,
linux-mm@...ck.org
Subject: Re: Large stack usage in fs code (especially for PPC64)
On Mon, 17 Nov 2008 13:23:23 -0800 (PST)
Linus Torvalds <torvalds@...ux-foundation.org> wrote:
>
>
> On Mon, 17 Nov 2008, Andrew Morton wrote:
> >
> > Far be it from me to apportion blame, but THIS IS ALL LINUS'S FAULT!!!!! :)
> >
> > I fixed this six years ago. See http://lkml.org/lkml/2002/6/17/68
>
> Btw, in that thread I also said:
>
> "If we have 64kB pages, such architectures will have to have a bigger
> kernel stack. Which they will have, simply by virtue of having the very
> same bigger page. So that problem kind of solves itself."
>
> and that may still be the "right" solution - if somebody is so insane that
> they want 64kB pages, then they might as well have a 64kB kernel stack as
> well.
I'd have thought so, but I'm sure we're about to hear how important an
optimisation the smaller stacks are ;)
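[Editor's illustration, not part of the original mail: a minimal, hypothetical sketch of the "stack grows with the page" argument. The macro names below only mirror the usual PAGE_SIZE/THREAD_SIZE pattern; real architectures define these in their own arch headers with their own values.]

/*
 * Hypothetical sketch, not actual arch code: if the kernel stack is
 * sized as a fixed number of pages, bumping PAGE_SIZE to 64kB bumps
 * the stack size along with it.
 */
#include <stdio.h>

#define PAGE_SHIFT        16                    /* 64kB pages */
#define PAGE_SIZE         (1UL << PAGE_SHIFT)
#define THREAD_SIZE_ORDER 0                     /* one page per stack */
#define THREAD_SIZE       (PAGE_SIZE << THREAD_SIZE_ORDER)

int main(void)
{
	printf("page size:  %lu kB\n", PAGE_SIZE / 1024);
	printf("stack size: %lu kB\n", THREAD_SIZE / 1024);
	return 0;
}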
> Trust me, the kernel stack isn't where you blow your memory with a 64kB
> page. You blow all your memory on the memory fragmentation of your page
> cache. I did the stats for the kernel source tree a long time ago, and I
> think you wasted something like 4GB of RAM with a 64kB page size.
>
Yup. That being said, the younger me did assert that "this is a neater
implementation anyway". If we can implement those loops without
needing those on-stack temporary arrays then things probably are better
overall.
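[Editor's illustration, not part of the original mail: a self-contained sketch of the kind of on-stack temporary array being discussed, modelled on the arr[MAX_BUF_PER_PAGE] pattern in the block I/O paths rather than copied from fs/. The sizes shown assume 512-byte buffers and 8-byte pointers.]

/*
 * Simplified, standalone sketch: with 512-byte buffers, a 4kB page
 * needs 8 slots but a 64kB page needs 128, i.e. about 1kB of pointers
 * on the stack per frame on a 64-bit machine.
 */
#include <stdio.h>

#define PAGE_SIZE        (64UL * 1024)          /* 64kB pages, as on PPC64 */
#define MAX_BUF_PER_PAGE (PAGE_SIZE / 512)      /* 128 entries */

struct buffer_head;                             /* opaque for this sketch */

static void read_one_page(void)
{
	struct buffer_head *arr[MAX_BUF_PER_PAGE];  /* the on-stack temporary */

	printf("on-stack array: %zu bytes\n", sizeof(arr));
}

int main(void)
{
	read_one_page();
	return 0;
}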