Message-ID: <480B8932.1090308@firstfloor.org>
Date: Sun, 20 Apr 2008 20:19:30 +0200
From: Andi Kleen <andi@...stfloor.org>
To: Jörn Engel <joern@...fs.org>
CC: Willy Tarreau <w@....eu>, Mark Lord <lkml@....ca>,
Adrian Bunk <bunk@...nel.org>,
Alan Cox <alan@...rguk.ukuu.org.uk>,
Shawn Bohrer <shawn.bohrer@...il.com>,
Ingo Molnar <mingo@...e.hu>,
Andrew Morton <akpm@...ux-foundation.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Arjan van de Ven <arjan@...radead.org>,
Thomas Gleixner <tglx@...utronix.de>
Subject: Re: x86: 4kstacks default
Jörn Engel wrote:
> On Sun, 20 April 2008 19:19:26 +0200, Andi Kleen wrote:
>> But these are SoC systems. Do they really run x86?
>> (note we're talking about an x86 default option here)
>>
>> Also I suspect in a true 16MB system you have to strip down
>> everything kernel side so much that you're pretty much outside
>> the "validated by testers" realm that Adrian cares about.
>
> Maybe. I merely showed that embedded people (not me) have good reasons
> to care about small stacks.
Sure, but I don't think they're x86 embedded people. Right now there
are very few x86 SoCs, if any (IIRC there is only some obscure Rise
core), and future SoCs will likely have more RAM.
Anyway, I have no problem giving these people whatever special options
they need to do what they want. I just object to changing the
default options on important architectures in order to force people in
completely different setups to do part of their testing.
> Whether they care enough to actually spend
> work on it - doubtful.
>
>>> When dealing in those dimensions, savings of 100k are substantial. In
>>> some cases they may be the difference between 16MiB and 32MiB, which
>>> translates to manufacturing costs. In others it simply means that the
>>> system can cache
>> If you need the stack, you don't have any less cache footprint.
>> If you don't need it, you don't have any either.
>
> This part I don't understand.
I was just objecting to your claim that a small stack implies a smaller
cache footprint. In my kernel coding experience, smaller stacks rarely
give you a smaller cache footprint:
First, some of the stack is always safety margin and is in practice
unused. It won't be in cache.
Then, the typical standard kernel stack pigs are just overly large
buffers on the stack which are not fully used. These also
don't have much cache footprint.
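For illustration, a made-up stack pig (all names hypothetical, not
code from the tree) looks roughly like this:

#include <linux/string.h>
#include <linux/errno.h>

/* Hypothetical stack pig: 1KB reserved on the stack, but a short
 * name only ever dirties the first cache line or two. */
static int lookup_name(const char *name)
{
	char buf[1024];		/* reserved, mostly untouched */

	strlcpy(buf, name, sizeof(buf));
	/* only strlen(name)+1 bytes were written, so only those
	 * cache lines are actually touched */
	return buf[0] ? 0 : -EINVAL;
}

Shrinking buf saves stack space, but the cache lines it never touched
were never in cache to begin with.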
Or, if you have a complicated call chain, the typical fix
is to move parts of it into another thread. But that doesn't
give you less cache footprint, because the footprint
just ends up in someone else's stack. In fact you'll likely
have slightly more cache footprint from that, due to the
context of the other thread.
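The usual shape of that fix is something like the following sketch
(the frob_* names are made up):

#include <linux/workqueue.h>
#include <linux/slab.h>
#include <linux/errno.h>
#include <linux/kernel.h>

struct frob_request {
	struct work_struct work;
	int arg;
};

static void frob_worker(struct work_struct *work)
{
	struct frob_request *req =
		container_of(work, struct frob_request, work);

	/* the deep call chain runs here, on the worker thread's
	 * stack -- the same data is touched, just from another
	 * stack, plus the worker's own context on top */
	kfree(req);
}

static int frob_async(int arg)
{
	struct frob_request *req = kmalloc(sizeof(*req), GFP_ATOMIC);

	if (!req)
		return -ENOMEM;
	req->arg = arg;
	INIT_WORK(&req->work, frob_worker);
	schedule_work(&req->work);
	return 0;
}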
In theory, if you e.g. convert a recursive algorithm to an
iterative one you might save some cache footprint, but I don't
think that really happens in kernel code.
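E.g. (a trivial made-up example):

struct node {
	struct node *next;
};

/* one stack frame per element -- stack use (and the cache
 * lines backing it) grows with the length of the list */
static int count_recursive(struct node *n)
{
	return n ? 1 + count_recursive(n->next) : 0;
}

/* one fixed frame -- constant stack, constant stack-side
 * cache footprint */
static int count_iterative(struct node *n)
{
	int count = 0;

	for (; n; n = n->next)
		count++;
	return count;
}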
-Andi