Message-ID: <8763ub8rcu.fsf@basil.nowhere.org>
Date: Mon, 21 Apr 2008 11:55:13 +0200
From: Andi Kleen <andi@...stfloor.org>
To: Denys Vlasenko <vda.linux@...glemail.com>
Cc: Eric Sandeen <sandeen@...deen.net>, Adrian Bunk <bunk@...nel.org>,
Alan Cox <alan@...rguk.ukuu.org.uk>,
Shawn Bohrer <shawn.bohrer@...il.com>,
Ingo Molnar <mingo@...e.hu>,
Andrew Morton <akpm@...ux-foundation.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Arjan van de Ven <arjan@...radead.org>,
Thomas Gleixner <tglx@...utronix.de>
Subject: Re: x86: 4kstacks default
Denys Vlasenko <vda.linux@...glemail.com> writes:
>
> Forget about 50k threads. 4k of waste per process is a waste nevertheless.
> It's not at all unusual to have 250+ processes, and 250 processes with an 8k
> stack each waste 1M. Do you think an extra 1M wouldn't be useful to have?
If the 1M gives you more reliability (and I think it does), I don't
think it is "wasted". Would you trade occasional crashes for 1MB of
savings? I wouldn't.
Also, a typical process uses much more memory than just 4K. If it's
not a thread it needs its own page tables, and from those alone you're
easily into 10+ pages even for a quite small process. And even threads
in practice have other overheads if they actually do something.
The 4K won't make or break you.
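[To illustrate the back-of-envelope arithmetic: the figures below are the
rough assumptions used in this thread (one extra 4K stack page per process,
~10 page-table pages for a small process), not measured values.]

/* Rough illustration of the numbers discussed above: the total cost of
 * one extra stack page vs. the page-table overhead alone, for 250
 * processes. Build with: gcc -o stackmath stackmath.c
 */
#include <stdio.h>

int main(void)
{
	const long page_size   = 4096;  /* 4K pages on x86 */
	const long nprocs      = 250;   /* "250+ processes" from the quote */
	const long extra_stack = 4096;  /* 8K vs. 4K stacks: one extra page */
	const long pt_pages    = 10;    /* assumed page-table pages per small process */

	printf("extra stack total: %ld KB\n",
	       nprocs * extra_stack / 1024);            /* ~1000 KB, the "1M" */
	printf("page tables total: %ld KB\n",
	       nprocs * pt_pages * page_size / 1024);   /* ~10000 KB */
	printf("stack share:       %ld%%\n",
	       100 * extra_stack / (extra_stack + pt_pages * page_size));
	return 0;
}

On those assumptions the extra stack page is under 10% of the page-table
overhead alone, before counting any other per-process memory.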
[BTW, if you're really interested in saving memory there are lots
of other subsystems where you could very likely save more. A common
example is the standard hash tables, which are still too big.]
The trends are also against it: kernel code is getting more complex
all the time, with more and more complicated stacks of different
subsystems layered on top of each other. It wouldn't surprise me if
at some point even 8KB isn't enough anymore. Going in the other
direction is definitely the wrong way.
-Andi