Message-ID: <469BF104.1040703@gmail.com>
Date: Tue, 17 Jul 2007 00:28:20 +0200
From: Rene Herman <rene.herman@...il.com>
To: Bodo Eggert <7eggert@....de>
CC: Matt Mackall <mpm@...enic.com>,
Jeremy Fitzhardinge <jeremy@...p.org>,
Jesper Juhl <jesper.juhl@...il.com>,
Ray Lee <ray-lk@...rabbit.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
William Lee Irwin III <wli@...omorphy.com>,
David Chinner <dgc@....com>,
Arjan van de Ven <arjan@...radead.org>
Subject: Re: [PATCH][RFC] 4K stacks default, not a debug thing any more...?
On 07/16/2007 03:43 PM, Bodo Eggert wrote:
> So we are in a desperate situation, we can make almost no progress,
> adding another task is going to push the system into an unrecoverable
> situation, and we make sure this task can be started.-)
Hnng. He was just saying that the odds of two pages being buddies are quite
small, not talking about literally two pages being free.
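(To make that concrete: the difference is a single buddy-allocator order. A
quick userspace sketch of the arithmetic, where order_for() and the buddy
check are my own paraphrase rather than the kernel's actual helpers, so take
it as illustration only.)

#include <stdio.h>

/* Userspace paraphrase of the kernel's get_order(): the smallest
 * power-of-two number of 4K pages covering 'size'.  Same arithmetic,
 * not the kernel's code. */
static int order_for(unsigned long size)
{
        int order = 0;
        unsigned long pages = (size + 4095) / 4096;

        while ((1UL << order) < pages)
                order++;
        return order;
}

/* Two adjacent page frames only form an order-1 block if the first
 * frame number is even, i.e. the pair is naturally aligned; that is
 * the "stacksize-aligned" bit in the quote below. */
static int are_order1_buddies(unsigned long first_pfn)
{
        return (first_pfn & 1UL) == 0;
}

int main(void)
{
        printf("4K stack: order-%d allocation\n", order_for(4096));   /* 0 */
        printf("8K stack: order-%d allocation\n", order_for(8192));   /* 1 */
        printf("pfns 6+7 buddies? %d, pfns 7+8 buddies? %d\n",
               are_order1_buddies(6), are_order1_buddies(7));         /* 1 0 */
        return 0;
}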
>> Moreover -- literally two pages free was hardly his point. The point is
>> just that (with a page being the allocation unit) single-page allocations
>> are guaranteed to succeed if _any_ memory is free, while two adjacent (yes,
>> and stacksize-aligned) pages will be pretty hard to come by once the system
>> has been up and running for some time.
>
> That never happened on my servers, therefore I'd opt for the little extra
> security of having spare 4k on the stack. (I made a patch which would
> printk a message if allocating a stack would ever fail).
>
> I'm not at all opposed to letting the guys with zillions of threads
> benefit from having less unused kernel stack, but unless it's secure for
> all users, it should not be default=y.
Given that, as Arjan stated, Fedora and even RHEL have been using 4K stacks
for some time now, and that the latter in particular is a distribution I
would expect both to host a relatively large number of lvm/md/xfs and
whatever-other-stack-eaters-you-have users and to be fairly conservative
about the chances of scribbling over kernel memory (I'm a trusting
person...), it seems there might at this stage be only very few offenders
left.
Seeing as single-page stacks are also much easier on the VM, meaning creating
those zillion threads should be faster as well, at _some_ percentage we get
to say "and now to hell with the rest".
Do also note that with interrupts off the process stack, available stack is
definitely not halved. I don't have data (if anyone reading does, please
say) but I expect that on the kinds of busy networked systems that want
many-thread creation to be fastest, the many concurrent interrupt sources
might mean they are not actually seeing less usable stack at all. That is,
the "little extra security" you speak of might very well be none at all in
practice, and perhaps even negative.
Getting interrupts onto their own stack(s) certainly made for better (more
deterministic, that is) behaviour as well -- you're then independent of how
deep into the stack you already are when the interrupt comes in, which is
otherwise anyone's guess. Now I must say I'm not particularly sure why you
couldn't still have those even if you don't pick 4K stacks, but as far as
I'm aware they're a package deal, at least today.
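(For reference, this is roughly the shape of the i386 CONFIG_4KSTACKS
interrupt-stack arrangement as I remember it; identifiers and storage are
paraphrased, not the literal arch/i386 source, so read it as a sketch of the
idea rather than the real thing.)

#define THREAD_SIZE     4096
#define NR_CPUS         8               /* whatever the config says */

/*
 * One dedicated stack per CPU for hardirqs, and another for softirqs.
 * The real kernel sets these up at boot and embeds a struct thread_info
 * at the bottom of each; static storage here is just for brevity.  The
 * THREAD_SIZE alignment is what lets the usual mask-the-stack-pointer
 * trick find the thread_info on the interrupt stack too.
 */
static char hardirq_stack[NR_CPUS][THREAD_SIZE]
                __attribute__((aligned(THREAD_SIZE)));
static char softirq_stack[NR_CPUS][THREAD_SIZE]
                __attribute__((aligned(THREAD_SIZE)));

/*
 * Conceptually, do_IRQ() then does:
 *
 *      if (not already on this CPU's interrupt stack)
 *              switch %esp to the top of hardirq_stack[cpu]
 *              and call the handler there;
 *      else
 *              call the handler on the current stack.
 *
 * So handler depth is bounded by the interrupt stack itself, no matter
 * how deep the interrupted task happened to be.
 */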
Single-page stacks are much nicer for everyone. For all I care, that single
page might be a larger soft-page. No idea how the hell you'd investigate the
optimum page size for any given system, but I quite fully expect it's larger
than 4K these days _anyway_, even on x86 and for modern loads.
Since Linux doesn't yet have those, that's not very important currently
though. 4K is 1024 32-bit datums, which isn't all that little if recursion
is limited and programmers are somewhat competent. With the interrupt stacks
meaning available stack might not be worse, or might even be _better_, for
some systems, I'd argue to just go with 4K if at all possible.
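(Back-of-the-envelope only, nothing kernel-specific; it just spells out the
1024-datum arithmetic and how fast one careless on-stack buffer eats into
it.)

#include <stdio.h>

int main(void)
{
        const int stack_bytes = 4096;
        const int careless_buf = 1024;  /* say, a char buf[1024] local
                                           somewhere in a deep call chain */

        /* 4096 bytes / 4 bytes per 32-bit datum = 1024 datums. */
        printf("%d 32-bit datums\n", stack_bytes / 4);
        printf("one %d-byte local eats %d%% of them\n",
               careless_buf, 100 * careless_buf / stack_bytes);
        return 0;
}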
Rene.