Message-ID: <20070719001539.GC29728@v2.random>
Date: Thu, 19 Jul 2007 02:15:39 +0200
From: Andrea Arcangeli <andrea@...e.de>
To: Matt Mackall <mpm@...enic.com>
Cc: Rene Herman <rene.herman@...il.com>,
Ray Lee <ray-lk@...rabbit.org>, Bodo Eggert <7eggert@....de>,
Jeremy Fitzhardinge <jeremy@...p.org>,
Jesper Juhl <jesper.juhl@...il.com>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
William Lee Irwin III <wli@...omorphy.com>,
David Chinner <dgc@....com>,
Arjan van de Ven <arjan@...radead.org>
Subject: Re: [PATCH][RFC] 4K stacks default, not a debug thing any more...?
On Mon, Jul 16, 2007 at 06:27:55PM -0500, Matt Mackall wrote:
> So it's absolutely no help in fixing our order-1 allocation problem
> because we don't want to force large pages on people.
Using kmalloc(8k) instead of alloc_page() doesn't sound like too big a
deal, and that will solve the problem. The whole idea is to avoid the
memcpy + pte mangling of defrag while hopefully lowering cpu
utilization in allocations at the same time.
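To illustrate the substitution (a hypothetical sketch, not code from
any posted patch; the assumption, per the proposal discussed earlier
in this thread, is a larger base kernel page so the kmalloc becomes a
sub-page slab object instead of a multi-page buddy allocation):

/*
 * Before: order-1 buddy allocation. Needs two physically
 * contiguous 4k pages, so it is the first to fail under
 * fragmentation.
 */
struct page *page = alloc_pages(GFP_KERNEL, 1);

/*
 * After: the same 8k object via the slab allocator. With a
 * larger base page size it is carved out of a single page,
 * so no higher-order allocation and no defrag memcpy + pte
 * mangling is ever needed for it.
 */
void *buf = kmalloc(8192, GFP_KERNEL);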
As for 4k stacks, I was generally against them: much better to fail in
fork than to risk corruption. The per-irq stack part, on the other
hand, is a great feature (too bad it wasn't enabled for the safer 8k
stacks).
Failing in do_no_page with a variable-order page size allocation is a
fatal event (the task will be killed); failing in fork is graceful,
userland can retry etc... Fork can fail for various reasons anyway,
and ulimit itself is the most likely source of fork failures. I don't
think 8k stacks have ever been a problem: yes, you will run out of
stacks sooner (sooner also because 4k stacks take less memory), but
nothing is terribly wrong if the 8k allocation fails.
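To make the "userland can retry" point concrete, a hypothetical
userspace sketch (the retry policy is invented for illustration):

#include <errno.h>
#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>

/*
 * fork() failing with EAGAIN or ENOMEM (e.g. because the kernel
 * could not allocate a stack) is recoverable: back off and retry.
 */
static pid_t fork_with_retry(int attempts)
{
	while (attempts-- > 0) {
		pid_t pid = fork();
		if (pid >= 0)
			return pid;	/* 0 in the child, child's pid in the parent */
		if (errno != EAGAIN && errno != ENOMEM)
			break;		/* not a transient failure, give up */
		sleep(1);		/* back off before retrying */
	}
	perror("fork");
	return -1;
}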