[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-ID: <Pine.LNX.4.64.0707280627110.26545@blonde.wat.veritas.com>
Date: Sat, 28 Jul 2007 06:54:37 +0100 (BST)
From: Hugh Dickins <hugh@...itas.com>
To: Benjamin Herrenschmidt <benh@...nel.crashing.org>
cc: Peter Zijlstra <peterz@...radead.org>,
Andi Kleen <andi@...stfloor.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Ingo Molnar <mingo@...e.hu>, linux-kernel@...r.kernel.org
Subject: Re: [patch] mm: reduce pagetable-freeing latencies
On Sat, 28 Jul 2007, Benjamin Herrenschmidt wrote:
>
> As I'm sweeping through arch code etc... preparing the ground for the
> proper mmu_gather surgery, I've been thinking about the way to deal with
> that per-cpu page list and finally came up with the idea that the best
> we can do is around the lines of trying to allocate the list via gfp,
> and if that fails, fallback to a (smaller than now) per-cpu. I'm
> reworking the interfaces such that the higher level code doesn't have to
> care whether preemption is enabled or disabled at a given point.
That doesn't sound like the best way to me at all. Using two means
of buffering, one with preemption enabled and the other not, seems
complex and prone to error (perhaps not while you're working on it,
but later on). We do already have that problem (the i_mmap_lock case
versus the others), but it's not a complication I'd want to extend.
The onstack array seems fine to me, even if you do end up deciding on
an array of one. Is there any evidence that it's a problem getting a
page for the freeing (other than in circumstances that are already
badly slowed down)? It's obvious that we need a fallback route,
but optimizing throughput on that route seems premature.
Hugh
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/
Powered by blists - more mailing lists