Message-Id: <1233910649.29891.26.camel@penberg-laptop>
Date: Fri, 06 Feb 2009 10:57:29 +0200
From: Pekka Enberg <penberg@...helsinki.fi>
To: Hugh Dickins <hugh@...itas.com>
Cc: "Zhang, Yanmin" <yanmin_zhang@...ux.intel.com>,
Nick Piggin <npiggin@...e.de>,
Linux Memory Management List <linux-mm@...ck.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Lin Ming <ming.m.lin@...el.com>,
Christoph Lameter <cl@...ux-foundation.org>
Subject: Re: [patch] SLQB slab allocator
Hi Hugh,
On Thu, 2009-02-05 at 19:04 +0000, Hugh Dickins wrote:
> I then tried a patch I thought obviously better than yours: just mask
> off __GFP_WAIT in that __GFP_NOWARN|__GFP_NORETRY preliminary call to
> alloc_slab_page(): so we're not trying to infer anything about high-
> order availability from the number of free order-0 pages, but actually
> going to look for it and taking it if it's free, forgetting it if not.
>
> That didn't work well at all: almost as bad as the unmodified slub.c.
> I decided that was due to __alloc_pages_internal()'s
> wakeup_kswapd(zone, order): just expressing an interest in a high-
> order page was enough to send it off trying to reclaim them, though
> not directly. Hacked in a condition to suppress that in this case:
> worked a lot better, but not nearly as well as yours. I supposed
> that was somehow(?) due to the subsequent get_page_from_freelist()
> calls with different watermarking: hacked in another __GFP flag to
> break out to nopage just like the NUMA_BUILD GFP_THISNODE case does.
> Much better, getting close, but still not as good as yours.
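Just so we're talking about the same thing, the __GFP_WAIT experiment is
roughly the below, right? An opportunistic high-order attempt that is never
allowed to reclaim, with an order-0 fallback using the caller's original gfp
mask. (This is only a rough sketch, not your actual patch: the function name
is made up and the real code path in slub.c is allocate_slab() ->
alloc_slab_page() with the kmem_cache order/objects handling.)

/*
 * Sketch only: try the higher order opportunistically with __GFP_WAIT
 * masked off so the attempt cannot enter reclaim, then fall back to an
 * order-0 allocation with the caller's original gfp mask.
 */
static struct page *try_high_order_then_fallback(gfp_t flags, int order)
{
	struct page *page;

	/* Opportunistic attempt: no warning, no retries, no reclaim. */
	page = alloc_pages((flags | __GFP_NOWARN | __GFP_NORETRY) &
			   ~__GFP_WAIT, order);
	if (page)
		return page;

	/* Fall back to a minimum-order page with the original flags. */
	return alloc_pages(flags, 0);
}

If that is the shape of it, then the order-0 fallback path is where I would
start looking for the remaining difference.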
Did you look at it with oprofile? One thing to keep in mind is that if
there are 4K allocations going on, your approach will incur double the
page allocation overhead (which can be a substantial performance hit for
slab).
Pekka