Message-ID: <48AACFDA.5090600@linux-foundation.org>
Date: Tue, 19 Aug 2008 08:51:22 -0500
From: Christoph Lameter <cl@...ux-foundation.org>
To: KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com>
CC: Matthew Wilcox <matthew@....cx>,
Pekka Enberg <penberg@...helsinki.fi>,
akpm@...ux-foundation.org, linux-kernel@...r.kernel.org,
linux-fsdevel@...r.kernel.org, Mel Gorman <mel@...net.ie>,
andi@...stfloor.org, Rik van Riel <riel@...hat.com>
Subject: Re: No, really, stop trying to delete slab until you've finished
making slub perform as well
KOSAKI Motohiro wrote:
> IOW, my box didn't experience a performance regression,
> but I think it isn't typical.
Well, that is typical for small NUMA systems. Maybe this patch will fix it
for now? Large systems can be tuned by setting the ratio lower.
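For reference, the knob feeds a gate at the top of get_any_partial() in
mm/slub.c: a stored ratio of 0 always skips the scan of remote nodes'
partial lists, and otherwise the stored value (the sysfs percentage times
ten) is checked against get_cycles() % 1024. Below is a minimal userspace
model of that gate, a sketch under those assumptions rather than the
kernel code itself:

/*
 * Standalone model of the remote_node_defrag_ratio gate in
 * get_any_partial() (mm/slub.c). Assumptions: the cache stores the
 * sysfs percentage times ten, and the kernel compares it against
 * get_cycles() % 1024 (modeled here with rand()). Illustrative only.
 */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* Nonzero return: try to take a partial slab from a remote node. */
static int remote_scan_allowed(unsigned int stored_ratio)
{
	if (!stored_ratio || (unsigned int)(rand() % 1024) > stored_ratio)
		return 0;	/* skip the scan; a new node-local slab is used */
	return 1;
}

int main(void)
{
	unsigned int stored[] = { 0, 100 };	/* e.g. sysfs 0% and 10% */

	srand((unsigned int)time(NULL));
	for (int i = 0; i < 2; i++) {
		int hits = 0;

		for (int j = 0; j < 100000; j++)
			hits += remote_scan_allowed(stored[i]);
		printf("stored %3u: remote scan in ~%d%% of calls\n",
		       stored[i], hits / 1000);
	}
	return 0;
}
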
Subject: slub/NUMA: Disable remote node defragmentation by default
Switch remote node defragmentation off by default. The current settings can
cause excessive node-local allocations with hackbench. (Note that this
feature is not related to slab defragmentation.)
Signed-off-by: Christoph Lameter <cl@...ux-foundation.org>
---
 mm/slub.c |    4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
Index: linux-2.6/mm/slub.c
===================================================================
--- linux-2.6.orig/mm/slub.c 2008-08-19 06:45:54.732348449 -0700
+++ linux-2.6/mm/slub.c 2008-08-19 06:46:12.442348249 -0700
@@ -2312,7 +2312,7 @@ static int kmem_cache_open(struct kmem_c
 
 	s->refcount = 1;
 #ifdef CONFIG_NUMA
-	s->remote_node_defrag_ratio = 100;
+	s->remote_node_defrag_ratio = 1000;
 #endif
 	if (!init_kmem_cache_nodes(s, gfpflags & ~SLUB_DMA))
 		goto error;
@@ -4058,7 +4058,7 @@ static ssize_t remote_node_defrag_ratio_
 	if (err)
 		return err;
 
-	if (ratio < 100)
+	if (ratio <= 100)
 		s->remote_node_defrag_ratio = ratio * 10;
 
 	return length;
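
As a usage note, the ratio is a per-cache sysfs attribute, so with this
patch any value in 0..100 is accepted (0 turns the remote scan off
entirely). A hypothetical snippet, assuming a kmalloc-64 cache exists
under /sys/kernel/slab/:

/*
 * Hypothetical userspace helper: set the remote_node_defrag_ratio
 * percentage for one slab cache. Path and cache name are illustrative;
 * the value must be within 0..100 (0 disables the remote scan).
 */
#include <stdio.h>

int main(void)
{
	const char *path =
		"/sys/kernel/slab/kmalloc-64/remote_node_defrag_ratio";
	FILE *f = fopen(path, "w");

	if (!f) {
		perror("fopen");
		return 1;
	}
	fprintf(f, "0\n");	/* 0 = never scan remote nodes' partial lists */
	return fclose(f) ? 1 : 0;
}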