Message-Id: <1234416153.2604.387.camel@ymzhang>
Date: Thu, 12 Feb 2009 13:22:33 +0800
From: "Zhang, Yanmin" <yanmin_zhang@...ux.intel.com>
To: Pekka Enberg <penberg@...helsinki.fi>
Cc: Christoph Lameter <cl@...ux-foundation.org>,
Andi Kleen <andi@...stfloor.org>,
Matthew Wilcox <matthew@....cx>,
Nick Piggin <nickpiggin@...oo.com.au>,
Andrew Morton <akpm@...ux-foundation.org>,
netdev@...r.kernel.org, Stephen Rothwell <sfr@...b.auug.org.au>,
matthew.r.wilcox@...el.com, chinang.ma@...el.com,
linux-kernel@...r.kernel.org, sharad.c.tripathi@...el.com,
arjan@...ux.intel.com, suresh.b.siddha@...el.com,
harita.chilukuri@...el.com, douglas.w.styner@...el.com,
peter.xihong.wang@...el.com, hubert.nueckel@...el.com,
chris.mason@...cle.com, srostedt@...hat.com,
linux-scsi@...r.kernel.org, andrew.vasquez@...gic.com,
anirban.chakraborty@...gic.com, Ingo Molnar <mingo@...e.hu>
Subject: Re: Mainline kernel OLTP performance update
On Sat, 2009-01-24 at 09:36 +0200, Pekka Enberg wrote:
> On Fri, 2009-01-23 at 10:22 -0500, Christoph Lameter wrote:
> >> No there is another way. Increase the allocator order to 3 for the
> >> kmalloc-8192 slab then multiple 8k blocks can be allocated from one of the
> >> larger chunks of data gotten from the page allocator. That will allow slub
> >> to do fast allocs.
>
> On Sat, Jan 24, 2009 at 4:55 AM, Zhang, Yanmin
> <yanmin_zhang@...ux.intel.com> wrote:
> > After I change kmalloc-8192/order to 3, the result(pinned netperf UDP-U-4k)
> > difference between SLUB and SLQB becomes 1% which can be considered as fluctuation.
>
> Great. We should fix calculate_order() to be order 3 for kmalloc-8192.
> Are you interested in doing that?
Pekka,
Sorry for the late update.
The default order of kmalloc-8192 on the 2*4 Stoakley machine really is caused by an issue in calculate_order():
slab_size   order   name
-------------------------------------------------
     4096       3   sgpool-128
     8192       2   kmalloc-8192
    16384       3   kmalloc-16384
kmalloc-8192's default order ends up smaller than sgpool-128's, even though its objects are twice as large.
On the 4*4 Tigerton machine, a similar issue appears with another kmem_cache.
Function calculate_order() shrinks min_objects with 'min_objects /= 2;'. Combined with the size calculation/checking in slab_order(), the halving can step right over the object count that would have justified a larger order, which is how the issue above arises.
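To make the skip concrete, here is a small userspace sketch (my own illustration, not kernel code) that mimics the 2.6.29-rc2 slab_order()/calculate_order() flow for kmalloc-8192, assuming 8 online CPUs, 4096-byte pages, and the default slub_max_order of 3. min_objects starts at 4 * (fls(8) + 1) = 20, and halving visits 20, 10, 5, 2, skipping 4, which is exactly the count that fills an order-3 slab (4 * 8192 == PAGE_SIZE << 3):

/*
 * Userspace sketch of the 2.6.29-rc2 slab_order()/calculate_order()
 * logic. Illustration only: the constants are assumed defaults and
 * the kernel's slub_min_order/MAX_OBJECTS handling is omitted.
 */
#include <stdio.h>

#define PAGE_SHIFT	12
#define PAGE_SIZE	(1UL << PAGE_SHIFT)
#define SLUB_MAX_ORDER	3

static int fls(unsigned int x)	/* like the kernel's fls() */
{
	return x ? 32 - __builtin_clz(x) : 0;
}

static int slab_order(int size, int min_objects, int max_order,
		      int fract_leftover)
{
	/* smallest order that could hold min_objects objects */
	int order = fls(min_objects * size - 1) - PAGE_SHIFT;

	for (; order <= max_order; order++) {
		unsigned long slab_size = PAGE_SIZE << order;

		if (slab_size < (unsigned long)min_objects * size)
			continue;
		/* accept if leftover space is a small enough fraction */
		if (slab_size % size <= slab_size / fract_leftover)
			break;
	}
	return order;	/* > max_order means no fit at this min_objects */
}

static int calculate_order(int size, int min_objects, int halve)
{
	while (min_objects > 1) {
		int fraction;

		for (fraction = 16; fraction >= 4; fraction /= 2) {
			int order = slab_order(size, min_objects,
					       SLUB_MAX_ORDER, fraction);
			if (order <= SLUB_MAX_ORDER)
				return order;
		}
		if (halve)
			min_objects /= 2;	/* old: 20, 10, 5, 2 */
		else
			min_objects--;		/* patched: ..., 5, 4 */
	}
	return 0;
}

int main(void)
{
	int size = 8192;			/* kmalloc-8192 */
	int min_objects = 4 * (fls(8) + 1);	/* 20 with 8 CPUs */
	int cap = (PAGE_SIZE << SLUB_MAX_ORDER) / size;	/* 4 */

	printf("old:     order %d\n", calculate_order(size, min_objects, 1));
	printf("patched: order %d\n",
	       calculate_order(size, min_objects < cap ? min_objects : cap, 0));
	return 0;
}

Compiled with gcc, this prints order 2 for the old halving (matching the table above) and order 3 once min_objects is capped and decremented instead.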
The patch below, against 2.6.29-rc2, fixes it.
I checked the default orders of all kmem_caches; none becomes smaller than before, so the patch shouldn't hurt performance.
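As a quick sanity check, assuming 8 online CPUs (min_objects = 20) and slub_max_order = 3, the new cap min(min_objects, (PAGE_SIZE << slub_max_order)/size) evaluates to min(20, 32768/4096) = 8 for sgpool-128, min(20, 32768/8192) = 4 for kmalloc-8192, and min(20, 32768/16384) = 2 for kmalloc-16384. Each of those counts packs an order-3 slab with no leftover, so kmalloc-8192 now gets order 3 while the other two keep order 3 as before.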
Signed-off-by: Zhang Yanmin <yanmin.zhang@...ux.intel.com>
---
diff -Nraup linux-2.6.29-rc2/mm/slub.c linux-2.6.29-rc2_slubcalc_order/mm/slub.c
--- linux-2.6.29-rc2/mm/slub.c	2009-02-11 00:49:48.000000000 -0500
+++ linux-2.6.29-rc2_slubcalc_order/mm/slub.c	2009-02-12 00:08:24.000000000 -0500
@@ -1856,6 +1856,7 @@ static inline int calculate_order(int si
 	min_objects = slub_min_objects;
 	if (!min_objects)
 		min_objects = 4 * (fls(nr_cpu_ids) + 1);
+	min_objects = min(min_objects, (PAGE_SIZE << slub_max_order)/size);
 	while (min_objects > 1) {
 		fraction = 16;
 		while (fraction >= 4) {
@@ -1865,7 +1866,7 @@ static inline int calculate_order(int si
 				return order;
 			fraction /= 2;
 		}
-		min_objects /= 2;
+		min_objects--;
 	}
 	/*
--