Message-ID: <4979692B.3050703@cs.helsinki.fi>
Date: Fri, 23 Jan 2009 08:52:27 +0200
From: Pekka Enberg <penberg@...helsinki.fi>
To: "Zhang, Yanmin" <yanmin_zhang@...ux.intel.com>
CC: Christoph Lameter <cl@...ux-foundation.org>,
Andi Kleen <andi@...stfloor.org>,
Matthew Wilcox <matthew@....cx>,
Nick Piggin <nickpiggin@...oo.com.au>,
Andrew Morton <akpm@...ux-foundation.org>,
netdev@...r.kernel.org, sfr@...b.auug.org.au,
matthew.r.wilcox@...el.com, chinang.ma@...el.com,
linux-kernel@...r.kernel.org, sharad.c.tripathi@...el.com,
arjan@...ux.intel.com, suresh.b.siddha@...el.com,
harita.chilukuri@...el.com, douglas.w.styner@...el.com,
peter.xihong.wang@...el.com, hubert.nueckel@...el.com,
chris.mason@...cle.com, srostedt@...hat.com,
linux-scsi@...r.kernel.org, andrew.vasquez@...gic.com,
anirban.chakraborty@...gic.com, mingo@...e.hu
Subject: Re: Mainline kernel OLTP performance update
Zhang, Yanmin wrote:
>>>> If it's the former, with big enough size passed to __alloc_skb(), the
>>>> networking code might be taking a hit from the SLUB page allocator
>>>> pass-through.
>> Do we know what kind of size is being passed to __alloc_skb() in this
>> case?
> In function __alloc_skb, original parameter size=4155,
> SKB_DATA_ALIGN(size)=4224, sizeof(struct skb_shared_info)=472, so
> __kmalloc_track_caller's parameter size=4696.
OK, so all allocations go straight to the page allocator.
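For the archives, the arithmetic is easy to check with a throw-away
userspace program. This is only a sketch: the 128-byte alignment and the
472-byte skb_shared_info are local stand-ins taken from the numbers
quoted above (both depend on the config), and it assumes the pass-through
cutoff is PAGE_SIZE, which is how I read the current SLUB code:

    #include <stdio.h>

    /*
     * Local stand-ins for the kernel constants, for illustration only;
     * values are taken from the figures quoted in this thread.
     */
    #define SMP_CACHE_BYTES         128
    #define SKB_SHARED_INFO_SIZE    472
    #define PAGE_SIZE               4096

    /* Same rounding as the kernel's SKB_DATA_ALIGN() */
    #define SKB_DATA_ALIGN(x) \
            (((x) + (SMP_CACHE_BYTES - 1)) & ~(SMP_CACHE_BYTES - 1))

    int main(void)
    {
            unsigned long size = 4155;                      /* netperf UDP-U-4k payload */
            unsigned long aligned = SKB_DATA_ALIGN(size);   /* 4224 */
            unsigned long total = aligned + SKB_SHARED_INFO_SIZE; /* 4696 */

            printf("__kmalloc size = %lu\n", total);
            printf("over PAGE_SIZE, so pass-through to the page allocator: %s\n",
                   total > PAGE_SIZE ? "yes" : "no");
            return 0;
    }

So every one of these skbs turns into an order-1 page allocation and we
never hit the kmalloc fastpath.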
>
>> Maybe we want to do something like this.
>>
>> SLUB: revert page allocator pass-through
> This patch almost fixes the netperf UDP-U-4k issue.
>
> #slabinfo -AD
> Name          Objects     Alloc      Free  %Fast
> :0000256         1658  70350463  70348946  99 99
> kmalloc-8192       31  70322309  70322293  99 99
> :0000168         2592    143154    140684  93 28
> :0004096         1456     91072     89644  99 96
> :0000192         3402     63838     60491  89 11
> :0000064         6177     49635     43743  98 77
>
> So kmalloc-8192 appears. Without the patch, kmalloc-8192 does not show up at all.
> kmalloc-8192's default order on my 8-core Stoakley is 2.
Christoph, should we merge my patch as-is or do you have an alternative
fix in mind? We could, of course, extend the kmalloc() caches one level up,
to 8192 or higher.
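To put rough numbers on the alternative, here is a userspace sketch
(illustrative only, not the real SLUB code paths) of where a 4696-byte
allocation would land with pass-through versus with a kmalloc-8192 cache;
with the default order-2 slab Yanmin mentions, two such objects fit per
16 KB slab and we keep the per-CPU fastpath:

    #include <stdio.h>

    #define PAGE_SIZE 4096UL

    /* Power-of-two kmalloc cache that would serve 'size' (illustrative). */
    static unsigned long kmalloc_cache_size(unsigned long size)
    {
            unsigned long cache = 8;

            while (cache < size)
                    cache <<= 1;
            return cache;
    }

    /* Smallest page order whose span covers 'size' bytes. */
    static int page_order(unsigned long size)
    {
            int order = 0;

            while ((PAGE_SIZE << order) < size)
                    order++;
            return order;
    }

    int main(void)
    {
            unsigned long size = 4696;  /* the __kmalloc_track_caller() size above */
            int order = page_order(size);
            unsigned long cache = kmalloc_cache_size(size);
            unsigned long slab = PAGE_SIZE << 2;  /* kmalloc-8192 default order 2 */

            printf("pass-through: order-%d pages, %lu of %lu bytes used\n",
                   order, size, PAGE_SIZE << order);
            printf("revert: kmalloc-%lu cache, %lu objects per order-2 slab\n",
                   cache, slab / cache);
            return 0;
    }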
>
> 1) If I start CPU_NUM clients and servers, SLUB's result is about 2% better than SLQB's;
> 2) If I start 1 client and 1 server, and bind them to different physical CPUs, SLQB's result
> is about 10% better than SLUB's.
>
> I don't know why there is still a 10% difference in case 2). Maybe cache misses cause it?
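For anyone who wants to reproduce case 2), the binding is plain CPU
affinity; a quick, untested helper like the one below pins a process to a
chosen CPU before exec'ing the benchmark (taskset does the same thing):

    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    int main(int argc, char **argv)
    {
            cpu_set_t set;
            int cpu;

            if (argc < 3) {
                    fprintf(stderr, "usage: %s <cpu> <command> [args...]\n", argv[0]);
                    return 1;
            }
            cpu = atoi(argv[1]);

            CPU_ZERO(&set);
            CPU_SET(cpu, &set);
            if (sched_setaffinity(0, sizeof(set), &set) < 0) {
                    perror("sched_setaffinity");
                    return 1;
            }

            /* Run the benchmark (e.g. netserver or netperf) pinned to 'cpu'. */
            execvp(argv[2], &argv[2]);
            perror("execvp");
            return 1;
    }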
Maybe we can use the perfstat and/or kerneltop utilities of the new perf
counters patch to diagnose this:
http://lkml.org/lkml/2009/1/21/273
And do oprofile, of course. Thanks!
Pekka