Message-Id: <1232699401.11429.163.camel@ymzhang>
Date: Fri, 23 Jan 2009 16:30:01 +0800
From: "Zhang, Yanmin" <yanmin_zhang@...ux.intel.com>
To: Pekka Enberg <penberg@...helsinki.fi>
Cc: Christoph Lameter <cl@...ux-foundation.org>,
Andi Kleen <andi@...stfloor.org>,
Matthew Wilcox <matthew@....cx>,
Nick Piggin <nickpiggin@...oo.com.au>,
Andrew Morton <akpm@...ux-foundation.org>,
netdev@...r.kernel.org, sfr@...b.auug.org.au,
matthew.r.wilcox@...el.com, chinang.ma@...el.com,
linux-kernel@...r.kernel.org, sharad.c.tripathi@...el.com,
arjan@...ux.intel.com, suresh.b.siddha@...el.com,
harita.chilukuri@...el.com, douglas.w.styner@...el.com,
peter.xihong.wang@...el.com, hubert.nueckel@...el.com,
chris.mason@...cle.com, srostedt@...hat.com,
linux-scsi@...r.kernel.org, andrew.vasquez@...gic.com,
anirban.chakraborty@...gic.com, mingo@...e.hu
Subject: Re: Mainline kernel OLTP performance update
On Fri, 2009-01-23 at 10:06 +0200, Pekka Enberg wrote:
> On Fri, 2009-01-23 at 08:52 +0200, Pekka Enberg wrote:
> > > 1) If I start CPU_NUM clients and servers, SLUB's result is about 2% better than SLQB's;
> > > 2) If I start 1 client and 1 server, and bind them to different physical CPUs, SLQB's result
> > > is about 10% better than SLUB's.
> > >
> > > I don't know why there is still a 10% difference with item 2). Maybe cache misses cause it?
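For reference, pinning a process to one CPU (as in 2) above) can be done with
taskset or sched_setaffinity(); below is a minimal, purely illustrative sketch,
not the actual benchmark harness:

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>

/* Restrict the calling process to a single CPU. */
static void bind_to_cpu(int cpu)
{
	cpu_set_t set;

	CPU_ZERO(&set);
	CPU_SET(cpu, &set);
	if (sched_setaffinity(0, sizeof(set), &set) != 0) {
		perror("sched_setaffinity");
		exit(1);
	}
}

int main(int argc, char **argv)
{
	int cpu = argc > 1 ? atoi(argv[1]) : 0;

	bind_to_cpu(cpu);
	printf("now running only on CPU %d\n", cpu);
	/* ... start the client or server workload here ... */
	return 0;
}
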
> >
> > Maybe we can use the perfstat and/or kerneltop utilities of the new perf
> > counters patch to diagnose this:
> >
> > http://lkml.org/lkml/2009/1/21/273
> >
> > And do oprofile, of course. Thanks!
>
> I assume binding the client and the server to different physical CPUs
> also means that the SKB is always allocated on CPU 1 and freed on CPU
> 2? If so, we will be taking the __slab_free() slow path all the time on
> kfree() which will cause cache effects, no doubt.
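Just to make that concrete, here is a toy userspace model of the free-path
decision being described (an illustrative sketch only, not the real mm/slub.c
code; the struct and function names below are made up):

#include <stdio.h>

/* Stand-ins for the kernel structures; names are illustrative only. */
struct toy_page { void *freelist; int inuse; };

struct toy_cpu_slab {
	struct toy_page *page;	/* this CPU's active ("frozen") slab */
	void *freelist;		/* per-cpu freelist used by the fast path */
};

/* Models __slab_free(): take the slab lock, free into page->freelist. */
static void toy_slab_free_slow(struct toy_page *page, void *object)
{
	printf("object %p: slow path (__slab_free)\n", object);
}

/* Models the SLUB free path: it stays fast only when the object
 * belongs to the freeing CPU's active slab. */
static void toy_slab_free(struct toy_cpu_slab *c, struct toy_page *page,
			  void *object)
{
	if (page == c->page) {
		*(void **)object = c->freelist;	/* push onto per-cpu freelist */
		c->freelist = object;
		printf("object %p: fast path\n", object);
	} else {
		toy_slab_free_slow(page, object);
	}
}

int main(void)
{
	struct toy_page slab_on_cpu1 = { 0 }, slab_on_cpu2 = { 0 };
	struct toy_cpu_slab cpu2 = { .page = &slab_on_cpu2 };
	void *skb_from_cpu1[1], *skb_from_cpu2[1];

	/* Allocated on CPU 1, freed on CPU 2: never matches c->page. */
	toy_slab_free(&cpu2, &slab_on_cpu1, skb_from_cpu1);
	/* Allocated and freed on the same CPU: stays on the fast path. */
	toy_slab_free(&cpu2, &slab_on_cpu2, skb_from_cpu2);
	return 0;
}
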
>
> But there's another potential performance hit we're taking because the
> object size of the cache is so big. As allocations from CPU 1 keep
> coming in, we need to allocate new pages and unfreeze the per-cpu page.
> That in turn causes __slab_free() to be more eager to discard the slab
> (see the PageSlubFrozen check there).
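And a similarly hypothetical sketch of the tail end of that slow path, i.e. the
PageSlubFrozen check being referred to: a still-frozen (per-cpu) slab is left
alone, but once the page has been unfrozen, a free that empties it hands the
page straight back to the page allocator (again a toy model, not the kernel
code):

#include <stdio.h>
#include <stdbool.h>

/* Toy model of the end of __slab_free(); illustrative names only. */
struct toy_page {
	bool frozen;		/* models the PageSlubFrozen() test */
	int inuse;		/* objects still allocated from this page */
	bool on_partial;	/* is the page on the node partial list? */
};

static void toy_slab_free_tail(struct toy_page *page)
{
	page->inuse--;		/* the object was just freed into the page */

	if (page->frozen) {
		/* Still some CPU's active slab: nothing more to do. */
		printf("frozen slab, keep it\n");
		return;
	}
	if (page->inuse == 0) {
		/* Unfrozen and now empty: give the page back to the
		 * page allocator -- the "eager to discard" case. */
		printf("empty unfrozen slab, discard it\n");
		return;
	}
	if (!page->on_partial) {
		page->on_partial = true;
		printf("put slab on the partial list\n");
	}
}

int main(void)
{
	struct toy_page frozen_slab   = { .frozen = true,  .inuse = 2 };
	struct toy_page unfrozen_slab = { .frozen = false, .inuse = 1 };

	toy_slab_free_tail(&frozen_slab);	/* kept as an active slab */
	toy_slab_free_tail(&unfrozen_slab);	/* last object gone -> discarded */
	return 0;
}
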
>
> So before going for cache profiling, I'd really like to see an oprofile
> report. I suspect we're still going to see much more page allocator
> activity there than with SLAB or SLQB, which is why we're still behaving
> so badly here.
Theoretically it should, but oprofile doesn't show that.

oprofile output with 2.6.29-rc2-slubrevertlarge:
CPU: Core 2, speed 2666.71 MHz (estimated)
Counted CPU_CLK_UNHALTED events (Clock cycles when not halted) with a unit mask of 0x00 (Unhalted core cycles) count 100000
samples  %        app name  symbol name
132779   32.9951  vmlinux   copy_user_generic_string
 25334    6.2954  vmlinux   schedule
 21032    5.2264  vmlinux   tg_shares_up
 17175    4.2679  vmlinux   __skb_recv_datagram
  9091    2.2591  vmlinux   sock_def_readable
  8934    2.2201  vmlinux   mwait_idle
  8796    2.1858  vmlinux   try_to_wake_up
  6940    1.7246  vmlinux   __slab_free

#slabinfo -AD
Name           Objects    Alloc     Free  %Fast (alloc free)
:0000256          1643  5215544  5214027      94     0
kmalloc-8192        28  5189576  5189560       0     0
:0000168          2631   141466   138976      92    28
:0004096          1452    88697    87269      99    96
:0000192          3402    63050    59732      89    11
:0000064          6265    46611    40721      98    82
:0000128          1895    30429    28654      93    32

oprofile output with kernel 2.6.29-rc2-slqb0121:
CPU: Core 2, speed 2666.76 MHz (estimated)
Counted CPU_CLK_UNHALTED events (Clock cycles when not halted) with a unit mask of 0x00 (Unhalted core cycles) count 100000
samples  %        image name  app name  symbol name
114793   28.7163  vmlinux     vmlinux   copy_user_generic_string
 27880    6.9744  vmlinux     vmlinux   tg_shares_up
 22218    5.5580  vmlinux     vmlinux   schedule
 12238    3.0614  vmlinux     vmlinux   mwait_idle
  7395    1.8499  vmlinux     vmlinux   task_rq_lock
  7348    1.8382  vmlinux     vmlinux   sock_def_readable
  7202    1.8016  vmlinux     vmlinux   sched_clock_cpu
  6981    1.7464  vmlinux     vmlinux   __skb_recv_datagram
  6566    1.6425  vmlinux     vmlinux   udp_queue_rcv_skb