Message-ID: <alpine.DEB.1.00.0910131305440.3529@chino.kir.corp.google.com>
Date: Tue, 13 Oct 2009 13:15:27 -0700 (PDT)
From: David Rientjes <rientjes@...gle.com>
To: Christoph Lameter <cl@...ux-foundation.org>
cc: Pekka Enberg <penberg@...helsinki.fi>, Tejun Heo <tj@...nel.org>,
linux-kernel@...r.kernel.org,
Mathieu Desnoyers <mathieu.desnoyers@...ymtl.ca>,
Mel Gorman <mel@....ul.ie>,
Zhang Yanmin <yanmin_zhang@...ux.intel.com>
Subject: Re: [this_cpu_xx V6 7/7] this_cpu: slub aggressive use of this_cpu
operations in the hotpaths
On Tue, 13 Oct 2009, Christoph Lameter wrote:
> > I wonder how reliable these numbers are. We did similar testing a while back
> > because we thought kmalloc-96 caches had weird cache behavior but finally
> > figured out the anomaly was explained by the order of the tests run, not cache
> > size.
>
> Well you need to look behind these numbers to see when the allocator uses
> the fastpath or slow path. Only the fast path is optimized here.
>
With the netperf -t TCP_RR -l 60 benchmark I ran, CONFIG_SLUB_STATS shows
the allocation fastpath is utilized quite a bit for a couple of key
caches:
cache            ALLOC_FASTPATH   ALLOC_SLOWPATH
kmalloc-256            98125871         31585955
kmalloc-2048           77243698         52347453
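For reference, a minimal sketch of how counters like these can be collected. It assumes a kernel built with CONFIG_SLUB_STATS, which exposes per-cache stat files under /sys/kernel/slab/<cache>/; each file holds a global total followed by a per-cpu breakdown, so only the first field is read here. The slub_total helper is a hypothetical name, not an existing tool:

```shell
#!/bin/sh
# Each SLUB stat file looks like "98125871 C0=24531467 C1=24532101 ...";
# the leading field is the global total, the rest is per-cpu detail.

# Extract the leading total from one stat line.
slub_total() {
    set -- $1
    echo "$1"
}

# With CONFIG_SLUB_STATS enabled, the allocation counters live in
# /sys/kernel/slab/<cache>/alloc_fastpath and alloc_slowpath.
for cache in kmalloc-256 kmalloc-2048; do
    f=/sys/kernel/slab/$cache/alloc_fastpath
    s=/sys/kernel/slab/$cache/alloc_slowpath
    if [ -r "$f" ] && [ -r "$s" ]; then
        printf '%-14s fast=%-12s slow=%s\n' "$cache" \
            "$(slub_total "$(cat "$f")")" "$(slub_total "$(cat "$s")")"
    fi
done
```

On a kernel without CONFIG_SLUB_STATS the stat files are absent and the loop simply prints nothing.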
For an optimized fastpath, I'd expect such a workload to result in at
least a slightly higher transfer rate.
I'll try the irqless patch, but this particular benchmark may not
demonstrate the performance gain well because of the added code in the
slowpath, which is also used significantly here.
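A quick back-of-the-envelope check on how heavily the slowpath is actually used, computed from the counters quoted above (the hit_rate helper is just illustrative arithmetic, not an existing tool):

```shell
#!/bin/sh
# Fastpath hit rate as a percentage: fast / (fast + slow) * 100,
# using awk for the floating-point division.
hit_rate() {
    awk -v f="$1" -v s="$2" 'BEGIN { printf "%.1f", 100 * f / (f + s) }'
}

echo "kmalloc-256:  $(hit_rate 98125871 31585955)% fastpath"
echo "kmalloc-2048: $(hit_rate 77243698 52347453)% fastpath"
```

So roughly a quarter of kmalloc-256 allocations and about 40% of kmalloc-2048 allocations take the slowpath in this run, which is why extra slowpath code could mask a fastpath win.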