Date:	Tue, 19 Oct 2010 10:23:38 +0100
From:	Mel Gorman <mel@....ul.ie>
To:	Christoph Lameter <cl@...ux.com>
Cc:	Pekka Enberg <penberg@...nel.org>,
	Pekka Enberg <penberg@...helsinki.fi>, linux-mm@...ck.org,
	linux-kernel@...r.kernel.org, David Rientjes <rientjes@...gle.com>,
	npiggin@...nel.dk, yanmin_zhang@...ux.intel.com
Subject: Re: [UnifiedV4 00/16] The Unified slab allocator (V4)

On Mon, Oct 18, 2010 at 01:13:42PM -0500, Christoph Lameter wrote:
> On Wed, 13 Oct 2010, Mel Gorman wrote:
> 
> > Minimally, I see the same sort of hackbench socket performance regression
> > as reported elsewhere (10-15%). Otherwise, the results aren't particularly
> > exciting. The machine is very basic - 2 socket, 4 cores, x86-64, 2G RAM.
> > The machine model is an IBM BladeCenter HS20. The processor is a Xeon but
> > I'm not sure exactly which model. It appears to be from around the P4 era.
> 
> That does not look good; something must still be screwed up. The trouble
> is finding time to do this work. When working on SLAB, we had a team to
> implement the NUMA support and deal with the performance issues.
> 
> > Christoph, in particular: while it tests netperf, it is not bound to any
> > particular CPU (although it can be), the server and client are running on
> > the local machine (which has performance characteristics of its own), and
> > the test is STREAM, not RR, so the tarball is not a replacement for more
> > targeted testing or workload-specific testing. Still, it should catch
> > some of the common snags before getting into specific workloads without
> > taking an extraordinary amount of time to complete. sysbench might take a
> > long time on many-core machines; limit the number of threads it tests
> > with OLTP_MAX_THREADS in the config file.
> 
> That should not matter too much. The performance results should replicate
> SLAB's caching behaviour and I do not see that in the tests.
> 

On the other hand, the unified figures are very close to slab in terms of
behaviour: very small gains and losses. Considering that the server and
clients are not bound to any particular CPU either, and that the data set
being worked on is quite large, a small amount of noise is expected.

> > NETPERF UDP
> >                   slab-vanilla      slub-vanilla      unified-v4r1
> >       64    52.23 ( 0.00%)*    53.80 ( 2.92%)     50.56 (-3.30%)               1.36%             1.00%             1.00%
> >      128   103.70 ( 0.00%)    107.43 ( 3.47%)    101.23 (-2.44%)
> >      256   208.62 ( 0.00%)*   212.15 ( 1.66%)    202.35 (-3.10%)               1.73%             1.00%             1.00%
> >     1024   814.86 ( 0.00%)    827.42 ( 1.52%)    799.13 (-1.97%)
> >     2048  1585.65 ( 0.00%)   1614.76 ( 1.80%)   1563.52 (-1.42%)
> >     3312  2512.44 ( 0.00%)   2556.70 ( 1.73%)   2460.37 (-2.12%)
> >     4096  3016.81 ( 0.00%)*  3058.16 ( 1.35%)   2901.87 (-3.96%)               1.15%             1.00%             1.00%
> >     8192  5384.46 ( 0.00%)   5092.95 (-5.72%)   4912.71 (-9.60%)
> >    16384  8091.96 ( 0.00%)*  8249.26 ( 1.91%)   8004.40 (-1.09%)               1.70%             1.00%             1.00%
> 
> Seems that we lost some of the netperf wins.

It's a different test being run here: UDP_STREAM versus UDP_RR, and that
could be one factor in the differences between my results and your own.
I'll look into redoing these with *_RR to rule that out. The results are
outside statistical noise, though.
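As a rough sketch, an *_RR run with explicit CPU binding might look like the
following (the CPU numbers, request/response size, and duration here are
illustrative, not what the test harness actually uses; netperf's global -T
option does the binding):

```shell
# Illustrative only: pin the client/server ends to specific CPUs and run the
# request/response variant instead of the stream test.
CPU_CLIENT=1
CPU_SERVER=0
REQ_SIZE=64    # bytes per request/response, matching the table's first row

# -t selects the test type, -T lcpu,rcpu binds the local and remote ends,
# and the test-specific "-r" option sets the request/response sizes.
CMD="netperf -H 127.0.0.1 -t UDP_RR -T ${CPU_CLIENT},${CPU_SERVER} -l 60 -- -r ${REQ_SIZE},${REQ_SIZE}"
echo "$CMD"
```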

> 
> > SYSBENCH
> >                   slab-vanilla      slub-vanilla      unified-v4r1
> >            1  7521.24 ( 0.00%)  7719.38 ( 2.57%)  7589.13 ( 0.89%)
> >            2 14872.85 ( 0.00%) 15275.09 ( 2.63%) 15054.08 ( 1.20%)
> >            3 16502.53 ( 0.00%) 16676.53 ( 1.04%) 16465.69 (-0.22%)
> >            4 17831.19 ( 0.00%) 17900.09 ( 0.38%) 17819.03 (-0.07%)
> >            5 18158.40 ( 0.00%) 18432.74 ( 1.49%) 18341.99 ( 1.00%)
> >            6 18673.68 ( 0.00%) 18878.41 ( 1.08%) 18614.92 (-0.32%)
> >            7 17689.75 ( 0.00%) 17871.89 ( 1.02%) 17633.19 (-0.32%)
> >            8 16885.68 ( 0.00%) 16838.37 (-0.28%) 16498.41 (-2.35%)
> 
> Same here. Seems that we combined the worst of both.
> 
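For reference, the left-hand column above is the sysbench client thread
count. With the 0.4-era CLI, one data point per thread count might be
collected along these lines (a sketch only: the database connection options
are omitted and the request cap is illustrative):

```shell
# Illustrative only: emit one sysbench OLTP command line per thread count.
# A real run also needs --mysql-* connection options and a prepared table.
for THREADS in 1 2 3 4 5 6 7 8; do
    CMD="sysbench --test=oltp --num-threads=${THREADS} --max-requests=100000 run"
    echo "$CMD"
done
```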

-- 
Mel Gorman
Part-time Phd Student                          Linux Technology Center
University of Limerick                         IBM Dublin Software Lab
