Date:	Wed, 12 Sep 2007 23:04:33 -0700
From:	"Siddha, Suresh B" <suresh.b.siddha@...el.com>
To:	Christoph Lameter <clameter@....com>
Cc:	Nick Piggin <nickpiggin@...oo.com.au>,
	"Zhang, Yanmin" <yanmin_zhang@...ux.intel.com>,
	Andrew Morton <akpm@...ux-foundation.org>,
	LKML <linux-kernel@...r.kernel.org>, mingo@...e.hu,
	Mel Gorman <mel@...net.ie>,
	Linus Torvalds <torvalds@...ux-foundation.org>
Subject: Re: tbench regression - Why process scheduler has impact on tbench and why small per-cpu slab (SLUB) cache creates the scenario?

On Tue, Sep 11, 2007 at 01:19:30PM -0700, Christoph Lameter wrote:
> On Tue, 11 Sep 2007, Nick Piggin wrote:
> 
> > The impression I got at vm meeting was that SLUB was good to go :(
> 
> It's not? I have had Intel test this thoroughly and they assured me that it 
> is up to SLAB.

Christoph, I'm not sure whether you are referring to me here. But our
tests (at least with the database workloads), roughly 1.5 months back,
showed that on ia64 SLUB was on par with SLAB, while on x86_64 SLUB was
9% down. After changing the SLUB min order and max order, SLUB
performance on x86_64 is still down approximately 3.5% compared to SLAB.
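
(For anyone wanting to reproduce this: the exact order values we used
aren't spelled out above, so take the numbers below as placeholders.
SLUB's slab order can be raised without patching by booting with the
slub_min_order= and slub_max_order= parameters, e.g.

	slub_min_order=2 slub_max_order=4

on the kernel command line, or by changing the defaults in mm/slub.c.)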

While I don't rule out large allocations like PAGE_SIZE, I am fairly
certain that the critical allocations in this workload are not
PAGE_SIZE based.  Mostly they are in the range of 300-500 bytes or less.
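
To put rough numbers on why the slab order matters for objects in that
size range, here is a quick sketch (ordinary userspace C, not kernel
code; it assumes 4KB pages and ignores SLUB's per-object alignment and
metadata, so the counts are only approximate). With order-0 slabs, a
~512-byte object gives only about 8 objects per slab, so each per-cpu
slab is exhausted quickly and SLUB has to go back to the page allocator
for a new one; higher orders multiply that headroom accordingly.

/*
 * Illustration only: roughly how many objects fit in one SLUB slab
 * for small kmalloc sizes at different page orders.  Assumes 4 KiB
 * pages and ignores SLUB's per-object metadata/alignment, so the
 * counts are approximate.
 */
#include <stdio.h>

#define PAGE_SZ 4096UL

int main(void)
{
	unsigned long sizes[] = { 256, 384, 512 };	/* kmalloc buckets near 300-500 bytes */
	unsigned int i, order;

	for (i = 0; i < sizeof(sizes) / sizeof(sizes[0]); i++) {
		for (order = 0; order <= 3; order++) {
			unsigned long slab_bytes = PAGE_SZ << order;

			printf("object %4lu bytes, order %u slab (%6lu bytes): ~%lu objects\n",
			       sizes[i], order, slab_bytes,
			       slab_bytes / sizes[i]);
		}
	}
	return 0;
}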

Have there been any changes in recent SLUB that take the pressure off
the page allocator, especially for architectures with smaller page
sizes? If so, we can redo some of the experiments. From this thread,
it doesn't sound like it.

thanks,
suresh
