Date:	Tue, 2 Aug 2011 09:24:16 -0700 (PDT)
From:	David Rientjes <rientjes@...gle.com>
To:	Christoph Lameter <cl@...ux.com>
cc:	Pekka Enberg <penberg@...nel.org>,
	Linus Torvalds <torvalds@...ux-foundation.org>,
	Andrew Morton <akpm@...ux-foundation.org>, hughd@...gle.com,
	linux-kernel@...r.kernel.org, linux-mm@...ck.org
Subject: Re: [GIT PULL] Lockless SLUB slowpaths for v3.1-rc1

On Tue, 2 Aug 2011, Christoph Lameter wrote:

> > Yes, slub _did_ use more memory than slab until the alignment of
> > struct page.  That cost an additional 128MB on each of these 64GB
> > machines, while the total slab usage on the client machine systemwide is
> > ~75MB while running netperf TCP_RR with 160 threads.
> 
> I guess that calculation did not include metadata structures (alien caches
> and the NR_CPU arrays in kmem_cache) etc? These are particularly costly on SLAB.
> 

It certainly is costly on slab, but that 75MB number comes from a casual 
observation of "grep Slab /proc/meminfo" while running the benchmark.  For 
slub, that turns into ~55MB.  The true slub usage, though, includes the 
struct page alignment for cmpxchg16b, which added 128MB of padding to its 
memory usage even though it appears to be unattributed to slub.  A casual 
"grep MemFree /proc/meminfo" reveals the lost 100MB for the slower 
allocator in this case.  And the per-cpu partial lists will add even more 
slab usage for slub, so this is where my "throwing more memory at slub to 
get better performance" came from.  I understand that this is a large NUMA 
machine, though, and the cost of slub may be substantially lower on 
smaller machines.
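
To make that arithmetic concrete, here's a quick userspace sketch; the 
56 -> 64 byte struct page padding is an assumption on my part, while the 
64GB and 128MB figures are the ones quoted above:

#include <stdio.h>

int main(void)
{
	unsigned long long ram_bytes = 64ULL << 30;	/* 64GB machine */
	unsigned long long nr_pages  = ram_bytes / 4096;

	/*
	 * Assumption: aligning struct page for cmpxchg16b pads it from
	 * 56 to 64 bytes, i.e. 8 extra bytes per page in the memmap.
	 */
	unsigned long long overhead = nr_pages * (64 - 56);

	/* prints: pages: 16777216, padding overhead: 128 MB */
	printf("pages: %llu, padding overhead: %llu MB\n",
	       nr_pages, overhead >> 20);
	return 0;
}

(The memmap isn't slab memory, which is why that 128MB shows up only as 
lower free memory rather than in Slab.)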

If you look through the various arch defconfigs, you'll see that we 
actually do a pretty good job of enabling CONFIG_SLAB for large systems.  
I wish we had a clear dividing line in the x86 kconfig that would at least 
guide users toward one allocator over the other, though; otherwise they 
receive little help.
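
Purely as illustration (this is not what init/Kconfig actually contains, 
and the NUMA default below is made up), such a dividing line could look 
something like:

choice
	prompt "Choose SLAB allocator"
	# hypothetical: nudge large NUMA machines toward SLAB
	default SLAB if NUMA
	default SLUB

config SLAB
	bool "SLAB"
	help
	  Smaller footprint on the large machines discussed in this
	  thread, at the cost of alien caches and NR_CPUS-sized arrays.

config SLUB
	bool "SLUB (Unqueued Allocator)"
	help
	  Lockless fastpaths, but needs the padded struct page and
	  per-cpu partial lists discussed above.

endchoice

Something that coarse is obviously debatable, but it would at least encode 
a recommendation instead of leaving the choice entirely to the user.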
