Message-ID: <Pine.LNX.4.64.0801072005380.23932@sbz-30.cs.Helsinki.FI>
Date:	Mon, 7 Jan 2008 20:06:28 +0200 (EET)
From:	Pekka J Enberg <penberg@...helsinki.fi>
To:	Matt Mackall <mpm@...enic.com>
cc:	Christoph Lameter <clameter@....com>, Ingo Molnar <mingo@...e.hu>,
	Linus Torvalds <torvalds@...ux-foundation.org>,
	Hugh Dickins <hugh@...itas.com>,
	Andi Kleen <andi@...stfloor.org>,
	Peter Zijlstra <a.p.zijlstra@...llo.nl>,
	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] procfs: provide slub's /proc/slabinfo

Hi Matt,

On Sun, 6 Jan 2008, Matt Mackall wrote:
> I don't have any particular "terrible" workloads for SLUB. But my
> attempts to simply boot with all three allocators to init=/bin/bash in,
> say, lguest show a fair margin for SLOB.

Sorry, I once again have bad news ;-). I did some testing with

  lguest --block=<rootfile> 32 /boot/vmlinuz-2.6.24-rc6 root=/dev/vda init=doit

where rootfile is

  http://uml.nagafix.co.uk/BusyBox-1.5.0/BusyBox-1.5.0-x86-root_fs.bz2

and the "doit" script in the guest passed as init= is just

  #!/bin/sh
  # Print the total, free, and slab memory figures right after boot.
  mount -t proc proc /proc
  grep MemTotal /proc/meminfo
  grep MemFree /proc/meminfo
  grep Slab /proc/meminfo

and the results are:

[ the minimum, maximum, and average are computed from 10 individual runs ]

                                 Free (kB)             Used (kB)
                    Total (kB)   min   max   average   min  max  average
  SLUB (no debug)   26536        23868 23892 23877.6   2644 2668 2658.4
  SLOB              26548        23472 23640 23579.6   2908 3076 2968.4
  SLAB (no debug)   26544        23316 23364 23343.2   3180 3228 3200.8
  SLUB (with debug) 26484        23120 23136 23127.2   3348 3364 3356.8
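
[ In case you want to reproduce this, the sketch below shows one way to
drive the runs from the host and summarize a column; it is illustrative
only, not the exact harness I used, and the log file names are
placeholders. ]

  #!/bin/sh
  # Boot the guest 10 times; the guest console output (including the
  # /proc/meminfo lines printed by "doit" above) goes to run-$i.log.
  for i in $(seq 1 10); do
      lguest --block=<rootfile> 32 /boot/vmlinuz-2.6.24-rc6 \
          root=/dev/vda init=doit > run-$i.log
  done
  # Summarize MemFree (kB) across the runs: minimum, maximum, average.
  awk '/MemFree/ { v = $2
           if (min == "" || v < min) min = v
           if (v > max) max = v
           sum += v; n++ }
       END { printf "min %d  max %d  avg %.1f\n", min, max, sum / n }' run-*.log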

So it seems that on average SLUB uses roughly 300 kilobytes *less memory* (!)
after boot than SLOB for my configuration, which is about 1% of the total
memory available.
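
[ Working from the averages in the table: 2968.4 kB used with SLOB minus
2658.4 kB used with SLUB is a 310 kB difference, and 310 / 26536 is roughly
1.2% of total memory. ]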

One possible explanation is that the high internal fragmentation (space
allocated but not used) of SLUB kmalloc() only affects short-lived allocations
and thus does not show up in the more permanent memory footprint.  Likewise, it
could be that SLOB suffers from higher external fragmentation (small blocks
that are unavailable for allocation) that SLUB does not.  Dunno, I haven't
investigated, as my results contradict yours.
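
[ Once SLUB gains /proc/slabinfo (which is what this patch is about), a rough
per-cache waste estimate could be pulled out with something like the sketch
below.  It assumes the "slabinfo - version: 2.1" layout, where field 2 is
active_objs, field 4 is objsize, field 6 is pagesperslab, and field 15 is
num_slabs, plus 4 kB pages; treat it as illustrative only. ]

  #!/bin/sh
  # For each cache: bytes of pages backing the cache minus bytes held by
  # live objects, i.e. a crude upper bound on that cache's wasted space.
  awk 'NR > 2 {
           footprint = $15 * $6 * 4096   # num_slabs * pagesperslab * page size
           used      = $2 * $4           # active_objs * objsize
           printf "%-24s %10d bytes unused\n", $1, footprint - used
       }' /proc/slabinfo | sort -rn -k2 | head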

I am beginning to think this is highly dependent on the .config, so would you
mind sending me the one you're using for testing, Matt?
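
[ For comparing configs, the allocator selection at least can be checked with
a one-liner like the following; of course, plenty of unrelated options can
shift the numbers too. ]

  grep -E 'CONFIG_SL[AOU]B' .config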

			Pekka
