Date:	Thu, 03 Jan 2008 22:34:44 -0600
From:	Matt Mackall <mpm@...enic.com>
To:	Andi Kleen <andi@...stfloor.org>
Cc:	Christoph Lameter <clameter@....com>, Ingo Molnar <mingo@...e.hu>,
	Linus Torvalds <torvalds@...ux-foundation.org>,
	Pekka Enberg <penberg@...helsinki.fi>,
	Hugh Dickins <hugh@...itas.com>,
	Peter Zijlstra <a.p.zijlstra@...llo.nl>,
	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] procfs: provide slub's /proc/slabinfo


On Fri, 2008-01-04 at 03:45 +0100, Andi Kleen wrote:
> > I still have trouble seeing that SLOB has much to offer. An embedded
> > allocator that in many cases has more allocation overhead than the
> > default one? OK, you still have advantages because kmalloc rounds
> > allocations up to the next power of two, and because different types
> > of allocations are combined in a single slab when the overall number
> > of allocations is small. But if one created a custom slab for the
> > worst offenders, that advantage might also go away.
> 
> I suspect it would be a good idea anyway to reevaluate the power-of-two
> slabs. Perhaps a better distribution could be found based on some
> profiling? I profiled kmalloc with a systemtap script some time ago and
> don't remember the results exactly, but IIRC the distribution looked
> like it could be improved.

We can roughly group kmalloced objects into two classes:

a) intrinsically variable-sized (strings, etc.)
b) fixed-sized objects that nonetheless don't have their own caches

For (a), we can expect the size distribution to be approximately a
scale-invariant power-law distribution, so buckets of the form n**x make
a fair amount of sense. We might consider an n smaller than 2, though.
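
As a rough illustration, here's a userspace sketch of what a bucket
series with ratio 1.5 instead of 2 would look like. The ratio, bounds,
and 8-byte alignment are assumptions for the example, not proposals;
the worst-case internal waste for ratio r is (r - 1)/r, so ~33% at
r = 1.5 versus ~50% at r = 2:

#include <stdio.h>
#include <stddef.h>

int main(void)
{
	const double r = 1.5;	/* hypothetical ratio; 2.0 is today's behavior */
	const size_t min = 8, max = 4096;
	double s;

	for (s = min; s <= max; s *= r) {
		/* round each bucket up to 8-byte alignment */
		size_t bucket = ((size_t)s + 7) & ~(size_t)7;
		printf("%zu\n", bucket);
	}
	return 0;
}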

For objects of type (b) that occur in significant numbers, well, we
might just want to add more caches. SLUB's merging of same-sized caches
will reduce the pain here.
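
To make (b) concrete, a minimal sketch of giving a hot fixed-size
structure its own cache rather than round-tripping through the kmalloc
buckets. struct foo and the cache name are invented for the example;
kmem_cache_create() itself is the stock API:

#include <linux/init.h>
#include <linux/errno.h>
#include <linux/slab.h>

struct foo {			/* hypothetical fixed-size object */
	unsigned long flags;
	char name[40];
};

static struct kmem_cache *foo_cache;

static int __init foo_cache_init(void)
{
	/* SLUB will merge this with any existing cache of identical
	 * size/align/flags, so the dedicated cache is nearly free */
	foo_cache = kmem_cache_create("foo", sizeof(struct foo),
				      0, 0, NULL);
	return foo_cache ? 0 : -ENOMEM;
}

static struct foo *foo_alloc(gfp_t gfp)
{
	return kmem_cache_alloc(foo_cache, gfp);
}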

> A long time ago I also had some code to let the network stack give
> hints about its MTUs to slab, to create fitting slabs for packets. But
> that was never really pushed forward because it turned out it didn't
> help much for the most common 1.5K MTU -- either way, only two packets
> fit into a page.

Yes, that and task_struct kinda make you want to cry. Large-order
SLAB/SLUB/SLOB would go a long way toward fixing that, but of course has
problems of its own.
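
The back-of-envelope is grim: with the usual 4K pages, a 1500-byte
packet buffer rounds up to the 2048-byte bucket, so exactly two fit per
page and roughly a quarter of every allocation is waste. A trivial
sketch of the arithmetic (sizes are the illustrative common case, not
every config):

#include <stdio.h>

int main(void)
{
	const unsigned mtu = 1500;	/* common Ethernet MTU */
	const unsigned bucket = 2048;	/* next power-of-two kmalloc bucket */
	const unsigned page = 4096;

	printf("packets per page: %u\n", page / bucket);	/* 2 */
	printf("waste per packet: %u bytes (%.0f%%)\n",
	       bucket - mtu, 100.0 * (bucket - mtu) / bucket);	/* ~27% */
	return 0;
}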

One could imagine restructuring things so that the buddy allocator only
extended down to 64k or so, and below that, gfp and friends called
through SLAB/SLUB/SLOB.
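
Purely as a sketch of that shape (none of these names exist in the
kernel; buddy_alloc_block() and small_alloc() are made up, and the 64K
floor is an assumption): the buddy allocator would hand out 64K-and-up
blocks, and everything smaller, single pages included, would be carved
out of those blocks by the slab layer:

#include <linux/mm.h>
#include <linux/slab.h>

#define BUDDY_MIN_ORDER	4	/* 64K floor with 4K pages: an assumption */

static void *small_alloc(size_t size, gfp_t gfp)
{
	if (size >= (PAGE_SIZE << BUDDY_MIN_ORDER))
		return buddy_alloc_block(size, gfp);	/* hypothetical */

	/* everything below 64K, single pages included, comes from slab */
	return kmalloc(size, gfp);
}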

-- 
Mathematics is the supreme nostalgia of our time.
