Date:	Fri, 04 Jan 2008 14:55:50 -0600
From:	Matt Mackall <mpm@...enic.com>
To:	Christoph Lameter <clameter@....com>
Cc:	Ingo Molnar <mingo@...e.hu>,
	Linus Torvalds <torvalds@...ux-foundation.org>,
	Pekka Enberg <penberg@...helsinki.fi>,
	Hugh Dickins <hugh@...itas.com>,
	Andi Kleen <andi@...stfloor.org>,
	Peter Zijlstra <a.p.zijlstra@...llo.nl>,
	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] procfs: provide slub's /proc/slabinfo


On Fri, 2008-01-04 at 12:34 -0800, Christoph Lameter wrote:
> On Thu, 3 Jan 2008, Matt Mackall wrote:
> 
> > > The advantage of SLOB is to be able to put objects of multiple sizes into 
> > > the same slab page. That advantage goes away once we have more than a few 
> > > objects per slab because SLUB can store object in a denser way than SLOB.
> > 
> > Ugh, Christoph. Can you please stop repeating this falsehood? I'm sick
> > and tired of debunking it. There is no overhead for any objects with
> > externally-known size. So unless SLUB actually has negative overhead,
> > this just isn't true.
> 
> Hmmm.. Seems that I still do not understand how it is possible then to mix 
> objects of different sizes in the same slab page. Somehow the allocator 
> needs to know the size. So it is not possible in SLOB to use 
> kmem_cache_alloc on an object and then free it using kfree?

Indeed. Mismatching allocator and deallocator is a bug, even if it
happens to work for SLAB/SLUB.
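A toy userspace sketch of the size-tracking scheme under discussion (all names here are hypothetical, none of this is kernel code): kmalloc-style allocations carry a small size header, while cache allocations of externally-known size carry none, which is why freeing a cache object through the kfree path would misread the bytes in front of it.

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

/* kmalloc-style path: prefix each object with its size,
 * analogous to SLOB's small per-object header */
static void *toy_kmalloc(size_t size)
{
	uint16_t *p = malloc(sizeof(uint16_t) + size);

	if (!p)
		return NULL;
	*p = (uint16_t)size;
	return p + 1;
}

/* only valid for objects that came from toy_kmalloc() */
static size_t toy_ksize(void *obj)
{
	return ((uint16_t *)obj)[-1];
}

static void toy_kfree(void *obj)
{
	free((uint16_t *)obj - 1);
}

/* cache path: the cache itself knows the object size,
 * so the objects need no header at all */
struct toy_cache {
	size_t size;
};

static void *toy_cache_alloc(struct toy_cache *c)
{
	return malloc(c->size);
}

static void toy_cache_free(struct toy_cache *c, void *obj)
{
	(void)c;
	free(obj);
}
```

Passing a toy_cache_alloc() object to toy_kfree() would interpret arbitrary neighboring bytes as the size header, which is the mismatch bug in miniature.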

> > > Well if you just have a few dentries then they are likely all pinned. A 
> > > large number of dentries will typically result in reclaimable slabs.
> > > The slab defrag patchset not only deals with the dcache issue but provides 
> > > similar solutions for inode and buffer_heads. Support for other slabs that
> > > defragment can be added by providing two hooks per slab.
> > 
> > What's your point? Slabs have an inherent pinning problem that's ugly to
> > combat. SLOB doesn't.
> 
> I thought we were talking about pinning problems of dentries. How are 
> slabs pinned and why does it matter?

If a slab contains a dentry that is pinned, it can only be used for
other dentries and cannot be recycled for other allocations. If updatedb
comes along and fills memory with dentry slabs, many of which get
permanently pinned, then you have wasted memory.
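The arithmetic behind this is worth spelling out. If dentries are packed, say, 32 to a page and pins land roughly independently, the chance a page holds at least one pinned dentry is 1 - (1 - p)^32, so even a small pin rate leaves most pages unreclaimable. A purely illustrative calculation (the 32-per-page figure is an assumption, not a measured kernel value):

```c
#include <assert.h>

#define OBJS_PER_PAGE 32

/* probability that a page contains at least one pinned object,
 * when each object is independently pinned with probability
 * pin_rate: 1 - (1 - pin_rate)^OBJS_PER_PAGE */
static double page_pinned_prob(double pin_rate)
{
	double unpinned = 1.0;
	int i;

	for (i = 0; i < OBJS_PER_PAGE; i++)
		unpinned *= 1.0 - pin_rate;
	return 1.0 - unpinned;
}
```

With only 5% of dentries pinned, roughly 80% of the pages end up stuck holding dentries.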

>  If slabs are pinned by a dentry that 
> is pinned then the slab page will be filled up with other dentries that 
> are not pinned. The slab defrag approach causes a coalescing of objects 
> around slabs that have pinned objects.

Yes. You've got (most of) a fix. It's overly complicated and SLOB
doesn't need it. How many ways do I need to say this?


> > SLOB:
> > - internal overhead for kmalloc is 2 bytes (or 3 for odd-sized objects)
> 
> Well that increases if you need to align the object. For kmalloc this 
> usually means cache line align a power of two object right? So we have a 
> cacheline size of overhead?

a) alignment doesn't increase memory use because the memory before the
object is still allocatable
b) kmallocs aren't aligned!

> > - internal overhead for kmem_cache_alloc is 0 bytes (or 1 for odd-sized
> > objects)
> 
> You are not aligning to a double word boundary? This will create issues on 
> certain platforms.

The alignment minimum varies per arch. SLOB can go down to 2 bytes.
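The cost of an alignment minimum is just round-up padding on the object size. A sketch of that arithmetic, using a round-up macro in the style of the kernel's ALIGN (the helper name is mine): with an 8-byte minimum a 3-byte object carries 5 bytes of padding, while a 2-byte granule leaves only 1.

```c
#include <assert.h>
#include <stddef.h>

/* round x up to the next multiple of a (a must be a power of two),
 * in the style of the kernel's ALIGN() macro */
#define ALIGN_UP(x, a)	(((x) + (a) - 1) & ~((size_t)(a) - 1))

/* bytes of padding a given minimum alignment forces per object */
static size_t padding(size_t size, size_t align)
{
	return ALIGN_UP(size, align) - size;
}
```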

> > SLAB/SLUB
> > - internal overhead for kmalloc averages about 30%
> 
> I think that is valid for a random object size distribution?

It's a measurement from memory. But it roughly agrees with what you'd
expect from a random distribution.
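The expected figure is easy to sanity-check: round every request up to the next power of two, as the kmalloc size caches do, and sum the waste over a flat spread of sizes. A toy userspace estimate, not a kernel measurement; real workloads skew the number in either direction:

```c
#include <assert.h>
#include <stddef.h>

/* smallest power of two >= x, as a power-of-two size-class
 * kmalloc would allocate */
static size_t pow2_roundup(size_t x)
{
	size_t p = 1;

	while (p < x)
		p <<= 1;
	return p;
}

/* percent of allocated bytes wasted, for request sizes 1..max
 * each occurring equally often */
static size_t kmalloc_waste_pct(size_t max)
{
	size_t used = 0, alloc = 0, s;

	for (s = 1; s <= max; s++) {
		used += s;
		alloc += pow2_roundup(s);
	}
	return (alloc - used) * 100 / alloc;
}
```

For a uniform spread of sizes this lands in the mid-20s as a fraction of allocated memory, in the same ballpark as the 30% figure above.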

> > The only time SLAB/SLUB can win in efficiency (assuming they're using
> > the same page size) is when all your kmallocs just happen to be powers
> > of two. Which, assuming any likely distribution of string or other
> > object sizes, isn't often.
> 
> In case of SLAB that is true. In case of SLUB we could convert the 
> kmallocs to kmem_cache_alloc. The newly created slab would in all 
> likelihood be an alias of an already existing structure and thus be 
> essentially free. In that fashion SLUB can (in a limited way) put objects 
> for different slab caches into the same slab page too.

Uh, no. You'd need a new slab for every multiple of 2 bytes. And then
you'd just be making the underused and pinned slab problems worse.

-- 
Mathematics is the supreme nostalgia of our time.

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
