Date:	Fri, 4 Jan 2008 13:36:30 -0800 (PST)
From:	Christoph Lameter <clameter@....com>
To:	Matt Mackall <mpm@...enic.com>
cc:	Ingo Molnar <mingo@...e.hu>,
	Linus Torvalds <torvalds@...ux-foundation.org>,
	Pekka Enberg <penberg@...helsinki.fi>,
	Hugh Dickins <hugh@...itas.com>,
	Andi Kleen <andi@...stfloor.org>,
	Peter Zijlstra <a.p.zijlstra@...llo.nl>,
	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] procfs: provide slub's /proc/slabinfo

On Fri, 4 Jan 2008, Matt Mackall wrote:

> > needs to know the size. So it is not possible in SLOB to use 
> > kmem_cache_alloc on an object and then free it using kfree?
> 
> Indeed. Mismatching allocator and deallocator is a bug, even if it happens
> to work for SLAB/SLUB.

Was the kernel audited for this case? I saw some rather scary uses of slab 
objects for I/O purposes come up during SLUB development.
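To make that concrete, here is a minimal sketch of the pattern in question
(the cache and the struct are made up for illustration; the calls are the
ordinary kmem_cache_alloc()/kfree() APIs):

struct foo {
	int x;
};

/* Hypothetical cache, created elsewhere with kmem_cache_create() */
extern struct kmem_cache *foo_cache;

void foo_example(void)
{
	struct foo *p = kmem_cache_alloc(foo_cache, GFP_KERNEL);

	if (!p)
		return;
	/*
	 * BUG: the object came from a specific cache, so it must be
	 * released with kmem_cache_free(foo_cache, p). kfree() happens
	 * to work under SLAB/SLUB but not under SLOB, which needs the
	 * size header that only kmalloc() writes.
	 */
	kfree(p);
}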

> Yes. You've got (most of) a fix. It's overly-complicated and SLOB
> doesn't need it. How many ways do I need to say this?

So SLOB is then not able to compact memory after an updatedb run? The 
memory must stay dedicated to slab uses. SLOB memory can be filled up 
with objects of other slab types, but it cannot be reused for the page 
cache, anonymous pages, etc. With SLUB, slab defrag frees memory back to 
the page allocator. That is *not* provided by SLOB. It could be made to 
work, though.

> > Well, that increases if you need to align the object. For kmalloc this 
> > usually means cache-line aligning a power-of-two object, right? So we 
> > have a cache line's worth of overhead?
> 
> a) alignment doesn't increase memory use because the memory before the
> object is still allocatable

Ok so we end up with lots of small holes on a list that has to be scanned 
to find free memory?
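To illustrate what I mean, here is a simplified first-fit walk (the names
are invented for the example; this is a model of the problem, not code
from mm/slob.c):

struct hole {
	size_t size;
	struct hole *next;
};

/* Walk the free list until a hole is big enough; every small leftover
 * fragment on the list gets visited along the way. */
static void *first_fit(struct hole **head, size_t size)
{
	struct hole **pp;

	for (pp = head; *pp; pp = &(*pp)->next) {
		if ((*pp)->size >= size) {
			struct hole *h = *pp;

			*pp = h->next;	/* unlink the hole */
			return h;	/* hand the memory out */
		}
	}
	return NULL;	/* nothing fits; fall back to the page allocator */
}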

> b) kmallocs aren't aligned!

From mm/slob.c:

#ifndef ARCH_KMALLOC_MINALIGN
#define ARCH_KMALLOC_MINALIGN __alignof__(unsigned long)
#endif

void *__kmalloc_node(size_t size, gfp_t gfp, int node)
{
        unsigned int *m;
        int align = max(ARCH_KMALLOC_MINALIGN, ARCH_SLAB_MINALIGN);

(IMHO it would be safer to set the minimum default to
__alignof__(unsigned long long), like SLAB/SLUB do.)
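For context, the function goes on to stash the requested size in that
alignment padding, roughly like this (paraphrased from the same file, so
treat it as a sketch rather than a verbatim quote):

	m = slob_alloc(size + align, gfp, align, node);
	if (!m)
		return NULL;
	*m = size;			/* kfree() later reads this back */
	return (void *)m + align;	/* caller sees the aligned address */

That size header is also why a 128-byte kmalloc effectively costs 136
bytes in the comparison below.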

Ok. So let's try a worst-case scenario. If we do a 128-byte kmalloc, then we 
can allocate the following number of objects from one 4k slab:

SLUB 32	    (all memory of the 4k page is used for 128-byte objects)
SLAB 29/30  (the management structure occupies the first two or three objects)
SLOB 30(?)  (alignment results in an effective object size of 136 bytes,
		leaving 16 bytes that could only be used for a
		very small allocation. Right?)
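For anyone who wants to check the arithmetic, a trivial userspace version
(the 8-byte minimum alignment and the two-or-three management objects are
my assumptions for a 64-bit box):

#include <stdio.h>

int main(void)
{
	const int page = 4096, obj = 128, align = 8;

	printf("SLUB: %d objects\n", page / obj);	/* 32 */
	printf("SLAB: %d or %d objects\n",		/* 30 or 29 */
	       page / obj - 2, page / obj - 3);
	printf("SLOB: %d objects, %d bytes left over\n",/* 30, 16 */
	       page / (obj + align), page % (obj + align));
	return 0;
}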