Date:	Tue, 16 Oct 2012 09:35:41 -0300
From:	Ezequiel Garcia <elezegarcia@...il.com>
To:	David Rientjes <rientjes@...gle.com>
Cc:	Andi Kleen <andi@...stfloor.org>,
	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
	linux-mm@...ck.org, Tim Bird <tim.bird@...sony.com>,
	celinux-dev@...ts.celinuxforum.org
Subject: Re: [Q] Default SLAB allocator

David,

On Mon, Oct 15, 2012 at 9:46 PM, David Rientjes <rientjes@...gle.com> wrote:
> On Sat, 13 Oct 2012, Ezequiel Garcia wrote:
>
>> But SLAB suffers from a lot more internal fragmentation than SLUB,
>> which I guess is a known fact. So memory-constrained devices
>> would waste more memory by using SLAB.
>
> Even with slub's per-cpu partial lists?

I'm not considering that, but rather plain internal fragmentation: the
per-object difference between the requested and the allocated size.
Admittedly, this may be a naive analysis.

However, the devices where this matters would typically have only one CPU, right?
So the overhead imposed by per-CPU data shouldn't matter much.

Studying the difference in overhead imposed by the two allocators is
still on my TODO list.

Now, returning to fragmentation: the problem with SLAB is that its
smallest cache available for kmalloc'ed objects is 32 bytes,
while SLUB offers 8, 16, 24, ...
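
To make the waste concrete, here is a minimal userspace sketch (plain C,
not kernel code) that rounds a request up to the smallest cache that
would fit, using only the sizes mentioned above (32-byte minimum for
SLAB; 8, 16, 24 for SLUB, power-of-two beyond that). The real size-class
tables are more involved and depend on configuration, so treat this
purely as an illustration of per-object waste:

/*
 * Illustration only: per-object waste for small kmalloc() requests,
 * assuming a 32-byte minimum cache for SLAB and 8/16/24-byte caches
 * for SLUB, with simplified power-of-two rounding above that.
 */
#include <stdio.h>
#include <stddef.h>

static size_t slab_alloc_size(size_t req)
{
	size_t size = 32;		/* smallest SLAB kmalloc cache */

	while (size < req)
		size *= 2;		/* 32, 64, 128, ... */
	return size;
}

static size_t slub_alloc_size(size_t req)
{
	if (req <= 8)
		return 8;
	if (req <= 16)
		return 16;
	if (req <= 24)
		return 24;
	/* above 24, assume power-of-two caches for simplicity */
	return slab_alloc_size(req);
}

int main(void)
{
	size_t req;

	printf("req  slab(waste)  slub(waste)\n");
	for (req = 1; req <= 40; req += 3) {
		size_t a = slab_alloc_size(req);
		size_t b = slub_alloc_size(req);

		printf("%3zu  %4zu(%2zu)    %4zu(%2zu)\n",
		       req, a, a - req, b, b - req);
	}
	return 0;
}

For an 8-byte request, for example, this gives 24 bytes of waste under
the assumed SLAB minimum versus none under the assumed SLUB classes.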

Perhaps it would make sense to add smaller caches to SLAB?
Is there any strong reason NOT to do this?

Thanks,

    Ezequiel