Open Source and information security mailing list archives
Date: Wed, 16 Jul 2008 10:52:56 -0500
From: Christoph Lameter <cl@...ux-foundation.org>
To: Richard Kennedy <richard@....demon.co.uk>
CC: penberg@...helsinki.fi, mpm@...enic.com,
    linux-mm <linux-mm@...ck.org>, lkml <linux-kernel@...r.kernel.org>,
    Mel Gorman <mel@....ul.ie>
Subject: Re: [PATCH][RFC] slub: increasing order reduces memory usage of some key caches

You can get a similar effect by booting with a kernel parameter such as
slub_min_objects=20.

The fundamental difference in your patch is that you check the wasted
space as a fraction of the size of a single object, whereas the current
logic only checks it as a fraction of a page. We could add an additional
condition that the wasted space be no larger than half an object?

Affected slab configurations: 24 byte sized caches now become order 1
caches, and 72 byte sized caches now become order 3.

 96 byte         0 -> 1
320 byte         1 -> 2
448 byte         2 -> 3
buffer_head      0 -> 1
idr_layer_cache  2 -> 3
inode_cache      2 -> 3
journal_*        1 -> 2
etc.

So the effect would be a significant enlargement of caches. In general,
slub gets faster the larger the allocations it can get from the page
allocator. The page allocator's performance is pretty slow compared to
slub's allocation logic, so it is a win to minimize calls to it.
However, that in turn puts more pressure on higher-order page
allocations.