Date:	Wed, 16 Jul 2008 13:29:31 +0100
From:	Richard Kennedy <richard@....demon.co.uk>
To:	cl@...ux-foundation.org, penberg@...helsinki.fi, mpm@...enic.com
Cc:	linux-mm <linux-mm@...ck.org>, lkml <linux-kernel@...r.kernel.org>
Subject: [PATCH][RFC] slub: increasing order reduces memory usage of some
	key caches

Hi,

This test patch increases the slab order of those caches that will gain
an extra object per slab, i.e. those where the per-slab waste is at
least half an object, so doubling the slab size fits at least one more
object. In particular, on 64-bit this affects dentry & radix_tree_node.
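
For illustration, here is a standalone userspace sketch of that
arithmetic. The object sizes (roughly 208 bytes per dentry and 560
bytes per radix_tree_node on 64-bit) are assumptions inferred from the
numbers below, not values taken from the kernel:

/* Standalone sketch: when does bumping the slab order by one gain an
 * extra object?  Object sizes are assumptions, not kernel values. */
#include <stdio.h>

#define PAGE_SIZE 4096UL

static void check(const char *name, unsigned long size, int order)
{
	unsigned long slab = PAGE_SIZE << order;
	unsigned long objects = slab / size;
	unsigned long waste = slab % size;

	printf("%-16s order %d: %lu objects, %lu bytes wasted\n",
	       name, order, objects, waste);
	/* The patch's test: at least half an object wasted per slab. */
	if (waste * 2 >= size)
		printf("%-16s order %d: %lu objects (one more than double)\n",
		       name, order + 1, (PAGE_SIZE << (order + 1)) / size);
}

int main(void)
{
	check("dentry", 208, 0);		/* assumed object size */
	check("radix_tree_node", 560, 1);	/* assumed object size */
	return 0;
}

The per-slab counts this prints (19 -> 39 for dentry, 14 -> 29 for
radix_tree_node) are consistent with the objects/slabs ratios in the
table below.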

On a freshly booted box, after a kernel compile (make clean; make),
there are significant savings in both dentry & radix_tree_node.

On my AMD64 desktop with 3 GB of RAM, typical numbers:

                 objects  pages/slab  slabs  total pages   diff
radix_tree_node
  2.6.26          33922           2   2423         4846
  +patch          33541           4   1165         4660   -186
dentry
  2.6.26          82136           1   4323         4323
  +patch          79482           2   2038         4076   -247
Caching the 2654 extra dentries would use 136 pages, but that still
leaves a saving of 111 pages.

I see some improvement in iozone write/rewrite numbers, particularly
apparent at the beginning of a run (I guess when there are no dirty
pages?).

I've also run this patch on my old laptop (Pentium M, 384 MB RAM) and
it works with no problems. After a kernel make there's not much
difference in memory used, but I think I'm seeing an improvement in
elapsed time: 35 minutes -> 33 minutes. However, I've not run this
enough times to tell whether that's real or just noise!

I've been running this on my desktop for several weeks without any
problems.

Can anyone suggest any other tests that would be useful to run? And is
there any way to measure what impact this has on fragmentation?

Patch against 2.6.26 git.

Thanks
Richard

diff --git a/mm/slub.c b/mm/slub.c
index 315c392..c365b04 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -2301,6 +2301,18 @@ static int calculate_sizes(struct kmem_cache *s, int forced_order)
 	if (order < 0)
 		return 0;
 
+	if (order < slub_max_order) {
+		unsigned long waste = (PAGE_SIZE << order) % size;
+
+		/* At least half an object's worth of space is wasted per
+		 * slab, so doubling the slab size fits an extra object. */
+		if (waste * 2 >= size) {
+			order++;
+			printk(KERN_INFO "SLUB: increasing order %s->[%d] [%lu]\n",
+			       s->name, order, size);
+		}
+	}
+
 	s->allocflags = 0;
 	if (order)
 		s->allocflags |= __GFP_COMP;

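With SLUB, the order actually chosen for a cache can be read back from
sysfs via /sys/kernel/slab/<cache>/order, which makes it easy to
confirm the patch took effect. A minimal sketch, assuming a SLUB kernel
with sysfs mounted:

/* Sketch: print the order SLUB chose for the two caches above.
 * Assumes a SLUB kernel exposing /sys/kernel/slab/<cache>/order. */
#include <stdio.h>

static void show_order(const char *cache)
{
	char path[128];
	FILE *f;
	int order;

	snprintf(path, sizeof(path), "/sys/kernel/slab/%s/order", cache);
	f = fopen(path, "r");
	if (!f)
		return;
	if (fscanf(f, "%d", &order) == 1)
		printf("%s: order %d\n", cache, order);
	fclose(f);
}

int main(void)
{
	show_order("dentry");
	show_order("radix_tree_node");
	return 0;
}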

