Message-ID: <alpine.LRH.2.02.1804171454020.26973@file01.intranet.prod.int.rdu2.redhat.com>
Date:   Tue, 17 Apr 2018 15:06:24 -0400 (EDT)
From:   Mikulas Patocka <mpatocka@...hat.com>
To:     Christopher Lameter <cl@...ux.com>
cc:     Mike Snitzer <snitzer@...hat.com>,
        Vlastimil Babka <vbabka@...e.cz>,
        Matthew Wilcox <willy@...radead.org>,
        Pekka Enberg <penberg@...nel.org>, linux-mm@...ck.org,
        dm-devel@...hat.com, David Rientjes <rientjes@...gle.com>,
        Joonsoo Kim <iamjoonsoo.kim@....com>,
        Andrew Morton <akpm@...ux-foundation.org>,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH RESEND] slab: introduce the flag SLAB_MINIMIZE_WASTE
On Tue, 17 Apr 2018, Christopher Lameter wrote:

> On Mon, 16 Apr 2018, Mikulas Patocka wrote:
> 
> > This patch introduces a flag SLAB_MINIMIZE_WASTE for slab and slub. This
> > flag causes allocation of larger slab caches in order to minimize wasted
> > space.
> >
> > This is needed because we want to use dm-bufio for deduplication index and
> > there are existing installations with non-power-of-two block sizes (such
> > as 640KB). The performance of the whole solution depends on efficient
> > memory use, so we must waste as little memory as possible.
> 
> Hmmm. Can we come up with a generic solution instead?
> 
> This may mean relaxing the enforcement of the allocation max order a bit
> so that we can get dense allocation through higher order allocs.
> 
> But then higher order allocs are generally seen as problematic.
> 
> Note that SLUB will fall back to smallest order already if a failure
> occurs so increasing slub_max_order may not be that much of an issue.
> 
> Maybe drop the max order limit completely and use MAX_ORDER instead? That
> means that callers need to be able to tolerate failures.

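To put concrete numbers on the waste (a quick back-of-the-envelope 
calculation, assuming 4KB pages): a 640KB object rounded up to the next 
power of two sits alone in a 1MB (order-8) slab, wasting 384KB, i.e. 
37.5%. An order-9 (2MB) slab holds three 640KB objects and wastes only 
128KB, i.e. 6.25%.
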
I can make a slub-only patch with no extra flag (on a freshly booted 
system it increases only the order of the caches "TCPv6" and 
"sighand_cache" by one - so it should not have unexpected effects).

Doing a generic solution for slab would be more complicated, because slab 
assumes that all slabs in a cache have the same order, so it cannot fall 
back to lower-order allocations. The slub-only patch follows:


From: Mikulas Patocka <mpatocka@...hat.com>
Subject: [PATCH] slub: minimize wasted space

When an object doesn't fit into a slab of order slub_max_order, the slub
subsystem rounds the slab size up to the next power of two. This causes a
lot of wasted space - e.g. a 640KB object consumes 1MB of memory.

This patch makes the slub subsystem increase the order if it is beneficial.
The order is increased as long as it reduces wasted space; there is a
cutoff at 32 objects per slab.

Signed-off-by: Mikulas Patocka <mpatocka@...hat.com>

---
 mm/slub.c |   21 ++++++++++++++++-----
 1 file changed, 16 insertions(+), 5 deletions(-)

Index: linux-2.6/mm/slub.c
===================================================================
--- linux-2.6.orig/mm/slub.c	2018-04-17 19:59:49.000000000 +0200
+++ linux-2.6/mm/slub.c	2018-04-17 20:58:23.000000000 +0200
@@ -3252,6 +3252,7 @@ static inline unsigned int slab_order(un
 static inline int calculate_order(unsigned int size, unsigned int reserved)
 {
 	unsigned int order;
+	unsigned int test_order;
 	unsigned int min_objects;
 	unsigned int max_objects;
 
@@ -3277,7 +3278,7 @@ static inline int calculate_order(unsign
 			order = slab_order(size, min_objects,
 					slub_max_order, fraction, reserved);
 			if (order <= slub_max_order)
-				return order;
+				goto ret_order;
 			fraction /= 2;
 		}
 		min_objects--;
@@ -3289,15 +3290,25 @@ static inline int calculate_order(unsign
 	 */
 	order = slab_order(size, 1, slub_max_order, 1, reserved);
 	if (order <= slub_max_order)
-		return order;
+		goto ret_order;
 
 	/*
 	 * Doh this slab cannot be placed using slub_max_order.
 	 */
 	order = slab_order(size, 1, MAX_ORDER, 1, reserved);
-	if (order < MAX_ORDER)
-		return order;
-	return -ENOSYS;
+	if (order >= MAX_ORDER)
+		return -ENOSYS;
+
+ret_order:
+	for (test_order = order + 1; test_order < MAX_ORDER; test_order++) {
+		unsigned long order_objects = ((PAGE_SIZE << order) - reserved) / size;
+		unsigned long test_order_objects = ((PAGE_SIZE << test_order) - reserved) / size;
+		if (test_order_objects > min(32, MAX_OBJS_PER_PAGE))
+			break;
+		if (test_order_objects > order_objects << (test_order - order))
+			order = test_order;
+	}
+	return order;
 }
 
 static void

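For illustration, here is a minimal userspace sketch of the order 
selection that the patched calculate_order() ends up doing for a 640KB 
object (my own standalone rework, not the kernel code; it assumes 4KB 
pages, reserved == 0 and MAX_ORDER == 11, and the helper name 
order_objects() is mine):

#include <stdio.h>

#define PAGE_SIZE	4096UL
#define MAX_ORDER	11	/* assumed typical value */
#define OBJS_CUTOFF	32	/* the patch's 32-objects-per-slab cutoff */

/* number of objects that fit into one slab of the given order */
static unsigned long order_objects(unsigned int order, unsigned long size)
{
	return (PAGE_SIZE << order) / size;
}

int main(void)
{
	unsigned long size = 640 * 1024;	/* 640KB dm-bufio block */
	unsigned int order, test_order;

	/* smallest order that holds at least one object - this stands in
	   for the slab_order() fallback in the patch */
	for (order = 0; order < MAX_ORDER; order++)
		if (order_objects(order, size))
			break;

	/* the ret_order loop from the patch: raise the order as long as
	   it packs proportionally more objects per slab */
	for (test_order = order + 1; test_order < MAX_ORDER; test_order++) {
		unsigned long objs = order_objects(order, size);
		unsigned long test_objs = order_objects(test_order, size);

		if (test_objs > OBJS_CUTOFF)
			break;
		if (test_objs > objs << (test_order - order))
			order = test_order;
	}

	printf("order %u: %lu objects, %lu%% wasted\n", order,
	       order_objects(order, size),
	       ((PAGE_SIZE << order) - order_objects(order, size) * size) *
	       100 / (PAGE_SIZE << order));
	return 0;
}

This prints "order 9: 3 objects, 6% wasted" - the order grows from 8 to 9 
because three objects in 2MB waste less than one object in 1MB, and it 
stops there because order 10 (six objects in 4MB) packs no better.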