Message-ID: <alpine.DEB.2.20.1804180952580.1334@nuc-kabylake>
Date:   Wed, 18 Apr 2018 09:55:31 -0500 (CDT)
From:   Christopher Lameter <cl@...ux.com>
To:     Mikulas Patocka <mpatocka@...hat.com>
cc:     Mike Snitzer <snitzer@...hat.com>,
        Vlastimil Babka <vbabka@...e.cz>,
        Matthew Wilcox <willy@...radead.org>,
        Pekka Enberg <penberg@...nel.org>, linux-mm@...ck.org,
        dm-devel@...hat.com, David Rientjes <rientjes@...gle.com>,
        Joonsoo Kim <iamjoonsoo.kim@....com>,
        Andrew Morton <akpm@...ux-foundation.org>,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH RESEND] slab: introduce the flag SLAB_MINIMIZE_WASTE

On Tue, 17 Apr 2018, Mikulas Patocka wrote:

> I can make a slub-only patch with no extra flag (on a freshly booted
> system it increases only the order of caches "TCPv6" and "sighand_cache"
> by one - so it should not have unexpected effects):
>
> Doing a generic solution for slab would be more complicated because slab
> assumes that all slabs have the same order, so it can't fall back to
> lower-order allocations.

Well, again: SLAB uses compound pages and thus would be able to detect the
size of the page. It may be some work, but it could be done.
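
For illustration only, a rough sketch (untested; slab_actual_order() is a
hypothetical name, and the exact integration point into SLAB's fallback
path is an assumption): compound pages record their order in the head
page, so SLAB could query the real allocation order instead of assuming
every slab uses cachep->gfporder.

/*
 * Sketch: recover the actual allocation order of a (possibly
 * compound) page from the order stored in its head page.
 */
static inline unsigned int slab_actual_order(struct page *page)
{
	return compound_order(compound_head(page));
}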

>
> Index: linux-2.6/mm/slub.c
> ===================================================================
> --- linux-2.6.orig/mm/slub.c	2018-04-17 19:59:49.000000000 +0200
> +++ linux-2.6/mm/slub.c	2018-04-17 20:58:23.000000000 +0200
> @@ -3252,6 +3252,7 @@ static inline unsigned int slab_order(un
>  static inline int calculate_order(unsigned int size, unsigned int reserved)
>  {
>  	unsigned int order;
> +	unsigned int test_order;
>  	unsigned int min_objects;
>  	unsigned int max_objects;
>
> @@ -3277,7 +3278,7 @@ static inline int calculate_order(unsign
>  			order = slab_order(size, min_objects,
>  					slub_max_order, fraction, reserved);
>  			if (order <= slub_max_order)
> -				return order;
> +				goto ret_order;
>  			fraction /= 2;
>  		}
>  		min_objects--;
> @@ -3289,15 +3290,25 @@ static inline int calculate_order(unsign
>  	 */
>  	order = slab_order(size, 1, slub_max_order, 1, reserved);

The slab order is determined in slab_order().

>  	if (order <= slub_max_order)
> -		return order;
> +		goto ret_order;
>
>  	/*
>  	 * Doh this slab cannot be placed using slub_max_order.
>  	 */
>  	order = slab_order(size, 1, MAX_ORDER, 1, reserved);
> -	if (order < MAX_ORDER)
> -		return order;
> -	return -ENOSYS;
> +	if (order >= MAX_ORDER)
> +		return -ENOSYS;
> +
> +ret_order:
> +	for (test_order = order + 1; test_order < MAX_ORDER; test_order++) {
> +		unsigned long order_objects = ((PAGE_SIZE << order) - reserved) / size;
> +		unsigned long test_order_objects = ((PAGE_SIZE << test_order) - reserved) / size;
> +		if (test_order_objects > min(32, MAX_OBJS_PER_PAGE))
> +			break;
> +		if (test_order_objects > order_objects << (test_order - order))
> +			order = test_order;
> +	}
> +	return order;

Could you move that logic into slab_order()? It does something awfully
similar.
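
For concreteness, one untested sketch of that consolidation, with the
loop factored into a helper that slab_order() could tail-call
(minimize_order_waste() is a made-up name; the logic mirrors the hunk
quoted above):

/*
 * Sketch only: the waste-minimizing pass from the patch above,
 * expressed as a helper so the order logic lives in one place.
 */
static unsigned int minimize_order_waste(unsigned int order,
		unsigned int size, unsigned int reserved)
{
	unsigned int test_order;

	for (test_order = order + 1; test_order < MAX_ORDER; test_order++) {
		unsigned long objects =
			((PAGE_SIZE << order) - reserved) / size;
		unsigned long test_objects =
			((PAGE_SIZE << test_order) - reserved) / size;

		/* enough objects per slab already; stop growing the order */
		if (test_objects > min_t(unsigned long, 32, MAX_OBJS_PER_PAGE))
			break;
		/* take the higher order only if it packs strictly more
		 * objects than doubling the page count alone would */
		if (test_objects > objects << (test_order - order))
			order = test_order;
	}
	return order;
}

E.g. for a hypothetical 720-byte object with reserved == 0 on 4K pages:
order 0 fits 5 objects and order 1 fits 11, and since 11 > (5 << 1) the
loop would bump the cache to order 1, cutting per-slab waste from 496 to
272 bytes.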
