Message-ID: <49b0d611-e116-c78d-cf14-6d5f96ae500e@suse.cz>
Date:   Mon, 2 May 2022 12:00:44 +0200
From:   Vlastimil Babka <vbabka@...e.cz>
To:     Wonhyuk Yang <vvghjk1234@...il.com>,
        Christoph Lameter <cl@...ux.com>,
        Pekka Enberg <penberg@...nel.org>,
        David Rientjes <rientjes@...gle.com>,
        Joonsoo Kim <iamjoonsoo.kim@....com>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Roman Gushchin <roman.gushchin@...ux.dev>
Cc:     Hyeonggon Yoo <42.hyeyoo@...il.com>, linux-mm@...ck.org,
        linux-kernel@...r.kernel.org
Subject: Re: [Patch v3] mm/slub: Remove repeated action in calculate_order()

On 4/30/22 02:25, Wonhyuk Yang wrote:
> To calculate the order, calc_slab_order() is called repeatedly with a
> changing fract_leftover. Thus, the branch that does not depend on
> fract_leftover is executed repeatedly. So make it run only once.
> 
> Plus, when min_objects reaches 1, we set fract_leftover to 1. In
> this case, we can calculate the order with max(slub_min_order,
> get_order(size)) instead of calling calc_slab_order().
> 
> No functional impact expected.
> 
> Signed-off-by: Wonhyuk Yang <vvghjk1234@...il.com>
> Reviewed-by: Hyeonggon Yoo <42.hyeyoo@...il.com>
> ---
> 
>  mm/slub.c | 18 +++++++-----------
>  1 file changed, 7 insertions(+), 11 deletions(-)
> 
> diff --git a/mm/slub.c b/mm/slub.c
> index ed5c2c03a47a..1fe4d62b72b8 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -3795,9 +3795,6 @@ static inline unsigned int calc_slab_order(unsigned int size,
>  	unsigned int min_order = slub_min_order;
>  	unsigned int order;
>  
> -	if (order_objects(min_order, size) > MAX_OBJS_PER_PAGE)
> -		return get_order(size * MAX_OBJS_PER_PAGE) - 1;
> -
>  	for (order = max(min_order, (unsigned int)get_order(min_objects * size));
>  			order <= max_order; order++) {
>  
> @@ -3820,6 +3817,11 @@ static inline int calculate_order(unsigned int size)
>  	unsigned int max_objects;
>  	unsigned int nr_cpus;
>  
> +	if (unlikely(order_objects(slub_min_order, size) > MAX_OBJS_PER_PAGE)) {
> +		order = get_order(size * MAX_OBJS_PER_PAGE) - 1;
> +		goto out;
> +	}

Hm, interestingly, both before and after your patch, MAX_OBJS_PER_PAGE can
theoretically be exceeded not with slub_min_order, but with the higher
orders tried later. That seems to be prevented only as a side effect of
fragmentation being close to none, so the higher orders are never actually
attempted. It would maybe be less confusing to check that explicitly, even
if it's a bit wasteful - this is not really perf critical code.
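
Roughly like this in the calc_slab_order() loop (untested sketch, just to
illustrate the explicit check; the early return value mirrors what the
removed pre-check computed):

	for (order = max(min_order, (unsigned int)get_order(min_objects * size));
			order <= max_order; order++) {

		unsigned int slab_size = (unsigned int)PAGE_SIZE << order;
		unsigned int rem;

		/*
		 * Check every attempted order explicitly, instead of
		 * relying on high orders never being reached thanks to
		 * low fragmentation.
		 */
		if (order_objects(order, size) > MAX_OBJS_PER_PAGE)
			return get_order(size * MAX_OBJS_PER_PAGE) - 1;

		rem = slab_size % size;

		if (rem <= slab_size / fract_leftover)
			break;
	}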

> +
>  	/*
>  	 * Attempt to find best configuration for a slab. This
>  	 * works by first attempting to generate a layout with
> @@ -3865,14 +3867,8 @@ static inline int calculate_order(unsigned int size)
>  	 * We were unable to place multiple objects in a slab. Now
>  	 * lets see if we can place a single object there.
>  	 */
> -	order = calc_slab_order(size, 1, slub_max_order, 1);
> -	if (order <= slub_max_order)
> -		return order;
> -
> -	/*
> -	 * Doh this slab cannot be placed using slub_max_order.
> -	 */
> -	order = calc_slab_order(size, 1, MAX_ORDER, 1);
> +	order = max_t(unsigned int, slub_min_order, get_order(size));

If we failed to assign an order above, then AFAICS it means even
slub_min_order will not give us more than 1 object per slab. Thus it
doesn't make sense to use it in the max() formula, and we can just use
get_order(), no?
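
I.e. just (untested):

	/*
	 * We were unable to place multiple objects in a slab. Now
	 * lets see if we can place a single object there.
	 */
	order = get_order(size);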

> +out:
>  	if (order < MAX_ORDER)
>  		return order;
>  	return -ENOSYS;
