Date:   Tue, 19 Sep 2023 15:56:24 +0800
From:   Feng Tang <feng.tang@...el.com>
To:     Vlastimil Babka <vbabka@...e.cz>
CC:     David Rientjes <rientjes@...gle.com>,
        Christoph Lameter <cl@...ux.com>,
        Hyeonggon Yoo <42.hyeyoo@...il.com>,
        Jay Patel <jaypatel@...ux.ibm.com>,
        Roman Gushchin <roman.gushchin@...ux.dev>,
        Pekka Enberg <penberg@...nel.org>,
        Joonsoo Kim <iamjoonsoo.kim@....com>,
        "linux-mm@...ck.org" <linux-mm@...ck.org>,
        "patches@...ts.linux.dev" <patches@...ts.linux.dev>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 1/4] mm/slub: simplify the last resort slab order
 calculation

Hi Vlastimil,

On Fri, Sep 08, 2023 at 10:53:04PM +0800, Vlastimil Babka wrote:
> If calculate_order() can't fit even a single large object within
> slub_max_order, it will try using the smallest necessary order that may
> exceed slub_max_order but not MAX_ORDER.
> 
> Currently this is done with a call to calc_slab_order() which is
> unnecessary. We can simply use get_order(size). No functional change.
> 
> Signed-off-by: Vlastimil Babka <vbabka@...e.cz>
> ---
>  mm/slub.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/mm/slub.c b/mm/slub.c
> index f7940048138c..c6e694cb17b9 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -4193,7 +4193,7 @@ static inline int calculate_order(unsigned int size)
>  	/*
>  	 * Doh this slab cannot be placed using slub_max_order.
>  	 */
> -	order = calc_slab_order(size, 1, MAX_ORDER, 1);
> +	order = get_order(size);
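
A side note on the new call, mainly to record my reading of it: if I read
the old code right, calc_slab_order(size, 1, MAX_ORDER, 1) started its
search at max(slub_min_order, get_order(size)), while plain get_order(size)
knows nothing about 'slub_min_order', which is relevant to the case below.
For reference, get_order(size) is just the smallest order whose 2^order
pages can hold 'size' bytes, e.g. with 4KB pages (illustrative values only):

	get_order(PAGE_SIZE)      == 0
	get_order(PAGE_SIZE + 1)  == 1
	get_order(8 * PAGE_SIZE)  == 3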


This patchset is a nice cleanup, and my previous tests all looked fine.
However, one 'slub_min_order' setup that Christoph pointed out [1] no longer
works: the option doesn't take effect once this 1/4 patch is applied.

The root cause seems to be that, in the current kernel, 'slub_max_order' is
not adjusted along with 'slub_min_order', so there are cases where
'slub_min_order' ends up bigger than the default 'slub_max_order' (3).

It could be fixed by the patch below:

diff --git a/mm/slub.c b/mm/slub.c
index 1c91f72c7239..dbe950783105 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -4702,6 +4702,9 @@ static int __init setup_slub_min_order(char *str)
 {
 	get_option(&str, (int *)&slub_min_order);
 
+	if (slub_min_order > slub_max_order)
+		slub_max_order = slub_min_order;
+
 	return 1;
 }

Though a formal fix may also need to cover crazy settings like
"slub_min_order=6 slub_max_order=5"; a rough sketch of that is below.
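
Something like the below on top, maybe (just a sketch against my reading of
the existing setup_slub_max_order() handler, written from memory and not
really tested), so that whichever of the two options is parsed later keeps
the pair consistent:

static int __init setup_slub_max_order(char *str)
{
	get_option(&str, (int *)&slub_max_order);

	/* Don't exceed what the page allocator can provide. */
	if (slub_max_order > MAX_ORDER)
		slub_max_order = MAX_ORDER;

	/* If slub_min_order was set higher earlier, pull it down to match. */
	slub_min_order = min(slub_min_order, slub_max_order);

	return 1;
}

Silently clamping here matches what my hack in setup_slub_min_order() above
does; rejecting the bogus combination and falling back to the defaults would
be another option.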

[1]. https://lore.kernel.org/lkml/21a0ba8b-bf05-0799-7c78-2a35f8c8d52a@os.amperecomputing.com/

Thanks,
Feng

>  	if (order <= MAX_ORDER)
>  		return order;
>  	return -ENOSYS;
> -- 
> 2.42.0
> 
> 
