Message-ID: <361111c6-921b-e129-edf3-367748f6497e@oracle.com>
Date:   Thu, 24 Mar 2022 14:57:30 -0700
From:   Mike Kravetz <mike.kravetz@...cle.com>
To:     Peng Liu <liupeng256@...wei.com>, akpm@...ux-foundation.org,
        yaozhenguo1@...il.com, linux-mm@...ck.org,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH] hugetlb: Fix hugepages_setup when deal with pernode

On 3/24/22 00:40, Peng Liu wrote:
> Hugepages can be specified per node since "hugetlbfs: extend
> the definition of hugepages parameter to support node allocation",
> but the following two problems are observed.
> 
> 1) Confusing behavior is observed when both 1G and 2M hugepages
> are set after "numa=off".
>  cmdline hugepage settings:
>   hugepagesz=1G hugepages=0:3,1:3
>   hugepagesz=2M hugepages=0:1024,1:1024
>  results:
>   HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
>   HugeTLB registered 2.00 MiB page size, pre-allocated 1024 pages
> 
> 2) Using invalid option values causes the entire kernel boot option
> string to be reported as Unknown.
>  Unknown kernel command line parameters "hugepages=0:1024,1:1024"

Thank you for debugging and sending the patch!

My first thought was "If someone is specifying 'numa=off' as well as
numa node specific allocations on the same command line, we should just
fail the allocation request".  However, this same situation could exist
without the 'numa=off' option as long as an invalid node is included in
the list.

With your patch, the node specific allocations are parsed (and processed)
until there is an error.  So, in the example above, three 1G pages and
1024 2M pages are allocated on node 0.  That seems correct.
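
To make that concrete, here is a minimal userspace sketch of the
parse-until-error behavior (the names and the MAX_NODES bound are
illustrative stand-ins for the real node-validity check, not the
actual mm/hugetlb.c code):

#include <stdio.h>
#include <string.h>

#define MAX_NODES 1   /* with numa=off, only node 0 is online */

/* Illustrative stand-in for the per-node "hugepages=0:3,1:3" parser;
 * the bounds check plays the role of the kernel's node-validity test. */
static int parse_pernode(const char *arg, unsigned long *pages)
{
    char buf[128];
    char *tok, *save;

    strncpy(buf, arg, sizeof(buf) - 1);
    buf[sizeof(buf) - 1] = '\0';

    for (tok = strtok_r(buf, ",", &save); tok;
         tok = strtok_r(NULL, ",", &save)) {
        int node;
        unsigned long count;

        if (sscanf(tok, "%d:%lu", &node, &count) != 2)
            return -1;           /* malformed token */
        if (node < 0 || node >= MAX_NODES)
            return -1;           /* invalid node: stop right here */
        pages[node] = count;     /* earlier tokens already applied */
    }
    return 0;
}

int main(void)
{
    unsigned long pages[MAX_NODES] = { 0 };

    /* Node 0 comes first, so its count lands before node 1 fails. */
    if (parse_pernode("0:3,1:3", pages))
        fprintf(stderr, "invalid hugepages parameter\n");
    printf("node 0: %lu pages requested\n", pages[0]);   /* prints 3 */
    return 0;
}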

Now suppose the node specific allocations are specified as:
hugepagesz=1G hugepages=1:3,0:3
hugepagesz=2M hugepages=1:1024,0:1024

Since node 1 is invalid, we experience an error here and do not allocate
any pages on node 0.

I am wondering if we should just error out and ignore the entire string
if ANY of the specified nodes are invalid.  Thoughts?
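
If we went that way, the parser would have to validate the whole string
before applying anything, roughly like this (again just an illustrative
userspace sketch with the same stand-ins as above, not a proposed
kernel patch):

#include <stdio.h>
#include <string.h>

#define MAX_NODES 1   /* as above: with numa=off, only node 0 is online */

/* The "reject the whole string" alternative: a first pass validates
 * every node:count token, and counts are applied only on the second
 * pass, once the whole string is known to be good. */
static int parse_pernode_strict(const char *arg, unsigned long *pages)
{
    char buf[128];
    char *tok, *save;
    int pass;

    for (pass = 0; pass < 2; pass++) {
        strncpy(buf, arg, sizeof(buf) - 1);
        buf[sizeof(buf) - 1] = '\0';

        for (tok = strtok_r(buf, ",", &save); tok;
             tok = strtok_r(NULL, ",", &save)) {
            int node;
            unsigned long count;

            if (sscanf(tok, "%d:%lu", &node, &count) != 2 ||
                node < 0 || node >= MAX_NODES)
                return -1;           /* any bad token rejects everything */
            if (pass == 1)
                pages[node] = count; /* apply only once fully validated */
        }
    }
    return 0;
}

int main(void)
{
    unsigned long pages[MAX_NODES] = { 0 };

    /* Node 1 is invalid, so even node 0's count is discarded. */
    if (parse_pernode_strict("0:1024,1:1024", pages))
        fprintf(stderr, "invalid hugepages parameter, ignored entirely\n");
    printf("node 0: %lu pages requested\n", pages[0]);   /* prints 0 */
    return 0;
}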

-- 
Mike Kravetz

> 
> To fix 1), hugetlb_hstate_alloc_pages should be called even when
> hugepages_setup goes to the invalid label. To fix 2), return 1 so that
> init's environment is not polluted with kernel boot options.
> 
> Fixes: b5389086ad7b ("hugetlbfs: extend the definition of hugepages parameter to support node allocation")
> Signed-off-by: Peng Liu <liupeng256@...wei.com>
> ---
>  mm/hugetlb.c | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
> 
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index b34f50156f7e..de8e1d8a2c66 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -4189,6 +4189,7 @@ static int __init hugepages_setup(char *s)
>  		}
>  	}
>  
> +out:
>  	/*
>  	 * Global state is always initialized later in hugetlb_init.
>  	 * But we need to allocate gigantic hstates here early to still
> @@ -4203,7 +4204,7 @@ static int __init hugepages_setup(char *s)
>  
>  invalid:
>  	pr_warn("HugeTLB: Invalid hugepages parameter %s\n", p);
> -	return 0;
> +	goto out;
>  }
>  __setup("hugepages=", hugepages_setup);
>  
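
For readers skimming the archive, the return-value change in the hunk
above is the fix for symptom 2): a __setup() handler that returns 1
tells the early-param code the option was consumed, while returning 0
sends it down the unknown-parameter path and on into init's
environment.  A userspace stand-in for that convention (the handler
and dispatch names are made up for illustration):

#include <stdio.h>

/* Stand-in for the kernel's __setup() return convention: a handler
 * returns 1 if it consumed the option, 0 to let it fall through as
 * an unknown parameter. */
static int hugepages_handler_old(const char *val)
{
    fprintf(stderr, "HugeTLB: Invalid hugepages parameter %s\n", val);
    return 0;    /* pre-patch: invalid value => option left "unknown" */
}

static int hugepages_handler_new(const char *val)
{
    fprintf(stderr, "HugeTLB: Invalid hugepages parameter %s\n", val);
    return 1;    /* post-patch: warn, but still consume the option */
}

static void dispatch(const char *opt, int (*handler)(const char *))
{
    if (!handler(opt))
        printf("Unknown kernel command line parameters \"%s\"\n", opt);
}

int main(void)
{
    dispatch("hugepages=0:1024,1:1024", hugepages_handler_old);
    dispatch("hugepages=0:1024,1:1024", hugepages_handler_new);
    return 0;
}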
