Message-ID: <f2bb2878-0584-6774-8e69-162a9ec68728@oracle.com>
Date:   Mon, 13 Apr 2020 11:33:24 -0700
From:   Mike Kravetz <mike.kravetz@...cle.com>
To:     Nitesh Narayan Lal <nitesh@...hat.com>,
        "Longpeng (Mike)" <longpeng2@...wei.com>
Cc:     arei.gonglei@...wei.com, huangzhichao@...wei.com,
        Matthew Wilcox <willy@...radead.org>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Qian Cai <cai@....pw>, linux-kernel@...r.kernel.org,
        linux-mm@...ck.org
Subject: Re: [PATCH] mm/hugetlb: avoid weird message in hugetlb_init

On 4/10/20 8:47 AM, Nitesh Narayan Lal wrote:
> Hi Mike,
> 
> On platforms that support multiple huge page sizes, when 'hugepagesz' is not
> specified before 'hugepages=', huge pages are not allocated (for example, if
> we are requesting 1GB hugepages).

Hi Nitesh,

This should only be an issue with gigantic huge pages.  This is because
hugepages=X not following a hugepagesz=Y specifies the number of huge pages
of default size to allocate.  It does not currently work for gigantic pages.
In the other thread, I provided this explanation as to why:
It comes about because we do not definitively set the default huge page size
until after command line processing (in hugetlb_init).  And, we must
preallocate gigantic huge pages during command line processing because that
is when the bootmem allocator is available.

I will be looking into modifying this behavior to allocate the pages as
expected, even for gigantic pages.
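For illustration only, the ordering dependence described above can be sketched in Python (this is a hypothetical model, not kernel code; the names `process_cmdline` and `pending_default` are invented for the sketch). Parameters are handled left to right; a 'hugepages=' that follows a 'hugepagesz=' can be satisfied immediately while bootmem is still available, whereas a default-size request is deferred until hugetlb_init(), which is too late for gigantic pages:

```python
GIGANTIC = 1 << 30  # 1GB; gigantic pages need the bootmem allocator


def process_cmdline(params, default_size):
    """Toy model of left-to-right huge page command line processing.

    params is a list of (key, value) pairs in command-line order;
    default_size stands in for the default huge page size, which the
    kernel only settles on after command line processing.
    """
    pools = {}            # page size -> pages actually allocated
    pending_default = 0   # default-size request, deferred
    current_size = None   # set by a preceding hugepagesz=

    for key, val in params:
        if key == "hugepagesz":
            current_size = val
        elif key == "hugepages":
            if current_size is not None:
                # Size is known at parse time: allocate now, while
                # bootmem is still available, so gigantic sizes work.
                pools[current_size] = val
            else:
                # No preceding hugepagesz=: applies to the default
                # size, which is not definitively known yet.
                pending_default = val

    # Later, in hugetlb_init(): the default size is finally fixed.
    if pending_default:
        if default_size >= GIGANTIC:
            pass  # too late: bootmem is gone, gigantic alloc fails
        else:
            pools[default_size] = pending_default
    return pools
```

With this model, `hugepagesz=1G hugepages=2` yields an allocated pool, while a bare `hugepages=2` with a 1GB default yields nothing, matching the behavior Nitesh reported; a 2MB default still works because normal-size pages do not need bootmem.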

> In terms of reporting, meminfo and /sys/kernel/../nr_hugepages report the
> expected results, but if we use sysctl vm.nr_hugepages then it reports a
> non-zero value, as it reads max_huge_pages from the default hstate instead
> of nr_huge_pages.
> AFAIK, nr_huge_pages is the one that indicates the number of huge pages
> that were successfully allocated.
> 
> Is vm.nr_hugepages expected to report the maximum number of hugepages? If
> so, would it not make sense to rename the procname?
> 
> However, if we expect nr_hugepages to report the number of successfully
> allocated hugepages then we should use nr_huge_pages in
> hugetlb_sysctl_handler_common().

This looks like a bug.  Neither sysctl nor the /proc file should be reporting
a non-zero value if huge pages do not exist.
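The mismatch Nitesh describes can be sketched as follows (again a hypothetical Python model, not the actual kernel code; the class and function names are invented for the sketch). The hstate tracks both the requested maximum and the count actually allocated, and the bug is which of the two the sysctl handler hands back:

```python
class HState:
    """Toy stand-in for the kernel's per-size hstate bookkeeping."""

    def __init__(self, requested, allocated):
        self.max_huge_pages = requested  # what the user asked for
        self.nr_huge_pages = allocated   # what actually exists


def sysctl_nr_hugepages_reported(h):
    # Models the behavior described for
    # hugetlb_sysctl_handler_common(): it reads max_huge_pages,
    # so vm.nr_hugepages can be non-zero even when allocation failed.
    return h.max_huge_pages


def sysctl_nr_hugepages_expected(h):
    # What meminfo and the sysfs nr_hugepages file report instead:
    # the number of huge pages that were successfully allocated.
    return h.nr_huge_pages


# 2 gigantic pages requested on the command line, none allocated:
h = HState(requested=2, allocated=0)
print(sysctl_nr_hugepages_reported(h))  # 2 (misleading)
print(sysctl_nr_hugepages_expected(h))  # 0
```

In this model the reported and expected values disagree exactly when allocation fails, which is why only the failed-gigantic-page case exposes the discrepancy.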
-- 
Mike Kravetz
