Message-ID: <0b8d283f-dd31-b980-5d53-4bbca4014da7@oracle.com>
Date:   Tue, 14 Apr 2020 21:03:53 -0700
From:   Mike Kravetz <mike.kravetz@...cle.com>
To:     Nitesh Narayan Lal <nitesh@...hat.com>,
        "Longpeng (Mike)" <longpeng2@...wei.com>
Cc:     arei.gonglei@...wei.com, huangzhichao@...wei.com,
        Matthew Wilcox <willy@...radead.org>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Qian Cai <cai@....pw>, linux-kernel@...r.kernel.org,
        linux-mm@...ck.org
Subject: Re: [PATCH] mm/hugetlb: avoid weird message in hugetlb_init

On 4/13/20 2:21 PM, Nitesh Narayan Lal wrote:
> 
> On 4/13/20 2:33 PM, Mike Kravetz wrote:
>> On 4/10/20 8:47 AM, Nitesh Narayan Lal wrote:
>>> Hi Mike,
>>>
>>> On platforms that support multiple huge page sizes, when 'hugepagesz' is not
>>> specified before 'hugepages=', hugepages are not allocated (for example,
>>> when we are requesting 1GB hugepages).
>> Hi Nitesh,
>>
>> This should only be an issue with gigantic huge pages.  This is because
>> hugepages=X not following a hugepagesz=Y specifies the number of huge pages
>> of default size to allocate.  It does not currently work for gigantic pages.
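>> For example, 'hugepagesz=1G hugepages=2' requests two 1GB pages at boot, while
>> a bare 'hugepages=2' requests two pages of the default size.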
> 
> I see; since we changed the default huge pages to gigantic pages and we missed
> 'hugepagesz=', no pages of any type were allocated.
> 
>> In the other thread, I provided this explanation as to why:
>> It comes about because we do not definitively set the default huge page size
>> until after command line processing (in hugetlb_init).  And, we must
>> preallocate gigantic huge pages during command line processing because that
>> is when the bootmem allocator is available.
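>> (So, for example, a setup along the lines of 'default_hugepagesz=1G hugepages=16'
>> currently ends up allocating nothing at boot.)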
>>
>> I will be looking into modifying this behavior to allocate the pages as
>> expected, even for gigantic pages.
> 
> Nice, looking forward to it.
> 
>>
>>> In terms of reporting, meminfo and /sys/kernel/../nr_hugepages report the
>>> expected results, but if we use sysctl vm.nr_hugepages then it reports a
>>> non-zero value, as it reads max_huge_pages from the default hstate instead of
>>> nr_huge_pages.
>>> AFAIK nr_huge_pages is the one that indicates the number of huge pages that are
>>> successfully allocated.
>>>
>>> Is vm.nr_hugepages expected to report the maximum number of hugepages? If so,
>>> would it not make sense to rename the procname?
>>>
>>> However, if we expect nr_hugepages to report the number of successfully
>>> allocated hugepages then we should use nr_huge_pages in
>>> hugetlb_sysctl_handler_common().
>> This looks like a bug.  Neither the sysctl nor the /proc file should be
>> reporting a non-zero value if huge pages do not exist.
> 
> Yeap, as I mentioned, it reports max_huge_pages instead of nr_huge_pages.
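
To restate the reported behavior in code form, here is a minimal standalone
model of the two counters (this is not the actual kernel source; only the field
names max_huge_pages and nr_huge_pages come from struct hstate, and the values
are made up for illustration):

#include <stdio.h>

/* Simplified model of the two hstate counters discussed above. */
struct hstate_model {
	unsigned long max_huge_pages;	/* pages requested via hugepages= */
	unsigned long nr_huge_pages;	/* pages actually allocated */
};

int main(void)
{
	/* Boot requested 2 gigantic pages, but the allocation never happened. */
	struct hstate_model default_hstate = {
		.max_huge_pages = 2,
		.nr_huge_pages  = 0,
	};

	/* What sysctl vm.nr_hugepages reports today, per the report above. */
	printf("sysctl vm.nr_hugepages:     %lu\n", default_hstate.max_huge_pages);

	/* What /proc/meminfo (HugePages_Total) and sysfs report. */
	printf("meminfo/sysfs nr_hugepages: %lu\n", default_hstate.nr_huge_pages);

	return 0;
}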

Does this only happen when you specify gigantic pages as the default huge
page size and they are not allocated at boot time?  Or, are there other
situations where this happens?  If so, can you provide a sample of the
boot parameters used, or how to recreate it?

I am fixing up the issue with gigantic pages, and suspect this will take
care of all the issues you are seeing.  This will be part of the command line
cleanup series.  Just want to make sure I am not missing something.
-- 
Mike Kravetz
