Message-ID: <e48a5899-7139-ba76-46e9-76bda4a7ab78@linux.alibaba.com>
Date:   Fri, 19 Apr 2019 09:28:21 -0700
From:   Yang Shi <yang.shi@...ux.alibaba.com>
To:     Mel Gorman <mgorman@...hsingularity.net>
Cc:     Michal Hocko <mhocko@...nel.org>,
        Andrea Arcangeli <aarcange@...hat.com>,
        "Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>,
        "linux-mm@...ck.org" <linux-mm@...ck.org>,
        linux-kernel <linux-kernel@...r.kernel.org>
Subject: Re: [QUESTIONS] THP allocation in NUMA fault migration path



On 4/19/19 4:13 AM, Mel Gorman wrote:
> On Thu, Apr 18, 2019 at 09:18:15AM -0700, Yang Shi wrote:
>>
>> On 4/17/19 11:32 PM, Michal Hocko wrote:
>>> On Wed 17-04-19 21:15:41, Yang Shi wrote:
>>>> Hi folks,
>>>>
>>>>
>>>> I noticed that there might be a new THP allocation in the NUMA fault
>>>> migration path (migrate_misplaced_transhuge_page()) even when THP is
>>>> disabled (set to "never"). When THP is set to "never", there should not
>>>> be any new THP allocation, but the migration path is kind of special, so
>>>> I'm not quite sure whether this is the expected behavior.
>>>>
>>>>
>>>> And it looks like this allocation disregards the defrag setting too; is
>>>> this expected behavior as well?
>>> Could you point to the specific code? But in general the migration path
>> Yes. The code is in migrate_misplaced_transhuge_page() called by
>> do_huge_pmd_numa_page().
>>
>> It would just do:
>> alloc_pages_node(node, (GFP_TRANSHUGE_LIGHT | __GFP_THISNODE),
>> HPAGE_PMD_ORDER);
>> without checking if transparent_hugepage is enabled or not.
>>
>> THP may be disabled before calling into do_huge_pmd_numa_page().
>> do_huge_pmd_wp_page() does check whether THP is disabled; if it is, it
>> just tries to allocate 512 base pages instead.
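
For concreteness, the allocation site in question looks roughly like this
in v5.x mm/migrate.c (paraphrased from memory, not a verbatim copy; the
notes in the comments are mine):

	/* migrate_misplaced_transhuge_page(), ~v5.1, simplified */
	new_page = alloc_pages_node(node,
				    (GFP_TRANSHUGE_LIGHT | __GFP_THISNODE),
				    HPAGE_PMD_ORDER);
	if (!new_page)
		goto out_fail;	/* no fallback: the misplaced THP stays put */
	prep_transhuge_page(new_page);
	/*
	 * No __transparent_hugepage_enabled()-style check precedes the
	 * allocation, so "never" does not stop it, and the fixed GFP mask
	 * means the defrag setting is never consulted.
	 */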
>>
>>> should allocate the memory matching the migration origin. If the origin
>>> was a THP then I find it quite natural if the target was a huge page as
>> Yes, this is what I would like to confirm. Migration allocates a new THP to
>> replace the old one.
>>
>>> well. How hard the allocation should try is another question and I
>>> suspect we do want to obey the defrag setting.
>> Yes, I thought so too. However, THP NUMA migration was added in 3.8 by
>> commit b32967f ("mm: numa: Add THP migration for the NUMA working set
>> scanning fault case."). It has disregarded the defrag setting from the
>> very beginning, so I'm not quite sure whether that was done on purpose or
>> was simply an oversight.
>>
> It was on purpose as migration due to NUMA misplacement was not intended
> to change the type of page used. It would be impossible to tell in advance
> if locality was more important than the page size from a performance point
> of view. This is particularly relevant if the workload is virtualised and
> there is an expectation that huge pages are preserved. I'm not aware of
> any bug reports complaining that THP migration caused excessive stalls.
> It could be altered, of course, but it would be preferable to have an
> example workload demonstrating the problem before making that decision.

Thanks a lot for elaborating on the reasoning. I haven't run into any
problem at the moment; I just didn't understand the thinking behind the
choice, since other page fault paths (e.g. the write-protect path) do
allocate huge pages more aggressively.
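
For comparison, the write-protect fault path does something like the
following (paraphrased and heavily simplified from v5.x mm/huge_memory.c;
exact helper names and signatures vary between releases):

	/* do_huge_pmd_wp_page(), ~v5.1, heavily simplified */
	if (__transparent_hugepage_enabled(vma)) {
		/* honors "never"/"madvise" and the defrag setting */
		huge_gfp = alloc_hugepage_direct_gfpmask(vma);
		new_page = alloc_hugepage_vma(huge_gfp, vma, haddr,
					      HPAGE_PMD_ORDER);
	} else {
		new_page = NULL;	/* fall back to base pages */
	}

So the wp path consults both the "enabled" and "defrag" knobs on every
allocation, while the NUMA migration path hardcodes
GFP_TRANSHUGE_LIGHT | __GFP_THISNODE unconditionally.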

