Message-ID: <dc8ecc32-54c8-4931-9a13-64a3c3b41d35@linux.alibaba.com>
Date: Thu, 21 Mar 2024 15:12:43 +0800
From: Baolin Wang <baolin.wang@...ux.alibaba.com>
To: "Huang, Ying" <ying.huang@...el.com>
Cc: akpm@...ux-foundation.org, david@...hat.com, mgorman@...hsingularity.net,
wangkefeng.wang@...wei.com, jhubbard@...dia.com, 21cnbao@...il.com,
ryan.roberts@....com, linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [RFC PATCH v2] mm: support multi-size THP numa balancing

(sorry for late reply)

On 2024/3/19 15:26, Huang, Ying wrote:
> Baolin Wang <baolin.wang@...ux.alibaba.com> writes:
>
>> On 2024/3/18 14:16, Huang, Ying wrote:
>>> Baolin Wang <baolin.wang@...ux.alibaba.com> writes:
>>>
>>>> Anonymous page allocation already supports multi-size THP (mTHP), but
>>>> numa balancing still prohibits mTHP migration even for an exclusive
>>>> mapping, which is unreasonable. Thus let's support numa balancing for
>>>> exclusively mapped mTHP as a first step.
>>>>
>>>> Allow scanning mTHP:
>>>> Commit 859d4adc3415 ("mm: numa: do not trap faults on shared data section
>>>> pages") skips shared CoW pages' NUMA page migration to avoid shared data
>>>> segment migration. In addition, commit 80d47f5de5e3 ("mm: don't try to
>>>> NUMA-migrate COW pages that have other uses") changed to use page_count()
>>>> to avoid migrating GUP pages, which will also skip mTHP numa scanning.
>>>> Theoretically, we could use folio_maybe_dma_pinned() to detect the GUP
>>>> issue; although there is still a GUP race, that issue seems to have been
>>>> resolved by commit 80d47f5de5e3. Meanwhile, use folio_estimated_sharers()
>>>> to skip shared CoW pages, though this is not a precise sharers count. To
>>>> check if the folio is shared, ideally we want to make sure every page is
>>>> mapped by the same process, but doing that seems expensive, and using
>>>> the estimated mapcount seems to work when running the autonuma benchmark.
>>>>
>>>> Allow migrating mTHP:
>>>> As mentioned in the previous thread[1], large folios are more susceptible
>>>> to false sharing issues, leading to pages ping-ponging back and forth during
>>>> numa balancing, which is currently hard to resolve. Therefore, as a start to
>>>> support mTHP numa balancing, only exclusive mappings are allowed to perform
>>>> numa migration to avoid the false sharing issues with large folios. Similarly,
>>>> use the estimated mapcount to skip shared mappings, which seems to work
>>>> in most cases (?), and we've already used folio_estimated_sharers() to
>>>> skip shared mappings in migrate_misplaced_folio() for numa balancing,
>>>> with no real complaints so far.
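
As a side note, the gate described above boils down to something like the
sketch below. This is a simplified illustration rather than the exact diff
(the helper name is made up), and since folio_estimated_sharers() only
samples the first page's mapcount, "exclusive" here is an estimate, as said
above:

	static bool mthp_numa_migration_ok(struct folio *folio)
	{
		/* Skip shared CoW mappings to avoid false-sharing ping-pong. */
		if (folio_estimated_sharers(folio) != 1)
			return false;

		/* Pinned (GUP) folios must not be migrated. */
		if (folio_maybe_dma_pinned(folio))
			return false;

		return true;
	}
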
>>> IIUC, folio_estimated_sharers() cannot identify multi-thread
>>> applications.
>>
>> Right.
>>
>>> If some mTHP is shared by multiple threads in one process, how to deal
>>> with that?
>>
>> IMHO, it seems should_numa_migrate_memory() already does something to help?
>>
>> ......
>> 	if (!cpupid_pid_unset(last_cpupid) &&
>> 	    cpupid_to_nid(last_cpupid) != dst_nid)
>> 		return false;
>>
>> 	/* Always allow migrate on private faults */
>> 	if (cpupid_match_pid(p, last_cpupid))
>> 		return true;
>> ......
>>
>> If the node of the CPU that accessed the mTHP last time is different
>> from the current one, that means there is some contention for that mTHP
>> among threads, so migration will not be allowed.
>
> Yes. The two-stage filter in should_numa_migrate_memory() helps to some
> degree.
>
> But the situation is somewhat different after your change. Previously,
> in one round of NUMA balancing page table scanning, the number of hint
> page faults for one process and one folio is 1. After your change, the
> number may become folio_nr_pages().

Yes, this follows the same strategy as THP.

> So we need to evaluate the original algorithm in the new situation and
> revise it. For example, use an N-stage filter for mTHP.

Yes, let me try whether an N-stage filter for mTHP can be helpful.
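
As a very rough, purely hypothetical sketch of that direction (none of these
helpers exist today; the names, the per-folio bookkeeping and the threshold
are made up for illustration), the idea would be to only allow migration once
N consecutive hint faults have come from the same node:

	/* Imaginary per-folio bookkeeping, for illustration only. */
	struct mthp_fault_state {
		int last_nid;
		unsigned int same_nid_faults;
	};

	static bool mthp_n_stage_filter(struct mthp_fault_state *st,
					int dst_nid, unsigned int nr_stages)
	{
		if (st->last_nid != dst_nid) {
			/* Access moved to another node: restart the streak. */
			st->last_nid = dst_nid;
			st->same_nid_faults = 1;
			return false;
		}

		/* Migrate only after the same node was seen nr_stages times. */
		return ++st->same_nid_faults >= nr_stages;
	}

This is only meant to absorb the extra per-PTE hint faults a PTE-mapped
large folio now generates; where to store such state and how to pick
nr_stages would need the testing you mentioned.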
>
> Anyway, the NUMA balancing algorithm adjustment needs to be based on
> test results.
>
> Another possibility is to emulate the original behavior as much as
> possible to try to reuse the original algorithm. For example, we can
> restore all PTE maps upon the first hint page fault of a folio. Then,
> the behavior is almost the same as the original PMD-mapped THP. Personally,
> I prefer to use this as the first step. Then, try to adjust the
> algorithm to take advantage of more information available.

OK, sounds reasonable, I will try.
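
If I understand the suggestion correctly, it would look roughly like the
sketch below (not a real patch: the helper name is made up, write/dirty bit
handling and the fault-handler plumbing are omitted, and the caller is
assumed to hold the PTE lock). On the first hint fault of a large folio,
restore every remaining PROT_NONE PTE of that folio so it only contributes
one hint fault per scan round, like a PMD-mapped THP:

	static void numa_restore_folio_ptes(struct vm_area_struct *vma,
					    unsigned long start_addr,
					    pte_t *start_pte,
					    struct folio *folio)
	{
		unsigned long addr = start_addr;
		pte_t *pte = start_pte;
		long i;

		for (i = 0; i < folio_nr_pages(folio); i++, pte++, addr += PAGE_SIZE) {
			pte_t entry = ptep_get(pte);

			/* Only the NUMA-hinting (PROT_NONE) entries need fixing. */
			if (!pte_protnone(entry))
				continue;

			/* Rebuild a present PTE with the VMA's normal protection. */
			entry = pte_modify(entry, vma->vm_page_prot);
			set_pte_at(vma->vm_mm, addr, pte, entry);
			update_mmu_cache(vma, addr, pte);
		}
	}

Then I can check whether the existing should_numa_migrate_memory() heuristics
behave sensibly without further changes.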
>>
>> If the contention for the mTHP among threads is light or the access
>> pattern is relatively stable, then we can allow migration?
>>
>>> For example, I think that we should avoid migrating on the first fault
>>> for mTHP in should_numa_migrate_memory().
>
> I am referring to the following code in should_numa_migrate_memory().
>
> 	/*
> 	 * Allow first faults or private faults to migrate immediately early in
> 	 * the lifetime of a task. The magic number 4 is based on waiting for
> 	 * two full passes of the "multi-stage node selection" test that is
> 	 * executed below.
> 	 */
> 	if ((p->numa_preferred_nid == NUMA_NO_NODE || p->numa_scan_seq <= 4) &&
> 	    (cpupid_pid_unset(last_cpupid) || cpupid_match_pid(p, last_cpupid)))
> 		return true;
>
> But, after thinking about this again, I realized that the original
> PMD-mapped THP may be migrated on the first fault sometimes. So, this
> isn't a new problem. We may "optimize" it, but it needn't be part of
> this series.

Makes sense. Thanks for your input.