Message-ID: <87f9ae34-91af-4df3-b4be-5ec4e0eb9982@linux.alibaba.com>
Date: Thu, 28 Mar 2024 22:18:28 +0800
From: Baolin Wang <baolin.wang@...ux.alibaba.com>
To: David Hildenbrand <david@...hat.com>, akpm@...ux-foundation.org
Cc: mgorman@...hsingularity.net, wangkefeng.wang@...wei.com,
jhubbard@...dia.com, ying.huang@...el.com, 21cnbao@...il.com,
ryan.roberts@....com, linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 2/2] mm: support multi-size THP numa balancing
On 2024/3/28 20:25, David Hildenbrand wrote:
> On 28.03.24 13:07, David Hildenbrand wrote:
>> On 28.03.24 12:34, Baolin Wang wrote:
>>>
>>>
>>> On 2024/3/28 17:25, David Hildenbrand wrote:
>>>> On 26.03.24 12:51, Baolin Wang wrote:
>>>>> Now the anonymous page allocation already supports multi-size THP
>>>>> (mTHP), but numa balancing still prohibits mTHP migration even
>>>>> though it is an exclusive mapping, which is unreasonable.
>>>>>
>>>>> Allow scanning mTHP:
>>>>> Commit 859d4adc3415 ("mm: numa: do not trap faults on shared data
>>>>> section pages") skips NUMA migration of shared CoW pages to avoid
>>>>> migrating shared data segments. In addition, commit 80d47f5de5e3
>>>>> ("mm: don't try to NUMA-migrate COW pages that have other uses")
>>>>> changed to use page_count() to avoid migrating GUP pages, which
>>>>> also skips mTHP numa scanning. Theoretically we can use
>>>>> folio_maybe_dma_pinned() to detect the GUP issue; although a GUP
>>>>> race still exists, that issue seems to have been resolved by
>>>>> commit 80d47f5de5e3. Meanwhile, use folio_likely_mapped_shared()
>>>>> to skip shared CoW pages, even though it is not a precise sharers
>>>>> count. To check whether the folio is shared, ideally we would want
>>>>> to make sure every page is mapped by the same process, but doing
>>>>> that seems expensive, and using the estimated mapcount seems to
>>>>> work when running the autonuma benchmark.
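
A minimal sketch of the kind of skip check described above (illustrative
only, not the exact hunk from this patch; the helper name is made up):

/* Illustrative helper, not from the patch itself. */
static bool numa_skip_folio(struct folio *folio)
{
        /* Shared CoW folios: an estimate of sharers, not a precise count. */
        if (folio_likely_mapped_shared(folio))
                return true;

        /* Folios that may be GUP-pinned: a racy check, as noted above. */
        if (folio_maybe_dma_pinned(folio))
                return true;

        return false;
}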
>>>>>
>>>>> Allow migrating mTHP:
>>>>> As mentioned in the previous thread[1], large folios (including
>>>>> THP) are more susceptible to false sharing among threads than 4K
>>>>> base pages, leading to pages ping-ponging back and forth during
>>>>> numa balancing, which is currently not easy to resolve. Therefore,
>>>>> as a start to support mTHP numa balancing, we can follow the
>>>>> PMD-mapped THP's strategy, that is, reuse the 2-stage filter in
>>>>> should_numa_migrate_memory() to check whether the mTHP is being
>>>>> heavily contended among threads (by checking the CPU id and pid of
>>>>> the last access) and thereby avoid false sharing to some degree.
>>>>> Thus, we can restore all PTE maps upon the first hint page fault of
>>>>> a large folio, following the PMD-mapped THP's strategy. In the
>>>>> future, we can continue to optimize the NUMA balancing algorithm to
>>>>> avoid the false-sharing issue with large folios as much as
>>>>> possible.
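
To make the reuse of the 2-stage filter concrete, here is a heavily
simplified illustration of the idea (the real decision is made by
should_numa_migrate_memory(), which also considers the scan sequence and
node placement; the helper name below is made up):

/* Simplified illustration only; not the actual kernel logic. */
static bool mthp_same_accessor_twice(struct folio *folio, int dst_cpu)
{
        /* Encode the current CPU and PID, and fetch the previous accessor. */
        int this_cpupid = cpu_pid_to_cpupid(dst_cpu, current->pid);
        int last_cpupid = folio_xchg_last_cpupid(folio, this_cpupid);

        /* First hint fault on this folio: just record the accessor. */
        if (cpupid_pid_unset(last_cpupid))
                return false;

        /* Migrate only if the same task hits it again from the same CPU. */
        return last_cpupid == this_cpupid;
}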
>>>>>
>>>>> Performance data:
>>>>> Machine environment: 2 nodes, 128 cores Intel(R) Xeon(R) Platinum
>>>>> Base: 2024-03-25 mm-unstable branch
>>>>> Enable mTHP to run autonuma-benchmark
>>>>>
>>>>> mTHP:16K
>>>>>                       Base     Patched
>>>>> numa01              224.70      137.23
>>>>> numa01_THREAD_ALLOC 118.05       50.57
>>>>> numa02               13.45        9.30
>>>>> numa02_SMT           14.80        7.43
>>>>>
>>>>> mTHP:64K
>>>>>                       Base     Patched
>>>>> numa01              216.15      135.20
>>>>> numa01_THREAD_ALLOC 115.35       46.93
>>>>> numa02               13.24        9.24
>>>>> numa02_SMT           14.67        7.31
>>>>>
>>>>> mTHP:128K
>>>>>                       Base     Patched
>>>>> numa01              205.13      140.41
>>>>> numa01_THREAD_ALLOC 112.93       44.78
>>>>> numa02               13.16        9.19
>>>>> numa02_SMT           14.81        7.39
>>>>>
>>>>> [1] https://lore.kernel.org/all/20231117100745.fnpijbk4xgmals3k@techsingularity.net/
>>>>> Signed-off-by: Baolin Wang <baolin.wang@...ux.alibaba.com>
>>>>> ---
>>>>>   mm/memory.c   | 56 +++++++++++++++++++++++++++++++++++++++++++++++--------
>>>>>   mm/mprotect.c |  3 ++-
>>>>>   2 files changed, 50 insertions(+), 9 deletions(-)
>>>>>
>>>>> diff --git a/mm/memory.c b/mm/memory.c
>>>>> index c30fb4b95e15..36191a9c799c 100644
>>>>> --- a/mm/memory.c
>>>>> +++ b/mm/memory.c
>>>>> @@ -5068,16 +5068,55 @@ static void numa_rebuild_single_mapping(struct vm_fault *vmf, struct vm_area_str
>>>>>          update_mmu_cache_range(vmf, vma, vmf->address, vmf->pte, 1);
>>>>>  }
>>>>> +static void numa_rebuild_large_mapping(struct vm_fault *vmf, struct vm_area_struct *vma,
>>>>> +                                       struct folio *folio, pte_t fault_pte, bool ignore_writable)
>>>>> +{
>>>>> +        int nr = pte_pfn(fault_pte) - folio_pfn(folio);
>>>>> +        unsigned long start = max(vmf->address - nr * PAGE_SIZE, vma->vm_start);
>>>>> +        unsigned long end = min(start + folio_nr_pages(folio) * PAGE_SIZE, vma->vm_end);
>>>>> +        pte_t *start_ptep = vmf->pte - (vmf->address - start) / PAGE_SIZE;
>>>>> +        bool pte_write_upgrade = vma_wants_manual_pte_write_upgrade(vma);
>>>>> +        unsigned long addr;
>>>>> +
>>>>> +        /* Restore all PTEs' mapping of the large folio */
>>>>> +        for (addr = start; addr != end; start_ptep++, addr += PAGE_SIZE) {
>>>>> +                pte_t pte, old_pte;
>>>>> +                pte_t ptent = ptep_get(start_ptep);
>>>>> +                bool writable = false;
>>>>> +
>>>>> +                if (!pte_present(ptent) || !pte_protnone(ptent))
>>>>> +                        continue;
>>>>> +
>>>>> +                if (vm_normal_folio(vma, addr, ptent) != folio)
>>>>> +                        continue;
>>>>> +
>>>>>
>>>>
>>>> Should you be using folio_pte_batch() in the caller to collect all
>>>> applicable PTEs and then only have a function that batch-changes a
>>>> given nr of PTEs?
>>>>
>>>> (just like we are now batching other stuff)
>>>
>>> It seems folio_pte_batch() is not suitable for numa balancing, since we
>>> do not care about the other PTE bits, only about the protnone bits. And
>>
>> You should be able to ignore most bits we care about; which case are you
>> concerned that folio_pte_batch() would miss? Hand-crafting our own
>> functions to cover some corner cases nobody cares about is likely a bad
>> idea.
>
> Note that the reason why I am asking is that folio_pte_batch() can
> optimize out repeated ptep_get() with cont-ptes.
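
If I follow the suggestion, the caller would look roughly like the sketch
below (assumptions on my side: the folio_pte_batch() signature currently
in mm/internal.h, and that ignoring the dirty/soft-dirty bits is fine for
the NUMA restore path):

        /* Sketch only: reuses addr/end/start_ptep/ptent from the loop above. */
        int max_nr = (end - addr) >> PAGE_SHIFT;
        const fpb_t flags = FPB_IGNORE_DIRTY | FPB_IGNORE_SOFT_DIRTY;
        int nr;

        /* Collect the run of PTEs that still map this large folio ... */
        nr = folio_pte_batch(folio, addr, start_ptep, ptent, max_nr,
                             flags, NULL);
        /* ... then restore 'nr' PTE mappings in one batched call. */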
That said, IIUC the protnone PTEs will not have the cont-pte bit set.
Another concern is that the protnone PTEs of the large folio might not be
contiguous. For example, if a middle section of the large folio has been
zapped, we would still like to restore all the protnone PTE mappings for
the entire folio. However, folio_pte_batch() seems to only help identify
the initial run of contiguous protnone PTEs.
> Are you sure about that?
Sorry for the noise, I was wrong. Folio validation is needed for some
corner cases, but I can optimize the code with a simple pfn validation.
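
Roughly something like this, i.e. a pfn-range check replacing the per-PTE
vm_normal_folio() call in the loop above (just a sketch):

                unsigned long pfn = pte_pfn(ptent);

                /* Skip PTEs that no longer point into this folio. */
                if (pfn < folio_pfn(folio) ||
                    pfn >= folio_pfn(folio) + folio_nr_pages(folio))
                        continue;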