Message-ID: <5f17af68-721b-42bc-88f6-ee8fc527789d@linux.alibaba.com>
Date: Mon, 1 Apr 2024 17:47:09 +0800
From: Baolin Wang <baolin.wang@...ux.alibaba.com>
To: Kefeng Wang <wangkefeng.wang@...wei.com>, akpm@...ux-foundation.org
Cc: david@...hat.com, mgorman@...hsingularity.net, jhubbard@...dia.com,
 ying.huang@...el.com, 21cnbao@...il.com, ryan.roberts@....com,
 linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2 2/2] mm: support multi-size THP numa balancing



On 2024/4/1 11:47, Kefeng Wang wrote:
> 
> 
> On 2024/3/29 14:56, Baolin Wang wrote:
>> Anonymous page allocation already supports multi-size THP (mTHP), but
>> NUMA balancing still prohibits mTHP migration even when the folio is an
>> exclusive mapping, which is unreasonable.
>>
>> Allow scanning mTHP:
>> Commit 859d4adc3415 ("mm: numa: do not trap faults on shared data section
>> pages") skips NUMA migration of shared CoW pages to avoid migrating shared
>> data segments. In addition, commit 80d47f5de5e3 ("mm: don't try to
>> NUMA-migrate COW pages that have other uses") switched to using page_count()
>> to avoid migrating GUP pages, which also skips NUMA scanning of mTHP.
>> Theoretically, we can use folio_maybe_dma_pinned() to detect the GUP case;
>> although a GUP race still exists, that issue seems to have been resolved by
>> commit 80d47f5de5e3. Meanwhile, use folio_likely_mapped_shared() to skip
>> shared CoW pages, even though it does not give a precise sharer count. To
>> check whether a folio is shared, ideally we want to make sure every page is
>> mapped by the same process, but doing that seems expensive, and using the
>> estimated mapcount appears to work well when running the autonuma benchmark.
>>
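
(For reference, the mm/mprotect.c side of this change is not quoted in this
reply. Below is a simplified, hypothetical sketch of the scan-side skip
described above; numa_scan_should_skip_folio() is an illustrative name only,
not the actual hunk.)

/*
 * Hypothetical sketch, not the real mm/mprotect.c hunk: skip prot_numa
 * scanning of folios that may be pinned by GUP or that are likely
 * shared CoW mappings, as described above.
 */
static bool numa_scan_should_skip_folio(struct vm_area_struct *vma,
					struct folio *folio)
{
	/* Folio may be pinned by GUP; migrating it could break the pin. */
	if (folio_maybe_dma_pinned(folio))
		return true;

	/* Shared CoW folio (an estimate, not a precise sharer count). */
	if (is_cow_mapping(vma->vm_flags) &&
	    folio_likely_mapped_shared(folio))
		return true;

	return false;
}
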
>> Allow migrating mTHP:
>> As mentioned in the previous thread [1], large folios (including THP) are
>> more susceptible to false-sharing issues among threads than 4K base pages,
>> leading to pages ping-ponging back and forth during NUMA balancing, which
>> is currently not easy to resolve. Therefore, as a start for mTHP NUMA
>> balancing, we can follow the PMD-mapped THP strategy and reuse the 2-stage
>> filter in should_numa_migrate_memory() to check whether the mTHP is being
>> heavily contended among threads (by checking the CPU id and pid of the
>> last access), which avoids false sharing to some degree. Accordingly, we
>> restore all PTE mappings of a large folio upon its first hint page fault,
>> again following the PMD-mapped THP strategy. In the future, we can
>> continue to optimize the NUMA balancing algorithm to avoid the
>> false-sharing issue with large folios as much as possible.
>>
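
(For context, the 2-stage filter mentioned above lives in
should_numa_migrate_memory() in kernel/sched/fair.c. Roughly paraphrased,
and omitting several additional checks present in the real code, its core
looks like this:)

	int this_cpupid, last_cpupid;

	/* Record the cpu+pid of this hinting fault in the folio. */
	this_cpupid = cpu_pid_to_cpupid(dst_cpu, current->pid);
	last_cpupid = folio_xchg_last_cpupid(folio, this_cpupid);

	/*
	 * Require consecutive hinting faults from the destination node
	 * before migrating, which filters out folios ping-ponging between
	 * threads running on different nodes.
	 */
	if (!cpupid_pid_unset(last_cpupid) &&
	    cpupid_to_nid(last_cpupid) != dst_nid)
		return false;
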
>> Performance data:
>> Machine environment: 2 nodes, 128 cores Intel(R) Xeon(R) Platinum
>> Base: 2024-03-25 mm-unstable branch
>> Enable mTHP to run autonuma-benchmark
>>
>> mTHP:16K
>>                          Base       Patched
>> numa01                 224.70        143.48
>> numa01_THREAD_ALLOC    118.05         47.43
>> numa02                  13.45          9.29
>> numa02_SMT              14.80          7.50
>>
>> mTHP:64K
>>                          Base       Patched
>> numa01                 216.15        114.40
>> numa01_THREAD_ALLOC    115.35         47.41
>> numa02                  13.24          9.25
>> numa02_SMT              14.67          7.34
>>
>> mTHP:128K
>>                          Base       Patched
>> numa01                 205.13        144.45
>> numa01_THREAD_ALLOC    112.93         41.88
>> numa02                  13.16          9.18
>> numa02_SMT              14.81          7.49
>>
>> [1] https://lore.kernel.org/all/20231117100745.fnpijbk4xgmals3k@techsingularity.net/
>> Signed-off-by: Baolin Wang <baolin.wang@...ux.alibaba.com>
>> ---
>>   mm/memory.c   | 57 +++++++++++++++++++++++++++++++++++++++++++--------
>>   mm/mprotect.c |  3 ++-
>>   2 files changed, 51 insertions(+), 9 deletions(-)
>>
>> diff --git a/mm/memory.c b/mm/memory.c
>> index c30fb4b95e15..2aca19e4fbd8 100644
>> --- a/mm/memory.c
>> +++ b/mm/memory.c
>> @@ -5068,16 +5068,56 @@ static void numa_rebuild_single_mapping(struct vm_fault *vmf, struct vm_area_str
>>       update_mmu_cache_range(vmf, vma, vmf->address, vmf->pte, 1);
>>   }
>> +static void numa_rebuild_large_mapping(struct vm_fault *vmf, struct vm_area_struct *vma,
>> +                       struct folio *folio, pte_t fault_pte, bool ignore_writable)
>> +{
>> +    int nr = pte_pfn(fault_pte) - folio_pfn(folio);
>> +    unsigned long start = max(vmf->address - nr * PAGE_SIZE, vma->vm_start);
>> +    unsigned long end = min(vmf->address + (folio_nr_pages(folio) - nr) * PAGE_SIZE, vma->vm_end);
>> +    pte_t *start_ptep = vmf->pte - (vmf->address - start) / PAGE_SIZE;
>> +    bool pte_write_upgrade = vma_wants_manual_pte_write_upgrade(vma);
>> +    unsigned long addr;
>> +
>> +    /* Restore all PTEs' mapping of the large folio */
>> +    for (addr = start; addr != end; start_ptep++, addr += PAGE_SIZE) {
>> +        pte_t pte, old_pte;
>> +        pte_t ptent = ptep_get(start_ptep);
>> +        bool writable = false;
>> +
>> +        if (!pte_present(ptent) || !pte_protnone(ptent))
>> +            continue;
>> +
>> +        if (pfn_folio(pte_pfn(ptent)) != folio)
>> +            continue;
>> +
>> +        if (!ignore_writable) {
>> +            ptent = pte_modify(ptent, vma->vm_page_prot);
>> +            writable = pte_write(ptent);
>> +            if (!writable && pte_write_upgrade &&
>> +                can_change_pte_writable(vma, addr, ptent))
>> +                writable = true;
>> +        }
>> +
>> +        old_pte = ptep_modify_prot_start(vma, addr, start_ptep);
>> +        pte = pte_modify(old_pte, vma->vm_page_prot);
>> +        pte = pte_mkyoung(pte);
>> +        if (writable)
>> +            pte = pte_mkwrite(pte, vma);
>> +        ptep_modify_prot_commit(vma, addr, start_ptep, old_pte, pte);
>> +        update_mmu_cache_range(vmf, vma, addr, start_ptep, 1);
> 
> Maybe pass "unsigned long address, pte_t *ptep" to numa_rebuild_single_mapping(),
> then just call it here.

Yes, sounds reasonable. Will do in next version.
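
Something like the below, I assume (untested sketch, just restating the loop
body above with the suggested parameters; the names fault_addr/fault_pte are
illustrative):

static void numa_rebuild_single_mapping(struct vm_fault *vmf, struct vm_area_struct *vma,
					unsigned long fault_addr, pte_t *fault_pte,
					bool writable)
{
	pte_t pte, old_pte;

	/* Same sequence as the loop body above, now taking addr/ptep. */
	old_pte = ptep_modify_prot_start(vma, fault_addr, fault_pte);
	pte = pte_modify(old_pte, vma->vm_page_prot);
	pte = pte_mkyoung(pte);
	if (writable)
		pte = pte_mkwrite(pte, vma);
	ptep_modify_prot_commit(vma, fault_addr, fault_pte, old_pte, pte);
	update_mmu_cache_range(vmf, vma, fault_addr, fault_pte, 1);
}

with the loop then calling numa_rebuild_single_mapping(vmf, vma, addr,
start_ptep, writable).
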

>>   static vm_fault_t do_numa_page(struct vm_fault *vmf)
>>   {
>>       struct vm_area_struct *vma = vmf->vma;
>>       struct folio *folio = NULL;
>>       int nid = NUMA_NO_NODE;
>> -    bool writable = false;
>> +    bool writable = false, ignore_writable = false;
>>       int last_cpupid;
>>       int target_nid;
>>       pte_t pte, old_pte;
>> -    int flags = 0;
>> +    int flags = 0, nr_pages;
>>       /*
>>        * The pte cannot be used safely until we verify, while holding the page
>> @@ -5107,10 +5147,6 @@ static vm_fault_t do_numa_page(struct vm_fault *vmf)
>>       if (!folio || folio_is_zone_device(folio))
>>           goto out_map;
>> -    /* TODO: handle PTE-mapped THP */
>> -    if (folio_test_large(folio))
>> -        goto out_map;
>> -
>>       /*
>>        * Avoid grouping on RO pages in general. RO pages shouldn't hurt as
>>        * much anyway since they can be in shared cache state. This misses
>> @@ -5130,6 +5166,7 @@ static vm_fault_t do_numa_page(struct vm_fault *vmf)
>>           flags |= TNF_SHARED;
>>       nid = folio_nid(folio);
>> +    nr_pages = folio_nr_pages(folio);
>>       /*
>>        * For memory tiering mode, cpupid of slow memory page is used
>>        * to record page access time.  So use default value.
>> @@ -5146,6 +5183,7 @@ static vm_fault_t do_numa_page(struct vm_fault *vmf)
>>       }
>>       pte_unmap_unlock(vmf->pte, vmf->ptl);
>>       writable = false;
>> +    ignore_writable = true;
>>       /* Migrate to the requested node */
>>       if (migrate_misplaced_folio(folio, vma, target_nid)) {
>> @@ -5166,14 +5204,17 @@ static vm_fault_t do_numa_page(struct vm_fault *vmf)
>>   out:
>>       if (nid != NUMA_NO_NODE)
>> -        task_numa_fault(last_cpupid, nid, 1, flags);
>> +        task_numa_fault(last_cpupid, nid, nr_pages, flags);
>>       return 0;
>>   out_map:
>>       /*
>>        * Make it present again, depending on how arch implements
>>        * non-accessible ptes, some can allow access by kernel mode.
>>        */
>> -    numa_rebuild_single_mapping(vmf, vma, writable);
>> +    if (folio && folio_test_large(folio))
> initialize nr_pages and then call
> 
>      if (nr_pages > 1)

Umm, IMO, folio_test_large() is more readable for me.
