Message-ID: <7f96283b-11b3-49ee-9d2d-5ad977325cb0@linux.alibaba.com>
Date: Wed, 16 Apr 2025 14:32:27 +0800
From: Baolin Wang <baolin.wang@...ux.alibaba.com>
To: Dev Jain <dev.jain@....com>, akpm@...ux-foundation.org
Cc: ryan.roberts@....com, david@...hat.com, willy@...radead.org,
linux-mm@...ck.org, linux-kernel@...r.kernel.org, hughd@...gle.com,
vishal.moola@...il.com, yang@...amperecomputing.com, ziy@...dia.com
Subject: Re: [PATCH v3] mempolicy: Optimize queue_folios_pte_range by PTE
batching
On 2025/4/16 13:30, Dev Jain wrote:
> After the check for queue_folio_required(), the code only cares about the
> folio in the for loop, i.e. the PTEs are redundant. Therefore, optimize
> this loop by skipping over a PTE batch that maps the same folio.
>
> With a test program migrating pages of the calling process, which includes
> a mapped VMA of size 4GB with pte-mapped large folios of order-9, and
> migrating once back and forth between node-0 and node-1, the average
> execution time reduces from 7.5 to 4 seconds, an approximately 47% speedup.
>
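
(For context, a reproducer for this kind of test might look roughly like
the sketch below. This is my own reconstruction, not the actual test
program; it assumes a 2-node machine, 4K base pages, THP enabled, and
libnuma installed:)

	#include <numaif.h>          /* mbind(), MPOL_*; link with -lnuma */
	#include <sys/mman.h>
	#include <string.h>

	#define SIZE     (4UL << 30) /* 4GB VMA, as described above */
	#define PMD_SIZE (2UL << 20) /* one order-9 folio with 4K pages */

	int main(void)
	{
		unsigned long node0 = 1UL << 0, node1 = 1UL << 1;
		char *buf;

		buf = mmap(NULL, SIZE, PROT_READ | PROT_WRITE,
			   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
		if (buf == MAP_FAILED)
			return 1;
		madvise(buf, SIZE, MADV_HUGEPAGE);
		memset(buf, 1, SIZE);           /* fault in order-9 folios */

		/*
		 * Change the protection of one subpage per PMD region: this
		 * splits the PMD mappings (but not the folios themselves),
		 * leaving the order-9 folios pte-mapped.
		 */
		for (unsigned long off = 0; off < SIZE; off += PMD_SIZE)
			mprotect(buf + off, 4096, PROT_READ);

		/* Migrate once back and forth between node 0 and node 1. */
		mbind(buf, SIZE, MPOL_BIND, &node1, 8 * sizeof(node1),
		      MPOL_MF_MOVE);
		mbind(buf, SIZE, MPOL_BIND, &node0, 8 * sizeof(node0),
		      MPOL_MF_MOVE);
		return 0;
	}
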
> v2->v3:
> - Don't use assignment in if condition
>
> v1->v2:
> - Follow reverse xmas tree declarations
> - Don't initialize nr
> - Move folio_pte_batch() immediately after retrieving a normal folio
> - Increment nr_failed in one shot
>
> Acked-by: David Hildenbrand <david@...hat.com>
> Signed-off-by: Dev Jain <dev.jain@....com>
> ---
> mm/mempolicy.c | 12 ++++++++++--
> 1 file changed, 10 insertions(+), 2 deletions(-)
>
> diff --git a/mm/mempolicy.c b/mm/mempolicy.c
> index b28a1e6ae096..4d2dc8b63965 100644
> --- a/mm/mempolicy.c
> +++ b/mm/mempolicy.c
> @@ -566,6 +566,7 @@ static void queue_folios_pmd(pmd_t *pmd, struct mm_walk *walk)
> static int queue_folios_pte_range(pmd_t *pmd, unsigned long addr,
> unsigned long end, struct mm_walk *walk)
> {
> + const fpb_t fpb_flags = FPB_IGNORE_DIRTY | FPB_IGNORE_SOFT_DIRTY;
> struct vm_area_struct *vma = walk->vma;
> struct folio *folio;
> struct queue_pages *qp = walk->private;
> @@ -573,6 +574,7 @@ static int queue_folios_pte_range(pmd_t *pmd, unsigned long addr,
> pte_t *pte, *mapped_pte;
> pte_t ptent;
> spinlock_t *ptl;
> + int max_nr, nr;
>
> ptl = pmd_trans_huge_lock(pmd, vma);
> if (ptl) {
> @@ -586,7 +588,9 @@ static int queue_folios_pte_range(pmd_t *pmd, unsigned long addr,
> walk->action = ACTION_AGAIN;
> return 0;
> }
> - for (; addr != end; pte++, addr += PAGE_SIZE) {
> + for (; addr != end; pte += nr, addr += nr * PAGE_SIZE) {
> + max_nr = (end - addr) >> PAGE_SHIFT;
> + nr = 1;
> ptent = ptep_get(pte);
> if (pte_none(ptent))
> continue;
> @@ -598,6 +602,10 @@ static int queue_folios_pte_range(pmd_t *pmd, unsigned long addr,
> folio = vm_normal_folio(vma, addr, ptent);
> if (!folio || folio_is_zone_device(folio))
> continue;
> + if (folio_test_large(folio) && max_nr != 1)
> + nr = folio_pte_batch(folio, addr, pte, ptent,
> + max_nr, fpb_flags,
> + NULL, NULL, NULL);
> /*
> * vm_normal_folio() filters out zero pages, but there might
> * still be reserved folios to skip, perhaps in a VDSO.
> @@ -630,7 +638,7 @@ static int queue_folios_pte_range(pmd_t *pmd, unsigned long addr,
> if (!(flags & (MPOL_MF_MOVE | MPOL_MF_MOVE_ALL)) ||
> !vma_migratable(vma) ||
> !migrate_folio_add(folio, qp->pagelist, flags)) {
> - qp->nr_failed++;
> + qp->nr_failed += nr;

Sorry for chiming in late, but I am not convinced that 'qp->nr_failed'
should be increased by 'nr' when isolation fails.

From the comment of queue_pages_range():
"
* >0 - this number of misplaced folios could not be queued for moving
* (a hugetlbfs page or a transparent huge page being counted as 1).
"

That means that, if a large folio fails to be isolated, we should only add
'1' to qp->nr_failed instead of the number of pages in that large folio.
Right?
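
If so, the fix might be as simple as keeping the per-folio accounting in
that branch while still skipping over the whole batch, e.g. (untested,
just to illustrate the idea):

	if (!(flags & (MPOL_MF_MOVE | MPOL_MF_MOVE_ALL)) ||
	    !vma_migratable(vma) ||
	    !migrate_folio_add(folio, qp->pagelist, flags)) {
		/* A large folio still counts as one misplaced folio */
		qp->nr_failed++;

The 'pte += nr' batch skip would still give the speedup either way; only
the accounting would change.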