Message-ID: <ffecd4cd-32ce-4c3f-a5ff-0ced14b13f8d@lucifer.local>
Date: Tue, 29 Apr 2025 12:00:27 +0100
From: Lorenzo Stoakes <lorenzo.stoakes@...cle.com>
To: Dev Jain <dev.jain@....com>
Cc: akpm@...ux-foundation.org, ryan.roberts@....com, david@...hat.com,
willy@...radead.org, linux-mm@...ck.org, linux-kernel@...r.kernel.org,
catalin.marinas@....com, will@...nel.org, Liam.Howlett@...cle.com,
vbabka@...e.cz, jannh@...gle.com, anshuman.khandual@....com,
peterx@...hat.com, joey.gouly@....com, ioworker0@...il.com,
baohua@...nel.org, kevin.brodsky@....com, quic_zhenhuah@...cinc.com,
christophe.leroy@...roup.eu, yangyicong@...ilicon.com,
linux-arm-kernel@...ts.infradead.org, namit@...are.com,
hughd@...gle.com, yang@...amperecomputing.com, ziy@...dia.com
Subject: Re: [PATCH v2 1/7] mm: Refactor code in mprotect
For changes like this, difftastic comes in very handy :)
On Tue, Apr 29, 2025 at 10:53:30AM +0530, Dev Jain wrote:
> Reduce indentation in change_pte_range() by refactoring some of the code
> into a new function. No functional change.
>
> Signed-off-by: Dev Jain <dev.jain@....com>
Overall a big fan of the intent of this patch! This is a nice cleanup, we just
need to nail down the details.
> ---
> mm/mprotect.c | 116 +++++++++++++++++++++++++++++---------------------
> 1 file changed, 68 insertions(+), 48 deletions(-)
>
> diff --git a/mm/mprotect.c b/mm/mprotect.c
> index 88608d0dc2c2..70f59aa8c2a8 100644
> --- a/mm/mprotect.c
> +++ b/mm/mprotect.c
> @@ -83,6 +83,71 @@ bool can_change_pte_writable(struct vm_area_struct *vma, unsigned long addr,
> return pte_dirty(pte);
> }
>
> +
> +
Nit: stray extra newline.
> +static bool prot_numa_skip(struct vm_area_struct *vma, struct folio *folio,
> + int target_node)
This is a bit weird - you end up with two functions that both decide whether to
skip a PTE entry, but they're named inconsistently?
I think you say in response to a comment elsewhere that you intend to further
split things up in subsequent patches, but this kinda bugs me, subjective as it
is :)
I'd say rename prot_numa_avoid_fault() -> can_skip_prot_numa_pte()
And this to can_skip_prot_numa_folio()?
Then again, the below function does some folio stuff too, so I'm not sure
exactly what the separation is? Can you explain?
Also it'd be good to add some brief comment, something like 'the prot_numa
change-prot (cp) flag indicates that this protection change is due to NUMA
hinting - determine whether we actually have work to do or can skip this folio
entirely'.
Or equivalent in the below function.
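Roughly what I have in mind - just a sketch, the name and the exact wording are
only suggestions (assuming prot_numa here still comes from the MM_CP_PROT_NUMA
cp_flags bit):

	/*
	 * The MM_CP_PROT_NUMA change_protection() flag indicates that this
	 * protection change is due to NUMA hinting - determine whether we
	 * actually have work to do or can skip this folio entirely.
	 */
	static bool can_skip_prot_numa_folio(struct vm_area_struct *vma,
			struct folio *folio, int target_node)

And equivalently can_skip_prot_numa_pte() for the PTE-level helper below.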
> +{
> + bool toptier;
> + int nid;
> +
> + /* Also skip shared copy-on-write pages */
> + if (is_cow_mapping(vma->vm_flags) &&
> + (folio_maybe_dma_pinned(folio) ||
> + folio_maybe_mapped_shared(folio)))
> + return true;
> +
> + /*
> + * While migration can move some dirty pages,
> + * it cannot move them all from MIGRATE_ASYNC
> + * context.
> + */
> + if (folio_is_file_lru(folio) &&
> + folio_test_dirty(folio))
> + return true;
> +
> + /*
> + * Don't mess with PTEs if page is already on the node
> + * a single-threaded process is running on.
> + */
> + nid = folio_nid(folio);
> + if (target_node == nid)
> + return true;
> + toptier = node_is_toptier(nid);
> +
> + /*
> + * Skip scanning top tier node if normal numa
> + * balancing is disabled
> + */
> + if (!(sysctl_numa_balancing_mode & NUMA_BALANCING_NORMAL) &&
> + toptier)
> + return true;
> + return false;
> +}
> +
> +static bool prot_numa_avoid_fault(struct vm_area_struct *vma,
> + unsigned long addr, pte_t oldpte, int target_node)
> +{
> + struct folio *folio;
> + int ret;
> +
> + /* Avoid TLB flush if possible */
> + if (pte_protnone(oldpte))
> + return true;
> +
> + folio = vm_normal_folio(vma, addr, oldpte);
> + if (!folio || folio_is_zone_device(folio) ||
> + folio_test_ksm(folio))
> + return true;
> + ret = prot_numa_skip(vma, folio, target_node);
> + if (ret)
> + return ret;
This is a bit silly, as it just passes a boolean value straight through - surely:
if (prot_numa_skip(vma, folio, target_node))
return true;
Is better?
> + if (folio_use_access_time(folio))
> + folio_xchg_access_time(folio,
> + jiffies_to_msecs(jiffies));
Why is this here and not in prot_numa_skip() or whatever we rename it to?
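i.e. something like this (just a sketch, assuming we fold it into the tail of
the skip helper, whatever we end up calling it):

	/* Tail of prot_numa_skip(), once we know we're not skipping: */
	if (folio_use_access_time(folio))
		folio_xchg_access_time(folio, jiffies_to_msecs(jiffies));

	return false;

That way all of the folio handling lives in one place.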
> + return false;
> +}
> +
> static long change_pte_range(struct mmu_gather *tlb,
> struct vm_area_struct *vma, pmd_t *pmd, unsigned long addr,
> unsigned long end, pgprot_t newprot, unsigned long cp_flags)
> @@ -116,56 +181,11 @@ static long change_pte_range(struct mmu_gather *tlb,
> * Avoid trapping faults against the zero or KSM
> * pages. See similar comment in change_huge_pmd.
> */
> - if (prot_numa) {
> - struct folio *folio;
> - int nid;
> - bool toptier;
> -
> - /* Avoid TLB flush if possible */
> - if (pte_protnone(oldpte))
> - continue;
> -
> - folio = vm_normal_folio(vma, addr, oldpte);
> - if (!folio || folio_is_zone_device(folio) ||
> - folio_test_ksm(folio))
> - continue;
> -
> - /* Also skip shared copy-on-write pages */
> - if (is_cow_mapping(vma->vm_flags) &&
> - (folio_maybe_dma_pinned(folio) ||
> - folio_maybe_mapped_shared(folio)))
> - continue;
> -
> - /*
> - * While migration can move some dirty pages,
> - * it cannot move them all from MIGRATE_ASYNC
> - * context.
> - */
> - if (folio_is_file_lru(folio) &&
> - folio_test_dirty(folio))
> + if (prot_numa &&
> + prot_numa_avoid_fault(vma, addr,
> + oldpte, target_node))
> continue;
>
> - /*
> - * Don't mess with PTEs if page is already on the node
> - * a single-threaded process is running on.
> - */
> - nid = folio_nid(folio);
> - if (target_node == nid)
> - continue;
> - toptier = node_is_toptier(nid);
> -
> - /*
> - * Skip scanning top tier node if normal numa
> - * balancing is disabled
> - */
> - if (!(sysctl_numa_balancing_mode & NUMA_BALANCING_NORMAL) &&
> - toptier)
> - continue;
> - if (folio_use_access_time(folio))
> - folio_xchg_access_time(folio,
> - jiffies_to_msecs(jiffies));
> - }
> -
> oldpte = ptep_modify_prot_start(vma, addr, pte);
> ptent = pte_modify(oldpte, newprot);
>
> --
> 2.30.2
>