Message-ID: <586f6282-ac7e-42d2-b132-0ba067623ddc@arm.com>
Date: Tue, 7 Oct 2025 11:58:47 +0530
From: Dev Jain <dev.jain@....com>
To: Lance Yang <lance.yang@...ux.dev>, akpm@...ux-foundation.org,
david@...hat.com, lorenzo.stoakes@...cle.com
Cc: ziy@...dia.com, baolin.wang@...ux.alibaba.com, Liam.Howlett@...cle.com,
npache@...hat.com, ryan.roberts@....com, baohua@...nel.org,
ioworker0@...il.com, richard.weiyang@...il.com,
linux-kernel@...r.kernel.org, linux-mm@...ck.org
Subject: Re: [PATCH mm-new v2 3/3] mm/khugepaged: merge PTE scanning logic
into a new helper
On 06/10/25 8:13 pm, Lance Yang wrote:
> +static inline int thp_collapse_check_pte(pte_t pte, struct vm_area_struct *vma,
> + unsigned long addr, struct collapse_control *cc,
> + struct folio **foliop, int *none_or_zero, int *unmapped,
> + int *shared, int *scan_result)
Nit: I'd prefer the cc parameter to go last.
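i.e. something like this (untested, just to illustrate the ordering — keep output
parameters together and the control struct at the end):

```c
static inline int thp_collapse_check_pte(pte_t pte, struct vm_area_struct *vma,
		unsigned long addr, struct folio **foliop, int *none_or_zero,
		int *unmapped, int *shared, int *scan_result,
		struct collapse_control *cc)
```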
> +{
> + struct folio *folio = NULL;
> + struct page *page = NULL;
> +
> + if (pte_none(pte) || is_zero_pfn(pte_pfn(pte))) {
> + (*none_or_zero)++;
> + if (!userfaultfd_armed(vma) &&
> + (!cc->is_khugepaged ||
> + *none_or_zero <= khugepaged_max_ptes_none)) {
> + return PTE_CHECK_CONTINUE;
> + } else {
> + *scan_result = SCAN_EXCEED_NONE_PTE;
> + count_vm_event(THP_SCAN_EXCEED_NONE_PTE);
> + return PTE_CHECK_FAIL;
> + }
> + } else if (!pte_present(pte)) {
> + if (!unmapped) {
> + *scan_result = SCAN_PTE_NON_PRESENT;
> + return PTE_CHECK_FAIL;
> + }
> +
> + if (non_swap_entry(pte_to_swp_entry(pte))) {
> + *scan_result = SCAN_PTE_NON_PRESENT;
> + return PTE_CHECK_FAIL;
> + }
> +
> + (*unmapped)++;
> + if (!cc->is_khugepaged ||
> + *unmapped <= khugepaged_max_ptes_swap) {
> + /*
> + * Always be strict with uffd-wp enabled swap
> + * entries. Please see comment below for
> + * pte_uffd_wp().
> + */
> + if (pte_swp_uffd_wp(pte)) {
> + *scan_result = SCAN_PTE_UFFD_WP;
> + return PTE_CHECK_FAIL;
> + }
> + return PTE_CHECK_CONTINUE;
> + } else {
> + *scan_result = SCAN_EXCEED_SWAP_PTE;
> + count_vm_event(THP_SCAN_EXCEED_SWAP_PTE);
> + return PTE_CHECK_FAIL;
> + }
> + } else if (pte_uffd_wp(pte)) {
> + /*
> + * Don't collapse the page if any of the small PTEs are
> + * armed with uffd write protection. Here we can also mark
> + * the new huge pmd as write protected if any of the small
> + * ones is marked but that could bring unknown userfault
> + * messages that falls outside of the registered range.
> + * So, just be simple.
> + */
> + *scan_result = SCAN_PTE_UFFD_WP;
> + return PTE_CHECK_FAIL;
> + }
> +
> + page = vm_normal_page(vma, addr, pte);
You should use vm_normal_folio() here and drop struct page altogether — this was
also noted during the review of the mTHP collapse patchset.
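Untested, but roughly (assuming the checks that follow can operate on the folio
directly):

```c
	folio = vm_normal_folio(vma, addr, pte);
	if (unlikely(!folio)) {
		*scan_result = SCAN_PAGE_NULL;
		return PTE_CHECK_FAIL;
	}
```

That also saves the page_folio() conversion later on.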