Message-ID: <786c83e0-d69f-4fa3-a39c-94c4dfc08a20@arm.com>
Date: Mon, 30 Jun 2025 13:25:14 +0530
From: Anshuman Khandual <anshuman.khandual@....com>
To: Dev Jain <dev.jain@....com>, akpm@...ux-foundation.org, david@...hat.com
Cc: ziy@...dia.com, baolin.wang@...ux.alibaba.com,
 lorenzo.stoakes@...cle.com, Liam.Howlett@...cle.com, npache@...hat.com,
 ryan.roberts@....com, baohua@...nel.org, linux-mm@...ck.org,
 linux-kernel@...r.kernel.org
Subject: Re: [PATCH] khugepaged: Reduce race probability between migration and
 khugepaged

On 30/06/25 10:18 AM, Dev Jain wrote:
> Suppose a folio is under migration, and khugepaged is also trying to
> collapse it. collapse_pte_mapped_thp() will retrieve the folio from the
> page cache via filemap_lock_folio(), thus taking a reference on the folio
> and sleeping on the folio lock, since the lock is held by the migration
> path. Migration will then fail in
> __folio_migrate_mapping -> folio_ref_freeze. Reduce the probability of
> such a race happening (leading to migration failure) by bailing out
> if we detect a PMD is marked with a migration entry.

Could the migration be re-attempted after such a failure? It seems the
migration failure here is traded for a scan failure instead.
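
For reference, here is a minimal userspace model (not kernel code; the names,
the helper and the initial refcount of 2 are illustrative) of why the extra
reference taken via filemap_lock_folio() makes the refcount freeze in
__folio_migrate_mapping() fail:

/*
 * Userspace sketch: in the kernel, folio_ref_freeze(folio, expected)
 * only succeeds when _refcount equals the expected value, atomically
 * replacing it with 0; __folio_migrate_mapping() returns -EAGAIN when
 * that freeze fails.
 */
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

static bool ref_freeze(atomic_int *refcount, int expected)
{
	/* Succeeds only if *refcount == expected, setting it to 0. */
	int old = expected;
	return atomic_compare_exchange_strong(refcount, &old, 0);
}

int main(void)
{
	atomic_int refcount = 2;	/* e.g. pagecache ref + migration's ref */
	int expected = 2;

	/* khugepaged's filemap_lock_folio() grabs one more reference ... */
	atomic_fetch_add(&refcount, 1);

	/* ... so the freeze attempted by the migration path now fails. */
	printf("freeze %s\n",
	       ref_freeze(&refcount, expected) ? "succeeded" : "failed");
	return 0;
}

AFAICS migrate_pages() does retry on -EAGAIN up to its retry limit, so the
question above is really whether trading that retry loop for a khugepaged
scan bail-out is the better deal.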

> 
> This fixes the migration-shared-anon-thp testcase failure on Apple M3.

Could you please provide some more context on why this test case was
failing earlier and how this change fixes the problem?

> 
> Note that this is not a "fix", since it only reduces the chance of
> khugepaged interfering with migration; both kernel functionalities are
> deemed "best-effort".
> 
> Signed-off-by: Dev Jain <dev.jain@....com>
> ---
> 
> This patch was part of
> https://lore.kernel.org/all/20250625055806.82645-1-dev.jain@arm.com/
> but I have sent it separately at the suggestion of Lorenzo, and also because
> I plan to send the first two patches after David Hildenbrand's
> folio_pte_batch series gets merged.
> 
>  mm/khugepaged.c | 12 ++++++++++--
>  1 file changed, 10 insertions(+), 2 deletions(-)
> 
> diff --git a/mm/khugepaged.c b/mm/khugepaged.c
> index 1aa7ca67c756..99977bb9bf6a 100644
> --- a/mm/khugepaged.c
> +++ b/mm/khugepaged.c
> @@ -31,6 +31,7 @@ enum scan_result {
>  	SCAN_FAIL,
>  	SCAN_SUCCEED,
>  	SCAN_PMD_NULL,
> +	SCAN_PMD_MIGRATION,
>  	SCAN_PMD_NONE,
>  	SCAN_PMD_MAPPED,
>  	SCAN_EXCEED_NONE_PTE,
> @@ -941,6 +942,8 @@ static inline int check_pmd_state(pmd_t *pmd)
>  
>  	if (pmd_none(pmde))
>  		return SCAN_PMD_NONE;
> +	if (is_pmd_migration_entry(pmde))
> +		return SCAN_PMD_MIGRATION;
>  	if (!pmd_present(pmde))
>  		return SCAN_PMD_NULL;
>  	if (pmd_trans_huge(pmde))
> @@ -1502,9 +1505,12 @@ int collapse_pte_mapped_thp(struct mm_struct *mm, unsigned long addr,
>  	    !range_in_vma(vma, haddr, haddr + HPAGE_PMD_SIZE))
>  		return SCAN_VMA_CHECK;
>  
> -	/* Fast check before locking page if already PMD-mapped */
> +	/*
> +	 * Fast check before locking folio if already PMD-mapped, or if the
> +	 * folio is under migration
> +	 */
>  	result = find_pmd_or_thp_or_none(mm, haddr, &pmd);
> -	if (result == SCAN_PMD_MAPPED)
> +	if (result == SCAN_PMD_MAPPED || result == SCAN_PMD_MIGRATION)
Should a mapped PMD and a migrating PMD be treated equally while scanning?

>  		return result;
>  
>  	/*
> @@ -2716,6 +2722,7 @@ static int madvise_collapse_errno(enum scan_result r)
>  	case SCAN_PAGE_LRU:
>  	case SCAN_DEL_PAGE_LRU:
>  	case SCAN_PAGE_FILLED:
> +	case SCAN_PMD_MIGRATION:
>  		return -EAGAIN;
>  	/*
>  	 * Other: Trying again likely not to succeed / error intrinsic to
> @@ -2802,6 +2809,7 @@ int madvise_collapse(struct vm_area_struct *vma, unsigned long start,
>  			goto handle_result;
>  		/* Whitelisted set of results where continuing OK */
>  		case SCAN_PMD_NULL:
> +		case SCAN_PMD_MIGRATION:
>  		case SCAN_PTE_NON_PRESENT:
>  		case SCAN_PTE_UFFD_WP:
>  		case SCAN_PAGE_RO:
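
One more note on the check_pmd_state() hunk above, mostly for readers: the
placement of the new is_pmd_migration_entry() test before the !pmd_present()
test matters, because a PMD migration entry is itself a non-present entry
and would otherwise be reported as SCAN_PMD_NULL. A standalone model of the
intended ordering (plain userspace C with an invented pmd representation,
not the kernel's types):

/*
 * Userspace sketch of the decision order in the patched
 * check_pmd_state(): the migration-entry case must be distinguished
 * before the generic "not present" case, since it is a subset of it.
 */
#include <stdbool.h>
#include <stdio.h>

enum scan_result { SCAN_SUCCEED, SCAN_PMD_NONE, SCAN_PMD_NULL,
		   SCAN_PMD_MIGRATION, SCAN_PMD_MAPPED };

struct fake_pmd { bool none, present, migration_entry, trans_huge; };

static int check_pmd_state_model(struct fake_pmd p)
{
	if (p.none)
		return SCAN_PMD_NONE;
	if (p.migration_entry)	/* checked before presence, as in the patch */
		return SCAN_PMD_MIGRATION;
	if (!p.present)
		return SCAN_PMD_NULL;
	if (p.trans_huge)
		return SCAN_PMD_MAPPED;
	return SCAN_SUCCEED;
}

int main(void)
{
	/* A migrating PMD is non-present but carries a migration entry. */
	struct fake_pmd migrating = { .present = false, .migration_entry = true };

	printf("migrating PMD -> %d (SCAN_PMD_MIGRATION is %d)\n",
	       check_pmd_state_model(migrating), SCAN_PMD_MIGRATION);
	return 0;
}

And per the later hunks, SCAN_PMD_MIGRATION is mapped to -EAGAIN in
madvise_collapse_errno() and whitelisted in madvise_collapse(), so an
MADV_COLLAPSE caller can simply retry once migration has finished.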
