Message-ID: <50bfa46d-5fa4-4b52-a3e5-c0da419db776@arm.com>
Date: Tue, 1 Jul 2025 10:09:27 +0530
From: Dev Jain <dev.jain@....com>
To: Anshuman Khandual <anshuman.khandual@....com>,
Lorenzo Stoakes <lorenzo.stoakes@...cle.com>
Cc: akpm@...ux-foundation.org, david@...hat.com, ziy@...dia.com,
baolin.wang@...ux.alibaba.com, Liam.Howlett@...cle.com, npache@...hat.com,
ryan.roberts@....com, baohua@...nel.org, linux-mm@...ck.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH] khugepaged: Reduce race probability between migration and
khugepaged
On 01/07/25 10:00 am, Anshuman Khandual wrote:
>
> On 30/06/25 8:00 PM, Dev Jain wrote:
>> On 30/06/25 6:57 pm, Lorenzo Stoakes wrote:
>>> On Mon, Jun 30, 2025 at 10:18:37AM +0530, Dev Jain wrote:
>>>> Suppose a folio is under migration, and khugepaged is also trying to
>>>> collapse it. collapse_pte_mapped_thp() will retrieve the folio from the
>>>> page cache via filemap_lock_folio(), thus taking a reference on the folio
>>>> and sleeping on the folio lock, since the lock is held by the migration
>>>> path. Migration will then fail in
>>>> __folio_migrate_mapping -> folio_ref_freeze. Reduce the probability of
>>>> such a race happening (leading to migration failure) by bailing out
>>>> if we detect a PMD is marked with a migration entry.
>>> This is a nice find!
>>>
>>>> This fixes the migration-shared-anon-thp testcase failure on Apple M3.
>>>>
>>>> Note that this is not a "fix", since it only reduces the chance of
>>>> khugepaged interfering with migration; both kernel functionalities
>>>> are deemed "best-effort".
>>> Thanks for separating this out, appreciated!
>>>
>>>> Signed-off-by: Dev Jain <dev.jain@....com>
>>>> ---
>>>>
>>>> This patch was part of
>>>> https://lore.kernel.org/all/20250625055806.82645-1-dev.jain@arm.com/
>>>> but I have sent it separately on suggestion of Lorenzo, and also because
>>>> I plan to send the first two patches after David Hildenbrand's
>>>> folio_pte_batch series gets merged.
>>>>
>>>> mm/khugepaged.c | 12 ++++++++++--
>>>> 1 file changed, 10 insertions(+), 2 deletions(-)
>>>>
>>>> diff --git a/mm/khugepaged.c b/mm/khugepaged.c
>>>> index 1aa7ca67c756..99977bb9bf6a 100644
>>>> --- a/mm/khugepaged.c
>>>> +++ b/mm/khugepaged.c
>>>> @@ -31,6 +31,7 @@ enum scan_result {
>>>> SCAN_FAIL,
>>>> SCAN_SUCCEED,
>>>> SCAN_PMD_NULL,
>>>> + SCAN_PMD_MIGRATION,
>>>> SCAN_PMD_NONE,
>>>> SCAN_PMD_MAPPED,
>>>> SCAN_EXCEED_NONE_PTE,
>>>> @@ -941,6 +942,8 @@ static inline int check_pmd_state(pmd_t *pmd)
>>>>
>>>> if (pmd_none(pmde))
>>>> return SCAN_PMD_NONE;
>>>> + if (is_pmd_migration_entry(pmde))
>>>> + return SCAN_PMD_MIGRATION;
>>> With David's suggestions I guess this boils down to simply adding this line.
>> I think it should be
>>
>> diff --git a/mm/khugepaged.c b/mm/khugepaged.c
>> index 1aa7ca67c756..8a6ba5c8ba4d 100644
>> --- a/mm/khugepaged.c
>> +++ b/mm/khugepaged.c
>> @@ -941,10 +941,10 @@ static inline int check_pmd_state(pmd_t *pmd)
>>
>> if (pmd_none(pmde))
>> return SCAN_PMD_NONE;
>> + if (is_pmd_migration_entry(pmde) || pmd_trans_huge(pmde))
>> + return SCAN_PMD_MAPPED;
>> if (!pmd_present(pmde))
>> return SCAN_PMD_NULL;
>> - if (pmd_trans_huge(pmde))
>> - return SCAN_PMD_MAPPED;
>> if (pmd_bad(pmde))
>> return SCAN_PMD_NULL;
>> return SCAN_SUCCEED;
>>
>> Moving this check above, since a PMD migration entry is not present
>> and we don't want to exit prematurely at !pmd_present(pmde).
> Might be cleaner to just add the migration test separately before
> the pmd_present() check, without modifying the existing
> pmd_trans_huge() check.
>
> if (is_pmd_migration_entry(pmde))
> return SCAN_PMD_MAPPED;
Sounds good.
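To see the pieces together: combining the patch's SCAN_PMD_MIGRATION result with the reordering discussed above, check_pmd_state() might end up roughly like the sketch below. This is a standalone illustration, not the kernel code: struct fake_pmd and its fields are invented stand-ins for pmd_t and its predicate helpers.

```c
#include <assert.h>
#include <stdbool.h>

enum scan_result {
	SCAN_SUCCEED,
	SCAN_PMD_NULL,
	SCAN_PMD_NONE,
	SCAN_PMD_MAPPED,
	SCAN_PMD_MIGRATION,
};

/* Invented stand-in for pmd_t; each field mimics one kernel predicate. */
struct fake_pmd {
	bool none;        /* pmd_none() */
	bool migration;   /* is_pmd_migration_entry() */
	bool present;     /* pmd_present() */
	bool trans_huge;  /* pmd_trans_huge() */
	bool bad;         /* pmd_bad() */
};

/* The migration-entry test sits before the present test: a PMD
 * migration entry is not present, so testing presence first would
 * bail out with SCAN_PMD_NULL instead of reporting the migration. */
int check_pmd_state(const struct fake_pmd *pmde)
{
	if (pmde->none)
		return SCAN_PMD_NONE;
	if (pmde->migration)
		return SCAN_PMD_MIGRATION;
	if (!pmde->present)
		return SCAN_PMD_NULL;
	if (pmde->trans_huge)
		return SCAN_PMD_MAPPED;
	if (pmde->bad)
		return SCAN_PMD_NULL;
	return SCAN_SUCCEED;
}
```

With this ordering, a PMD under migration yields SCAN_PMD_MIGRATION rather than falling through to SCAN_PMD_NULL, so callers like collapse_pte_mapped_thp() can bail out early.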
>>
>>> Could we add a quick comment to explain why here?
>> Sure.
>>
>>> Thanks!
>>>
>>>> if (!pmd_present(pmde))
>>>> return SCAN_PMD_NULL;
>>>> if (pmd_trans_huge(pmde))
>>>> @@ -1502,9 +1505,12 @@ int collapse_pte_mapped_thp(struct mm_struct *mm, unsigned long addr,
>>>> !range_in_vma(vma, haddr, haddr + HPAGE_PMD_SIZE))
>>>> return SCAN_VMA_CHECK;
>>>>
>>>> - /* Fast check before locking page if already PMD-mapped */
>>>> + /*
>>>> + * Fast check before locking folio if already PMD-mapped, or if the
>>>> + * folio is under migration
>>>> + */
>>>> result = find_pmd_or_thp_or_none(mm, haddr, &pmd);
>>>> - if (result == SCAN_PMD_MAPPED)
>>>> + if (result == SCAN_PMD_MAPPED || result == SCAN_PMD_MIGRATION)
>>>> return result;
>>>>
>>>> /*
>>>> @@ -2716,6 +2722,7 @@ static int madvise_collapse_errno(enum scan_result r)
>>>> case SCAN_PAGE_LRU:
>>>> case SCAN_DEL_PAGE_LRU:
>>>> case SCAN_PAGE_FILLED:
>>>> + case SCAN_PMD_MIGRATION:
>>>> return -EAGAIN;
>>>> /*
>>>> * Other: Trying again likely not to succeed / error intrinsic to
>>>> @@ -2802,6 +2809,7 @@ int madvise_collapse(struct vm_area_struct *vma, unsigned long start,
>>>> goto handle_result;
>>>> /* Whitelisted set of results where continuing OK */
>>>> case SCAN_PMD_NULL:
>>>> + case SCAN_PMD_MIGRATION:
>>>> case SCAN_PTE_NON_PRESENT:
>>>> case SCAN_PTE_UFFD_WP:
>>>> case SCAN_PAGE_RO:
>>>> --
>>>> 2.30.2
>>>>