Message-ID: <b19653ae-8c9a-46f1-af93-3d09c3b0759e@arm.com>
Date: Tue, 27 May 2025 08:50:21 +0530
From: Dev Jain <dev.jain@....com>
To: Shivank Garg <shivankg@....com>, akpm@...ux-foundation.org,
david@...hat.com, linux-mm@...ck.org, linux-kernel@...r.kernel.org
Cc: ziy@...dia.com, baolin.wang@...ux.alibaba.com,
lorenzo.stoakes@...cle.com, Liam.Howlett@...cle.com, npache@...hat.com,
ryan.roberts@....com, fengwei.yin@...el.com, bharata@....com,
syzbot+2b99589e33edbe9475ca@...kaller.appspotmail.com
Subject: Re: [PATCH V3 1/2] mm/khugepaged: fix race with folio split/free
using temporary reference
On 26/05/25 11:58 pm, Shivank Garg wrote:
> hpage_collapse_scan_file() calls is_refcount_suitable(), which in turn
> calls folio_mapcount(). folio_mapcount() checks folio_test_large() before
> proceeding to folio_large_mapcount(), but there is a race window where the
> folio may get split/freed between these checks, triggering:
>
> VM_WARN_ON_FOLIO(!folio_test_large(folio), folio)
>
> Take a temporary reference to the folio in hpage_collapse_scan_file().
> This stabilizes the folio during the refcount check and prevents
> incorrect large-folio detection due to a concurrent split/free. Compare
> folio_ref_count() against folio_expected_ref_count() + 1 instead of
> using the is_refcount_suitable() helper.
>
> Fixes: 05c5323b2a34 ("mm: track mapcount of large folios in single value")
> Reported-by: syzbot+2b99589e33edbe9475ca@...kaller.appspotmail.com
> Closes: https://lore.kernel.org/all/6828470d.a70a0220.38f255.000c.GAE@google.com
> Suggested-by: David Hildenbrand <david@...hat.com>
> Acked-by: David Hildenbrand <david@...hat.com>
> Signed-off-by: Shivank Garg <shivankg@....com>
> ---
The patch looks fine.
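
For anyone reading along, my understanding of the change described above is
roughly the sketch below. I am paraphrasing from the commit message rather
than quoting the diff, so the actual hunk may differ (the surrounding xas
scan loop and other error paths are elided):

	/* Pin the folio so a concurrent split/free cannot change it under us. */
	if (!folio_try_get(folio)) {
		result = SCAN_PAGE_COUNT;
		break;
	}

	/*
	 * With our temporary reference held, a collapse candidate must have
	 * folio_ref_count() == folio_expected_ref_count() + 1; the +1
	 * accounts for the reference we just took.
	 */
	if (folio_ref_count(folio) != folio_expected_ref_count(folio) + 1) {
		result = SCAN_PAGE_COUNT;
		folio_put(folio);
		break;
	}

	folio_put(folio);
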
I was just wondering about the implications of this on migration. Earlier
we had a refcount race between migration and the shmem page fault path:
filemap_get_entry() takes a reference and does not release it until it can
take the folio lock, which is held by the migration path. I would like to
*think* that real workloads will *not* be faulting on pages while
simultaneously migrating them, but that is just a guess. Now, though, we
also have a kernel thread (khugepaged) taking a reference and racing
against migration. I may just be over-speculating.