Message-ID: <3eb8e1e2-5887-47ed-addc-3be664dd7053@redhat.com>
Date: Tue, 17 Jun 2025 14:08:16 +0200
From: David Hildenbrand <david@...hat.com>
To: Oscar Salvador <osalvador@...e.de>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
Muchun Song <muchun.song@...ux.dev>, James Houghton <jthoughton@...gle.com>,
Peter Xu <peterx@...hat.com>, Gavin Guo <gavinguo@...lia.com>,
linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 2/5] mm,hugetlb: Document the reason to lock the folio in
the faulting path
On 17.06.25 14:04, Oscar Salvador wrote:
> On Tue, Jun 17, 2025 at 01:27:18PM +0200, David Hildenbrand wrote:
>>> @@ -6198,6 +6198,8 @@ static vm_fault_t hugetlb_wp(struct vm_fault *vmf)
>>> * in scenarios that used to work. As a side effect, there can still
>>> * be leaks between processes, for example, with FOLL_GET users.
>>> */
>>> + if (folio_test_anon(old_folio))
>>> + folio_lock(old_folio);
>>
>> If we're holding the PTL, this won't work. You'd have to unlock the PTL,
>> lock the folio, retake the PTL, and re-check pte_same.
>
> Why so?
>
> hugetlb_no_page() has already checked pte_same under PTL, then mapped the page
> and called hugetlb_wp().
>
> hugetlb_no_page
> vmf->ptl = huge_pte_lock()
> pte_same
> set_huge_pte_at
> hugetlb_wp
>
> and in hugetlb_wp() we're still holding the PTL.
> Why do we have to release PTL in order to lock the folio?
> This folio can't have been unmapped because we're holding PTL, right?
> And it can't have been truncated for the same reason.
>
> Is it because of some lock-order issue?
The folio lock is a sleeping lock; the PTL is a spinlock. :)
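
Roughly, you'd need something like the following instead (a rough sketch
only; the "out" label and the three-argument huge_ptep_get() signature
(mm/addr/ptep, ~v6.10+) are illustrative assumptions, not the actual
hugetlb_wp() code):

	if (folio_test_anon(old_folio) && !folio_trylock(old_folio)) {
		folio_get(old_folio);		/* keep the folio alive */
		spin_unlock(vmf->ptl);		/* cannot sleep under the PTL */
		folio_lock(old_folio);		/* may sleep */
		spin_lock(vmf->ptl);

		/* The PTE may have changed while the PTL was dropped. */
		if (!pte_same(huge_ptep_get(vmf->vma->vm_mm, vmf->address,
					    vmf->pte), vmf->orig_pte)) {
			folio_unlock(old_folio);
			folio_put(old_folio);
			goto out;		/* raced: bail out / retry */
		}
		folio_put(old_folio);
	}

The pte_same() check after retaking the PTL is what catches a concurrent
zap or remap that happened while the lock was dropped.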
--
Cheers,
David / dhildenb