Message-ID: <aFFZtD4zN_qINo9P@localhost.localdomain>
Date: Tue, 17 Jun 2025 14:04:04 +0200
From: Oscar Salvador <osalvador@...e.de>
To: David Hildenbrand <david@...hat.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
Muchun Song <muchun.song@...ux.dev>,
James Houghton <jthoughton@...gle.com>,
Peter Xu <peterx@...hat.com>, Gavin Guo <gavinguo@...lia.com>,
linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 2/5] mm,hugetlb: Document the reason to lock the folio in
the faulting path
On Tue, Jun 17, 2025 at 01:27:18PM +0200, David Hildenbrand wrote:
> > @@ -6198,6 +6198,8 @@ static vm_fault_t hugetlb_wp(struct vm_fault *vmf)
> > * in scenarios that used to work. As a side effect, there can still
> > * be leaks between processes, for example, with FOLL_GET users.
> > */
> > + if (folio_test_anon(old_folio))
> > + folio_lock(old_folio);
>
> If holding the PTL, this would not work. You'd have to unlock PTL, lock
> folio, retake PTL, check pte_same.
Why so?
hugetlb_no_page() has already checked pte_same() under the PTL, then mapped
the page and called hugetlb_wp():
hugetlb_no_page
  vmf->ptl = huge_pte_lock()
  pte_same()
  set_huge_pte_at()
  hugetlb_wp()
and in hugetlb_wp() we're still holding the PTL.
Why do we have to release the PTL in order to lock the folio?
The folio can't have been unmapped while we're holding the PTL, right?
And it can't have been truncated, for the same reason.
Is it because of some lock-ordering issue?
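For my own understanding, here is a rough sketch of the dance you describe
(illustrative only, not the actual hugetlb_wp() code; the exact
huge_ptep_get() arguments and the VM_FAULT_RETRY bail-out are my
assumptions):

        /*
         * folio_lock() may sleep, so it must not be called while holding
         * the PTL spinlock; folio_trylock() is the non-sleeping variant.
         */
        if (folio_test_anon(old_folio) && !folio_trylock(old_folio)) {
                folio_get(old_folio);           /* pin across the PTL drop */
                spin_unlock(vmf->ptl);

                folio_lock(old_folio);          /* may sleep, safe now */

                spin_lock(vmf->ptl);
                if (!pte_same(huge_ptep_get(vmf->vma->vm_mm, vmf->address,
                                            vmf->pte),
                              vmf->orig_pte)) {
                        /* PTE changed while we slept: unwind and retry. */
                        folio_unlock(old_folio);
                        folio_put(old_folio);
                        return VM_FAULT_RETRY;  /* placeholder retry path */
                }
                folio_put(old_folio);           /* folio stays locked */
        }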
--
Oscar Salvador
SUSE Labs