Message-ID: <84bc05e7-f47a-4941-a151-a3b2ab18ad62@redhat.com>
Date: Thu, 30 Jan 2025 16:56:52 +0100
From: David Hildenbrand <david@...hat.com>
To: linux-kernel@...r.kernel.org, linux-doc@...r.kernel.org,
dri-devel@...ts.freedesktop.org, linux-mm@...ck.org,
nouveau@...ts.freedesktop.org, Andrew Morton <akpm@...ux-foundation.org>,
Jérôme Glisse <jglisse@...hat.com>,
Jonathan Corbet <corbet@....net>, Alex Shi <alexs@...nel.org>,
Yanteng Si <si.yanteng@...ux.dev>, Karol Herbst <kherbst@...hat.com>,
Lyude Paul <lyude@...hat.com>, Danilo Krummrich <dakr@...nel.org>,
David Airlie <airlied@...il.com>, Simona Vetter <simona@...ll.ch>,
"Liam R. Howlett" <Liam.Howlett@...cle.com>,
Lorenzo Stoakes <lorenzo.stoakes@...cle.com>,
Vlastimil Babka <vbabka@...e.cz>, Jann Horn <jannh@...gle.com>,
Pasha Tatashin <pasha.tatashin@...een.com>, Peter Xu <peterx@...hat.com>,
Alistair Popple <apopple@...dia.com>, Jason Gunthorpe <jgg@...dia.com>
Subject: Re: [PATCH v1 03/12] mm/rmap: convert make_device_exclusive_range()
to make_device_exclusive()
On 30.01.25 14:46, Simona Vetter wrote:
> On Wed, Jan 29, 2025 at 12:54:01PM +0100, David Hildenbrand wrote:
>> The single "real" user in the tree of make_device_exclusive_range() always
>> requests making only a single address exclusive. The current implementation
>> is hard to adapt to properly support anonymous THP / large folios without
>> messing with rmap walks in weird ways.
>>
>> So let's always process a single address/page and return folio + page to
>> minimize page -> folio lookups. This is a preparation for further
>> changes.
>>
>> Reject any non-anonymous or hugetlb folios early, directly after GUP.
>>
>> Signed-off-by: David Hildenbrand <david@...hat.com>
>
> Yeah this makes sense. Even for pmd entries I think we want to make this
> very explicit with an explicit hugetlb opt-in.
>
> Acked-by: Simona Vetter <simona.vetter@...ll.ch>
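
(To illustrate the conversion described above, here is a rough sketch of how a
call site changes with the single-address API. The "after" parameter list, in
particular the folio output parameter and the ERR_PTR() return convention, is
an assumption based on the description and may not match the series exactly;
"owner", "start" and "addr" are placeholders and the error handling is
illustrative only.)

	/* Before: convert a range and get an array of pages back. */
	ret = make_device_exclusive_range(mm, start, start + PAGE_SIZE,
					  &page, owner);
	if (ret <= 0 || !page)
		return -EBUSY;

	/*
	 * After (sketch): convert exactly one address and get page + folio
	 * back, so the caller does not have to redo the page -> folio lookup.
	 * Non-anonymous and hugetlb folios are rejected right after GUP.
	 */
	page = make_device_exclusive(mm, addr, owner, &folio);
	if (IS_ERR(page))
		return PTR_ERR(page);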
Thanks, I'll fold in the following:
diff --git a/mm/rmap.c b/mm/rmap.c
index 676df4fba5b0..94256925682d 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -2525,6 +2525,10 @@ static bool folio_make_device_exclusive(struct folio *folio,
* programming is complete it should drop the page lock and reference after
* which point CPU access to the page will revoke the exclusive access.
*
+ * Note: This function always operates on individual PTEs mapping individual
+ * pages. PMD-sized THPs are first remapped to be mapped by PTEs before the
+ * conversion happens on a single PTE corresponding to @addr.
+ *
* Returns: pointer to mapped page on success, otherwise a negative error.
*/
struct page *make_device_exclusive(struct mm_struct *mm, unsigned long addr,
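
(To make the PTE-only note above concrete from the caller's side, a sketch
only: the loop, the error handling and the folio_unlock()/folio_put() cleanup
just follow the "drop the page lock and reference" rule from the kerneldoc
and are not taken from an actual caller.)

	for (addr = start; addr < end; addr += PAGE_SIZE) {
		/*
		 * Each call converts exactly one PTE; a PMD-mapped THP in
		 * the range is remapped to be mapped by PTEs first.
		 */
		page = make_device_exclusive(mm, addr, owner, &folio);
		if (IS_ERR(page))
			break;

		/* ... program the device mapping for this page ... */

		/*
		 * Drop the lock and reference; CPU access to the page will
		 * then revoke the exclusive access again.
		 */
		folio_unlock(folio);
		folio_put(folio);
	}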
--
Cheers,
David / dhildenb