Message-ID: <20250821200701.1329277-33-david@redhat.com>
Date: Thu, 21 Aug 2025 22:06:58 +0200
From: David Hildenbrand <david@...hat.com>
To: linux-kernel@...r.kernel.org
Cc: David Hildenbrand <david@...hat.com>,
Alexander Potapenko <glider@...gle.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Brendan Jackman <jackmanb@...gle.com>,
Christoph Lameter <cl@...two.org>,
Dennis Zhou <dennis@...nel.org>,
Dmitry Vyukov <dvyukov@...gle.com>,
dri-devel@...ts.freedesktop.org,
intel-gfx@...ts.freedesktop.org,
iommu@...ts.linux.dev,
io-uring@...r.kernel.org,
Jason Gunthorpe <jgg@...dia.com>,
Jens Axboe <axboe@...nel.dk>,
Johannes Weiner <hannes@...xchg.org>,
John Hubbard <jhubbard@...dia.com>,
kasan-dev@...glegroups.com,
kvm@...r.kernel.org,
"Liam R. Howlett" <Liam.Howlett@...cle.com>,
Linus Torvalds <torvalds@...ux-foundation.org>,
linux-arm-kernel@...s.com,
linux-arm-kernel@...ts.infradead.org,
linux-crypto@...r.kernel.org,
linux-ide@...r.kernel.org,
linux-kselftest@...r.kernel.org,
linux-mips@...r.kernel.org,
linux-mmc@...r.kernel.org,
linux-mm@...ck.org,
linux-riscv@...ts.infradead.org,
linux-s390@...r.kernel.org,
linux-scsi@...r.kernel.org,
Lorenzo Stoakes <lorenzo.stoakes@...cle.com>,
Marco Elver <elver@...gle.com>,
Marek Szyprowski <m.szyprowski@...sung.com>,
Michal Hocko <mhocko@...e.com>,
Mike Rapoport <rppt@...nel.org>,
Muchun Song <muchun.song@...ux.dev>,
netdev@...r.kernel.org,
Oscar Salvador <osalvador@...e.de>,
Peter Xu <peterx@...hat.com>,
Robin Murphy <robin.murphy@....com>,
Suren Baghdasaryan <surenb@...gle.com>,
Tejun Heo <tj@...nel.org>,
virtualization@...ts.linux.dev,
Vlastimil Babka <vbabka@...e.cz>,
wireguard@...ts.zx2c4.com,
x86@...nel.org,
Zi Yan <ziy@...dia.com>
Subject: [PATCH RFC 32/35] mm/gup: drop nth_page() usage in unpin_user_page_range_dirty_lock()

There is the concern that unpin_user_page_range_dirty_lock() might do
some weird merging of PFN ranges -- either now or in the future -- such
that the PFN range is contiguous but the corresponding page range might
not be.
Let's sanity-check for that and drop the nth_page() usage.
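
For context, a rough sketch of why plain pointer arithmetic is only safe
for truly contiguous ranges: with SPARSEMEM but without
SPARSEMEM_VMEMMAP, the memmap is allocated per memory section, so the
"struct page"s backing a PFN-contiguous range are not guaranteed to be
virtually contiguous. nth_page() papers over that by translating through
the PFN, roughly as currently defined in include/linux/mm.h (shown here
only for illustration, not part of this patch):

	#if defined(CONFIG_SPARSEMEM) && !defined(CONFIG_SPARSEMEM_VMEMMAP)
	/* The memmap may be split per section: translate via the PFN. */
	#define nth_page(page, n)	pfn_to_page(page_to_pfn((page)) + (n))
	#else
	/* The memmap is virtually contiguous: pointer arithmetic is fine. */
	#define nth_page(page, n)	((page) + (n))
	#endif
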
Signed-off-by: David Hildenbrand <david@...hat.com>
---
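
Note for reviewers: the VM_WARN_ON_ONCE() below relies on
page_range_contiguous(), presumably introduced earlier in this series.
A minimal sketch of what such a helper is assumed to verify follows; the
actual helper may well differ (e.g., only checking at memory section
boundaries rather than every page):

	/* Sketch only: hypothetical, not the helper added by this series. */
	static inline bool page_range_contiguous(const struct page *page,
						 unsigned long nr_pages)
	{
		const unsigned long start_pfn = page_to_pfn(page);
		unsigned long i;

		/*
		 * "Truly contiguous" means pointer arithmetic on the
		 * struct pages agrees with translating the corresponding
		 * PFNs back to struct pages.
		 */
		for (i = 0; i < nr_pages; i++)
			if (page + i != pfn_to_page(start_pfn + i))
				return false;
		return true;
	}
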
mm/gup.c | 7 ++++++-
1 file changed, 6 insertions(+), 1 deletion(-)
diff --git a/mm/gup.c b/mm/gup.c
index f017ff6d7d61a..0a669a766204b 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -237,7 +237,7 @@ void folio_add_pin(struct folio *folio)
static inline struct folio *gup_folio_range_next(struct page *start,
unsigned long npages, unsigned long i, unsigned int *ntails)
{
- struct page *next = nth_page(start, i);
+ struct page *next = start + i;
struct folio *folio = page_folio(next);
unsigned int nr = 1;
@@ -342,6 +342,9 @@ EXPORT_SYMBOL(unpin_user_pages_dirty_lock);
* "gup-pinned page range" refers to a range of pages that has had one of the
* pin_user_pages() variants called on that page.
*
+ * The page range must be truly contiguous: it corresponds to a
+ * contiguous PFN range whose "struct page"s can be iterated directly.
+ *
* For the page ranges defined by [page .. page+npages], make that range (or
* its head pages, if a compound page) dirty, if @make_dirty is true, and if the
* page range was previously listed as clean.
@@ -359,6 +362,8 @@ void unpin_user_page_range_dirty_lock(struct page *page, unsigned long npages,
struct folio *folio;
unsigned int nr;
+ VM_WARN_ON_ONCE(!page_range_contiguous(page, npages));
+
for (i = 0; i < npages; i += nr) {
folio = gup_folio_range_next(page, npages, i, &nr);
if (make_dirty && !folio_test_dirty(folio)) {
--
2.50.1