Message-ID: <aR913JM35YDVSCjF@casper.infradead.org>
Date: Thu, 20 Nov 2025 20:11:08 +0000
From: Matthew Wilcox <willy@...radead.org>
To: Linus Torvalds <torvalds@...ux-foundation.org>
Cc: Vlastimil Babka <vbabka@...e.cz>,
Andrew Morton <akpm@...ux-foundation.org>,
Harry Yoo <harry.yoo@...cle.com>,
David Rientjes <rientjes@...gle.com>,
Christoph Lameter <cl@...two.org>,
Roman Gushchin <roman.gushchin@...ux.dev>,
"linux-mm@...ck.org" <linux-mm@...ck.org>,
LKML <linux-kernel@...r.kernel.org>,
Christoph Hellwig <hch@...radead.org>
Subject: Re: [GIT PULL] slab fix for 6.18-rc7
On Thu, Nov 20, 2025 at 10:58:13AM -0800, Linus Torvalds wrote:
> On Thu, 20 Nov 2025 at 10:45, Vlastimil Babka <vbabka@...e.cz> wrote:
> >
> > * Fix mempool poisoning order>0 pages with CONFIG_HIGHMEM (Vlastimil Babka)
>
> I've pulled this, but honestly, CONFIG_HIGHMEM should be considered a
> dying breed, and I'd have been happier with just not adding extra code
> for that thing.
Would you rather see something like this that hides the fact it's
dealing with HIGHMEM?
diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
index 0091ad1986bf..68b952e68176 100644
--- a/include/linux/page-flags.h
+++ b/include/linux/page-flags.h
@@ -623,8 +623,10 @@ PAGEFLAG_FALSE(HighMem, highmem)
 /* Does kmap_local_folio() only allow access to one page of the folio? */
 #ifdef CONFIG_DEBUG_KMAP_LOCAL_FORCE_MAP
 #define folio_test_partial_kmap(f) true
+#define PagePartialKmap(p) true
 #else
 #define folio_test_partial_kmap(f) folio_test_highmem(f)
+#define PagePartialKmap(p) PageHighMem(p)
 #endif
 
 #ifdef CONFIG_SWAP
diff --git a/mm/mempool.c b/mm/mempool.c
index 1c38e873e546..b1f5d81d70c6 100644
--- a/mm/mempool.c
+++ b/mm/mempool.c
@@ -67,11 +67,21 @@ static void check_element(mempool_t *pool, void *element)
 		__check_element(pool, element, kmem_cache_size(pool->pool_data));
 	} else if (pool->free == mempool_free_pages) {
 		/* Mempools backed by page allocator */
-		int order = (int)(long)pool->pool_data;
-		void *addr = kmap_local_page((struct page *)element);
-
-		__check_element(pool, addr, 1UL << (PAGE_SHIFT + order));
-		kunmap_local(addr);
+		size_t len = PAGE_SIZE << (long)pool->pool_data;
+		struct page *page = (struct page *)element;
+
+		do {
+			void *addr = kmap_local_page(page);
+			size_t chunk = len;
+
+			/* A partial kmap only maps one page at a time */
+			if (PagePartialKmap(page) && chunk > PAGE_SIZE)
+				chunk = PAGE_SIZE;
+			__check_element(pool, addr, chunk);
+			kunmap_local(addr);
+			len -= chunk;
+			page++;
+		} while (len > 0);
 	}
 }
 
(not actually tested, but based on memcpy_from_folio())
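For reference, the loop above follows the same shape as memcpy_from_folio().
Paraphrased from memory rather than copied verbatim (see
include/linux/highmem.h for the real thing), that helper is roughly:

static inline void memcpy_from_folio(char *to, struct folio *folio,
		size_t offset, size_t len)
{
	VM_BUG_ON(offset + len > folio_size(folio));

	do {
		const char *from = kmap_local_folio(folio, offset);
		size_t chunk = len;

		/* a partial kmap only covers the rest of one page */
		if (folio_test_partial_kmap(folio) &&
		    chunk > PAGE_SIZE - offset_in_page(offset))
			chunk = PAGE_SIZE - offset_in_page(offset);
		memcpy(to, from, chunk);
		kunmap_local(from);

		to += chunk;
		offset += chunk;
		len -= chunk;
	} while (len > 0);
}

The point of hiding it behind folio_test_partial_kmap()/PagePartialKmap() is
that on !CONFIG_HIGHMEM builds (without CONFIG_DEBUG_KMAP_LOCAL_FORCE_MAP)
the predicate is compile-time false, so chunk == len, the loop body runs
exactly once and the compiler can fold the extra code away.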