Message-ID: <a90b1707-b97a-454c-bced-a25068b28325@suse.cz>
Date: Wed, 12 Nov 2025 10:33:32 +0100
From: Vlastimil Babka <vbabka@...e.cz>
To: Christoph Hellwig <hch@....de>, kernel test robot <oliver.sang@...el.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
Christoph Lameter <cl@...two.org>, David Rientjes <rientjes@...gle.com>,
Roman Gushchin <roman.gushchin@...ux.dev>, Harry Yoo <harry.yoo@...cle.com>,
linux-mm@...ck.org, oe-lkp@...ts.linux.dev, lkp@...el.com,
Jens Axboe <axboe@...nel.dk>, "Martin K. Petersen"
<martin.petersen@...cle.com>, Johannes Thumshirn
<johannes.thumshirn@....com>, Anuj Gupta <anuj20.g@...sung.com>,
Kanchan Joshi <joshi.k@...sung.com>, linux-block@...r.kernel.org,
linux-kernel@...r.kernel.org
Subject: Re: poison_element vs highmem, was Re: [linux-next:master] [block]
ec7f31b2a2: BUG:unable_to_handle_page_fault_for_address
On 11/11/25 08:48, Christoph Hellwig wrote:
> Looks like this is due to the code in poison_element, which tries
> to memset more than PAGE_SIZE for a single page. This probably
> implies we are the first users of the mempool page helpers for order > 0,
> or at least the first ones tested by anyone on 32-bit with highmem :)
>
> That code seems to come from
>
> commit bdfedb76f4f5aa5e37380e3b71adee4a39f30fc6
> Author: David Rientjes <rientjes@...gle.com>
> Date: Wed Apr 15 16:14:17 2015 -0700
>
> mm, mempool: poison elements backed by slab allocator
>
> originally. The easiest fix would be to just skip poisoning for this
> case, although that would reduce the usefulness of the poisoning.
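For reference, this is the poison_element() branch as it stands (the same
lines the patch below removes); kmap_local_page() maps exactly one page, but
the poison then spans PAGE_SIZE << order bytes, so any order > 0 pool walks
off the end of the temporary mapping on HIGHMEM. check_element() has the same
pattern. (Comments added here just for illustration.)

	} else if (pool->alloc == mempool_alloc_pages) {
		/* Mempools backed by page allocator */
		int order = (int)(long)pool->pool_data;
		/* maps a single page ... */
		void *addr = kmap_local_page((struct page *)element);

		/* ... but writes poison across the whole 1 << order block */
		__poison_element(addr, 1UL << (PAGE_SHIFT + order));
		kunmap_local(addr);
	}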
#syz test
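For context, this path only triggers for a mempool backed by the page
allocator with order > 0, on a CONFIG_SLUB_DEBUG_ON + CONFIG_HIGHMEM kernel.
A minimal sketch of such a pool, just to illustrate the scenario (the
min_nr/order values and the example_* names are made up):

#include <linux/mempool.h>
#include <linux/gfp.h>
#include <linux/errno.h>

static mempool_t *example_pool;

/*
 * Illustrative only: an order-2 pool whose elements are struct page
 * pointers covering 1 << order contiguous pages. These are the elements
 * that check_element()/poison_element() have to map page by page on
 * HIGHMEM.
 */
static int example_setup(void)
{
	example_pool = mempool_create_page_pool(4 /* min_nr */, 2 /* order */);
	if (!example_pool)
		return -ENOMEM;
	return 0;
}

static void example_use(void)
{
	struct page *page = mempool_alloc(example_pool, GFP_KERNEL);

	if (page)
		mempool_free(page, example_pool);
}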
----8<----
From 4d97b55c208c611cb01062e0fbf9dbda9f5617d5 Mon Sep 17 00:00:00 2001
From: Vlastimil Babka <vbabka@...e.cz>
Date: Wed, 12 Nov 2025 10:29:52 +0100
Subject: [PATCH] mm/mempool: fix poisoning order>0 pages with HIGHMEM
Signed-off-by: Vlastimil Babka <vbabka@...e.cz>
---
 mm/mempool.c | 28 ++++++++++++++++++++++------
 1 file changed, 22 insertions(+), 6 deletions(-)
diff --git a/mm/mempool.c b/mm/mempool.c
index 1c38e873e546..75fea9441b93 100644
--- a/mm/mempool.c
+++ b/mm/mempool.c
@@ -68,10 +68,18 @@ static void check_element(mempool_t *pool, void *element)
 	} else if (pool->free == mempool_free_pages) {
 		/* Mempools backed by page allocator */
 		int order = (int)(long)pool->pool_data;
-		void *addr = kmap_local_page((struct page *)element);
+#ifdef CONFIG_HIGHMEM
+		for (int i = 0; i < (1 << order); i++) {
+			struct page *page = (struct page *)element;
+			void *addr = kmap_local_page(page + i);
 
-		__check_element(pool, addr, 1UL << (PAGE_SHIFT + order));
-		kunmap_local(addr);
+			__check_element(pool, addr, PAGE_SIZE);
+			kunmap_local(addr);
+		}
+#else
+		void *addr = page_address((struct page *)element);
+		__check_element(pool, addr, PAGE_SIZE << order);
+#endif
 	}
 }
 
@@ -97,10 +105,18 @@ static void poison_element(mempool_t *pool, void *element)
 	} else if (pool->alloc == mempool_alloc_pages) {
 		/* Mempools backed by page allocator */
 		int order = (int)(long)pool->pool_data;
-		void *addr = kmap_local_page((struct page *)element);
+#ifdef CONFIG_HIGHMEM
+		for (int i = 0; i < (1 << order); i++) {
+			struct page *page = (struct page *)element;
+			void *addr = kmap_local_page(page + i);
 
-		__poison_element(addr, 1UL << (PAGE_SHIFT + order));
-		kunmap_local(addr);
+			__poison_element(addr, PAGE_SIZE);
+			kunmap_local(addr);
+		}
+#else
+		void *addr = page_address((struct page *)element);
+		__poison_element(addr, PAGE_SIZE << order);
+#endif
 	}
 }
 #else /* CONFIG_SLUB_DEBUG_ON */
--
2.51.1