Message-ID: <20250821200701.1329277-34-david@redhat.com>
Date: Thu, 21 Aug 2025 22:06:59 +0200
From: David Hildenbrand <david@...hat.com>
To: linux-kernel@...r.kernel.org
Cc: David Hildenbrand <david@...hat.com>,
Alexander Potapenko <glider@...gle.com>,
Marco Elver <elver@...gle.com>,
Dmitry Vyukov <dvyukov@...gle.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Brendan Jackman <jackmanb@...gle.com>,
Christoph Lameter <cl@...two.org>,
Dennis Zhou <dennis@...nel.org>,
dri-devel@...ts.freedesktop.org,
intel-gfx@...ts.freedesktop.org,
iommu@...ts.linux.dev,
io-uring@...r.kernel.org,
Jason Gunthorpe <jgg@...dia.com>,
Jens Axboe <axboe@...nel.dk>,
Johannes Weiner <hannes@...xchg.org>,
John Hubbard <jhubbard@...dia.com>,
kasan-dev@...glegroups.com,
kvm@...r.kernel.org,
"Liam R. Howlett" <Liam.Howlett@...cle.com>,
Linus Torvalds <torvalds@...ux-foundation.org>,
linux-arm-kernel@...s.com,
linux-arm-kernel@...ts.infradead.org,
linux-crypto@...r.kernel.org,
linux-ide@...r.kernel.org,
linux-kselftest@...r.kernel.org,
linux-mips@...r.kernel.org,
linux-mmc@...r.kernel.org,
linux-mm@...ck.org,
linux-riscv@...ts.infradead.org,
linux-s390@...r.kernel.org,
linux-scsi@...r.kernel.org,
Lorenzo Stoakes <lorenzo.stoakes@...cle.com>,
Marek Szyprowski <m.szyprowski@...sung.com>,
Michal Hocko <mhocko@...e.com>,
Mike Rapoport <rppt@...nel.org>,
Muchun Song <muchun.song@...ux.dev>,
netdev@...r.kernel.org,
Oscar Salvador <osalvador@...e.de>,
Peter Xu <peterx@...hat.com>,
Robin Murphy <robin.murphy@....com>,
Suren Baghdasaryan <surenb@...gle.com>,
Tejun Heo <tj@...nel.org>,
virtualization@...ts.linux.dev,
Vlastimil Babka <vbabka@...e.cz>,
wireguard@...ts.zx2c4.com,
x86@...nel.org,
Zi Yan <ziy@...dia.com>
Subject: [PATCH RFC 33/35] kfence: drop nth_page() usage

We want to get rid of nth_page(), and the kfence init code is its last
user.

Unfortunately, we might actually walk a PFN range where the pages are
not contiguous, because we might be allocating an area from memblock
that can span memory sections in problematic kernel configs (SPARSEMEM
without SPARSEMEM_VMEMMAP).
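
For background, nth_page() is roughly the following (simplified from
include/linux/mm.h):

	#if defined(CONFIG_SPARSEMEM) && !defined(CONFIG_SPARSEMEM_VMEMMAP)
	/* The memmap is allocated per memory section: go via the PFN. */
	#define nth_page(page, n)	pfn_to_page(page_to_pfn((page)) + (n))
	#else
	/* The memmap is virtually contiguous: pointer arithmetic works. */
	#define nth_page(page, n)	((page) + (n))
	#endif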

We could check whether the page range is contiguous using
page_range_contiguous() and fail kfence init if it is not, or make
kfence incompatible with these problematic kernel configs.
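
As an illustrative sketch only (not part of this patch), that rejected
alternative would have bailed out of kfence_init_pool() roughly like
this, using the page_range_contiguous() helper from earlier in this
series:

	pages = virt_to_page(__kfence_pool);
	/* Fail kfence init if the pool's memmap is not contiguous. */
	if (!page_range_contiguous(pages, KFENCE_POOL_SIZE / PAGE_SIZE))
		return (unsigned long)__kfence_pool;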

Let's keep it simple and just use pfn_to_page(), iterating over the
PFNs of the pool.

Cc: Alexander Potapenko <glider@...gle.com>
Cc: Marco Elver <elver@...gle.com>
Cc: Dmitry Vyukov <dvyukov@...gle.com>
Signed-off-by: David Hildenbrand <david@...hat.com>
---
 mm/kfence/core.c | 19 ++++++++++++-------
 1 file changed, 12 insertions(+), 7 deletions(-)

diff --git a/mm/kfence/core.c b/mm/kfence/core.c
index 0ed3be100963a..793507c77f9e8 100644
--- a/mm/kfence/core.c
+++ b/mm/kfence/core.c
@@ -594,15 +594,15 @@ static void rcu_guarded_free(struct rcu_head *h)
  */
 static unsigned long kfence_init_pool(void)
 {
-	unsigned long addr;
-	struct page *pages;
+	unsigned long addr, pfn, start_pfn, end_pfn;
 	int i;
 
 	if (!arch_kfence_init_pool())
 		return (unsigned long)__kfence_pool;
 
 	addr = (unsigned long)__kfence_pool;
-	pages = virt_to_page(__kfence_pool);
+	start_pfn = PHYS_PFN(virt_to_phys(__kfence_pool));
+	end_pfn = start_pfn + KFENCE_POOL_SIZE / PAGE_SIZE;
 
 	/*
 	 * Set up object pages: they must have PGTY_slab set to avoid freeing
@@ -612,12 +612,14 @@ static unsigned long kfence_init_pool(void)
 	 * fast-path in SLUB, and therefore need to ensure kfree() correctly
 	 * enters __slab_free() slow-path.
 	 */
-	for (i = 0; i < KFENCE_POOL_SIZE / PAGE_SIZE; i++) {
-		struct slab *slab = page_slab(nth_page(pages, i));
+	for (pfn = start_pfn; pfn != end_pfn; pfn++) {
+		struct slab *slab;
 
+		i = pfn - start_pfn;
 		if (!i || (i % 2))
 			continue;
 
+		slab = page_slab(pfn_to_page(pfn));
 		__folio_set_slab(slab_folio(slab));
 #ifdef CONFIG_MEMCG
 		slab->obj_exts = (unsigned long)&kfence_metadata_init[i / 2 - 1].obj_exts |
@@ -664,11 +666,14 @@ static unsigned long kfence_init_pool(void)
 	return 0;
 
 reset_slab:
-	for (i = 0; i < KFENCE_POOL_SIZE / PAGE_SIZE; i++) {
-		struct slab *slab = page_slab(nth_page(pages, i));
+	for (pfn = start_pfn; pfn != end_pfn; pfn++) {
+		struct slab *slab;
 
+		i = pfn - start_pfn;
 		if (!i || (i % 2))
 			continue;
 
+		slab = page_slab(pfn_to_page(pfn));
 #ifdef CONFIG_MEMCG
 		slab->obj_exts = 0;
 #endif
--
2.50.1