Message-ID: <CANpmjNP8-dM-cizCfsVOUNDS2jBaY6d=0Wx8OGen5RbXgaqcfQ@mail.gmail.com>
Date: Thu, 28 Aug 2025 10:43:16 +0200
From: Marco Elver <elver@...gle.com>
To: David Hildenbrand <david@...hat.com>
Cc: linux-kernel@...r.kernel.org, Alexander Potapenko <glider@...gle.com>,
Dmitry Vyukov <dvyukov@...gle.com>, Andrew Morton <akpm@...ux-foundation.org>,
Brendan Jackman <jackmanb@...gle.com>, Christoph Lameter <cl@...two.org>, Dennis Zhou <dennis@...nel.org>,
dri-devel@...ts.freedesktop.org, intel-gfx@...ts.freedesktop.org,
iommu@...ts.linux.dev, io-uring@...r.kernel.org,
Jason Gunthorpe <jgg@...dia.com>, Jens Axboe <axboe@...nel.dk>, Johannes Weiner <hannes@...xchg.org>,
John Hubbard <jhubbard@...dia.com>, kasan-dev@...glegroups.com, kvm@...r.kernel.org,
"Liam R. Howlett" <Liam.Howlett@...cle.com>, Linus Torvalds <torvalds@...ux-foundation.org>,
linux-arm-kernel@...s.com, linux-arm-kernel@...ts.infradead.org,
linux-crypto@...r.kernel.org, linux-ide@...r.kernel.org,
linux-kselftest@...r.kernel.org, linux-mips@...r.kernel.org,
linux-mmc@...r.kernel.org, linux-mm@...ck.org,
linux-riscv@...ts.infradead.org, linux-s390@...r.kernel.org,
linux-scsi@...r.kernel.org, Lorenzo Stoakes <lorenzo.stoakes@...cle.com>,
Marek Szyprowski <m.szyprowski@...sung.com>, Michal Hocko <mhocko@...e.com>,
Mike Rapoport <rppt@...nel.org>, Muchun Song <muchun.song@...ux.dev>, netdev@...r.kernel.org,
Oscar Salvador <osalvador@...e.de>, Peter Xu <peterx@...hat.com>,
Robin Murphy <robin.murphy@....com>, Suren Baghdasaryan <surenb@...gle.com>, Tejun Heo <tj@...nel.org>,
virtualization@...ts.linux.dev, Vlastimil Babka <vbabka@...e.cz>, wireguard@...ts.zx2c4.com,
x86@...nel.org, Zi Yan <ziy@...dia.com>
Subject: Re: [PATCH v1 34/36] kfence: drop nth_page() usage
On Thu, 28 Aug 2025 at 00:11, 'David Hildenbrand' via kasan-dev
<kasan-dev@...glegroups.com> wrote:
>
> We want to get rid of nth_page(), and kfence init code is the last user.
>
> Unfortunately, we might actually walk a PFN range where the pages are
> not contiguous, because we might be allocating an area from memblock
> that could span memory sections in problematic kernel configs (SPARSEMEM
> without SPARSEMEM_VMEMMAP).
>
> We could check whether the page range is contiguous
> using page_range_contiguous() and fail kfence init, or make kfence
> incompatible with these problematic kernel configs.
>
> Let's keep it simple and just use pfn_to_page() by iterating PFNs.
>
> Cc: Alexander Potapenko <glider@...gle.com>
> Cc: Marco Elver <elver@...gle.com>
> Cc: Dmitry Vyukov <dvyukov@...gle.com>
> Signed-off-by: David Hildenbrand <david@...hat.com>
Reviewed-by: Marco Elver <elver@...gle.com>
Thanks.
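
For readers wondering why iterating PFNs is the safe pattern here:
with SPARSEMEM && !SPARSEMEM_VMEMMAP, each memory section has its own
struct page array, so plain "page + i" arithmetic can walk off the end
of one section's array once a range spans a section boundary, whereas
pfn_to_page() re-resolves the section on every translation (which is
what nth_page() did internally). A minimal sketch of the two patterns
(touch_page(), pool and nr_pages are placeholders, not the kfence
code):

	/*
	 * Unsafe with SPARSEMEM && !SPARSEMEM_VMEMMAP: assumes all
	 * struct pages of the range are virtually contiguous.
	 */
	struct page *page = virt_to_page(pool);

	for (i = 0; i < nr_pages; i++)
		touch_page(page + i);	/* may cross a section boundary */

	/*
	 * Safe: step by PFN and translate each one, so every lookup
	 * goes through its own section's struct page array.
	 */
	unsigned long pfn = PHYS_PFN(virt_to_phys(pool));

	for (i = 0; i < nr_pages; i++)
		touch_page(pfn_to_page(pfn + i));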
> ---
> mm/kfence/core.c | 12 +++++++-----
> 1 file changed, 7 insertions(+), 5 deletions(-)
>
> diff --git a/mm/kfence/core.c b/mm/kfence/core.c
> index 0ed3be100963a..727c20c94ac59 100644
> --- a/mm/kfence/core.c
> +++ b/mm/kfence/core.c
> @@ -594,15 +594,14 @@ static void rcu_guarded_free(struct rcu_head *h)
> */
> static unsigned long kfence_init_pool(void)
> {
> - unsigned long addr;
> - struct page *pages;
> + unsigned long addr, start_pfn;
> int i;
>
> if (!arch_kfence_init_pool())
> return (unsigned long)__kfence_pool;
>
> addr = (unsigned long)__kfence_pool;
> - pages = virt_to_page(__kfence_pool);
> + start_pfn = PHYS_PFN(virt_to_phys(__kfence_pool));
>
> /*
> * Set up object pages: they must have PGTY_slab set to avoid freeing
> @@ -613,11 +612,12 @@ static unsigned long kfence_init_pool(void)
> * enters __slab_free() slow-path.
> */
> for (i = 0; i < KFENCE_POOL_SIZE / PAGE_SIZE; i++) {
> - struct slab *slab = page_slab(nth_page(pages, i));
> + struct slab *slab;
>
> if (!i || (i % 2))
> continue;
>
> + slab = page_slab(pfn_to_page(start_pfn + i));
> __folio_set_slab(slab_folio(slab));
> #ifdef CONFIG_MEMCG
> slab->obj_exts = (unsigned long)&kfence_metadata_init[i / 2 - 1].obj_exts |
> @@ -665,10 +665,12 @@ static unsigned long kfence_init_pool(void)
>
> reset_slab:
> for (i = 0; i < KFENCE_POOL_SIZE / PAGE_SIZE; i++) {
> - struct slab *slab = page_slab(nth_page(pages, i));
> + struct slab *slab;
>
> if (!i || (i % 2))
> continue;
> +
> + slab = page_slab(pfn_to_page(start_pfn + i));
> #ifdef CONFIG_MEMCG
> slab->obj_exts = 0;
> #endif
> --
> 2.50.1
>