Message-ID: <CANpmjNMcdM2MSL5J6ewChovxZbe-rKncU4LekQiXwKoVY0xDnQ@mail.gmail.com>
Date: Fri, 2 Oct 2020 16:18:48 +0200
From: Marco Elver <elver@...gle.com>
To: Jann Horn <jannh@...gle.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
Alexander Potapenko <glider@...gle.com>,
"H . Peter Anvin" <hpa@...or.com>,
"Paul E . McKenney" <paulmck@...nel.org>,
Andrey Konovalov <andreyknvl@...gle.com>,
Andrey Ryabinin <aryabinin@...tuozzo.com>,
Andy Lutomirski <luto@...nel.org>,
Borislav Petkov <bp@...en8.de>,
Catalin Marinas <catalin.marinas@....com>,
Christoph Lameter <cl@...ux.com>,
Dave Hansen <dave.hansen@...ux.intel.com>,
David Rientjes <rientjes@...gle.com>,
Dmitry Vyukov <dvyukov@...gle.com>,
Eric Dumazet <edumazet@...gle.com>,
Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
Hillf Danton <hdanton@...a.com>,
Ingo Molnar <mingo@...hat.com>,
Jonathan Cameron <Jonathan.Cameron@...wei.com>,
Jonathan Corbet <corbet@....net>,
Joonsoo Kim <iamjoonsoo.kim@....com>,
Kees Cook <keescook@...omium.org>,
Mark Rutland <mark.rutland@....com>,
Pekka Enberg <penberg@...nel.org>,
Peter Zijlstra <peterz@...radead.org>,
SeongJae Park <sjpark@...zon.com>,
Thomas Gleixner <tglx@...utronix.de>,
Vlastimil Babka <vbabka@...e.cz>,
Will Deacon <will@...nel.org>,
"the arch/x86 maintainers" <x86@...nel.org>,
"open list:DOCUMENTATION" <linux-doc@...r.kernel.org>,
kernel list <linux-kernel@...r.kernel.org>,
kasan-dev <kasan-dev@...glegroups.com>,
Linux ARM <linux-arm-kernel@...ts.infradead.org>,
Linux-MM <linux-mm@...ck.org>
Subject: Re: [PATCH v4 03/11] arm64, kfence: enable KFENCE for ARM64
On Fri, 2 Oct 2020 at 08:48, Jann Horn <jannh@...gle.com> wrote:
>
> On Tue, Sep 29, 2020 at 3:38 PM Marco Elver <elver@...gle.com> wrote:
> > Add architecture specific implementation details for KFENCE and enable
> > KFENCE for the arm64 architecture. In particular, this implements the
> > required interface in <asm/kfence.h>. Currently, the arm64 version does
> > not yet use a statically allocated memory pool, at the cost of a pointer
> > load for each is_kfence_address().
> [...]
> > diff --git a/arch/arm64/include/asm/kfence.h b/arch/arm64/include/asm/kfence.h
> [...]
> > +static inline bool arch_kfence_initialize_pool(void)
> > +{
> > + const unsigned int num_pages = ilog2(roundup_pow_of_two(KFENCE_POOL_SIZE / PAGE_SIZE));
> > + struct page *pages = alloc_pages(GFP_KERNEL, num_pages);
> > +
> > + if (!pages)
> > + return false;
> > +
> > + __kfence_pool = page_address(pages);
> > + return true;
> > +}
>
> If you're going to do "virt_to_page(meta->addr)->slab_cache = cache;"
> on these pages in kfence_guarded_alloc(), and pass them into kfree(),
> you'd better mark these pages as non-compound - something like
> alloc_pages_exact() or split_page() may help. Otherwise, I think when
> SLUB's kfree() does virt_to_head_page() right at the start, that will
> return a pointer to the first page of the entire __kfence_pool, and
> then when it loads page->slab_cache, it gets some random cache and
> stuff blows up. Kinda surprising that you haven't run into that during
> your testing, maybe I'm missing something...
I added a WARN_ON() check in kfence_initialize_pool() to verify
whether our pages are compound; they are not.

In slub.c, __GFP_COMP is passed to alloc_pages(), which I believe is
what gives those allocations a compound head; we don't pass __GFP_COMP
here, so the KFENCE pool pages stay non-compound.
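
For reference, the check was roughly along these lines (a sketch, not
the exact diff; it just walks the pool page by page):

	unsigned long addr;

	/*
	 * Sketch only: warn if any page backing the KFENCE pool is part
	 * of a compound page, since SLUB's kfree() relies on
	 * virt_to_head_page() resolving to the page carrying ->slab_cache.
	 */
	for (addr = (unsigned long)__kfence_pool;
	     addr < (unsigned long)__kfence_pool + KFENCE_POOL_SIZE;
	     addr += PAGE_SIZE)
		WARN_ON(PageCompound(virt_to_page(addr)));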
> Also, this kinda feels like it should be the "generic" version of
> arch_kfence_initialize_pool() and live in mm/kfence/core.c ?
Done for v5.
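
For the generic version, roughly what I have in mind (a sketch only,
not the final v5 code) is to back the pool with order-0, non-compound
pages via alloc_pages_exact(), so virt_to_head_page() on a pool
address resolves to that very page:

	/* Sketch for mm/kfence/core.c; names as in the current series. */
	static bool __init kfence_initialize_pool(void)
	{
		/*
		 * alloc_pages_exact() hands back individually refcounted
		 * order-0 pages, i.e. no compound head.
		 */
		__kfence_pool = alloc_pages_exact(KFENCE_POOL_SIZE, GFP_KERNEL);

		return !!__kfence_pool;
	}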
Thanks,
-- Marco