Message-ID: <CAG48ez2Ne7ZR1K2959s=wP2-t-V2LxCmg6_OJ+Tu58OvwV42ZA@mail.gmail.com>
Date: Tue, 30 Jul 2024 12:30:30 +0200
From: Jann Horn <jannh@...gle.com>
To: Andrey Konovalov <andreyknvl@...il.com>
Cc: Andrey Ryabinin <ryabinin.a.a@...il.com>, Alexander Potapenko <glider@...gle.com>,
Dmitry Vyukov <dvyukov@...gle.com>, Vincenzo Frascino <vincenzo.frascino@....com>,
Andrew Morton <akpm@...ux-foundation.org>, Christoph Lameter <cl@...ux.com>,
Pekka Enberg <penberg@...nel.org>, David Rientjes <rientjes@...gle.com>,
Joonsoo Kim <iamjoonsoo.kim@....com>, Vlastimil Babka <vbabka@...e.cz>,
Roman Gushchin <roman.gushchin@...ux.dev>, Hyeonggon Yoo <42.hyeyoo@...il.com>,
Marco Elver <elver@...gle.com>, kasan-dev@...glegroups.com, linux-kernel@...r.kernel.org,
linux-mm@...ck.org
Subject: Re: [PATCH v3 1/2] kasan: catch invalid free before SLUB reinitializes the object

On Fri, Jul 26, 2024 at 2:43 AM Andrey Konovalov <andreyknvl@...il.com> wrote:
> On Thu, Jul 25, 2024 at 5:32 PM Jann Horn <jannh@...gle.com> wrote:
> > Currently, when KASAN is combined with init-on-free behavior, the
> > initialization happens before KASAN's "invalid free" checks.
[...]
> > So add a new KASAN hook that allows KASAN to pre-validate a
> > kmem_cache_free() operation before SLUB actually starts modifying the
> > object or its metadata.
> >
> > Acked-by: Vlastimil Babka <vbabka@...e.cz> #slub
> > Signed-off-by: Jann Horn <jannh@...gle.com>
> > ---
> >  include/linux/kasan.h | 16 ++++++++++++++++
> >  mm/kasan/common.c     | 51 +++++++++++++++++++++++++++++++++++++++------------
> >  mm/slub.c             |  7 +++++++
> >  3 files changed, 62 insertions(+), 12 deletions(-)
> >
> > diff --git a/include/linux/kasan.h b/include/linux/kasan.h
> > index 70d6a8f6e25d..ebd93c843e78 100644
> > --- a/include/linux/kasan.h
> > +++ b/include/linux/kasan.h
> > @@ -175,6 +175,16 @@ static __always_inline void * __must_check kasan_init_slab_obj(
> >         return (void *)object;
> >  }
> >
> > +bool __kasan_slab_pre_free(struct kmem_cache *s, void *object,
> > +                          unsigned long ip);
> > +static __always_inline bool kasan_slab_pre_free(struct kmem_cache *s,
> > +                                               void *object)
> > +{
> > +       if (kasan_enabled())
> > +               return __kasan_slab_pre_free(s, object, _RET_IP_);
> > +       return false;
> > +}
>
> Please add a documentation comment for this new hook; something like
> what we have for kasan_mempool_poison_pages() and some of the others.
> (I've been meaning to add them for all of them, but still didn't get
> around to that.)
Ack, done in v4.
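
Something along these lines (a sketch only; the exact v4 wording may
differ):

/**
 * kasan_slab_pre_free - Check whether freeing a slab object is safe.
 * @s: Cache the object belongs to.
 * @object: Object to be freed.
 *
 * This function checks whether freeing the given object could be a
 * double-free or an invalid-free and, if so, reports the bug. It is
 * meant to be called by the slab allocator before the allocator starts
 * modifying the object or its metadata (e.g. for init-on-free).
 *
 * Return: true if freeing the object is known to be invalid and must
 * be skipped, false otherwise.
 */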
> > +static inline bool poison_slab_object(struct kmem_cache *cache, void *object,
> > +                                     unsigned long ip, bool init)
> > +{
> > +       void *tagged_object = object;
> > +       enum free_validation_result valid = check_slab_free(cache, object, ip);
>
> I believe we don't need check_slab_free() here, as it was already done
> in kasan_slab_pre_free()? Checking just kasan_arch_is_ready() and
> is_kfence_address() should save a bit on performance impact.
>
> Though if we remove check_slab_free() from here, we do need to add it
> to __kasan_mempool_poison_object().
Ack, changed in v4.
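
With that change, poison_slab_object() would look roughly like this (a
sketch of the direction agreed on above, assuming the tail of the
function stays as in v3; not the literal v4 code):

static inline void poison_slab_object(struct kmem_cache *cache, void *object,
                                      bool init)
{
        void *tagged_object = object;

        /* Validity was already checked by kasan_slab_pre_free(). */
        if (!kasan_arch_is_ready() || is_kfence_address(object))
                return;

        object = kasan_reset_tag(object);

        /* RCU slabs could be legally used after free within the RCU period. */
        if (unlikely(cache->flags & SLAB_TYPESAFE_BY_RCU))
                return;

        kasan_poison(object, round_up(cache->object_size, KASAN_GRANULE_SIZE),
                     KASAN_SLAB_FREE, init);

        if (kasan_stack_collection_enabled())
                kasan_save_free_info(cache, tagged_object);
}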
> > +
> > +       if (valid == KASAN_FREE_IS_IGNORED)
> > +               return false;
> > +       if (valid == KASAN_FREE_IS_INVALID)
> > +               return true;
> > +
> > +       object = kasan_reset_tag(object);
> > +
> > +       /* RCU slabs could be legally used after free within the RCU period. */
> > +       if (unlikely(cache->flags & SLAB_TYPESAFE_BY_RCU))
> > +               return false;
>
> I vaguely recall there was some reason why this check was done before
> the kasan_byte_accessible() check, but I might be wrong. Could you try
> booting the kernel with only this patch applied to see if anything
> breaks?
I tried booting it to a graphical environment and running the KUnit
tests; nothing immediately broke, from what I can tell...
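
For anyone skimming the thread, the reason for that SLAB_TYPESAFE_BY_RCU
bail-out: readers may legitimately touch an object after
kmem_cache_free() until an RCU grace period has elapsed, so such objects
must not be poisoned at free time. A hypothetical usage pattern (names
invented purely for illustration):

struct foo {
        int key;
};

static struct foo __rcu *slot;  /* points into a SLAB_TYPESAFE_BY_RCU cache */

static bool lookup(int key)
{
        struct foo *obj;
        bool found = false;

        rcu_read_lock();
        obj = rcu_dereference(slot);
        /*
         * obj may have been kmem_cache_free()d concurrently, but the
         * memory remains a valid struct foo until a grace period ends,
         * so this read must not trip KASAN's free-time poisoning.
         */
        if (obj && obj->key == key)
                found = true;
        rcu_read_unlock();
        return found;
}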