Message-ID: <201906270906.9EE619600@keescook>
Date: Thu, 27 Jun 2019 09:07:08 -0700
From: Kees Cook <keescook@...omium.org>
To: Marco Elver <elver@...gle.com>
Cc: linux-kernel@...r.kernel.org,
Andrey Ryabinin <aryabinin@...tuozzo.com>,
Dmitry Vyukov <dvyukov@...gle.com>,
Alexander Potapenko <glider@...gle.com>,
Andrey Konovalov <andreyknvl@...gle.com>,
Christoph Lameter <cl@...ux.com>,
Pekka Enberg <penberg@...nel.org>,
David Rientjes <rientjes@...gle.com>,
Joonsoo Kim <iamjoonsoo.kim@....com>,
Andrew Morton <akpm@...ux-foundation.org>,
Mark Rutland <mark.rutland@....com>,
kasan-dev@...glegroups.com, linux-mm@...ck.org
Subject: Re: [PATCH v4 5/5] mm/kasan: Add object validation in ksize()
On Thu, Jun 27, 2019 at 11:44:45AM +0200, Marco Elver wrote:
> ksize() has been unconditionally unpoisoning the whole shadow memory region
> associated with an allocation. This can lead to various undetected bugs,
> for example, double-kzfree().
>
> Specifically, kzfree() uses ksize() to determine the actual allocation
> size, and subsequently zeroes the memory. Because ksize() unconditionally
> unpoisoned the whole shadow memory region, the second, invalid free went
> undetected.
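
Nice. For anyone skimming the archive later, the kind of sequence this
now catches is roughly the following (my own illustrative snippet, not
the test case added by this patch):

	char *key = kmalloc(32, GFP_KERNEL);

	if (!key)
		return;

	get_random_bytes(key, 32);
	/* ... use key ... */
	kzfree(key);

	/*
	 * Buggy second free: ksize() used to unpoison the freed object's
	 * shadow here, so neither the memset() inside kzfree() nor the
	 * following kfree() was flagged. With this patch the
	 * __kasan_check_read() in ksize() reports the access and ksize()
	 * returns 0, so the freed memory is no longer written to.
	 */
	kzfree(key);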
>
> This patch addresses this as follows:
>
> 1. Add a check in ksize(), and unpoison the memory region only if the
>    check passes.
>
> 2. Preserve kasan_unpoison_slab() semantics by explicitly unpoisoning
> the shadow memory region using the size obtained from __ksize().
>
> Tested:
> 1. With SLAB allocator: a) normal boot without warnings; b) verified the
> added double-kzfree() is detected.
> 2. With SLUB allocator: a) normal boot without warnings; b) verified the
> added double-kzfree() is detected.
>
> Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=199359
> Signed-off-by: Marco Elver <elver@...gle.com>
Acked-by: Kees Cook <keescook@...omium.org>
-Kees
> Cc: Andrey Ryabinin <aryabinin@...tuozzo.com>
> Cc: Dmitry Vyukov <dvyukov@...gle.com>
> Cc: Alexander Potapenko <glider@...gle.com>
> Cc: Andrey Konovalov <andreyknvl@...gle.com>
> Cc: Christoph Lameter <cl@...ux.com>
> Cc: Pekka Enberg <penberg@...nel.org>
> Cc: David Rientjes <rientjes@...gle.com>
> Cc: Joonsoo Kim <iamjoonsoo.kim@....com>
> Cc: Andrew Morton <akpm@...ux-foundation.org>
> Cc: Mark Rutland <mark.rutland@....com>
> Cc: Kees Cook <keescook@...omium.org>
> Cc: kasan-dev@...glegroups.com
> Cc: linux-kernel@...r.kernel.org
> Cc: linux-mm@...ck.org
> ---
> v4:
> * Prefer WARN_ON_ONCE() over BUG_ON().
> ---
> include/linux/kasan.h | 7 +++++--
> mm/slab_common.c | 22 +++++++++++++++++++++-
> 2 files changed, 26 insertions(+), 3 deletions(-)
>
> diff --git a/include/linux/kasan.h b/include/linux/kasan.h
> index b40ea104dd36..cc8a03cc9674 100644
> --- a/include/linux/kasan.h
> +++ b/include/linux/kasan.h
> @@ -76,8 +76,11 @@ void kasan_free_shadow(const struct vm_struct *vm);
> int kasan_add_zero_shadow(void *start, unsigned long size);
> void kasan_remove_zero_shadow(void *start, unsigned long size);
>
> -size_t ksize(const void *);
> -static inline void kasan_unpoison_slab(const void *ptr) { ksize(ptr); }
> +size_t __ksize(const void *);
> +static inline void kasan_unpoison_slab(const void *ptr)
> +{
> + kasan_unpoison_shadow(ptr, __ksize(ptr));
> +}
> size_t kasan_metadata_size(struct kmem_cache *cache);
>
> bool kasan_save_enable_multi_shot(void);
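
My reading of why item 2 is needed (not from the patch itself, and the
snippet below is a made-up caller, loosely modelled on what I remember
of the mempool code): kasan_unpoison_slab() exists to make a recycled,
currently-poisoned allocation fully accessible again, which is exactly
the kind of access the new check in ksize() is meant to reject -- so it
can no longer be implemented as a bare ksize() call:

	/*
	 * Hypothetical pool: struct my_pool and pool_take() are made up
	 * for illustration. The element was poisoned when it was parked
	 * in the pool, so ksize()'s new validity check would refuse it;
	 * the explicit __ksize() + kasan_unpoison_shadow() pair keeps the
	 * old "unpoison the whole usable size" behaviour for this caller.
	 */
	static void *pool_take(struct my_pool *pool)
	{
		void *element = pool->elements[--pool->curr_nr];

		kasan_unpoison_slab(element);
		return element;
	}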
> diff --git a/mm/slab_common.c b/mm/slab_common.c
> index b7c6a40e436a..a09bb10aa026 100644
> --- a/mm/slab_common.c
> +++ b/mm/slab_common.c
> @@ -1613,7 +1613,27 @@ EXPORT_SYMBOL(kzfree);
> */
> size_t ksize(const void *objp)
> {
> - size_t size = __ksize(objp);
> + size_t size;
> +
> + if (WARN_ON_ONCE(!objp))
> + return 0;
> + /*
> + * We need to check that the pointed-to object is valid, and only then
> + * unpoison the shadow memory below. We use __kasan_check_read() to
> + * generate a more useful report at the time ksize() is called (rather
> + * than later, where behaviour is undefined due to a potential
> + * use-after-free or double-free).
> + *
> + * If the pointed-to memory is invalid, we return 0 to avoid users of
> + * ksize() writing to and potentially corrupting the memory region.
> + *
> + * We want to perform the check before __ksize(), to avoid potentially
> + * crashing in __ksize() due to accessing invalid metadata.
> + */
> + if (unlikely(objp == ZERO_SIZE_PTR) || !__kasan_check_read(objp, 1))
> + return 0;
> +
> + size = __ksize(objp);
> /*
> * We assume that ksize callers could use whole allocated area,
> * so we need to unpoison this area.
> --
> 2.22.0.410.gd8fdbe21b5-goog
>
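One more note for the archive -- a quick sketch of the caller-visible
behaviour as I read the ksize() hunk above (illustrative, not a real
test):

	size_t size;
	void *p = kmalloc(64, GFP_KERNEL);

	if (!p)
		return;

	size = ksize(p);		/* usable size; whole area unpoisoned, as before */
	kfree(p);
	size = ksize(p);		/* KASAN report at the ksize() call; returns 0 */
	size = ksize(NULL);		/* WARN_ON_ONCE() splat; returns 0 */
	size = ksize(ZERO_SIZE_PTR);	/* quietly returns 0 */

Returning 0 instead of the old size seems like the right call here,
since it stops callers like kzfree() from scribbling over memory they
no longer own.
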
--
Kees Cook