Message-ID: <CAB=+i9QUC+zscxC6AuK9qUaD-Y9VmAv2-Ovqt8JRJJARWxZ-EQ@mail.gmail.com>
Date: Thu, 14 Nov 2024 22:34:10 +0900
From: Hyeonggon Yoo <42.hyeyoo@...il.com>
To: Feng Tang <feng.tang@...el.com>
Cc: Vlastimil Babka <vbabka@...e.cz>, Andrew Morton <akpm@...ux-foundation.org>,
Christoph Lameter <cl@...ux.com>, Pekka Enberg <penberg@...nel.org>, David Rientjes <rientjes@...gle.com>,
Joonsoo Kim <iamjoonsoo.kim@....com>, Roman Gushchin <roman.gushchin@...ux.dev>,
Andrey Konovalov <andreyknvl@...il.com>, Marco Elver <elver@...gle.com>,
Alexander Potapenko <glider@...gle.com>, Dmitry Vyukov <dvyukov@...gle.com>,
Danilo Krummrich <dakr@...nel.org>, Narasimhan.V@....com, linux-mm@...ck.org,
kasan-dev@...glegroups.com, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v3 2/3] mm/slub: Improve redzone check and zeroing for krealloc()
On Thu, Oct 17, 2024 at 12:42 AM Feng Tang <feng.tang@...el.com> wrote:
>
> For the current krealloc(), one problem is that its caller doesn't pass
> the old request size: say the object comes from a 64-byte kmalloc cache,
> but the caller only requested 48 bytes. Then when krealloc() shrinks or
> grows within the same object, or allocates a new bigger object, it lacks
> this 'original size' information to do accurate data preserving or
> zeroing (when __GFP_ZERO is set).
>
> Thus, with slub debug redzone and object tracking enabled, parts of the
> object after krealloc() might contain redzone data instead of zeroes,
> which violates the __GFP_ZERO guarantee. The good thing is that in this
> case kmalloc caches do have the 'orig_size' feature. So solve the
> problem by utilizing 'orig_size' to do accurate data zeroing and preserving.
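
Just to spell the scenario out for readers following along (a
hypothetical caller, sizes made up, error handling omitted):

	/* request 48 bytes; the allocation comes from the 64-byte kmalloc cache */
	char *buf = kmalloc(48, GFP_KERNEL);

	/* grow in place and ask for zeroed memory */
	buf = krealloc(buf, 64, GFP_KERNEL | __GFP_ZERO);

	/*
	 * __GFP_ZERO promises buf[48..63] == 0, but without 'orig_size'
	 * krealloc() only sees that the 64-byte bucket still fits, so with
	 * slub debug redzoning those bytes may still hold redzone data.
	 */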
>
> [Thanks to syzbot and V, Narasimhan for discovering kfence and big
> kmalloc related issues in early patch versions]
>
> Suggested-by: Vlastimil Babka <vbabka@...e.cz>
> Signed-off-by: Feng Tang <feng.tang@...el.com>
> ---
> mm/slub.c | 84 +++++++++++++++++++++++++++++++++++++++----------------
> 1 file changed, 60 insertions(+), 24 deletions(-)
>
> diff --git a/mm/slub.c b/mm/slub.c
> index 1d348899f7a3..958f7af79fad 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -4718,34 +4718,66 @@ static __always_inline __realloc_size(2) void *
> __do_krealloc(const void *p, size_t new_size, gfp_t flags)
> {
> void *ret;
> - size_t ks;
> -
> - /* Check for double-free before calling ksize. */
> - if (likely(!ZERO_OR_NULL_PTR(p))) {
> - if (!kasan_check_byte(p))
> - return NULL;
> - ks = ksize(p);
> - } else
> - ks = 0;
> -
> - /* If the object still fits, repoison it precisely. */
> - if (ks >= new_size) {
> - /* Zero out spare memory. */
> - if (want_init_on_alloc(flags)) {
> - kasan_disable_current();
> + size_t ks = 0;
> + int orig_size = 0;
> + struct kmem_cache *s = NULL;
> +
> + /* Check for double-free. */
> + if (unlikely(ZERO_OR_NULL_PTR(p)))
> + goto alloc_new;
nit: I think kasan_check_byte() is the function that actually checks for
the double-free, so maybe the comment belongs above that call rather than
above the ZERO_OR_NULL_PTR() check?
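
Something like this is all I mean, just a sketch of the placement
(code unchanged, only the comment moved):

	if (unlikely(ZERO_OR_NULL_PTR(p)))
		goto alloc_new;

	/* Check for double-free before reading the object's metadata. */
	if (!kasan_check_byte(p))
		return NULL;
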
Otherwise looks good to me,
Reviewed-by: Hyeonggon Yoo <42.hyeyoo@...il.com>
> + if (!kasan_check_byte(p))
> + return NULL;
> +
> + if (is_kfence_address(p)) {
> + ks = orig_size = kfence_ksize(p);
> + } else {
> + struct folio *folio;
> +
> + folio = virt_to_folio(p);
> + if (unlikely(!folio_test_slab(folio))) {
> + /* Big kmalloc object */
> + WARN_ON(folio_size(folio) <= KMALLOC_MAX_CACHE_SIZE);
> + WARN_ON(p != folio_address(folio));
> + ks = folio_size(folio);
> + } else {
> + s = folio_slab(folio)->slab_cache;
> + orig_size = get_orig_size(s, (void *)p);
> + ks = s->object_size;
> + }
> + }
> +
> + /* If the old object doesn't fit, allocate a bigger one */
> + if (new_size > ks)
> + goto alloc_new;
> +
> + /* Zero out spare memory. */
> + if (want_init_on_alloc(flags)) {
> + kasan_disable_current();
> + if (orig_size && orig_size < new_size)
> + memset((void *)p + orig_size, 0, new_size - orig_size);
> + else
> memset((void *)p + new_size, 0, ks - new_size);
> - kasan_enable_current();
> - }
> + kasan_enable_current();
> + }
>
> - p = kasan_krealloc((void *)p, new_size, flags);
> - return (void *)p;
> + /* Setup kmalloc redzone when needed */
> + if (s && slub_debug_orig_size(s)) {
> + set_orig_size(s, (void *)p, new_size);
> + if (s->flags & SLAB_RED_ZONE && new_size < ks)
> + memset_no_sanitize_memory((void *)p + new_size,
> + SLUB_RED_ACTIVE, ks - new_size);
> }
> + p = kasan_krealloc((void *)p, new_size, flags);
> + return (void *)p;
> +
> +alloc_new:
> ret = kmalloc_node_track_caller_noprof(new_size, flags, NUMA_NO_NODE, _RET_IP_);
> if (ret && p) {
> /* Disable KASAN checks as the object's redzone is accessed. */
> kasan_disable_current();
> - memcpy(ret, kasan_reset_tag(p), ks);
> + memcpy(ret, kasan_reset_tag(p), orig_size ?: ks);
> kasan_enable_current();
> }
>
> @@ -4766,16 +4798,20 @@ __do_krealloc(const void *p, size_t new_size, gfp_t flags)
> * memory allocation is flagged with __GFP_ZERO. Otherwise, it is possible that
> * __GFP_ZERO is not fully honored by this API.
> *
> - * This is the case, since krealloc() only knows about the bucket size of an
> - * allocation (but not the exact size it was allocated with) and hence
> - * implements the following semantics for shrinking and growing buffers with
> - * __GFP_ZERO.
> + * When slub_debug_orig_size() is off, krealloc() only knows about the bucket
> + * size of an allocation (but not the exact size it was allocated with) and
> + * hence implements the following semantics for shrinking and growing buffers
> + * with __GFP_ZERO.
> *
> * new bucket
> * 0 size size
> * |--------|----------------|
> * | keep | zero |
> *
> + * Otherwise, the original allocation size 'orig_size' could be used to
> + * precisely clear the requested size, and the new size will also be stored
> + * as the new 'orig_size'.
> + *
> * In any case, the contents of the object pointed to are preserved up to the
> * lesser of the new and old sizes.
> *
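
If I read the patch right, with slub_debug_orig_size() enabled the
grow-in-place case then looks roughly like this (made-up sizes, not
taken from the patch):

	void *p = kmalloc(40, GFP_KERNEL);	/* orig_size = 40, 64-byte bucket */

	p = krealloc(p, 56, GFP_KERNEL | __GFP_ZERO);

	/*
	 * The krealloc() above zeroes p[40..55], stores 56 as the new
	 * orig_size, and with SLAB_RED_ZONE repoisons p[56..63] as redzone.
	 */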
> --
> 2.27.0
>