Message-ID: <CANpmjNMjgSKommNCrfyFuaz+3HQdW92ZSF_p26LQdmS0o3L98Q@mail.gmail.com>
Date: Wed, 23 Feb 2022 20:06:02 +0100
From: Marco Elver <elver@...gle.com>
To: Vlastimil Babka <vbabka@...e.cz>
Cc: Hyeonggon Yoo <42.hyeyoo@...il.com>, linux-mm@...ck.org,
Roman Gushchin <guro@...com>,
Andrew Morton <akpm@...ux-foundation.org>,
linux-kernel@...r.kernel.org, Joonsoo Kim <iamjoonsoo.kim@....com>,
David Rientjes <rientjes@...gle.com>,
Christoph Lameter <cl@...ux.com>,
Pekka Enberg <penberg@...nel.org>,
Kees Cook <keescook@...omium.org>,
kasan-dev <kasan-dev@...glegroups.com>,
Andrey Konovalov <andreyknvl@...il.com>
Subject: Re: [PATCH 1/5] mm/sl[au]b: Unify __ksize()
On Wed, 23 Feb 2022 at 19:39, Vlastimil Babka <vbabka@...e.cz> wrote:
> On 2/21/22 11:53, Hyeonggon Yoo wrote:
> > Only SLOB needs to implement __ksize() separately, because SLOB records
> > the size in the object header for kmalloc objects. Unify SLAB/SLUB's __ksize().
> >
> > Signed-off-by: Hyeonggon Yoo <42.hyeyoo@...il.com>
> > ---
> > mm/slab.c | 23 -----------------------
> > mm/slab_common.c | 29 +++++++++++++++++++++++++++++
> > mm/slub.c | 16 ----------------
> > 3 files changed, 29 insertions(+), 39 deletions(-)
> >
> > diff --git a/mm/slab.c b/mm/slab.c
> > index ddf5737c63d9..eb73d2499480 100644
> > --- a/mm/slab.c
> > +++ b/mm/slab.c
> > @@ -4199,27 +4199,4 @@ void __check_heap_object(const void *ptr, unsigned long n,
> > }
> > #endif /* CONFIG_HARDENED_USERCOPY */
> >
> > -/**
> > - * __ksize -- Uninstrumented ksize.
> > - * @objp: pointer to the object
> > - *
> > - * Unlike ksize(), __ksize() is uninstrumented, and does not provide the same
> > - * safety checks as ksize() with KASAN instrumentation enabled.
> > - *
> > - * Return: size of the actual memory used by @objp in bytes
> > - */
> > -size_t __ksize(const void *objp)
> > -{
> > - struct kmem_cache *c;
> > - size_t size;
> >
> > - BUG_ON(!objp);
> > - if (unlikely(objp == ZERO_SIZE_PTR))
> > - return 0;
> > -
> > - c = virt_to_cache(objp);
> > - size = c ? c->object_size : 0;
>
> This comes from commit a64b53780ec3 ("mm/slab: sanity-check page type when
> looking up cache") by Kees, and virt_to_cache() is an implicit check for
> the folio slab flag ...
>
> > -
> > - return size;
> > -}
> > -EXPORT_SYMBOL(__ksize);
> > diff --git a/mm/slab_common.c b/mm/slab_common.c
> > index 23f2ab0713b7..488997db0d97 100644
> > --- a/mm/slab_common.c
> > +++ b/mm/slab_common.c
> > @@ -1245,6 +1245,35 @@ void kfree_sensitive(const void *p)
> > }
> > EXPORT_SYMBOL(kfree_sensitive);
> >
> > +#ifndef CONFIG_SLOB
> > +/**
> > + * __ksize -- Uninstrumented ksize.
> > + * @objp: pointer to the object
> > + *
> > + * Unlike ksize(), __ksize() is uninstrumented, and does not provide the same
> > + * safety checks as ksize() with KASAN instrumentation enabled.
> > + *
> > + * Return: size of the actual memory used by @objp in bytes
> > + */
> > +size_t __ksize(const void *object)
> > +{
> > + struct folio *folio;
> > +
> > + if (unlikely(object == ZERO_SIZE_PTR))
> > + return 0;
> > +
> > + folio = virt_to_folio(object);
> > +
> > +#ifdef CONFIG_SLUB
> > + if (unlikely(!folio_test_slab(folio)))
> > + return folio_size(folio);
> > +#endif
> > +
> > + return slab_ksize(folio_slab(folio)->slab_cache);
>
> ... and here in the common version, for SLAB you now trust that the folio
> will be a slab folio, thus undoing the intention of that commit. Maybe
> that's not good and we should keep the folio_test_slab() check for both
> cases? Although it's also a bit odd that prior to this patch, SLAB would
> return 0 if the test fails while SLUB would return folio_size() --
> presumably because with SLUB this can be a large kmalloc allocation,
> whereas with SLAB it cannot. So we could keep doing that in the unified
> version, or the KASAN devs (CC'd) could advise something better?
Is this a definitive failure case? My opinion here is that returning 0
from ksize() on failure will a) provide a way to check for the error,
and b) be the more graceful failure mode if the size is used
unconditionally to compute an address (see the comment added in commit
0d4ca4c9bab39 for what happens when memory that KASAN considers invalid
is accessed).
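
For concreteness, a unified version that keeps the folio_test_slab()
check for both allocators and preserves the per-allocator behavior
discussed above might look roughly like the following (an untested
sketch against the patched mm/slab_common.c, not a proposed patch;
IS_ENABLED(CONFIG_SLUB) simply replaces the #ifdef from the patch):

```c
#ifndef CONFIG_SLOB
size_t __ksize(const void *object)
{
	struct folio *folio;

	if (unlikely(object == ZERO_SIZE_PTR))
		return 0;

	folio = virt_to_folio(object);

	if (unlikely(!folio_test_slab(folio))) {
		/*
		 * With SLUB this can be a large kmalloc allocation
		 * backed by a plain compound page, so the folio size
		 * is the usable size.  With SLAB a non-slab folio
		 * means an invalid pointer; return 0 so callers have
		 * a way to detect the error.
		 */
		if (IS_ENABLED(CONFIG_SLUB))
			return folio_size(folio);
		return 0;
	}

	return slab_ksize(folio_slab(folio)->slab_cache);
}
EXPORT_SYMBOL(__ksize);
#endif
```

Whether returning folio_size() for SLUB here is safe in the
invalid-pointer case is exactly the open question above.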