Message-ID: <YxRp5uz9KSY0S9id@hyeyoo>
Date: Sun, 4 Sep 2022 18:03:34 +0900
From: Hyeonggon Yoo <42.hyeyoo@...il.com>
To: Feng Tang <feng.tang@...el.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
Vlastimil Babka <vbabka@...e.cz>,
Christoph Lameter <cl@...ux.com>,
Pekka Enberg <penberg@...nel.org>,
David Rientjes <rientjes@...gle.com>,
Joonsoo Kim <iamjoonsoo.kim@....com>,
Roman Gushchin <roman.gushchin@...ux.dev>,
Dmitry Vyukov <dvyukov@...gle.com>,
"Hansen, Dave" <dave.hansen@...el.com>,
"linux-mm@...ck.org" <linux-mm@...ck.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
Robin Murphy <robin.murphy@....com>,
John Garry <john.garry@...wei.com>,
Kefeng Wang <wangkefeng.wang@...wei.com>
Subject: Re: [PATCH v4 1/4] mm/slub: enable debugging memory wasting of
kmalloc
On Fri, Sep 02, 2022 at 02:15:45PM +0800, Feng Tang wrote:
> On Thu, Sep 01, 2022 at 10:01:13PM +0800, Hyeonggon Yoo wrote:
> > On Mon, Aug 29, 2022 at 03:56:15PM +0800, Feng Tang wrote:
> > > The kmalloc() API family is critical for mm, and one of its traits is
> > > that it rounds the request size up to a fixed size class (mostly a
> > > power of 2). When a user requests '2^n + 1' bytes, 2^(n+1) bytes may
> > > actually be allocated, so in the worst case around 50% of the memory
> > > space is wasted.
> > >
[...]
> > >
> > > Signed-off-by: Feng Tang <feng.tang@...el.com>
> > > Cc: Robin Murphy <robin.murphy@....com>
> > > Cc: John Garry <john.garry@...wei.com>
> > > Cc: Kefeng Wang <wangkefeng.wang@...wei.com>
> > > ---
> > > include/linux/slab.h | 2 +
> > > mm/slub.c | 94 +++++++++++++++++++++++++++++++++++++-------
> > > 2 files changed, 81 insertions(+), 15 deletions(-)
> >
> >
> > Would you update Documentation/mm/slub.rst as well?
> > (alloc_traces part)
>
> Sure, will do.
>
> > [...]
> >
> > > */
> > > static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
> > > - unsigned long addr, struct kmem_cache_cpu *c)
> > > + unsigned long addr, struct kmem_cache_cpu *c, unsigned int orig_size)
> > > {
> > > void *freelist;
> > > struct slab *slab;
> > > @@ -3115,6 +3158,7 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
> > >
> > > if (s->flags & SLAB_STORE_USER)
> > > set_track(s, freelist, TRACK_ALLOC, addr);
> > > + set_orig_size(s, freelist, orig_size);
> > >
> > > return freelist;
> > > }
> > > @@ -3140,6 +3184,8 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
> > > */
> > > if (s->flags & SLAB_STORE_USER)
> > > set_track(s, freelist, TRACK_ALLOC, addr);
> > > + set_orig_size(s, freelist, orig_size);
> > > +
> > > return freelist;
> > > }
> >
> >
> > This patch is okay, but with patch 4, init_object() initializes the redzone/poison
> > area using s->object_size, and init_kmalloc_object() then fixes the redzone/poison
> > area up using orig_size. Why not do it in init_object() in the first place?
> >
> > Also, updating the redzone/poison area after alloc_single_from_new_slab()
> > (outside list_lock, after adding the slab to the list) will introduce races with validation.
> >
> > So I think doing set_orig_size()/init_kmalloc_object() in alloc_debug_processing() would make more sense.
>
> Yes, this makes sense, and in v3 the kmalloc redzone/poison setup was
> done in alloc_debug_processing() (through init_object()). When
> rebasing to v4, I met the classic problem: how to pass the 'orig_size'
> parameter :)
>
> In latest 'for-next' branch, one call path for alloc_debug_processing()
> is
> ___slab_alloc
> get_partial
> get_any_partial
> get_partial_node
> alloc_debug_processing
>
> Adding an 'orig_size' parameter to all these functions looks horrible,
> and I couldn't figure out a better way, so I chose to put those ops
> after 'set_track()'.
IMO adding a parameter to them isn't too horrible...
With the current implementation I don't see a better solution than adding a parameter.
(Yeah, the code is quite complicated...)
It won't affect performance to a meaningful degree, as most
allocations will be served from the cpu slab or the percpu partial list.
--
Thanks,
Hyeonggon