Message-ID: <YxCUIM4BWVZD6fnk@hyeyoo>
Date: Thu, 1 Sep 2022 20:14:40 +0900
From: Hyeonggon Yoo <42.hyeyoo@...il.com>
To: Feng Tang <feng.tang@...el.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
Vlastimil Babka <vbabka@...e.cz>,
Christoph Lameter <cl@...ux.com>,
Pekka Enberg <penberg@...nel.org>,
David Rientjes <rientjes@...gle.com>,
Joonsoo Kim <iamjoonsoo.kim@....com>,
Roman Gushchin <roman.gushchin@...ux.dev>,
Dmitry Vyukov <dvyukov@...gle.com>,
"Hansen, Dave" <dave.hansen@...el.com>,
"linux-mm@...ck.org" <linux-mm@...ck.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
Robin Murphy <robin.murphy@....com>,
John Garry <john.garry@...wei.com>,
Kefeng Wang <wangkefeng.wang@...wei.com>
Subject: Re: [PATCH v4 1/4] mm/slub: enable debugging memory wasting of
kmalloc
On Thu, Sep 01, 2022 at 01:04:58PM +0800, Feng Tang wrote:
> On Wed, Aug 31, 2022 at 10:52:15PM +0800, Hyeonggon Yoo wrote:
> > On Mon, Aug 29, 2022 at 03:56:15PM +0800, Feng Tang wrote:
> > > kmalloc's API family is critical for mm, and one of its characteristics
> > > is that it rounds the request size up to a fixed one (mostly a power
> > > of 2). Say a user requests memory for '2^n + 1' bytes; then 2^(n+1)
> > > bytes could actually be allocated, so in the worst case around 50%
> > > of the memory space is wasted.
> > >
> >
> > [...]
> >
> > > static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
> > > - unsigned long addr, struct kmem_cache_cpu *c)
> > > + unsigned long addr, struct kmem_cache_cpu *c, unsigned int orig_size)
> > > {
> > > void *freelist;
> > > struct slab *slab;
> > > @@ -3115,6 +3158,7 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
> > >
> > > if (s->flags & SLAB_STORE_USER)
> > > set_track(s, freelist, TRACK_ALLOC, addr);
> > > + set_orig_size(s, freelist, orig_size);
> > >
> > > return freelist;
> > > }
> > > @@ -3140,6 +3184,8 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
> > > */
> > > if (s->flags & SLAB_STORE_USER)
> > > set_track(s, freelist, TRACK_ALLOC, addr);
> > > + set_orig_size(s, freelist, orig_size);
> > > +
> > > return freelist;
> > > }
> >
> > Maybe we can move set_track() and set_orig_size() to after slab_post_alloc_hook().
> > something like alloc/free hooks for debugging caches? (and drop orig_size parameter.)
>
> Yep, we discussed this during v3 review
> https://lore.kernel.org/lkml/442d2b9c-9f07-8954-b90e-b4a9f8b64303@intel.com/
Ah, I missed that :) Thanks!
Considering the added cost (which should be low) and the races with
validation, I think this approach would cost more than it gains. Sorry
for the noise.

p.s. I think I can review this series in a few days.
Thanks for your efforts!
> Will revisit this considering recent refactoring and the following
> kmalloc data redzone patches.
> Thanks,
> Feng
>
> > Thanks!
--
Thanks,
Hyeonggon