Message-ID: <20220701150451.GA62281@shbuild999.sh.intel.com>
Date: Fri, 1 Jul 2022 23:04:51 +0800
From: Feng Tang <feng.tang@...el.com>
To: Christoph Lameter <cl@...two.de>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
Pekka Enberg <penberg@...nel.org>,
David Rientjes <rientjes@...gle.com>,
Joonsoo Kim <iamjoonsoo.kim@....com>,
Vlastimil Babka <vbabka@...e.cz>,
Roman Gushchin <roman.gushchin@...ux.dev>,
Hyeonggon Yoo <42.hyeyoo@...il.com>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, dave.hansen@...el.com,
Robin Murphy <robin.murphy@....com>,
John Garry <john.garry@...wei.com>
Subject: Re: [PATCH v1] mm/slub: enable debugging memory wasting of kmalloc
Hi Christoph,
On Fri, Jul 01, 2022 at 04:37:00PM +0200, Christoph Lameter wrote:
> On Fri, 1 Jul 2022, Feng Tang wrote:
>
> > static void *__slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
> > - unsigned long addr, struct kmem_cache_cpu *c)
> > + unsigned long addr, struct kmem_cache_cpu *c, unsigned int orig_size)
> > {
>
> It would be good to avoid expanding the basic slab handling functions for
> kmalloc. Can we restrict the mods to the kmalloc related functions?
Yes, this is the part that concerned me too. I tried but haven't
figured out a way.
I started implementing it several months ago, and got stuck hacking
the individual kmalloc APIs, e.g. calling dump_stack() when the waste
is over 1/4 of the object_size of the kmalloc_caches[][].
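
Roughly this kind of hack, just as a sketch (the helper name and the
exact threshold check here are made up for illustration):

/* sketch: yell when over 1/4 of the object is wasted */
static __always_inline void check_kmalloc_waste(struct kmem_cache *s,
						size_t orig_size)
{
	if (s->object_size - orig_size > s->object_size / 4) {
		pr_warn("kmalloc waste: requested %zu, object_size %u\n",
			orig_size, s->object_size);
		dump_stack();
	}
}
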
Then I found one central API which has all the needed info (object_size &
orig_size) where we can yell about the waste:
static __always_inline void *slab_alloc_node(struct kmem_cache *s, struct list_lru *lru,
gfp_t gfpflags, int node, unsigned long addr, size_t orig_size)
which I thought could still be hacky, as it can't reuse the existing
'alloc_traces', which already has the count/call-stack info. The
current solution leverages it at the cost of adding 'orig_size'
parameters, but I don't know how else to pass the 'waste' info
through, as track/location is at the lowest level.
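
To illustrate the "lowest level" problem: the waste is known in
slab_alloc_node(), but the recording happens in add_location() deep
in the debug path, so 'struct location' would need something like
(the 'waste' field name is just made up here):

struct location {
	unsigned long count;
	unsigned long addr;
	unsigned long waste;	/* made-up: bytes wasted at this call site */
	/* existing time/pid/cpu/node stats unchanged */
};

and every function in between would need the extra parameter threaded
through.
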
Thanks,
Feng