Message-ID: <20220630032315.GB4668@shbuild999.sh.intel.com>
Date: Thu, 30 Jun 2022 11:23:15 +0800
From: Feng Tang <feng.tang@...el.com>
To: Andrew Morton <akpm@...ux-foundation.org>
Cc: Christoph Lameter <cl@...ux.com>,
Pekka Enberg <penberg@...nel.org>,
David Rientjes <rientjes@...gle.com>,
Joonsoo Kim <iamjoonsoo.kim@....com>,
Vlastimil Babka <vbabka@...e.cz>,
Roman Gushchin <roman.gushchin@...ux.dev>,
Hyeonggon Yoo <42.hyeyoo@...il.com>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, dave.hansen@...el.com,
Joerg Roedel <jroedel@...e.de>,
Robin Murphy <robin.murphy@....com>
Subject: Re: [RFC PATCH] mm/slub: enable debugging memory wasting of kmalloc

On Wed, Jun 29, 2022 at 07:47:47PM -0700, Andrew Morton wrote:
> On Thu, 30 Jun 2022 10:38:44 +0800 Feng Tang <feng.tang@...el.com> wrote:
>
> > Hi Andrew,
> >
> > Thanks for the review!
> >
> > On Wed, Jun 29, 2022 at 07:30:06PM -0700, Andrew Morton wrote:
> > > On Thu, 30 Jun 2022 09:47:15 +0800 Feng Tang <feng.tang@...el.com> wrote:
> > >
> > > > The kmalloc API family is critical for mm, but it has one
> > > > shortcoming: its object sizes are fixed to powers of 2. When a
> > > > user requests memory for '2^n + 1' bytes, 2^(n+1) bytes are
> > > > actually allocated, so in the worst case around 50% of the
> > > > memory is wasted.
> > > >
> > > > We've met a kernel boot OOM panic, and from the dumped slab info:
> > > >
> > > > [ 26.062145] kmalloc-2k 814056KB 814056KB
> > > >
> > > > From debugging we found a huge number of 'struct iova_magazine'
> > > > objects, whose size is 1032 bytes (1024 + 8), so each allocation
> > > > wastes 1016 bytes. Though the issue was solved by giving the
> > > > machine the right (bigger) amount of RAM, it is still better to
> > > > optimize the size (either use a kmalloc-friendly size or create
> > > > a dedicated slab for it).
> > >
> > > Well that's nice, and additional visibility is presumably a good thing.
> > >
> > > But what the heck is going on with iova_magazine? Is anyone looking at
> > > moderating its impact?
> >
> > Yes, I have a very simple patch at hand
> >
> > --- a/drivers/iommu/iova.c
> > +++ b/drivers/iommu/iova.c
> > @@ -614,7 +614,7 @@ EXPORT_SYMBOL_GPL(reserve_iova);
> > * dynamic size tuning described in the paper.
> > */
> >
> > -#define IOVA_MAG_SIZE 128
> > +#define IOVA_MAG_SIZE 127
>
> Well OK. Would benefit from a comment explaining the reasoning.

Sure, will try to give the full context.

> But we still have eleventy squillion of these things in flight. Why?

I've checked the waste info right after boot on desktop and server
machines; the waste is generally not severe, and I didn't even find
'iova_magazine' there (possibly because it is virtualization related).

When containers are started to run workloads, more kmalloc calls are
made and the waste grows accordingly.

Another case that can benefit is budget devices with limited memory,
which want to squeeze out the wasted memory.

Thanks,
Feng

> > #define MAX_GLOBAL_MAGS 32 /* magazines per bin */
> >
> > struct iova_magazine {
> >
> > I guess changing it from 128 to 127 will not hurt much, and I plan
> > to send it out soon.
>