Message-Id: <20220629193006.77e9f071a5940e882c459cdd@linux-foundation.org>
Date: Wed, 29 Jun 2022 19:30:06 -0700
From: Andrew Morton <akpm@...ux-foundation.org>
To: Feng Tang <feng.tang@...el.com>
Cc: Christoph Lameter <cl@...ux.com>,
Pekka Enberg <penberg@...nel.org>,
David Rientjes <rientjes@...gle.com>,
Joonsoo Kim <iamjoonsoo.kim@....com>,
Vlastimil Babka <vbabka@...e.cz>,
Roman Gushchin <roman.gushchin@...ux.dev>,
Hyeonggon Yoo <42.hyeyoo@...il.com>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, dave.hansen@...el.com,
Joerg Roedel <jroedel@...e.de>,
Robin Murphy <robin.murphy@....com>
Subject: Re: [RFC PATCH] mm/slub: enable debugging memory wasting of kmalloc
On Thu, 30 Jun 2022 09:47:15 +0800 Feng Tang <feng.tang@...el.com> wrote:
> kmalloc's API family is critical for mm, but it has one shortcoming:
> its object sizes are fixed to powers of 2. When a user requests
> '2^n + 1' bytes, 2^(n+1) bytes are actually allocated, so in the
> worst case around 50% of the memory is wasted.
>
> We hit a kernel boot OOM panic, and the dumped slab info showed:
>
> [ 26.062145] kmalloc-2k 814056KB 814056KB
>
> From debugging we found a huge number of 'struct iova_magazine'
> objects, whose size is 1032 bytes (1024 + 8), so each one falls into
> kmalloc-2k and wastes 1016 bytes. Though the issue was solved by
> giving the machine the right (bigger) amount of RAM, it is still
> better to optimize the size (either use a kmalloc-friendly size or
> create a dedicated slab for it).
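
For illustration, here is a minimal sketch of the "dedicated slab" option
mentioned above, assuming the 1032-byte layout quoted (an 8-byte count plus
1024 bytes of pfns). The struct layout, macro and function names are only
illustrative and are not lifted from the in-tree iova code:

#include <linux/init.h>
#include <linux/slab.h>

#define MAG_PFNS 128			/* 128 * 8 + 8 = 1032 bytes */

struct iova_magazine {
	unsigned long size;
	unsigned long pfns[MAG_PFNS];
};

static struct kmem_cache *iova_magazine_cache;

static int __init iova_magazine_cache_init(void)
{
	/*
	 * Objects are packed at ~1032 bytes (plus cache-line alignment)
	 * instead of being rounded up to 2048 by kmalloc-2k.
	 */
	iova_magazine_cache = kmem_cache_create("iova_magazine",
					sizeof(struct iova_magazine), 0,
					SLAB_HWCACHE_ALIGN, NULL);
	return iova_magazine_cache ? 0 : -ENOMEM;
}

static struct iova_magazine *iova_magazine_alloc(gfp_t gfp)
{
	return kmem_cache_alloc(iova_magazine_cache, gfp);
}

static void iova_magazine_free(struct iova_magazine *mag)
{
	kmem_cache_free(iova_magazine_cache, mag);
}
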
Well that's nice, and additional visibility is presumably a good thing.

But what the heck is going on with iova_magazine? Is anyone looking at
moderating its impact?