Message-ID: <cvrr3u7n424dhroqi7essjm53kqrqjomatly2b7us4b6rymcox@3ttbatss6ypy>
Date: Wed, 18 Jun 2025 18:16:26 -0400
From: Kent Overstreet <kent.overstreet@...ux.dev>
To: Casey Chen <cachen@...estorage.com>
Cc: akpm@...ux-foundation.org, surenb@...gle.com, corbet@....net,
dennis@...nel.org, tj@...nel.org, cl@...two.org, vbabka@...e.cz, mhocko@...e.com,
jackmanb@...gle.com, hannes@...xchg.org, ziy@...dia.com, rientjes@...gle.com,
roman.gushchin@...ux.dev, harry.yoo@...cle.com, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, linux-doc@...r.kernel.org, yzhong@...estorage.com
Subject: Re: [PATCH] alloc_tag: add per-NUMA node stats

On Tue, Jun 10, 2025 at 05:30:53PM -0600, Casey Chen wrote:
> Add support for tracking per-NUMA node statistics in /proc/allocinfo.
> Previously, each alloc_tag had a single set of counters (bytes and
> calls), aggregated across all CPUs. With this change, each CPU can
> maintain separate counters for each NUMA node, allowing finer-grained
> memory allocation profiling.
>
> This feature is controlled by the new
> CONFIG_MEM_ALLOC_PROFILING_PER_NUMA_STATS option:
>
> * When enabled (=y), the output includes per-node statistics following
> the total bytes/calls:
>
>   <size> <calls> <tag info>
>   ...
>   315456 9858 mm/dmapool.c:338 func:pool_alloc_page
>       nid0 94912 2966
>       nid1 220544 6892
>   7680 60 mm/dmapool.c:254 func:dma_pool_create
>       nid0 4224 33
>       nid1 3456 27
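
For context, the layout the changelog above describes is roughly one
bytes/calls pair per NUMA node, replicated per CPU and summed at read
time. Below is a minimal C sketch of that shape only, not the actual
patch; the names alloc_tag_numa_counters and tag_account_alloc are
invented here, and preemption handling and the /proc/allocinfo read
side are glossed over.

#include <linux/numa.h>		/* MAX_NUMNODES */
#include <linux/percpu.h>	/* __percpu, raw_cpu_ptr() */
#include <linux/types.h>	/* u64 */

struct alloc_tag_counters {
	u64 bytes;
	u64 calls;
};

/* Hypothetical per-CPU layout: one bytes/calls pair per node. */
struct alloc_tag_numa_counters {
	struct alloc_tag_counters node[MAX_NUMNODES];
};

static inline void tag_account_alloc(struct alloc_tag_numa_counters __percpu *pcpu,
				     int nid, size_t bytes)
{
	/* Simplified: real code would use this_cpu_add() or disable preemption. */
	struct alloc_tag_numa_counters *c = raw_cpu_ptr(pcpu);

	c->node[nid].bytes += bytes;
	c->node[nid].calls++;
}

The reader would then sum node[nid] across CPUs for each "nidN" line,
and sum everything for the existing totals line.
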

I just received a report of memory reclaim issues where it seems DMA32
is stuffed full.

So naturally, instrumenting to see what's consuming DMA32 is going to be
the first thing to do, which made me think of your patchset.

I wonder if we should think about something a bit more general, so it's
easy to break out accounting different ways depending on what we want to
debug.
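
One way to picture "more general" (purely a sketch, none of this exists
in the tree, and the names are invented) would be to key the per-tag
counter array off a pluggable bucket function rather than hard-coding
the node id, so the same machinery could split stats by node, by zone
(e.g. to see what is filling DMA32), or by whatever dimension is being
debugged:

#include <linux/mm.h>	/* page_to_nid(), page_zonenum() */

enum alloc_breakdown {
	ALLOC_BREAKDOWN_NONE,	/* totals only, as today */
	ALLOC_BREAKDOWN_NODE,	/* bucket = NUMA node id */
	ALLOC_BREAKDOWN_ZONE,	/* bucket = zone index, e.g. ZONE_DMA32 */
};

static inline unsigned int alloc_stat_bucket(enum alloc_breakdown mode,
					     const struct page *page)
{
	switch (mode) {
	case ALLOC_BREAKDOWN_NODE:
		return page_to_nid(page);
	case ALLOC_BREAKDOWN_ZONE:
		return page_zonenum(page);
	default:
		return 0;
	}
}

The per-tag counter array would then be sized by the largest bucket
count for the selected mode (MAX_NUMNODES vs. MAX_NR_ZONES), and the
/proc/allocinfo output could label the sub-rows accordingly.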