Message-ID: <254a4857.b2b.19458d0dbc2.Coremail.00107082@163.com>
Date: Sun, 12 Jan 2025 12:41:10 +0800 (CST)
From: "David Wang" <00107082@....com>
To: "Suren Baghdasaryan" <surenb@...gle.com>, kent.overstreet@...ux.dev
Cc: "Hao Ge" <hao.ge@...ux.dev>, akpm@...ux-foundation.org,
linux-kernel@...r.kernel.org, linux-mm@...ck.org,
"Hao Ge" <gehao@...inos.cn>, "Alessio Balsini" <balsini@...gle.com>,
"Pasha Tatashin" <tatashin@...gle.com>,
"Sourav Panda" <souravpanda@...gle.com>
Subject: Re: [PATCH] tools/mm: Introduce a tool to handle entries in
allocinfo
At 2025-01-11 22:31:36, "David Wang" <00107082@....com> wrote:
>Hi,
>
>I have been using this feature for a long while, and I believe this memory allocation profiling feature
>is quite powerful.
>
>But I have been wondering how to use this data, specifically:
>how anomalies could be detected, and what patterns should be defined as anomalous?
>
>So far, I have tools collecting this data (via prometheus) and doing basic analyses, e.g. top-k, group-by, or rate.
>Those analyses help me understand my system, but I cannot tell whether its behavior is abnormal or not.
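>
>For reference, a minimal sketch (in Python; my own throwaway script, not the tool from this patch) of the kind of
>top-k analysis I run, assuming each /proc/allocinfo data line looks like "<bytes> <calls> <file>:<line> func:<name>":
>
>    #!/usr/bin/env python3
>    # Minimal sketch: print the top-k /proc/allocinfo call sites by live bytes.
>    import sys
>
>    def parse_allocinfo(path="/proc/allocinfo"):
>        entries = []
>        with open(path) as f:
>            for line in f:
>                parts = line.split()
>                if len(parts) < 3 or not parts[0].isdigit():
>                    continue  # skip the header and malformed lines
>                entries.append((int(parts[0]), int(parts[1]), " ".join(parts[2:])))
>        return entries
>
>    if __name__ == "__main__":
>        k = int(sys.argv[1]) if len(sys.argv) > 1 else 10
>        for nbytes, calls, tag in sorted(parse_allocinfo(), reverse=True)[:k]:
>            print(f"{nbytes:>12} {calls:>8}  {tag}")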
>
>And sometimes I would just read through /proc/allocinfo, trying to pick up something.
>(Sometimes I get lucky; actually only once, when I found the underflow problem weeks ago.)
>
>A tool would be more helpful if it could identify anomalies, and we could add more patterns as development goes along.
>
>A pattern may be hard to define, especially when it involves context. For example,
>I happened to notice the following strange thing recently:
>
> 896 14 kernel/sched/topology.c:2275 func:__sdt_alloc 1025
> 896 14 kernel/sched/topology.c:2266 func:__sdt_alloc 1025
> 96 6 kernel/sched/topology.c:2259 func:__sdt_alloc 1025
> 12288 24 kernel/sched/topology.c:2252 func:__sdt_alloc 1025 <----- B
> 0 0 kernel/sched/topology.c:2242 func:__sdt_alloc 210
> 0 0 kernel/sched/topology.c:2238 func:__sdt_alloc 210
> 0 0 kernel/sched/topology.c:2234 func:__sdt_alloc 210
> 0 0 kernel/sched/topology.c:2230 func:__sdt_alloc 210 <----- A
>Code A
>2230 sdd->sd = alloc_percpu(struct sched_domain *);
>2231 if (!sdd->sd)
>2232 return -ENOMEM;
>2233
>
>Code B
>2246 for_each_cpu(j, cpu_map) {
> ...
>
>2251
>2252 sd = kzalloc_node(sizeof(struct sched_domain) + cpumask_size(),
>2253 GFP_KERNEL, cpu_to_node(j));
>2254 if (!sd)
>2255 return -ENOMEM;
>2256
>2257 *per_cpu_ptr(sdd->sd, j) = sd;
>
>
>The addresses of the memory allocated by 'Code B' are stored in the percpu memory allocated by 'Code A', yet the
>allocation counter for 'Code A' is *0* while the counter for 'Code B' is not. Something odd happens here: either
>it is expected and some ownership change happened somewhere, or it is a leak, or it is an accounting problem.
>
>If a tool can help identify this kind of pattern, that would be great!~
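>
>As a rough idea of what such a check could look like, a hypothetical sketch (my own guess at a heuristic, reusing
>the entries parsed by the sketch above): within one function, flag a mix of zero-count call sites sitting next to
>sites that still hold live allocations:
>
>    # Hypothetical heuristic: a zero-count site next to live siblings in the
>    # same function could mean a handover, a leak, or an accounting problem;
>    # a human has to take a closer look either way.
>    from collections import defaultdict
>
>    def flag_mixed_sites(entries):
>        by_func = defaultdict(list)    # func name -> list of (bytes, calls)
>        for nbytes, calls, tag in entries:
>            func = tag.rsplit("func:", 1)[-1].split()[0]
>            by_func[func].append((nbytes, calls))
>        for func, sites in by_func.items():
>            zero = sum(1 for _, c in sites if c == 0)
>            live = sum(1 for _, c in sites if c > 0)
>            if zero and live:          # mixed pattern worth a closer look
>                print(f"{func}: {zero} zero-count site(s), {live} live site(s)")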
>
>
>Any suggestions on how to proceed with the memory question in kernel/sched/topology.c mentioned
>above? Or is it a problem at all?
>
Update:
It seems the memory allocated by 'Code B' could be handed over via claim_allocations():
1530 /*
1531 * NULL the sd_data elements we've used to build the sched_domain and
1532 * sched_group structure so that the subsequent __free_domain_allocs()
1533 * will not free the data we're using.
1534 */
1535 static void claim_allocations(int cpu, struct sched_domain *sd)
So most likely, this is neither a leak nor an accounting issue. False alarm, sorry....
The reason I brought this up is that the profiling data is rich, but a user who is not familiar
with the code details cannot make much of it. If a tool could tell whether the system is drifting away somewhere,
like a health check based on the profiling data, it would be quite helpful.
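For example, even something as simple as diffing two snapshots and flagging call sites that grew a lot might
already help (a sketch; the window and threshold are arbitrary numbers of mine):

    #!/usr/bin/env python3
    # Sketch of a drift check: take two /proc/allocinfo snapshots some time
    # apart and report call sites whose live bytes grew past a threshold.
    import time

    GROWTH_THRESHOLD = 1 << 20  # 1 MiB, an arbitrary cutoff for illustration

    def snapshot(path="/proc/allocinfo"):
        snap = {}
        with open(path) as f:
            for line in f:
                parts = line.split()
                if len(parts) < 3 or not parts[0].isdigit():
                    continue  # skip the header and malformed lines
                snap[" ".join(parts[2:])] = int(parts[0])
        return snap

    before = snapshot()
    time.sleep(60)              # observation window, tune to taste
    after = snapshot()

    for tag, nbytes in after.items():
        growth = nbytes - before.get(tag, 0)
        if growth > GROWTH_THRESHOLD:
            print(f"+{growth} bytes  {tag}")
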
Thanks
David