Message-ID: <CAJuCfpECD9Q-eLA+O17FjPmUOBTDxwS3OY0Gxi9rkA-K9YGJAA@mail.gmail.com>
Date: Mon, 13 Jan 2025 13:56:23 -0800
From: Suren Baghdasaryan <surenb@...gle.com>
To: David Wang <00107082@....com>
Cc: kent.overstreet@...ux.dev, Hao Ge <hao.ge@...ux.dev>, akpm@...ux-foundation.org,
linux-kernel@...r.kernel.org, linux-mm@...ck.org, Hao Ge <gehao@...inos.cn>,
Alessio Balsini <balsini@...gle.com>, Pasha Tatashin <tatashin@...gle.com>,
Sourav Panda <souravpanda@...gle.com>
Subject: Re: memory alloc profiling seems not work properly during bootup?
On Mon, Jan 13, 2025 at 12:04 AM David Wang <00107082@....com> wrote:
>
> Hi,
>
> More updates:
>
> When I boot up my system, no alloc_percpu allocations are accounted in kernel/sched/topology.c:
>
> 996 14 kernel/sched/topology.c:2275 func:__sdt_alloc 80
> 996 14 kernel/sched/topology.c:2266 func:__sdt_alloc 80
> 96 6 kernel/sched/topology.c:2259 func:__sdt_alloc 80
> 12388 24 kernel/sched/topology.c:2252 func:__sdt_alloc 80
> 612 1 kernel/sched/topology.c:1961 func:sched_init_numa 1
>
> And then after suspend/resume, those alloc_percpu allocations show up.
>
> 996 14 kernel/sched/topology.c:2275 func:__sdt_alloc 395
> 996 14 kernel/sched/topology.c:2266 func:__sdt_alloc 395
> 96 6 kernel/sched/topology.c:2259 func:__sdt_alloc 395
> 12388 24 kernel/sched/topology.c:2252 func:__sdt_alloc 395
> 0 0 kernel/sched/topology.c:2242 func:__sdt_alloc 70 <---
> 0 0 kernel/sched/topology.c:2238 func:__sdt_alloc 70 <---
> 0 0 kernel/sched/topology.c:2234 func:__sdt_alloc 70 <---
> 0 0 kernel/sched/topology.c:2230 func:__sdt_alloc 70 <---
> 612 1 kernel/sched/topology.c:1961 func:sched_init_numa 1
>
> I have my accumulative counter patch applied, and it filters out items with a zero accumulative counter.
> I am almost sure the patch does not cause this accounting issue, but not 100% sure...
Have you tested this without your accumulative counter patch?
IIUC, that patch filters out any allocation which has never been hit.
So, if the suspend/resume path contains allocations which were never hit
before, then those allocations would suddenly become visible, as in
your case. That's why I'm against filtering allocinfo data in the
kernel. Please try this without your patch and see if the data becomes
more consistent.
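
For illustration only, here is a minimal sketch of the kind of filtering I
mean (hypothetical, not your actual patch), assuming the patch keeps an
extra never-decremented count per allocation site and skips sites where
that count is still zero:

/*
 * Hypothetical sketch, not the actual patch.  Assume each site carries
 * the mainline live counters plus a patch-added, never-decremented
 * "accumulated" count, and the /proc/allocinfo dump hides sites where
 * that count is still zero.
 */
struct site_counters {
	unsigned long long bytes;       /* currently allocated bytes      */
	unsigned long long calls;       /* currently live allocations     */
	unsigned long long accumulated; /* patch-added, never decremented */
};

static int site_visible(const struct site_counters *c)
{
	/* A site first hit on the suspend/resume path stays hidden
	 * until that path has actually run. */
	return c->accumulated != 0;
}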
Thanks,
Suren.
>
>
> It seems to me that during bootup, some alloc_percpu allocations are not registered.
>
>
> FYI
> David
>
>
>
> At 2025-01-12 12:41:10, "David Wang" <00107082@....com> wrote:
> >
> >
> >At 2025-01-11 22:31:36, "David Wang" <00107082@....com> wrote:
> >>Hi,
> >>
> >>I have been using this feature for a long while, and I believe this memory alloc profiling feature
> >>is quite powerful.
> >>
> >>But I have been wondering how to use this data, specifically:
> >>how anomalies could be detected, and what patterns should be treated as anomalies?
> >>
> >>So far, I have tools collecting the data (via Prometheus) and doing basic analysis, e.g. top-k, group-by or rate.
> >>Those analyses help me understand my system, but I cannot tell whether it is abnormal or not.
> >>
> >>And sometimes I would just read through /proc/allocinfo, trying to pick up something.
> >>(Sometimes I get lucky; actually only once, when I found the underflow problem weeks ago.)
> >>
> >>A tool would be more helpful if it could identify anomalies, and we could add more patterns as development goes along.
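> >>
> >>As a starting point, even diffing two /proc/allocinfo snapshots (say,
> >>before and after suspend/resume) already surfaces sites that only become
> >>active later.  A rough userspace sketch, assuming the two leading numeric
> >>columns are bytes and live calls as in the snippets below, and keying
> >>only on the file:line token so extra columns (func name, my accumulative
> >>counter) are ignored:
> >>
> >>/* alloc_diff.c: report allocinfo sites that are live in snapshot B but
> >> * were absent or idle in snapshot A.  Illustration only; the parsing is
> >> * an assumption based on the output format shown in this thread. */
> >>#include <stdio.h>
> >>#include <stdlib.h>
> >>#include <string.h>
> >>
> >>struct site { char key[256]; long long bytes, calls; };
> >>
> >>static size_t load(const char *path, struct site **out)
> >>{
> >>        FILE *f = fopen(path, "r");
> >>        char line[512];
> >>        size_t n = 0, cap = 0;
> >>
> >>        if (!f) { perror(path); exit(1); }
> >>        while (fgets(line, sizeof(line), f)) {
> >>                struct site s;
> >>
> >>                /* Expect "<bytes> <calls> <file>:<line> ..."; header or
> >>                 * unparsable lines are skipped. */
> >>                if (sscanf(line, "%lld %lld %255s",
> >>                           &s.bytes, &s.calls, s.key) != 3)
> >>                        continue;
> >>                if (n == cap) {
> >>                        struct site *tmp;
> >>
> >>                        cap = cap ? cap * 2 : 256;
> >>                        tmp = realloc(*out, cap * sizeof(**out));
> >>                        if (!tmp) { perror("realloc"); exit(1); }
> >>                        *out = tmp;
> >>                }
> >>                (*out)[n++] = s;
> >>        }
> >>        fclose(f);
> >>        return n;
> >>}
> >>
> >>static long long calls_in(struct site *v, size_t n, const char *key)
> >>{
> >>        for (size_t i = 0; i < n; i++)
> >>                if (!strcmp(v[i].key, key))
> >>                        return v[i].calls;
> >>        return 0;       /* absent is treated the same as never hit */
> >>}
> >>
> >>int main(int argc, char **argv)
> >>{
> >>        struct site *a = NULL, *b = NULL;
> >>        size_t na, nb;
> >>
> >>        if (argc != 3) {
> >>                fprintf(stderr, "usage: %s snapA snapB\n", argv[0]);
> >>                return 1;
> >>        }
> >>        na = load(argv[1], &a);
> >>        nb = load(argv[2], &b);
> >>        for (size_t i = 0; i < nb; i++)
> >>                if (b[i].calls != 0 && calls_in(a, na, b[i].key) == 0)
> >>                        printf("newly active: %s (calls=%lld)\n",
> >>                               b[i].key, b[i].calls);
> >>        free(a);
> >>        free(b);
> >>        return 0;
> >>}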
> >>
> >>A pattern may be hard to define, especially when it involves context. For example,
> >>I happened to notice the following strange thing recently:
> >>
> >> 896 14 kernel/sched/topology.c:2275 func:__sdt_alloc 1025
> >> 896 14 kernel/sched/topology.c:2266 func:__sdt_alloc 1025
> >> 96 6 kernel/sched/topology.c:2259 func:__sdt_alloc 1025
> >> 12288 24 kernel/sched/topology.c:2252 func:__sdt_alloc 1025 <----- B
> >> 0 0 kernel/sched/topology.c:2242 func:__sdt_alloc 210
> >> 0 0 kernel/sched/topology.c:2238 func:__sdt_alloc 210
> >> 0 0 kernel/sched/topology.c:2234 func:__sdt_alloc 210
> >> 0 0 kernel/sched/topology.c:2230 func:__sdt_alloc 210 <----- A
> >>Code A
> >>2230         sdd->sd = alloc_percpu(struct sched_domain *);
> >>2231         if (!sdd->sd)
> >>2232                 return -ENOMEM;
> >>2233
> >>
> >>Code B
> >>2246 for_each_cpu(j, cpu_map) {
> >> ...
> >>
> >>2251
> >>2252         sd = kzalloc_node(sizeof(struct sched_domain) + cpumask_size(),
> >>2253                           GFP_KERNEL, cpu_to_node(j));
> >>2254         if (!sd)
> >>2255                 return -ENOMEM;
> >>2256
> >>2257         *per_cpu_ptr(sdd->sd, j) = sd;
> >>
> >>
> >>The addresses of the memory allocated by 'Code B' are stored in the percpu memory from 'Code A', yet the allocation counter for 'Code A'
> >>is *0*, while 'Code B' is not *0*. Something odd is happening here: either it is expected and some ownership change happened somewhere,
> >>or it is a leak, or it is an accounting problem.
> >>
> >>If a tool could help identify this kind of pattern, that would be great!
> >>
> >>
> >>Any suggestions on how to proceed with the memory problem in kernel/sched/topology.c mentioned
> >>above? Or is it a problem at all?
> >>
> >
> >Update:
> >
> >It seems the memory allocated by 'Code B' could be handed over via claim_allocations:
> >1530 /*
> >1531  * NULL the sd_data elements we've used to build the sched_domain and
> >1532  * sched_group structure so that the subsequent __free_domain_allocs()
> >1533  * will not free the data we're using.
> >1534  */
> >1535 static void claim_allocations(int cpu, struct sched_domain *sd)
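> >
> >A rough sketch of the handover, trimmed to the sdd->sd slot (so not
> >verbatim kernel source): the pointer that __sdt_alloc() stored in the
> >per-CPU slot is NULLed here, and since the later __free_domain_allocs()
> >leaves NULLed slots alone, ownership of the kzalloc'ed sched_domain
> >moves to the live topology:
> >
> >static void claim_allocations(int cpu, struct sched_domain *sd)
> >{
> >        struct sd_data *sdd = sd->private;
> >
> >        /* __sdt_alloc() put this sched_domain into the per-CPU slot;
> >         * clearing the slot keeps __free_domain_allocs() from freeing
> >         * it, i.e. the allocation has been handed over. */
> >        WARN_ON_ONCE(*per_cpu_ptr(sdd->sd, cpu) != sd);
> >        *per_cpu_ptr(sdd->sd, cpu) = NULL;
> >}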
> >
> >So most likely, this is neither a leak nor an accounting issue. False alarm, sorry...
> >
> >The reason I brought this up is that the profiling data is rich, but a user who is not familiar
> >with the code details cannot make much out of it. If a tool could tell whether the system is drifting somewhere,
> >like a health check based on profiling data, it would be quite helpful.
> >
> >Thanks
> >David
> >
> >