Message-ID: <213ff7d2.7c6c.1945eb0c2ff.Coremail.00107082@163.com>
Date: Mon, 13 Jan 2025 16:03:50 +0800 (CST)
From: "David Wang" <00107082@....com>
To: "Suren Baghdasaryan" <surenb@...gle.com>, kent.overstreet@...ux.dev
Cc: "Hao Ge" <hao.ge@...ux.dev>, akpm@...ux-foundation.org, 
	linux-kernel@...r.kernel.org, linux-mm@...ck.org, 
	"Hao Ge" <gehao@...inos.cn>, "Alessio Balsini" <balsini@...gle.com>, 
	"Pasha Tatashin" <tatashin@...gle.com>, 
	"Sourav Panda" <souravpanda@...gle.com>
Subject: memory alloc profiling seems not to work properly during bootup?

Hi, 

A further update:

When I boot up my system, no alloc_percpu allocations were accounted for in kernel/sched/topology.c:

         996       14 kernel/sched/topology.c:2275 func:__sdt_alloc 80 
         996       14 kernel/sched/topology.c:2266 func:__sdt_alloc 80 
          96        6 kernel/sched/topology.c:2259 func:__sdt_alloc 80 
       12388       24 kernel/sched/topology.c:2252 func:__sdt_alloc 80 
         612        1 kernel/sched/topology.c:1961 func:sched_init_numa 1

And then after a suspend/resume cycle, those alloc_percpu entries show up:

         996       14 kernel/sched/topology.c:2275 func:__sdt_alloc 395 
         996       14 kernel/sched/topology.c:2266 func:__sdt_alloc 395 
          96        6 kernel/sched/topology.c:2259 func:__sdt_alloc 395 
       12388       24 kernel/sched/topology.c:2252 func:__sdt_alloc 395 
           0        0 kernel/sched/topology.c:2242 func:__sdt_alloc 70    <---
           0        0 kernel/sched/topology.c:2238 func:__sdt_alloc 70    <---
           0        0 kernel/sched/topology.c:2234 func:__sdt_alloc 70    <---
           0        0 kernel/sched/topology.c:2230 func:__sdt_alloc 70    <---
         612        1 kernel/sched/topology.c:1961 func:sched_init_numa 1

I have my accumulative-counter patch applied and filter out items whose accumulative counter is 0.
I am almost sure the patch does not cause this accounting issue, but not 100% sure.


It seems to me that, during bootup, some alloc_percpu allocations are not registered.
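
For what it is worth, below is a minimal sketch of how one could diff two /proc/allocinfo
snapshots (say, one saved right after boot and one saved after resume) to spot call sites that
only show up in the second snapshot. The name allocinfo_diff.c is just a placeholder, not the
prometheus tooling mentioned earlier; it assumes each data line starts with
"<bytes> <calls> <file:line> ..." and keys only on the file:line column, so any extra columns
(like my accumulative counter) do not matter.

/*
 * allocinfo_diff.c (hypothetical helper, sketch only): report call sites
 * that appear in a second /proc/allocinfo snapshot but not in the first.
 * Header or otherwise unparsable lines are skipped.
 *
 *   gcc -O2 -o allocinfo_diff allocinfo_diff.c
 *   ./allocinfo_diff boot.allocinfo resume.allocinfo
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct site { char key[128]; };

static struct site *load(const char *path, size_t *n)
{
	FILE *f = fopen(path, "r");
	struct site *v = NULL, *tmp;
	char line[512];
	size_t cnt = 0;

	if (!f) { perror(path); exit(1); }
	while (fgets(line, sizeof(line), f)) {
		unsigned long long bytes, calls;
		char key[128];

		/* key on the "<file>:<line>" column only */
		if (sscanf(line, "%llu %llu %127s", &bytes, &calls, key) != 3)
			continue;
		tmp = realloc(v, (cnt + 1) * sizeof(*v));
		if (!tmp) { perror("realloc"); exit(1); }
		v = tmp;
		snprintf(v[cnt].key, sizeof(v[cnt].key), "%s", key);
		cnt++;
	}
	fclose(f);
	*n = cnt;
	return v;
}

static int contains(const struct site *v, size_t n, const char *key)
{
	for (size_t i = 0; i < n; i++)
		if (!strcmp(v[i].key, key))
			return 1;
	return 0;
}

int main(int argc, char **argv)
{
	struct site *a, *b;
	size_t na, nb;

	if (argc != 3) {
		fprintf(stderr, "usage: %s <old snapshot> <new snapshot>\n", argv[0]);
		return 1;
	}
	a = load(argv[1], &na);
	b = load(argv[2], &nb);
	for (size_t i = 0; i < nb; i++)
		if (!contains(a, na, b[i].key))
			printf("only in %s: %s\n", argv[2], b[i].key);
	free(a);
	free(b);
	return 0;
}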


FYI
David



At 2025-01-12 12:41:10, "David Wang" <00107082@....com> wrote:
>
>
>At 2025-01-11 22:31:36, "David Wang" <00107082@....com> wrote:
>>Hi, 
>>
>>I have been using this feature for a long while, and I believe this memory alloc profiling feature
>>is quite powerful. 
>>
>>But I have been wondering how to make use of this data, specifically:
>>how could anomalies be detected, and what patterns should be defined as anomalous?
>>
>>So far, I have tools collecting this data (via prometheus) and doing basic analysis, e.g. top-k, group-by, or rate.
>>These analyses help me understand my system, but I cannot tell whether anything is abnormal or not.
>>
>>And sometimes I just read through /proc/allocinfo, trying to pick up on something.
>>(Sometimes I get lucky; actually only once, when I found the underflow problem weeks ago.)
>>
>>A tool would be more helpful if it could identify anomalies, and we could add more patterns as development goes along.
>>
>>A pattern may be hard to define, especially when it involves context. For example,
>>I happened to notice the following strange thing recently:
>>
>>         896       14 kernel/sched/topology.c:2275 func:__sdt_alloc 1025 
>>         896       14 kernel/sched/topology.c:2266 func:__sdt_alloc 1025 
>>          96        6 kernel/sched/topology.c:2259 func:__sdt_alloc 1025 
>>       12288       24 kernel/sched/topology.c:2252 func:__sdt_alloc 1025    <----- B
>>           0        0 kernel/sched/topology.c:2242 func:__sdt_alloc 210     
>>           0        0 kernel/sched/topology.c:2238 func:__sdt_alloc 210 
>>           0        0 kernel/sched/topology.c:2234 func:__sdt_alloc 210 
>>           0        0 kernel/sched/topology.c:2230 func:__sdt_alloc 210     <----- A
>>Code A
>>2230                 sdd->sd = alloc_percpu(struct sched_domain *);
>>2231                 if (!sdd->sd)
>>2232                         return -ENOMEM;
>>2233 
>>
>>Code B
>>2246                 for_each_cpu(j, cpu_map) {
>>                             ...
>>
>>2251 
>>2252                         sd = kzalloc_node(sizeof(struct sched_domain) + cpumask_size(),
>>2253                                         GFP_KERNEL, cpu_to_node(j));
>>2254                         if (!sd)
>>2255                                 return -ENOMEM;
>>2256 
>>2257                         *per_cpu_ptr(sdd->sd, j) = sd;
>>
>>
>>The address of the memory allocated by 'Code B' is stored in the memory allocated by 'Code A', yet the allocation counter for 'Code A'
>>is *0* while the counter for 'Code B' is not *0*. Something odd happens here: either this is expected and some ownership change happened somewhere,
>>or it is a leak, or it is an accounting problem.
>>
>>If a tool could help identify this kind of pattern, that would be great!
>>
>>
>>Any suggestions on how to proceed with the memory issue in kernel/sched/topology.c mentioned
>>above? Or is it a problem at all?
>>
>
>Update: 
>
>It seems the memory allocated by 'Code B' can have its ownership handed over via claim_allocations():
>1530 /*
>1531  * NULL the sd_data elements we've used to build the sched_domain and
>1532  * sched_group structure so that the subsequent __free_domain_allocs()
>1533  * will not free the data we're using.
>1534  */
>1535 static void claim_allocations(int cpu, struct sched_domain *sd)
>
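>(For context, here is the body as I read it in kernel/sched/topology.c; this is a paraphrased
>sketch and details may differ across kernel versions. It NULLs the per-cpu sd_data slots whose
>objects are now owned by the built topology -- sdd->sd unconditionally, the others only when
>their refcount shows they are in use -- so that __free_domain_allocs() will not free them.)
>
>static void claim_allocations(int cpu, struct sched_domain *sd)
>{
>	struct sd_data *sdd = sd->private;
>
>	WARN_ON_ONCE(*per_cpu_ptr(sdd->sd, cpu) != sd);
>	*per_cpu_ptr(sdd->sd, cpu) = NULL;
>
>	if (atomic_read(&(*per_cpu_ptr(sdd->sds, cpu))->ref))
>		*per_cpu_ptr(sdd->sds, cpu) = NULL;
>
>	if (atomic_read(&(*per_cpu_ptr(sdd->sg, cpu))->ref))
>		*per_cpu_ptr(sdd->sg, cpu) = NULL;
>
>	if (atomic_read(&(*per_cpu_ptr(sdd->sgc, cpu))->ref))
>		*per_cpu_ptr(sdd->sgc, cpu) = NULL;
>}
>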
>So most likely, this is neither a leak nor an accounting issue. False alarm, sorry....
>
>The reason I brought this up is that the profiling data is rich, but a user who is not familiar
>with the code details cannot make much out of it. If a tool could tell whether the system is drifting somewhere,
>like a health check based on the profiling data, that would be quite helpful.
>
>Thanks
>David 
> 
>
