Message-ID: <152dd526.b05a.196b5e7b738.Coremail.00107082@163.com>
Date: Sat, 10 May 2025 00:36:23 +0800 (CST)
From: "David Wang" <00107082@....com>
To: "Suren Baghdasaryan" <surenb@...gle.com>
Cc: kent.overstreet@...ux.dev, akpm@...ux-foundation.org,
	linux-kernel@...r.kernel.org
Subject: Re: [BUG?] Data key in /proc/allocinfo is a multiset


At 2025-05-09 23:56:32, "Suren Baghdasaryan" <surenb@...gle.com> wrote:
>On Thu, May 8, 2025 at 11:10 PM David Wang <00107082@....com> wrote:
>>
>> Just starting a new thread for this [1].
>> There are duplications in /proc/allocinfo where the same [file:line]
>> key shows up several times:
>>
>> =======================
>>            0        0 ./include/crypto/kpp.h:185 func:kpp_request_alloc
>>            0        0 ./include/crypto/kpp.h:185 func:kpp_request_alloc
>> =======================
>>            0        0 ./include/net/tcp.h:2548 func:tcp_v4_save_options
>>            0        0 ./include/net/tcp.h:2548 func:tcp_v4_save_options
>> =======================
>>            0        0 drivers/iommu/amd/../iommu-pages.h:94 func:iommu_alloc_pages_node
>>            0        0 drivers/iommu/amd/../iommu-pages.h:94 func:iommu_alloc_pages_node
>>            0        0 drivers/iommu/amd/../iommu-pages.h:94 func:iommu_alloc_pages_node
>> =======================
>>            0        0 drivers/iommu/intel/../iommu-pages.h:94 func:iommu_alloc_pages_node
>>            0        0 drivers/iommu/intel/../iommu-pages.h:94 func:iommu_alloc_pages_node
>>            0        0 drivers/iommu/intel/../iommu-pages.h:94 func:iommu_alloc_pages_node
>>            0        0 drivers/iommu/intel/../iommu-pages.h:94 func:iommu_alloc_pages_node
>>            0        0 drivers/iommu/intel/../iommu-pages.h:94 func:iommu_alloc_pages_node
>
>Yep, that happens when an inline function allocates memory: it ends
>up inlined at different locations, and each inlined copy gets its own
>tag. Usually that's the case with allocation helper functions.
>To fix this we need to wrap these allocator helpers with alloc_hooks():
>
>-static inline void *iommu_alloc_pages_node(int nid, gfp_t gfp, int order)
>+static inline void *iommu_alloc_pages_node_noprof(int nid, gfp_t gfp, int order)
>{
>-        struct page *page = alloc_pages_node(nid, gfp | __GFP_ZERO, order);
>+        struct page *page = alloc_pages_node_noprof(nid, gfp | __GFP_ZERO, order);
>...
>}
>+#define iommu_alloc_pages_node(...) alloc_hooks(iommu_alloc_pages_node_noprof(__VA_ARGS__))
>
>See commit 2c321f3f70bc ("mm: change inlined allocation helpers to
>account at the call site") for examples of how this was done before.
>Thanks,
>Suren.

Thanks for clarifying this; it seems like never-ending work... >_<|||

>
>> ...
>>
>> The duplication makes parsing tools a little more complicated:
>> the numbers need to be added up, grouped by key (a sketch of this
>> summing follows the sample entries below):
>>        81920       20 drivers/iommu/amd/../iommu-pages.h:94 func:iommu_alloc_pages_node
>>      1441792      352 drivers/iommu/amd/../iommu-pages.h:94 func:iommu_alloc_pages_node
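>>
>> A minimal sketch of that summing step (Python 3; the column layout is
>> assumed from the sample entries above, and non-data lines are skipped):
>> ```
>> #!/bin/env python
>> from collections import defaultdict
>>
>> # Sum the <size> and <calls> columns for each file:line key.
>> totals = defaultdict(lambda: [0, 0])
>> with open("/proc/allocinfo") as f:
>>     for line in f:
>>         parts = line.split()
>>         if len(parts) < 3 or not parts[0].isdigit():
>>             continue  # skip header lines
>>         totals[parts[2]][0] += int(parts[0])  # bytes
>>         totals[parts[2]][1] += int(parts[1])  # calls
>>
>> for key, (size, calls) in sorted(totals.items()):
>>     print(f"{size:>12} {calls:>8} {key}")
>> ```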
>>
>> The script for checking:
>> ```
>> #!/bin/env python
>> def fetch():
>>     groups = {}
>>     with open("/proc/allocinfo") as f:
>>         for line in f:
>>             # Group whole lines by their file:line field.
>>             key = line.strip().split()[2]
>>             groups.setdefault(key, []).append(line)
>>     # Print every key that appears more than once.
>>     for key in sorted(k for k, lines in groups.items() if len(lines) > 1):
>>         print("=======================")
>>         for line in groups[key]:
>>             print(line, end="")
>>
>> fetch()
>> ```
>>
>> Thanks
>> David
>>
>> [1]. https://lore.kernel.org/lkml/531adbba.b537.196b0868a8c.Coremail.00107082@163.com/
>>
