Message-ID: <3fc8410c.46bd.193dd3348a2.Coremail.00107082@163.com>
Date: Thu, 19 Dec 2024 12:35:45 +0800 (CST)
From: "David Wang" <00107082@....com>
To: "Zhenhua Huang" <quic_zhenhuah@...cinc.com>
Cc: "Suren Baghdasaryan" <surenb@...gle.com>, kent.overstreet@...ux.dev, 
	akpm@...ux-foundation.org, linux-mm@...ck.org, 
	linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2] lib/alloc_tag: Add accumulative call counter for
 memory allocation profiling



At 2024-12-19 12:06:07, "Zhenhua Huang" <quic_zhenhuah@...cinc.com> wrote:
>
>
>On 2024/12/19 10:31, David Wang wrote:
>> Hi,
>> At 2024-12-19 02:22:53, "Suren Baghdasaryan" <surenb@...gle.com> wrote:
>>> On Wed, Dec 18, 2024 at 4:49 AM David Wang <00107082@....com> wrote:
>>>>
>>>> Hi,
>>>>
>>>> I found another usage/benefit for accumulative counters:
>>>>
>>>> On my system, /proc/allocinfo yields about 5065 lines, of which about 2/3 have an accumulative counter of *0*,
>>>> meaning no memory activity. (right?)
>>>> It is quite wasteful to keep those items which are *not alive yet*.
>>>> With additional changes, only 1684 lines in /proc/allocinfo on my system:
>>>>
>>>> --- a/lib/alloc_tag.c
>>>> +++ b/lib/alloc_tag.c
>>>> @@ -95,8 +95,11 @@ static void alloc_tag_to_text(struct seq_buf *out, struct codetag *ct)
>>>>          struct alloc_tag_counters counter = alloc_tag_read(tag);
>>>>          s64 bytes = counter.bytes;
>>>>
>>>> +       if (counter.accu_calls == 0)
>>>> +               return;
>>>>          seq_buf_printf(out, "%12lli %8llu ", bytes, counter.calls);
>>>>
>>>>
>>>> I think this is quite an improvement worth pursuing.
>>>> (counter.calls could also be used to filter out "inactive" items, but
>>>> lines that keep disappearing and reappearing can confuse monitoring systems.)
>>>
>>> Please see discussion at
>>> https://lore.kernel.org/all/20241211085616.2471901-1-quic_zhenhuah@quicinc.com/
>> 
>> Thanks for the information.
>> 
>>> My point is that with this change we lose information which can be
>>> useful. For example if I want to analyze all the places in the kernel
>>> where memory can be potentially allocated, your change would prevent
>>> me from doing that
>> 
>> Maybe the filter can be disabled when DEBUG is on?
>> 
>>> No, I disagree. Allocation that was never invoked is not the same as
>>> no allocation at all. How would we know the difference if we filter
>>> out the empty ones?
>> 
>> Totally agree with this. I think (bytes || counter.calls) does not make a good filter; the accumulative counter is the answer. :)
>
>hmm... it really depends on the use case IMHO. If memory consumption is 
>a concern, using counter.calls should suffice. However, for 
>performance-related scenarios as you stated, it's definitely better to 
>use an accumulative counter.
>Neither of these addresses Suren's comment: "if I want to analyze all 
>the places in the kernel where memory can be potentially allocated, your 
>change would prevent me from doing that", though.

Oh, as I mentioned above, maybe CONFIG_MEM_ALLOC_PROFILING_DEBUG can help:
when DEBUG is on, nothing is filtered.


David
