Message-ID: <aNVpFn9W0jYYr9vs@tiehlicka>
Date: Thu, 25 Sep 2025 18:08:54 +0200
From: Michal Hocko <mhocko@...e.com>
To: Mauricio Faria de Oliveira <mfo@...lia.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
Vlastimil Babka <vbabka@...e.cz>,
Oscar Salvador <osalvador@...e.de>,
Suren Baghdasaryan <surenb@...gle.com>,
Brendan Jackman <jackmanb@...gle.com>,
Johannes Weiner <hannes@...xchg.org>, Zi Yan <ziy@...dia.com>,
linux-mm@...ck.org, linux-kernel@...r.kernel.org,
kernel-dev@...lia.com
Subject: Re: [PATCH 0/3] mm/page_owner: add options 'print_handle' and
'print_stack' for 'show_stacks'
On Wed 24-09-25 14:40:20, Mauricio Faria de Oliveira wrote:
> Problem:
>
> The use case of monitoring the memory usage per stack trace (or tracking
> a particular stack trace) requires uniquely identifying each stack trace.
>
> This has to be done for every stack trace on every sample of monitoring,
> even if tracking only one stack trace (to identify it among all others).
>
> Therefore, an approach like, for example, hashing the stack traces from
> 'show_stacks' for calculating unique identifiers can become expensive.
>
> Solution:
>
> Fortunately, the kernel can provide a unique identifier for stack traces
> in page_owner, which is the handle number in stackdepot.
>
> Additionally, with that information, the stack traces themselves are not
> needed until the time when the memory usage should be associated with a
> stack trace (say, to look at a few top consumers), using handle numbers.
>
> This eliminates hashing and reduces filtering related to stack traces in
> userspace, and reduces text emitted/copied by the kernel.
Let's see if I understand this correctly. You are suggesting trimming
down the output to effectively a (key, value) pair, and only resolving
the key once per debugging session, because keys do not change and you
do not need the full stack trace that maps to each key. Correct?
Could you elaborate some more on why the performance really matters here?
--
Michal Hocko
SUSE Labs