Message-ID: <cq7537bswpnbsmeiw3rh4ffrgqky4iufsaurukpk2flxts6jcu@6ctttkclvf3f>
Date: Thu, 30 May 2024 14:56:56 -0700
From: Shakeel Butt <shakeel.butt@...ux.dev>
To: Pasha Tatashin <pasha.tatashin@...een.com>
Cc: akpm@...ux-foundation.org, jpoimboe@...nel.org,
kent.overstreet@...ux.dev, peterz@...radead.org, nphamcs@...il.com,
cerasuolodomenico@...il.com, surenb@...gle.com, lizhijian@...itsu.com, willy@...radead.org,
vbabka@...e.cz, ziy@...dia.com, linux-kernel@...r.kernel.org, linux-mm@...ck.org
Subject: Re: [PATCH v3] vmstat: Kernel stack usage histogram
Hi Pasha,
On Thu, May 30, 2024 at 05:02:59PM GMT, Pasha Tatashin wrote:
> Provide a kernel stack usage histogram to aid in optimizing kernel stack
> sizes and minimizing memory waste in large-scale environments. The
> histogram divides stack usage into power-of-two buckets and reports the
> results in /proc/vmstat. This information is especially valuable in
> environments with millions of machines, where even small optimizations
> can have a significant impact.
>
> The histogram data is presented in /proc/vmstat with entries like
> "kstack_1k", "kstack_2k", and so on, indicating the number of threads
> that exited with stack usage falling within each respective bucket.
>
> Example outputs:
> Intel:
> $ grep kstack /proc/vmstat
> kstack_1k 3
> kstack_2k 188
> kstack_4k 11391
> kstack_8k 243
> kstack_16k 0
>
> ARM with 64K page_size:
> $ grep kstack /proc/vmstat
> kstack_1k 1
> kstack_2k 340
> kstack_4k 25212
> kstack_8k 1659
> kstack_16k 0
> kstack_32k 0
> kstack_64k 0
>
> Signed-off-by: Pasha Tatashin <pasha.tatashin@...een.com>
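
To make sure I'm reading the bucketing right, here is a tiny userspace
sketch of what I picture happening for each exiting thread. The names
(kstack_bucket(), the 1K base size) are mine, not taken from your
patch; it is only an illustration of the power-of-two bucketing
described above.

/*
 * Illustrative only: the helper and bucket naming below are made up
 * for this sketch, they are not the code from the patch.
 */
#include <stdio.h>

/* Bucket 0 covers usage up to 1K, bucket 1 up to 2K, bucket 2 up to 4K, ... */
static int kstack_bucket(unsigned long used_bytes)
{
	int bucket = 0;
	unsigned long size = 1024;

	while (used_bytes > size) {
		size <<= 1;
		bucket++;
	}
	return bucket;
}

int main(void)
{
	/* A thread that touched 3000 bytes of its stack... */
	unsigned long used = 3000;
	int b = kstack_bucket(used);

	/* ...lands in the "kstack_4k" bucket (2048 < 3000 <= 4096). */
	printf("used=%lu -> kstack_%luk\n", used, 1UL << b);
	return 0;
}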
Couple of questions:
1. In the future, with your on-demand kstack allocation feature, will
these metrics still be useful? (I think so, but I want to know your take.)
2. With on-demand kstack allocation, stack_not_used() needs to be
changed so that it does not itself cause the allocation, right?
(See the sketch after the questions for what I mean.)
3. Does the histogram get updated only on exit? What about long-running
kernel threads that will never exit?
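
On question 2, what I have in mind is roughly this (simplified and from
memory, so take the details with a grain of salt rather than as the
actual stack_not_used() implementation): the scan walks up from the
normally-untouched end of the stack until it hits the first non-zero
word, so with on-demand stacks every word read from a not-yet-allocated
page would fault that page in.

/*
 * Simplified userspace model of that scan; the layout and names are
 * illustrative, not the kernel's actual implementation.
 */
#include <stdio.h>
#include <string.h>

#define STACK_BYTES (16 * 1024)

static unsigned long fake_stack[STACK_BYTES / sizeof(unsigned long)];

/* Scan from the deep (normally untouched) end toward the used end. */
static unsigned long stack_not_used_model(void)
{
	unsigned long *n = fake_stack;

	/* With on-demand stacks, every read here could fault a page in. */
	while (!*n)
		n++;
	return (unsigned long)((char *)n - (char *)fake_stack);
}

int main(void)
{
	/*
	 * The stack grows down, so a thread that used 3000 bytes has
	 * dirtied the high 3000 bytes of the region.
	 */
	memset((char *)fake_stack + STACK_BYTES - 3000, 0xaa, 3000);

	printf("not used: %lu bytes\n", stack_not_used_model());
	return 0;
}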
thanks,
Shakeel