Message-ID: <8b10446a-16bc-4967-b03c-7be577671dab@suse.cz>
Date: Tue, 26 Nov 2024 08:32:25 +0100
From: Vlastimil Babka <vbabka@...e.cz>
To: Shakeel Butt <shakeel.butt@...ux.dev>,
Andrew Morton <akpm@...ux-foundation.org>
Cc: Johannes Weiner <hannes@...xchg.org>, Michal Hocko <mhocko@...nel.org>,
Roman Gushchin <roman.gushchin@...ux.dev>,
Muchun Song <muchun.song@...ux.dev>,
Axel Rasmussen <axelrasmussen@...gle.com>,
Steven Rostedt <rostedt@...dmis.org>, Suren Baghdasaryan
<surenb@...gle.com>, Matthew Wilcox <willy@...radead.org>,
linux-mm@...ck.org, cgroups@...r.kernel.org, linux-kernel@...r.kernel.org,
Meta kernel team <kernel-team@...a.com>
Subject: Re: [PATCH v2] mm: mmap_lock: optimize mmap_lock tracepoints
On 11/25/24 18:16, Shakeel Butt wrote:
> We are starting to deploy mmap_lock tracepoint monitoring across our
> fleet, and early results show that these tracepoints consume a
> significant amount of CPU in kernfs_path_from_node when enabled.
>
> The kernel resolves the cgroup path in the fast path of the locking
> code when the tracepoints are enabled. In addition, some applications
> see their metrics regress when monitoring is enabled.
>
> Cgroup path resolution can be slow and should not be done in the fast
> path. Most userspace tools, like bpftrace, provide functionality to
> resolve the cgroup path from a cgroup id, so let's just trace the
> cgroup id and let users resolve the path with better tools in the
> slow path.
>
> Signed-off-by: Shakeel Butt <shakeel.butt@...ux.dev>
Acked-by: Vlastimil Babka <vbabka@...e.cz>
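
FWIW, resolving the path in userspace is straightforward with
bpftrace's cgroup_path() helper. A minimal sketch; the memcg_id field
name is only an assumption here, check the actual tracepoint format
under /sys/kernel/tracing/events/mmap_lock/ on a patched kernel:

    // Count mmap_lock acquisitions per cgroup; the path is resolved
    // in userspace when the map is printed, not in the kernel fast path.
    tracepoint:mmap_lock:mmap_lock_acquire_returned
    {
            @acquired[cgroup_path(args->memcg_id)] = count();
    }

Doing the id-to-path translation once at map-print time, rather than
on every event, keeps the string formatting entirely out of the hot
locking path, which is exactly the point of the patch.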