Message-ID: <20190916152325.GD3084169@devbig004.ftw2.facebook.com>
Date: Mon, 16 Sep 2019 08:23:25 -0700
From: Tejun Heo <tj@...nel.org>
To: Song Liu <liu.song.a23@...il.com>
Cc: Namhyung Kim <namhyung@...nel.org>, Ingo Molnar <mingo@...nel.org>,
Peter Zijlstra <a.p.zijlstra@...llo.nl>,
Arnaldo Carvalho de Melo <acme@...nel.org>,
LKML <linux-kernel@...r.kernel.org>,
Jiri Olsa <jolsa@...hat.com>,
Alexander Shishkin <alexander.shishkin@...ux.intel.com>,
Stephane Eranian <eranian@...gle.com>,
Li Zefan <lizefan@...wei.com>,
Johannes Weiner <hannes@...xchg.org>
Subject: Re: [PATCH 2/9] perf/core: Add PERF_SAMPLE_CGROUP feature

Hello, Song.

On Sat, Sep 14, 2019 at 03:02:51PM +0100, Song Liu wrote:
> I think we don't need a perfect identifier in this case. IIUC, the goal of

I really don't want different versions of imperfect identifiers
proliferating.

> this patchset is to map each sample with a cgroup name (or full path).
> To achieve this, we need
>
> 1. PERF_RECORD_CGROUP, that maps
> "64-bit number" => cgroup name/path
> 2. PERF_SAMPLE_CGROUP, that adds "64-bit number" to each sample.
>
> I call the id a "64-bit number" because it is not required to be a
> globally unique id. As long as it is consistent within the same
> perf-record session, there won't be any confusion. Since we add
> PERF_RECORD_CGROUP for each cgroup creation, we will map most samples
> correctly even when the "64-bit number" is recycled within the same
> perf-record session.
>
> At the moment, I think ino is good enough for the "64-bit number", even
> on 32-bit systems. If we don't call it "ino" (just call it "cgroup_tag"
> or "cgroup_id"), we can change it when kernfs provides a better 64-bit
> id.

So, a firm nack on this direction.

> About the full path name: the user names the full path here. If the
> user gives two different workloads the same name/path, we really cannot
> change that. Reasonable users would be able to make sense of the full
> path.

I don't see why we'd want to cause this avoidable problem for users.

Thanks.

--
tejun