Message-ID: <5bd40a66-bb20-44c7-9c0e-35bfa1d271f6@linux.intel.com>
Date: Tue, 30 Apr 2019 11:46:54 -0400
From: "Liang, Kan" <kan.liang@...ux.intel.com>
To: Peter Zijlstra <peterz@...radead.org>,
Ian Rogers <irogers@...gle.com>
Cc: tglx@...utronix.de, mingo@...hat.com, linux-kernel@...r.kernel.org,
Stephane Eranian <eranian@...gle.com>, tj@...nel.org,
ak@...ux.intel.com
Subject: Re: [PATCH 3/4] perf cgroup: Add cgroup ID as a key of RB tree
On 4/30/2019 5:08 AM, Peter Zijlstra wrote:
> On Mon, Apr 29, 2019 at 04:02:33PM -0700, Ian Rogers wrote:
>> This is very interesting. How does the code handle cgroup hierarchies?
>> For example, if we have:
>>
>> cgroup0 is the cgroup root
>> cgroup1 whose parent is cgroup0
>> cgroup2 whose parent is cgroup1
>>
>> we have task0 running in cgroup0, task1 in cgroup1, task2 in cgroup2
>> and then a perf command line like:
>> perf stat -e cycles,cycles,cycles -G cgroup0,cgroup1,cgroup2 --no-merge sleep 10
>>
>> we expected 3 cycles counts:
>> - for cgroup0 including task2, task1 and task0
>> - for cgroup1 including task2 and task1
>> - for cgroup2 just including task2
>>
>> It looks as though:
>> + if (next && (next->cpu == event->cpu) && (next->cgrp_id == event->cgrp_id))
>>
>> will mean that events will only consider cgroups that directly match
>> the cgroup of the event. Ie we'd get 3 cycles counts of:
>> - for cgroup0 just task0
>> - for cgroup1 just task1
>> - for cgroup2 just task2
> Yeah, I think you're right; the proposed code doesn't capture the
> hierarchy thing at all.
The hierarchy is handled in the next patch, as below.
But I originally thought we only needed to handle a direct match, so the
function returns immediately once a match is found.
I believe we can fix it by simply removing the "return 0".
> +static int cgroup_visit_groups_merge(struct perf_event_groups *groups, int cpu,
> + int (*func)(struct perf_event *, void *, int (*)(struct perf_event *)),
> + void *data)
> +{
> + struct sched_in_data *sid = data;
> + struct cgroup_subsys_state *css;
> + struct perf_cgroup *cgrp;
> + struct perf_event *evt;
> + u64 cgrp_id;
> +
> + for (css = &sid->cpuctx->cgrp->css; css; css = css->parent) {
> + /* root cgroup doesn't have events */
> + if (css->id == 1)
> + return 0;
> +
> + cgrp = container_of(css, struct perf_cgroup, css);
> + cgrp_id = *this_cpu_ptr(cgrp->cgrp_id);
> + /* Only visit groups when the cgroup has events */
> + if (cgrp_id) {
> + evt = perf_event_groups_first_cgroup(groups, cpu, cgrp_id);
> + while (evt) {
> + if (func(evt, (void *)sid, pmu_filter_match))
> + break;
> + evt = perf_event_groups_next_cgroup(evt);
> + }
> + return 0; <--- need to remove for hierarchies
> + }
> + }
> +
> + return 0;
> +}
Thanks,
Kan