Message-ID: <CALPjY3nNy0Lp14uKob_FytTHaMajRAHsob7=hPaUx6LPFQ6MBQ@mail.gmail.com>
Date: Wed, 24 Jan 2018 17:59:20 +0800
From: Lin Xiulei <linxiulei@...il.com>
To: Peter Zijlstra <peterz@...radead.org>
Cc: Jiri Olsa <jolsa@...hat.com>, mingo@...hat.com, acme@...nel.org,
alexander.shishkin@...ux.intel.com, linux-kernel@...r.kernel.org,
tglx@...utronix.de, Stephane Eranian <eranian@...il.com>,
torvalds@...ux-foundation.org, linux-perf-users@...r.kernel.org,
Brendan Gregg <brendan.d.gregg@...il.com>,
yang_oliver@...mail.com, jinli.zjl@...baba-inc.com,
"leilei.lin" <leilei.lin@...baba-inc.com>
Subject: Re: [PATCH v2] perf/core: Fix installing cgroup event into cpu

2018-01-24 17:46 GMT+08:00 Peter Zijlstra <peterz@...radead.org>:
> On Wed, Jan 24, 2018 at 05:19:34PM +0800, Lin Xiulei wrote:
>> Sure, and I consider this "OK" works for "What goes wrong if we leave
>> it set?". : )
>
> It would be good if you inspect the code for the case of leaving
> cpuctx->cgrp set with no cgroup events left -- AND -- put a blurb about
> what you found in your new Changelog.
>
I have some test cases for this issue; I don't know if it's good to
put those in the changelog. The reproduction is as below.
Step 1
Create the program to be measured; write the following to the file d.py
```
while True:
    sumup = 0
    for i in range(10000000):
        sumup += i
```
Step 2
Create the cgroup paths and run one instance of the program in each
```
mkdir /sys/fs/cgroup/perf_event/test1
mkdir /sys/fs/cgroup/perf_event/test2
python d.py &
echo $! > /sys/fs/cgroup/perf_event/test1/cgroup.procs
python d.py &
echo $! > /sys/fs/cgroup/perf_event/test2/cgroup.procs
```
Step 3
```
perf stat -e cycles -G test1 -e cycles -G test2 -a sleep 1
```
You would see output like the following
```
 Performance counter stats for 'system wide':

     2,161,022,123      cycles                    test1
       138,626,073      cycles                    test2

       1.001858328 seconds time elapsed
```
The count for test2 is much lower than for test1, which happens
commonly. This is exactly because of what I mentioned above: the second
event couldn't be activated immediately, which causes some loss.
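To illustrate the mechanism as I understand it (a simplified sketch,
not the actual kernel code; the helper name below is made up for
illustration): a cgroup event on a CPU only counts while cpuctx->cgrp
points at its own cgroup, so an event installed while a task from
another cgroup is on the CPU stays inactive:
```
/*
 * Simplified sketch, not the real kernel/events/core.c logic.
 * A cgroup event only counts while cpuctx->cgrp matches its cgroup;
 * an event installed while another cgroup's task is running stays
 * inactive until the next cgroup switch.
 */
static bool event_would_count(struct perf_cpu_context *cpuctx,
                              struct perf_event *event)
{
	if (!event->cgrp)		/* non-cgroup events always count */
		return true;
	return cpuctx->cgrp == event->cgrp;
}
```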
> I suspect it works out and something like perf_cgroup_switch() will fix
> things up for us later, but double check and test.
Exactly. The case above wouldn't produce any result at all if no
perf_cgroup_switch() happened.
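Roughly, what I expect it to do at task switch is something like this
(a simplified sketch with illustrative helper names, not the exact
kernel code):
```
/*
 * Simplified sketch with made-up helper names, not the exact kernel
 * code: on a cgroup switch the old cgroup's events are scheduled out,
 * cpuctx->cgrp is repointed at the incoming task's cgroup, and the
 * matching events are scheduled back in. Until this runs, an event
 * installed for a different cgroup never becomes active, which is
 * what the numbers above show.
 */
static void cgroup_switch_sketch(struct perf_cpu_context *cpuctx,
                                 struct task_struct *next)
{
	sched_out_current_cgroup_events(cpuctx);	/* illustrative */
	cpuctx->cgrp = cgroup_of_task(next);		/* illustrative */
	sched_in_matching_cgroup_events(cpuctx, next);	/* illustrative */
}
```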