Message-ID: <CAM9d7cgdUJytP31y90c5AuQAmR6FgkBWjj4brVjH8Pg+d00O+Q@mail.gmail.com>
Date: Wed, 15 Nov 2023 07:48:33 -0800
From: Namhyung Kim <namhyung@...nel.org>
To: David Wang <00107082@....com>
Cc: Peter Zijlstra <peterz@...radead.org>, mingo@...hat.com,
acme@...nel.org, mark.rutland@....com,
alexander.shishkin@...ux.intel.com, jolsa@...nel.org,
irogers@...gle.com, adrian.hunter@...el.com,
linux-perf-users@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [Regression or Fix] perf: profiling stats significantly changed for
aio_write/read(ext4) between 6.7.0-rc1 and 6.6.0
On Wed, Nov 15, 2023 at 3:00 AM David Wang <00107082@....com> wrote:
>
> At 2023-11-15 18:32:41, "Peter Zijlstra" <peterz@...radead.org> wrote:
> >
> >Namhyung, could you please take a look, you know how to operate this
> >cgroup stuff.
> >
>
> More information: I ran the profiling on an 8-CPU machine with an SSD using an ext4 filesystem:
>
> # mkdir /sys/fs/cgroup/mytest
> # echo $$ > /sys/fs/cgroup/mytest/cgroup.procs
> ## Start profiling targeting cgroup /sys/fs/cgroup/mytest on another terminal
> # fio --randrepeat=1 --ioengine=libaio --direct=1 --name=test --bs=4k --iodepth=64 --size=1G --readwrite=randrw --runtime=600 --numjobs=4 --time_based=1
>
> My impression is that commit f06cc667f7990 decreases total samples by 10%~20% when profiling an I/O benchmark within a cgroup.
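[Editor's note: the profiling step the reporter runs "on another terminal" is not shown. A minimal sketch of that step, assuming cgroup v2 is mounted at /sys/fs/cgroup and perf record's system-wide cgroup filter (the -G/--cgroup option); the event choice and sleep duration are illustrative, not from the report:]

```shell
#!/bin/sh
# Cgroup name from the quoted reproduction steps.
CGROUP=mytest

# Steps 1-2 (as quoted above; require root, shown here as comments only):
#   mkdir /sys/fs/cgroup/$CGROUP
#   echo $$ > /sys/fs/cgroup/$CGROUP/cgroup.procs

# Step 3: on another terminal, sample only tasks in that cgroup.
# --cgroup requires system-wide mode (-a); "sleep 600" bounds the
# profiling window to match fio's --runtime=600.
PERF_CMD="perf record -a -e cycles --cgroup $CGROUP -- sleep 600"
echo "$PERF_CMD"
```

The sample counts being compared between v6.6 and v6.7-rc1 would then come from `perf report` on the resulting perf.data.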
Oh sorry, I missed this message. Can you please share the
command line and the output?
Thanks,
Namhyung