Message-ID: <CAKfTPtDzVHZpE0XmyPF8AVsmtVMCmKgYERjdW4euj-H6kNu2Rw@mail.gmail.com>
Date: Wed, 20 Nov 2019 20:28:57 +0100
From: Vincent Guittot <vincent.guittot@...aro.org>
To: Qais Yousef <qais.yousef@....com>
Cc: linux-kernel <linux-kernel@...r.kernel.org>,
Ingo Molnar <mingo@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
Phil Auld <pauld@...hat.com>,
Valentin Schneider <valentin.schneider@....com>,
Srikar Dronamraju <srikar@...ux.vnet.ibm.com>,
Quentin Perret <quentin.perret@....com>,
Dietmar Eggemann <dietmar.eggemann@....com>,
Morten Rasmussen <Morten.Rasmussen@....com>,
Hillf Danton <hdanton@...a.com>,
Parth Shah <parth@...ux.ibm.com>,
Rik van Riel <riel@...riel.com>
Subject: Re: [PATCH v4 11/11] sched/fair: rework find_idlest_group
On Wed, 20 Nov 2019 at 19:27, Qais Yousef <qais.yousef@....com> wrote:
>
> On 11/20/19 19:20, Vincent Guittot wrote:
> > On Wed, 20 Nov 2019 at 19:10, Qais Yousef <qais.yousef@....com> wrote:
> > >
> > > On 11/20/19 18:43, Vincent Guittot wrote:
> > > > On Wed, 20 Nov 2019 at 18:34, Qais Yousef <qais.yousef@....com> wrote:
> > > > >
> > > > > On 11/20/19 17:53, Vincent Guittot wrote:
> > > > > > On Wed, 20 Nov 2019 at 14:21, Vincent Guittot
> > > > > > <vincent.guittot@...aro.org> wrote:
> > > > > > >
> > > > > > > Hi Qais,
> > > > > > >
> > > > > > > On Wed, 20 Nov 2019 at 12:58, Qais Yousef <qais.yousef@....com> wrote:
> > > > > > > >
> > > > > > > > Hi Vincent
> > > > > > > >
> > > > > > > > On 10/18/19 15:26, Vincent Guittot wrote:
> > > > > > > > > The slow wake-up path computes per-sched_group statistics to select the
> > > > > > > > > idlest group, which is quite similar to what load_balance() is doing
> > > > > > > > > for selecting busiest group. Rework find_idlest_group() to classify the
> > > > > > > > > sched_group and select the idlest one following the same steps as
> > > > > > > > > load_balance().
> > > > > > > > >
> > > > > > > > > Signed-off-by: Vincent Guittot <vincent.guittot@...aro.org>
> > > > > > > > > ---
> > > > > > > >
> > > > > > > > An LTP test has caught a regression in the perf_event_open02 test on
> > > > > > > > linux-next and I bisected it to this patch.
> > > > > > > >
> > > > > > > > That is, checking out the next-20191119 tag and reverting this patch on
> > > > > > > > top, the test passes. Without the revert the test fails.
> > > > > >
> > > > > > I haven't tried linux-next yet, but the LTP test passes with
> > > > > > tip/sched/core, which includes this patch, on hikey960, which is
> > > > > > arm64 too.
> > > > > >
> > > > > > Have you tried tip/sched/core on your juno? That could help to
> > > > > > establish whether the problem is specific to juno or whether this
> > > > > > patch interacts with another branch merged in linux-next.
> > > > >
> > > > > Okay will give it a go. But out of curiosity, what is the output of your run?
> > > > >
> > > > > While bisecting on linux-next I noticed that at some point the test was
> > > > > passing but all the read values were 0. At some point I started seeing
> > > > > non-zero values.
> > > >
> > > > for tip/sched/core
> > > > linaro@...aro-developer:~/ltp/testcases/kernel/syscalls/perf_event_open$
> > > > sudo ./perf_event_open02
> > > > perf_event_open02 0 TINFO : overall task clock: 63724479
> > > > perf_event_open02 0 TINFO : hw sum: 1800900992, task clock sum: 382170311
> > > > perf_event_open02 0 TINFO : ratio: 5.997229
> > > > perf_event_open02 1 TPASS : test passed
> > > >
> > > > for next-20191119
> > > > ~/ltp/testcases/kernel/syscalls/perf_event_open$ sudo ./perf_event_open02 -v
> > > > at iteration:0 value:0 time_enabled:69795312 time_running:0
> > > > perf_event_open02 0 TINFO : overall task clock: 63582292
> > > > perf_event_open02 0 TINFO : hw sum: 0, task clock sum: 0
> > > > hw counters: 0 0 0 0
> > > > task clock counters: 0 0 0 0
> > > > perf_event_open02 0 TINFO : ratio: 0.000000
> > > > perf_event_open02 1 TPASS : test passed
> > >
> > > Okay, that is weird. But ratio, hw sum and task clock sum are all 0 in
> > > your next-20191119 run. I'm not sure why the counters return 0 sometimes -
> > > whether it depends on some option or it's a bug somewhere.
> > >
> > > I just did another run and it failed for me (building with defconfig)
> > >
> > > # uname -a
> > > Linux buildroot 5.4.0-rc8-next-20191119 #72 SMP PREEMPT Wed Nov 20 17:57:48 GMT 2019 aarch64 GNU/Linux
> > >
> > > # ./perf_event_open02 -v
> > > at iteration:0 value:260700250 time_enabled:172739760 time_running:144956600
> > > perf_event_open02 0 TINFO : overall task clock: 166915220
> > > perf_event_open02 0 TINFO : hw sum: 1200718268, task clock sum: 667621320
> > > hw counters: 300179051 300179395 300179739 300180083
> > > task clock counters: 166906620 166906200 166905160 166903340
> > > perf_event_open02 0 TINFO : ratio: 3.999763
> > > perf_event_open02 0 TINFO : nhw: 0.000100
> > > perf_event_open02 1 TFAIL : perf_event_open02.c:370: test failed (ratio was greater than )
> > >
> > > It is a funny one for sure. I haven't tried tip/sched/core yet.
> >
> > I confirm that on next-20191119 the hw counters always return 0,
> > but on tip/sched/core, which has this patch, and on v5.4-rc7, which does
> > not, the hw counters are always non-zero.
>
> It's the other way around for me: tip/sched/core returns 0 hw counters. I
> tried enabling coresight; that had no effect. Neither did copying the .config
> that failed from linux-next to tip/sched/core. I'm not sure what the
> dependency/breakage is :-/
I ran a few more tests and I can get the hw counters to be either 0 or not.
The main difference is which CPU the test runs on: big or little.
Little always returns 0 and big always returns a non-zero value.
On v5.4-rc7 and tip/sched/core, cpu0-3 return 0 and the others are non-zero,
but on next it's the opposite: cpu0-3 return a non-zero ratio.
Could you try to run the test with taskset to pin it on the big or the
little cluster?
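Something like this should do it (just a sketch; the CPU masks are an
assumption based on hikey960, where cpu0-3 are the little cluster and
cpu4-7 the big one, so adjust them for the juno topology):

  # pin the test to the little cluster
  sudo taskset -c 0-3 ./perf_event_open02 -v

  # pin the test to the big cluster
  sudo taskset -c 4-7 ./perf_event_open02 -v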
>
> --
> Qais Yousef
>
> >
> > On v5.4-rc7 I got the same ratio:
> > linaro@...aro-developer:~/ltp/testcases/kernel/syscalls/perf_event_open$
> > sudo ./perf_event_open02 -v
> > at iteration:0 value:300157088 time_enabled:80641145 time_running:80641145
> > at iteration:1 value:300100129 time_enabled:63572917 time_running:63572917
> > at iteration:2 value:300100885 time_enabled:63569271 time_running:63569271
> > at iteration:3 value:300103998 time_enabled:63573437 time_running:63573437
> > at iteration:4 value:300101477 time_enabled:63571875 time_running:63571875
> > at iteration:5 value:300100698 time_enabled:63569791 time_running:63569791
> > at iteration:6 value:245252526 time_enabled:63650520 time_running:52012500
> > perf_event_open02 0 TINFO : overall task clock: 63717187
> > perf_event_open02 0 TINFO : hw sum: 1800857435, task clock sum: 382156248
> > hw counters: 149326575 150152481 169006047 187845928 206684169
> > 224693333 206543358 187716226 168865909 150023409
> > task clock counters: 31694792 31870834 35868749 39866666 43863541
> > 47685936 43822396 39826042 35828125 31829167
> > perf_event_open02 0 TINFO : ratio: 5.997695
> > perf_event_open02 1 TPASS : test passed