Message-ID: <CAFpoUr2t0OXLJZi9wJzYg2uOhSLfwRa7sxCxxWzriJgXDsgEdA@mail.gmail.com>
Date: Tue, 27 Apr 2021 10:36:23 +0200
From: Odin Ugedal <odin@...dal.com>
To: Vincent Guittot <vincent.guittot@...aro.org>
Cc: Odin Ugedal <odin@...d.al>, Ingo Molnar <mingo@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
Juri Lelli <juri.lelli@...hat.com>,
Dietmar Eggemann <dietmar.eggemann@....com>,
Steven Rostedt <rostedt@...dmis.org>,
Ben Segall <bsegall@...gle.com>, Mel Gorman <mgorman@...e.de>,
Daniel Bristot de Oliveira <bristot@...hat.com>,
"open list:CONTROL GROUP (CGROUP)" <cgroups@...r.kernel.org>,
linux-kernel <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 0/1] sched/fair: Fix unfairness caused by missing load decay

Also, instead of using bpftrace, one can look at the /proc/sched_debug
file and infer the state from there. Something like:
$ cat /proc/sched_debug | grep ":/slice" -A 28 | egrep "(:/slice)|load_avg"
gives me the following output (when one stress process gets 99% of the
CPU time and the other one gets 1%):
cfs_rq[0]:/slice/cg-2/sub
  .load_avg            : 1023
  .removed.load_avg    : 0
  .tg_load_avg_contrib : 1035
  .tg_load_avg         : 1870
  .se->avg.load_avg    : 56391
cfs_rq[0]:/slice/cg-1/sub
  .load_avg            : 1023
  .removed.load_avg    : 0
  .tg_load_avg_contrib : 1024
  .tg_load_avg         : 1847
  .se->avg.load_avg    : 4
cfs_rq[0]:/slice/cg-1
  .load_avg            : 4
  .removed.load_avg    : 0
  .tg_load_avg_contrib : 4
  .tg_load_avg         : 794
  .se->avg.load_avg    : 5
cfs_rq[0]:/slice/cg-2
  .load_avg            : 56401
  .removed.load_avg    : 0
  .tg_load_avg_contrib : 56700
  .tg_load_avg         : 57496
  .se->avg.load_avg    : 1008
cfs_rq[0]:/slice
  .load_avg            : 1015
  .removed.load_avg    : 0
  .tg_load_avg_contrib : 1009
  .tg_load_avg         : 2314
  .se->avg.load_avg    : 447
As can be seen here, no other cfs_rq for the relevant cgroups is
"active" and listed, but those cfs_rqs still contribute to e.g. the
tg_load_avg. In this example, "cfs_rq[0]:/slice/cg-1" has a load_avg
of 4 and contributes 4 to tg_load_avg. However, the total tg_load_avg
is 794. That means the remaining 790 has to come from somewhere, and
in this example it comes from the cfs_rq on another cpu.
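
For context, a group's tg_load_avg is maintained as the (approximate)
sum of the per-cpu tg_load_avg_contrib values. A simplified sketch of
update_tg_load_avg() in kernel/sched/fair.c (the root_task_group
shortcut is left out):

  /*
   * Each cfs_rq publishes the delta between its current load_avg and
   * what it last contributed, so tg->load_avg is (approximately) the
   * sum of tg_load_avg_contrib over all cpus. A cfs_rq whose load is
   * never decayed never pushes a negative delta, so its stale
   * contribution stays in tg->load_avg.
   */
  static inline void update_tg_load_avg(struct cfs_rq *cfs_rq)
  {
          long delta = cfs_rq->avg.load_avg - cfs_rq->tg_load_avg_contrib;

          /* Only propagate when the delta is significant (~1.5%). */
          if (abs(delta) > cfs_rq->tg_load_avg_contrib / 64) {
                  atomic_long_add(delta, &cfs_rq->tg->load_avg);
                  cfs_rq->tg_load_avg_contrib = cfs_rq->avg.load_avg;
          }
  }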
Hopefully that clarifies things a bit.
For reference, here is the output when the issue is not occurring; note
that tg_load_avg now matches the tg_load_avg_contrib of each cfs_rq:
cfs_rq[1]:/slice/cg-2/sub
  .load_avg            : 1024
  .removed.load_avg    : 0
  .tg_load_avg_contrib : 1039
  .tg_load_avg         : 1039
  .se->avg.load_avg    : 1
cfs_rq[1]:/slice/cg-1/sub
  .load_avg            : 1023
  .removed.load_avg    : 0
  .tg_load_avg_contrib : 1034
  .tg_load_avg         : 1034
  .se->avg.load_avg    : 49994
cfs_rq[1]:/slice/cg-1
  .load_avg            : 49998
  .removed.load_avg    : 0
  .tg_load_avg_contrib : 49534
  .tg_load_avg         : 49534
  .se->avg.load_avg    : 1023
cfs_rq[1]:/slice/cg-2
  .load_avg            : 1
  .removed.load_avg    : 0
  .tg_load_avg_contrib : 1
  .tg_load_avg         : 1
  .se->avg.load_avg    : 1023
cfs_rq[1]:/slice
  .load_avg            : 2048
  .removed.load_avg    : 0
  .tg_load_avg_contrib : 2021
  .tg_load_avg         : 2021
  .se->avg.load_avg    : 1023
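
As a rough way to check this directly, one can sum the listed
tg_load_avg_contrib values for a given cgroup and compare the result
against its tg_load_avg. A sketch (the field layout is assumed to
match the output above, and "cg-1" is just the example cgroup):

$ awk '/^cfs_rq\[/ { rq = $0 }
       $1 == ".tg_load_avg_contrib" && rq ~ /:\/slice\/cg-1$/ { sum += $3 }
       END { print sum }' /proc/sched_debug

In the broken case above this prints 4 for /slice/cg-1, far below its
tg_load_avg of 794; in the healthy case the two values match.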
Odin