Message-ID: <CAKfTPtCPTcfFs2OjaY9O2BtSZ_M6Gr-rGgFO2b8-wfvQWAZ1Zg@mail.gmail.com>
Date: Wed, 4 Mar 2020 09:45:44 +0100
From: Vincent Guittot <vincent.guittot@...aro.org>
To: Peter Zijlstra <peterz@...radead.org>
Cc: 王贇 <yun.wang@...ux.alibaba.com>,
Ingo Molnar <mingo@...hat.com>,
Juri Lelli <juri.lelli@...hat.com>,
Dietmar Eggemann <dietmar.eggemann@....com>,
Steven Rostedt <rostedt@...dmis.org>,
Ben Segall <bsegall@...gle.com>, Mel Gorman <mgorman@...e.de>,
"open list:SCHEDULER" <linux-kernel@...r.kernel.org>
Subject: Re: [RFC PATCH] sched: fix the nonsense shares when load of cfs_rq is too, small

On Tue, 3 Mar 2020 at 20:52, Peter Zijlstra <peterz@...radead.org> wrote:
>
> On Tue, Mar 03, 2020 at 10:17:03PM +0800, 王贇 wrote:
> > During our testing, we found a case where shares no longer work
> > correctly. The cgroup topology is:
> >
> > /sys/fs/cgroup/cpu/A (shares=102400)
> > /sys/fs/cgroup/cpu/A/B (shares=2)
> > /sys/fs/cgroup/cpu/A/B/C (shares=1024)
> >
> > /sys/fs/cgroup/cpu/D (shares=1024)
> > /sys/fs/cgroup/cpu/D/E (shares=1024)
> > /sys/fs/cgroup/cpu/D/E/F (shares=1024)
> >
> > The same benchmark runs in groups C and F, with no other tasks
> > running; the benchmark is capable of consuming all the CPUs.
> >
> > We expected group C to win more CPU resources, since it can enjoy
> > all the shares of group A, but it is F that wins by far.
> >
> > The reason is that group B has shares set to 2, which makes group
> > A's 'cfs_rq->load.weight' very small.
> >
> > And in calc_group_shares() we calculate shares as:
> >
> > load = max(scale_load_down(cfs_rq->load.weight), cfs_rq->avg.load_avg);
> > shares = (tg_shares * load) / tg_weight;
> >
> > Since 'cfs_rq->load.weight' is too small, the load becomes 0 here;
> > although 'tg_shares' is 102400, the shares of the se that stands
> > for group A on the root cfs_rq become 2.
>
> Argh, because A->cfs_rq.load.weight is B->se.load.weight which is
> B->shares/nr_cpus.
>
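To make the truncation concrete: a minimal user-space sketch of the
arithmetic above, assuming SCHED_FIXEDPOINT_SHIFT == 10 (64-bit) and a
hypothetical 8-CPU machine (numbers chosen to mirror the report, not
taken from it):

#include <stdio.h>

#define SCHED_FIXEDPOINT_SHIFT	10
#define scale_load(w)		((unsigned long)(w) << SCHED_FIXEDPOINT_SHIFT)
#define scale_load_down(w)	((unsigned long)(w) >> SCHED_FIXEDPOINT_SHIFT)
#define MIN_SHARES		2UL

int main(void)
{
	/* B has cpu.shares == 2, so tg->shares == scale_load(2) == 2048;
	 * spread over 8 CPUs, B's se weight on A's cfs_rq is ~256. */
	unsigned long a_cfs_rq_weight = scale_load(2) / 8;

	/* calc_group_shares() for A's se on the root cfs_rq; A's
	 * avg.load_avg is similarly tiny, so load collapses to 0. */
	unsigned long tg_shares = scale_load(102400);
	unsigned long load = scale_load_down(a_cfs_rq_weight);	/* 256 >> 10 == 0 */
	unsigned long tg_weight = load;	/* simplified: the other terms are ~0 too */
	unsigned long shares = tg_weight ? tg_shares * load / tg_weight : 0;

	if (shares < MIN_SHARES)	/* the kernel clamps to MIN_SHARES */
		shares = MIN_SHARES;

	printf("load=%lu shares=%lu\n", load, shares);	/* load=0 shares=2 */
	return 0;
}
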
> > Meanwhile, the se of D on the root cfs_rq is far bigger than 2, so
> > it wins the battle.
> >
> > This patch adds a check for zero load and raises it to MIN_SHARES
> > to fix the nonsense shares; with it applied, group C wins as
> > expected.
> >
> > Signed-off-by: Michael Wang <yun.wang@...ux.alibaba.com>
> > ---
> > kernel/sched/fair.c | 2 ++
> > 1 file changed, 2 insertions(+)
> >
> > diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> > index 84594f8aeaf8..53d705f75fa4 100644
> > --- a/kernel/sched/fair.c
> > +++ b/kernel/sched/fair.c
> > @@ -3182,6 +3182,8 @@ static long calc_group_shares(struct cfs_rq *cfs_rq)
> >  	tg_shares = READ_ONCE(tg->shares);
> >
> >  	load = max(scale_load_down(cfs_rq->load.weight), cfs_rq->avg.load_avg);
> > +	if (!load && cfs_rq->load.weight)
> > +		load = MIN_SHARES;
> >
> >  	tg_weight = atomic_long_read(&tg->load_avg);
>
> Yeah, I suppose that'll do. Hurmph, wants a comment though.
>
> But that has me looking at other users of scale_load_down(), and doesn't
> at least update_tg_cfs_load() suffer the same problem?
Yes, and other places have the same problem, like the load_avg that
will stay at 0, or the fact that weight != 0 is used to assume that
the se is enqueued and to not remove the cfs_rq from the
leaf_cfs_rq_list even if its load_avg is null.
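
For illustration, a sketch of that inconsistency under the same
assumptions as above (SCHED_FIXEDPOINT_SHIFT == 10): any weight in
(0, 1024) still passes the weight != 0 checks while every
scale_load_down() user sees zero load:

#include <assert.h>

#define SCHED_FIXEDPOINT_SHIFT	10
#define scale_load_down(w)	((unsigned long)(w) >> SCHED_FIXEDPOINT_SHIFT)

int main(void)
{
	unsigned long weight = 256;	/* e.g. shares == 2 spread over 8 CPUs */

	assert(weight != 0);			/* looks enqueued / stays on leaf list */
	assert(scale_load_down(weight) == 0);	/* ...yet contributes zero load */
	return 0;
}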