Message-ID: <f2579de9b0555568eff94ff83f8695af7b218349.camel@mediatek.com>
Date: Thu, 14 Apr 2022 17:29:28 +0800
From: Kuyo Chang <kuyo.chang@...iatek.com>
To: Dietmar Eggemann <dietmar.eggemann@....com>,
Ingo Molnar <mingo@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
Juri Lelli <juri.lelli@...hat.com>,
Vincent Guittot <vincent.guittot@...aro.org>,
Steven Rostedt <rostedt@...dmis.org>,
Ben Segall <bsegall@...gle.com>,
"Mel Gorman" <mgorman@...e.de>,
Daniel Bristot de Oliveira <bristot@...hat.com>,
Matthias Brugger <matthias.bgg@...il.com>
CC: <wsd_upstream@...iatek.com>, <linux-kernel@...r.kernel.org>,
<linux-arm-kernel@...ts.infradead.org>,
<linux-mediatek@...ts.infradead.org>
Subject: Re: [PATCH 1/1] [PATCH v2]sched/pelt: Refine the enqueue_load_avg
 calculate method

On Thu, 2022-04-14 at 11:02 +0200, Dietmar Eggemann wrote:
> On 14/04/2022 03:59, Kuyo Chang wrote:
> > From: kuyo chang <kuyo.chang@...iatek.com>
>
> [...]
>
> > diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> > index d4bd299d67ab..159274482c4e 100644
> > --- a/kernel/sched/fair.c
> > +++ b/kernel/sched/fair.c
> > @@ -3829,10 +3829,12 @@ static void attach_entity_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *s
> >
> >  	se->avg.runnable_sum = se->avg.runnable_avg * divider;
> >
> > -	se->avg.load_sum = divider;
> > -	if (se_weight(se)) {
> > +	se->avg.load_sum = se->avg.load_avg * divider;
> > +	if (se_weight(se) < se->avg.load_sum) {
> >  		se->avg.load_sum =
> > -			div_u64(se->avg.load_avg * se->avg.load_sum, se_weight(se));
> > +			div_u64(se->avg.load_sum, se_weight(se));
>
> Seems that this will fit on one line now. No braces needed then.
Thanks for your friendly reminder.
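Will fix that in the next version; with the assignment on one line and the
braces dropped, the block would read roughly like this (untested sketch of
the same logic):

	se->avg.load_sum = se->avg.load_avg * divider;
	if (se_weight(se) < se->avg.load_sum)
		se->avg.load_sum = div_u64(se->avg.load_sum, se_weight(se));
	else
		se->avg.load_sum = 1;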
>
> > +	} else {
> > +		se->avg.load_sum = 1;
> >  	}
> >
> >  	enqueue_load_avg(cfs_rq, se);
>
> Looks like taskgroups are not affected, since they are always onlined
> with cpu.shares/weight = 1024 (cgroup v1):
>
> cpu_cgroup_css_online() -> online_fair_sched_group() ->
> attach_entity_cfs_rq() -> attach_entity_load_avg()
>
> And reweight_entity() does not have this issue.
>
> Tested with `qemu-system-x86_64 ... cores=64 ... -enable-kvm` and
> weight=88761 for nice=0 tasks plus forcing se->avg.load_avg = 1
> before the div_u64() in attach_entity_load_avg().
>
> Tested-by: Dietmar Eggemann <dietmar.eggemann@....com>
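Thanks a lot for the test.

For reference, a rough walk-through of that scenario (numbers are
approximate, assuming divider ends up close to LOAD_AVG_MAX = 47742):

	/* old code, weight = 88761, se->avg.load_avg forced to 1 */
	load_sum = div_u64(1 * 47742, 88761);	/* = 0 although load_avg != 0 */

	/* new code: 88761 < 1 * 47742 is false, so the else branch runs */
	load_sum = 1;				/* load_sum stays nonzero */

So with the patch we no longer end up with load_sum == 0 while
load_avg != 0.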