Message-ID: <20260211090144.GY1282955@noisy.programming.kicks-ass.net>
Date: Wed, 11 Feb 2026 10:01:44 +0100
From: Peter Zijlstra <peterz@...radead.org>
To: Vincent Guittot <vincent.guittot@...aro.org>
Cc: Doug Smythies <dsmythies@...us.net>,
K Prateek Nayak <kprateek.nayak@....com>, mingo@...nel.org,
juri.lelli@...hat.com, dietmar.eggemann@....com,
rostedt@...dmis.org, bsegall@...gle.com, mgorman@...e.de,
vschneid@...hat.com, linux-kernel@...r.kernel.org,
wangtao554@...wei.com, quzicheng@...wei.com,
wuyun.abel@...edance.com
Subject: Re: [PATCH 0/4] sched: Various reweight_entity() fixes
On Wed, Feb 11, 2026 at 09:49:57AM +0100, Vincent Guittot wrote:
> On Wed, 11 Feb 2026 at 06:21, Doug Smythies <dsmythies@...us.net> wrote:
> >
> > On 2026.02.10 12:52 Vincent Guittot wrote:
> > > On Mon, 9 Feb 2026 at 16:47, Peter Zijlstra <peterz@...radead.org> wrote:
> > >> On Wed, Feb 04, 2026 at 03:45:58PM +0530, K Prateek Nayak wrote:
> > >
> > ... delete ...
> >
> > > This patch w/ the patchset on top of tip/sched/core creates regressions
> > > for hackbench (tbench doesn't seem to be impacted) on my dragonboard
> > > rb5
> > > All hackbench tests are regressing. Some results below
> > ...
> > > hackbench 8 group process socket
> > > 0.650(+/-1%) vs 2.361(+/-8.8%) : -263%
> > ...
> >
> > Very interesting.
> > I only know of the Phoronix version of hackbench.
> > I ran what I believe to be a similar scenario to yours:
> > 10 test runs each (the default is 3):
> >
> > Kernel 6.19-rc8: 23.228 seconds average, deviation 0.39%
> > Kernel 6.19-rc8-pz-v2: 85.755 seconds average, deviation 3.33%
> > 269% regression. (very similar to Vincent's results)
>
> patch 3 + a default value for sum_shift restores the performance:
> cfs_rq->sum_shift = SCHED_FIXEDPOINT_SHIFT;
Hurmph.. I really wanted to do away with that, because those small
weights are fairly common in the cgroup shares thing, and will be more
common still if we do a flat pick.
> There are 2 issues with patch 3
>
> * one scale_load_down remains in avg_vruntime
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 25c398ff0d59..3143ae7f07b0 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -778,7 +778,7 @@ u64 avg_vruntime(struct cfs_rq *cfs_rq)
>
> if (weight) {
> if (curr) {
> -                       unsigned long w = scale_load_down(curr->load.weight);
> +                       unsigned long w = avg_vruntime_weight(curr->load.weight);
>
> runtime += entity_key(cfs_rq, curr) * w;
> weight += w;
AAARGHHH!! Sorry about that, clearly I've not been careful with
reshuffling patches :-(
> and we still use calc_delta_fair() in update_entity_lag(), but
> calc_delta_fair() uses scale_load_down()
Hmmm, indeed, let me see if I can do something about that. Perhaps just
eat the 64bit division on 64bit systems.
Anyway, let me go poke at all this.