Message-ID: <CAKfTPtBYqjcxS7S4z-e9LrSmaeR2Qhs-8twVERBa_YfyOQf0JA@mail.gmail.com>
Date: Thu, 18 Dec 2025 11:37:27 +0100
From: Vincent Guittot <vincent.guittot@...aro.org>
To: Peter Zijlstra <peterz@...radead.org>
Cc: Shrikanth Hegde <sshegde@...ux.ibm.com>, kernel test robot <oliver.sang@...el.com>, oe-lkp@...ts.linux.dev, 
	lkp@...el.com, linux-kernel@...r.kernel.org, x86@...nel.org, 
	Ingo Molnar <mingo@...nel.org>, Linus Torvalds <torvalds@...ux-foundation.org>, 
	Dietmar Eggemann <dietmar.eggemann@....com>, Juri Lelli <juri.lelli@...hat.com>, 
	Mel Gorman <mgorman@...e.de>, Valentin Schneider <vschneid@...hat.com>, aubrey.li@...ux.intel.com, 
	yu.c.chen@...el.com
Subject: Re: [tip:sched/core] [sched/fair] 089d84203a: pts.schbench.32.usec,_99.9th_latency_percentile
 52.4% regression

On Thu, 18 Dec 2025 at 11:20, Peter Zijlstra <peterz@...radead.org> wrote:
>
> On Thu, Dec 18, 2025 at 03:41:55PM +0530, Shrikanth Hegde wrote:
> > On 12/18/25 2:07 PM, Peter Zijlstra wrote:
> > > On Thu, Dec 18, 2025 at 12:59:53PM +0800, kernel test robot wrote:
> > > >
> > > >
> > > > Hello,
> > > >
> > > > kernel test robot noticed a 52.4% regression of pts.schbench.32.usec,_99.9th_latency_percentile on:
> > > >
> > > >
> > > > commit: 089d84203ad42bc8fd6dbf41683e162ac6e848cd ("sched/fair: Fold the sched_avg update")
> > > > https://git.kernel.org/cgit/linux/kernel/git/tip/tip.git sched/core
> > >
> > > Well, that obviously wasn't the intention. Let me pull that patch :/
> >
> > Could it be because it missed scaling by se_weight(se)?
>
> >  static inline void
> >  enqueue_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *se)
> >  {
> > -       cfs_rq->avg.load_avg += se->avg.load_avg;
> > -       cfs_rq->avg.load_sum += se_weight(se) * se->avg.load_sum;
> > +       __update_sa(&cfs_rq->avg, load, se->avg.load_avg, se->avg.load_sum);
> >  }
>
> Ah, indeed, something like so then? Can the robot (Oliver/Philip)
> verify?

Yes, the cfs_rq tracks the weighted load_sum whereas the se tracks the unweighted one.
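
To illustrate the distinction with a standalone sketch (a toy model, not
the kernel code; the struct, function and values below are made up for
illustration only):

#include <stdio.h>

/*
 * Toy model of the bookkeeping: the per-entity average keeps an
 * unweighted running sum, while the runqueue-level aggregate must
 * accumulate that sum scaled by the entity's weight.
 */
struct demo_avg {
	unsigned long load_avg;
	unsigned long load_sum;
};

static void demo_enqueue(struct demo_avg *rq, const struct demo_avg *se,
			 unsigned long weight)
{
	rq->load_avg += se->load_avg;
	rq->load_sum += weight * se->load_sum;	/* weighted, unlike the se */
}

int main(void)
{
	struct demo_avg rq = { 0, 0 };
	struct demo_avg se = { .load_avg = 512, .load_sum = 23871 };

	demo_enqueue(&rq, &se, 1024);	/* e.g. a nice-0 weight */

	/* prints 24443904, i.e. 1024 * 23871, not the unweighted 23871 */
	printf("rq load_sum = %lu\n", rq.load_sum);
	return 0;
}

Dropping the weight factor on enqueue/dequeue, as in the folded helper
quoted above, leaves the rq-level load_sum inconsistent with load_avg.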

>
> (I was going to shelve it and look at it after the holidays, but if this
> is it, we can get it fixed before I disappear).
>
> ---
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 76f5e4b78b30..7377f9117501 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -3775,13 +3775,15 @@ account_entity_dequeue(struct cfs_rq *cfs_rq, struct sched_entity *se)
>  static inline void
>  enqueue_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *se)
>  {
> -       __update_sa(&cfs_rq->avg, load, se->avg.load_avg, se->avg.load_sum);
> +       __update_sa(&cfs_rq->avg, load, se->avg.load_avg,
> +                   se_weight(se) * se->avg.load_sum);
>  }
>
>  static inline void
>  dequeue_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *se)
>  {
> -       __update_sa(&cfs_rq->avg, load, -se->avg.load_avg, -se->avg.load_sum);
> +       __update_sa(&cfs_rq->avg, load, -se->avg.load_avg,
> +                   se_weight(se) * -se->avg.load_sum);
>  }
>
>  static void place_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags);
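
That restores the se_weight(se) scaling that the pre-fold enqueue_load_avg()
quoted above applies, so the cfs_rq aggregate again accumulates weighted
contributions while each se keeps its own unweighted sum.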
