Message-ID: <6f0954ba-2ca7-41e8-b4f7-b0b17990f659@linux.ibm.com>
Date: Thu, 18 Dec 2025 15:41:55 +0530
From: Shrikanth Hegde <sshegde@...ux.ibm.com>
To: Peter Zijlstra <peterz@...radead.org>,
        kernel test robot <oliver.sang@...el.com>
Cc: oe-lkp@...ts.linux.dev, lkp@...el.com, linux-kernel@...r.kernel.org,
        x86@...nel.org, Ingo Molnar <mingo@...nel.org>,
        Linus Torvalds <torvalds@...ux-foundation.org>,
        Dietmar Eggemann <dietmar.eggemann@....com>,
        Juri Lelli <juri.lelli@...hat.com>, Mel Gorman <mgorman@...e.de>,
        Valentin Schneider <vschneid@...hat.com>,
        Vincent Guittot <vincent.guittot@...aro.org>,
        aubrey.li@...ux.intel.com, yu.c.chen@...el.com
Subject: Re: [tip:sched/core] [sched/fair] 089d84203a:
 pts.schbench.32.usec,_99.9th_latency_percentile 52.4% regression



On 12/18/25 2:07 PM, Peter Zijlstra wrote:
> On Thu, Dec 18, 2025 at 12:59:53PM +0800, kernel test robot wrote:
>>
>>
>> Hello,
>>
>> kernel test robot noticed a 52.4% regression of pts.schbench.32.usec,_99.9th_latency_percentile on:
>>
>>
>> commit: 089d84203ad42bc8fd6dbf41683e162ac6e848cd ("sched/fair: Fold the sched_avg update")
>> https://git.kernel.org/cgit/linux/kernel/git/tip/tip.git sched/core
> 
> Well, that obviously wasn't the intention. Let me pull that patch :/

Could the regression be because the fold misses the scaling by se_weight(se)?


+#define __update_sa(sa, name, delta_avg, delta_sum) do {       \
+       add_positive(&(sa)->name##_avg, delta_avg);             \
+       add_positive(&(sa)->name##_sum, delta_sum);             \
+       (sa)->name##_sum = max_t(typeof((sa)->name##_sum),      \
+                              (sa)->name##_sum,                \
+                              (sa)->name##_avg * PELT_MIN_DIVIDER); \
+} while (0)
+
  static inline void
  enqueue_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *se)
  {
-       cfs_rq->avg.load_avg += se->avg.load_avg;
-       cfs_rq->avg.load_sum += se_weight(se) * se->avg.load_sum;
+       __update_sa(&cfs_rq->avg, load, se->avg.load_avg, se->avg.load_sum);
  }
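
If that is the cause, a minimal (untested) sketch of the fold that keeps the
old scaling on the sum, reusing the __update_sa macro from the hunk above,
would look something like:

  static inline void
  enqueue_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *se)
  {
          /* scale the sum by se_weight(), as the pre-fold code did */
          __update_sa(&cfs_rq->avg, load,
                      se->avg.load_avg,
                      se_weight(se) * se->avg.load_sum);
  }

i.e. only the _avg/_sum bookkeeping is folded into the macro; the weight
scaling on load_sum stays where it was.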
  
