Message-ID: <23574b3c-e990-45bc-b3f5-8664781adddf@huawei.com>
Date: Fri, 9 Jan 2026 16:40:40 +0800
From: Zicheng Qu <quzicheng@...wei.com>
To: K Prateek Nayak <kprateek.nayak@....com>, <mingo@...hat.com>,
	<peterz@...radead.org>, <juri.lelli@...hat.com>,
	<vincent.guittot@...aro.org>, <dietmar.eggemann@....com>,
	<rostedt@...dmis.org>, <bsegall@...gle.com>, <mgorman@...e.de>,
	<vschneid@...hat.com>, <linux-kernel@...r.kernel.org>
CC: <tanghui20@...wei.com>, <quzicheng@...wei.com>
Subject: Re: [PATCH] sched/fair: Fix vruntime drift by preventing double lag
 scaling during reweight

Hi Prateek,

On 1/9/2026 12:50 PM, K Prateek Nayak wrote:
> If I'm not mistaken, the problem is that we'll see "curr->on_rq" and
> then do:
>
>      if (curr && curr->on_rq)
>          load += scale_load_down(curr->load.weight);
>
>      lag *= load + scale_load_down(se->load.weight);
>
>
> which shouldn't be the case since we are accounting "se" twice when
> it is also the "curr" and avg_vruntime() would have also accounted it
> already since "curr->on_rq" and then we do everything twice for "se".
Thanks for the analysis. I agree your concern is reasonable, but I think
the issue here is not so much "accounting se twice" as a semantic
mismatch in how place_entity() is used.

place_entity() is meant to compensate lag for entities being inserted
into the runqueue, accounting for the effect of a new entity on the
weighted average vruntime. That assumption holds when an se is joining
the rq. However, when se == cfs_rq->curr, the entity never left the
runqueue and avg_vruntime() has not changed, so applying enqueue-style
lag scaling is not appropriate.
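
To put numbers on it, here is a minimal userspace sketch (not kernel
code; the weights and the lag value below are made up purely for
illustration) of the enqueue-style scaling vl' = vl * (W + w) / W that
place_entity() applies today when se == cfs_rq->curr:

/*
 * Standalone illustration of the lag scaling arithmetic, built as a
 * normal userspace program. Weights and the lag value are hypothetical.
 */
#include <stdio.h>
#include <stdint.h>

/* Enqueue-style scaling from place_entity(): vl' = vl * (W + w) / W */
static int64_t scale_lag(int64_t vlag, int64_t rq_load, int64_t se_weight)
{
	return vlag * (rq_load + se_weight) / rq_load;
}

int main(void)
{
	int64_t w_curr   = 1024;   /* weight of cfs_rq->curr             */
	int64_t w_others = 2048;   /* total weight of the queued ses     */
	int64_t vlag     = -3000;  /* curr's virtual lag before reweight */

	/*
	 * Today, when curr is re-placed, "load" is avg_load (the other
	 * queued entities) plus curr's weight (because curr->on_rq), and
	 * then se's weight (== curr's) is added once more, so the lag is
	 * scaled even though avg_vruntime() never moved:
	 */
	int64_t scaled = scale_lag(vlag, w_others + w_curr, w_curr);

	printf("scaled lag: %lld, original lag: %lld\n",
	       (long long)scaled, (long long)vlag);
	return 0;
}

With these numbers the lag goes from -3000 to -4000 on every such
re-placement even though V is unchanged, which is the kind of drift the
patch is trying to avoid.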
> I'm wondering if instead of adding a flag, we can do:
Yes, I totally agree that adding a new flag is unnecessary. We can
handle this directly in place_entity() by skipping the lag scaling when
se == cfs_rq->curr, for example:

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index da46c3164537..1b279bf43f38 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5123,6 +5123,15 @@ place_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
 
                 lag = se->vlag;
 
+                /*
+                 * place_entity() compensates lag for entities being inserted
+                 * into the runqueue. When se == cfs_rq->curr, the entity never
+                 * left the rq and avg_vruntime() did not change, so
+                 * enqueue-style lag scaling does not apply.
+                 */
+                if (se == cfs_rq->curr)
+                        goto skip_lag_scale;
+
                 /*
                  * If we want to place a task and preserve lag, we have to
                  * consider the effect of the new entity on the weighted
@@ -5185,6 +5194,7 @@ place_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
                 lag = div_s64(lag, load);
         }
 
+skip_lag_scale:
         se->vruntime = vruntime - lag;
 
         if (se->rel_deadline) {

Best regards,
Zicheng
