Message-ID: <20260126212347.GV166857@noisy.programming.kicks-ass.net>
Date: Mon, 26 Jan 2026 22:23:47 +0100
From: Peter Zijlstra <peterz@...radead.org>
To: Zicheng Qu <quzicheng@...wei.com>
Cc: mingo@...hat.com, juri.lelli@...hat.com, vincent.guittot@...aro.org,
	dietmar.eggemann@....com, rostedt@...dmis.org, bsegall@...gle.com,
	mgorman@...e.de, vschneid@...hat.com, linux-kernel@...r.kernel.org,
	tanghui20@...wei.com
Subject: Re: [PATCH] sched/fair: Fix vruntime drift by preventing double lag
 scaling during reweight

On Fri, Dec 26, 2025 at 12:17:31AM +0000, Zicheng Qu wrote:
> In reweight_entity(), when reweighting a currently running entity (se ==
> cfs_rq->curr), the entity remains on the runqueue context without
> undergoing a full dequeue/enqueue cycle.

I am horribly confused by this statement. In case current is on_rq
(most often the case) then it is dequeued just as much as any other
on_rq task being reweighted.

[edit: ... after much puzzling ...

Ooh, are you perhaps alluding to the fact that avg_vruntime() always
adds cfs_rq->curr in?]

> This means avg_vruntime()
> remains constant throughout the reweight operation.

Now this might have a point; but I don't see how it would be specific to
current in any way.

Consider the tree entities (vruntime,weight):

 (-40,10)
 ( 30,10)
 (-20,10)

This gives:

         -40*10 + 30*10 + -20*10   -300
 w_avg = ----------------------- = ---- = -10
                    30              30

Now, let's randomly pick (30,10) and reweight it to 5; then we have:

  vlag  = w_avg - vruntime = -10 - 30 = -40
  vlag' = vlag * weight / weight' = -40*10/5 = -80

Then placing that at -10 would give:

 (-40,10)
 ( 70,5 )
 (-20,10)

         -40*10 + 70*5 + -20*10   -250
 w_avg = ---------------------- = ---- = -10
                    25             25

However, if we pick any other entity, let's say (-40,10), and do the same:

  vlag  = w_avg - vruntime = -10 - -40 = 30
  vlag' = vlag * weight / weight' = 30*10/5 = 60

Placing that at -10 gives (-70,5), with w_avg:

         -70*5 + 30*10 + -20*10   -250
 w_avg = ---------------------- = ---- = -10
                    25             25


Or rather, we can say that w_avg is/should-be invariant under the
reweight transform. Current or no current, it doesn't matter.

Which is more or less what we had prior to: 6d71a9c61604 ("sched/fair:
Fix EEVDF entity placement bug causing scheduling lag")

> However, the current implementation calls place_entity(..., 0) at the
> end of reweight_entity(). Under EEVDF, place_entity() is designed to
> handle entities entering the runqueue and calculates the virtual lag
> (vlag) to account for the change in the weighted average vruntime (V)
> using the formula:
> 
> 	vlag' = vlag * (W + w_i) / W
>
> Where 'W' is the current aggregate weight (including
> cfs_rq->curr->load.weight) and 'w_i' is the weight of the entity being
> enqueued (in this case, the se is exactly the cfs_rq->curr).
> 
> This leads to a "double scaling" logic for running entities:
> 1. reweight_entity() already rescales se->vlag based on the new weight
>    ratio.
> 2. place_entity() then mistakenly applies the (W + w_i)/W scaling again,
>    treating the reweight as a fresh enqueue into a new total weight
> pool.
> 
> This can cause the entity's vlag to be amplified (if positive) or
> suppressed (if negative) incorrectly during the reweight process.

Again, nothing specific to current here.

[edit: except for the fact that avg_vruntime() actually does include
current when re-adding current -- but that still doesn't explain why it
would be a good idea to have two different ways of doing reweight]

This all suggests we should revert 6d71a9c61604 ("sched/fair: Fix EEVDF
entity placement bug causing scheduling lag"), except we first need an
explanation for the problem described there.

At the time I observed reweight cycles 1048576 -> 2 -> 1048576 inflating
the lag, which obviously should not happen.

I might have made a mistake, but we should make sure we're not
re-introducing that problem, so let me see if I can figure out where it
came from, if not from this.

Now, poking at all this, I did find some numerical stability issues, but
rather than blowing up like I saw for 6d71a9c61604, these cause lag to
evaporate (also not good).

Observe the proglet below; when using w_avg_t ('t' for truncate, like
sum_w_vruntime with use of scale_load_down()):

vruntime: 40960 (-51200) -> 40960 (-51200)
avg: -10240 -- -10240
vruntime: 40960 (-51200) -> 26843535360 (-26843545600)
avg: -35840 -- -35840
vruntime: 26843535360 (-26843571200) -> 15360 (-51200)
avg: -18773 -- -18773
vruntime: 15360 (-34133) -> 17895503531 (-17895522304)
avg: -35840 -- -35840
vruntime: 17895503531 (-17895539371) -> -1707 (-34133)
avg: -24462 -- -24462
vruntime: -1707 (-22755) -> 11930148978 (-11930173440)
avg: -35840 -- -35840
vruntime: 11930148978 (-11930184818) -> -13085 (-22755)
avg: -28255 -- -28255


Whereas, if we run it with w_avg_n:

vruntime: 40960 (-51200) -> 40960 (-51200)
avg: -10240 -- -10240
vruntime: 40960 (-51200) -> 26843535360 (-26843545600)
avg: -10240 -- -35840
vruntime: 26843535360 (-26843545600) -> 40960 (-51200)
avg: -10240 -- -10240
vruntime: 40960 (-51200) -> 26843535360 (-26843545600)
avg: -10240 -- -35840
vruntime: 26843535360 (-26843545600) -> 40960 (-51200)
avg: -10240 -- -10240
vruntime: 40960 (-51200) -> 26843535360 (-26843545600)
avg: -10240 -- -35840
vruntime: 26843535360 (-26843545600) -> 40960 (-51200)
avg: -10240 -- -10240

Specifically, the problem is that a weight of 2 is below what the
scaled-down representation can resolve (2 >> 10 truncates to 0), causing
accuracy issues for avg_vruntime which then propagate.

Luckily, we recently replaced min_vruntime with zero_vruntime, which
tracks avg_vruntime far better, resulting in smaller 'keys', which in
turn -- as it happens -- allows us to get rid of that
scale_load_down().


Could you please try queue.git/sched/reweight ?

I've only confirmed it boots on my machine. I'll prod a little more at
it tomorrow.


---
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>

#define min(a,b) \
	({ __typeof__ (a) _a = (a); \
	   __typeof__ (b) _b = (b); \
	   _a < _b ? _a : _b; })

struct e { int64_t vr; int64_t w; };

const int scale = 1024;

struct e es[] = {
	{  40*scale, 1024*1024 },
	{ -20*scale, 1024*1024 },
	{ -50*scale, 1024*1024 },
};

const int n = sizeof(es)/sizeof(es[0]);

int64_t w_avg_n(void)
{
	int64_t ws = 0, w = 0;
	for (int i=0; i<n; i++) {
		w += es[i].w;
		ws += es[i].vr * es[i].w;
	}
	return ws/w;
}

int64_t w_avg_t(void)
{
	int64_t ws = 0, w = 0;
	for (int i=0; i<n; i++) {
		/* truncated weight: a weight of 2 shifts down to 0 here */
		int64_t t = min(2, es[i].w >> 10);
		w += t;
		ws += es[i].vr * t;
	}
	return ws/w;
}

typedef int64_t (*avg_f)(void);

avg_f w_avg = &w_avg_t; // w_avg_n for good, w_avg_t for broken

void reweight(int i, int64_t w)
{
	int64_t a = w_avg();
	int64_t t, vl = a - es[i].vr;	/* lag relative to the current avg */
	t = (vl * es[i].w) / w;		/* vlag' = vlag * weight / weight' */
	printf("vruntime: %lld (%lld) -> %lld (%lld)\n",
	       (long long)es[i].vr, (long long)vl,
	       (long long)(a - t), (long long)t);
	es[i].vr = a - t;		/* place back at the old average */
	es[i].w = w;
}

int main(int argc, char **argv)
{
	int i = 0;
	
	if (argc > 1)
		i = atoi(argv[1]);

	reweight(i, 1024*1024);
	printf("avg: %lld -- %lld\n", (long long)w_avg(), (long long)w_avg_t());
	for (int j=0; j<3; j++) {
		reweight(i, 2);
		printf("avg: %lld -- %lld\n", (long long)w_avg(), (long long)w_avg_t());
		reweight(i, 1024*1024);
		printf("avg: %lld -- %lld\n", (long long)w_avg(), (long long)w_avg_t());
	}
	return 0;
}
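For reference, the proglet builds with any C compiler; the file name here
is my choice:

```shell
cc -O2 -o reweight reweight.c   # the proglet above, saved as reweight.c
./reweight        # exercise entity 0 (the default)
./reweight 1      # or pass an index to reweight a different entity
```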

