Message-ID: <15d8374f-b1a5-4946-829d-8e9c6ef39272@huawei.com>
Date: Wed, 21 Jan 2026 17:33:07 +0800
From: "wangtao (EQ)" <wangtao554@...wei.com>
To: K Prateek Nayak <kprateek.nayak@....com>, <mingo@...hat.com>,
<peterz@...radead.org>, <juri.lelli@...hat.com>, <vincent.guittot@...aro.org>
CC: <dietmar.eggemann@....com>, <rostedt@...dmis.org>, <bsegall@...gle.com>,
<mgorman@...e.de>, <vschneid@...hat.com>, <tanghui20@...wei.com>,
<zhangqiao22@...wei.com>, <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] sched/eevdf: Update se->vprot in reweight_entity()
Hello Prateek,
K Prateek Nayak wrote:
> Hello Wang,
>
> On 1/20/2026 6:01 PM, Wang Tao wrote:
>> In the EEVDF framework with Run-to-Parity protection, `se->vprot` is an
>> independent variable defining the virtual protection timestamp.
>>
>> When `reweight_entity()` is called (e.g., via nice/renice), it performs
>> the following actions to preserve lag consistency:
>> 1. Scales `se->vlag` based on the new weight.
>> 2. Calls `place_entity()`, which recalculates `se->vruntime` based on
>> the new weight and scaled lag.
>>
>> However, the current implementation fails to update `se->vprot`, leading
>> to a mismatch between the task's actual protected runtime and its
>> expected duration.
>
> I don't think that is correct. "vprot" allows for "min_slice" worth of
> runtime from the beginning of the pick; however, if we do a
> set_protect_slice() after reweight, we'll essentially grant another
> "min_slice" worth of time from the current "se->vruntime" (or until the
> deadline, if that is sooner), which is not correct.
>
>>
>> This patch fixes the issue by calling `set_protect_slice()` at the end of
>> `reweight_entity()`. This ensures that a new, valid protection slice is
>> committed based on the updated `vruntime` and the new weight, restoring
>> Run-to-Parity consistency immediately after a weight change.
>>
>> Fixes: 63304558ba5d ("sched/eevdf: Curb wakeup-preemption")
>> Suggested-by: Zhang Qiao <zhangqiao22@...wei.com>
>> Signed-off-by: Wang Tao <wangtao554@...wei.com>
>> ---
>>  kernel/sched/fair.c | 2 ++
>>  1 file changed, 2 insertions(+)
>>
>> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
>> index e71302282671..bdd8c4e688ae 100644
>> --- a/kernel/sched/fair.c
>> +++ b/kernel/sched/fair.c
>> @@ -3792,6 +3792,8 @@ static void reweight_entity(struct cfs_rq *cfs_rq, struct sched_entity *se,
>
> At the beginning of reweight_entity(), we should first check whether
> protect_slice() is true for curr:
>
> 	bool protect = curr && protect_slice(se);
>
>>  	if (!curr)
>>  		__enqueue_entity(cfs_rq, se);
>>  	cfs_rq->nr_queued++;
>> +	if (curr)
>> +		set_protect_slice(cfs_rq, se);
>
>
> If protect_slice() was true to begin with, we should do:
>
> 	if (protect)
> 		se->vprot = min_vruntime(se->vprot, se->deadline);
>
> This ensures that if our deadline has moved back, we only protect until
> the new deadline and the scheduler can re-evaluate after that. If there
> was an entity with a shorter slice at the beginning of the pick, the
> "vprot" should still reflect the old value that was calculated using
> "se->vruntime" at the time of the pick.
>
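If I understand correctly, the combined change would look roughly like
this in reweight_entity() (my sketch of your suggestion; the exact
placement around the existing enqueue path is my assumption):

	bool protect = curr && protect_slice(se);

	/* ... existing reweight/place_entity() handling ... */

	if (!curr)
		__enqueue_entity(cfs_rq, se);
	cfs_rq->nr_queued++;

	/* Keep the original protection, clamped to the possibly-new deadline. */
	if (protect)
		se->vprot = min_vruntime(se->vprot, se->deadline);
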
I have a concern with this approach, though.

When a task's weight changes, I believe its "vprot" should be rescaled
accordingly. If we keep the original "vprot" unchanged, the task's actual
protected runtime will no longer match the physical duration that "vprot"
was meant to grant. For example, if the weight decreases, "vruntime"
advances faster, so the task will hit the protection limit much earlier
than expected in physical time.
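
To put hypothetical numbers on it: suppose (se->vprot - se->vruntime) is
2 ms of virtual time when the weight is halved from 2048 to 1024. Halving
the weight doubles the rate at which "vruntime" advances, so an unchanged
"vprot" would be consumed after only half the physical time the
protection was originally meant to grant.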

Therefore, I made the modification below to keep the protection slice
consistent with the new weight. I would like to hear your thoughts on
this:
	if (protect) {
		se->vprot -= se->vruntime;
		se->vprot = div_s64(se->vprot * se->load.weight, weight);
		se->vprot += se->vruntime;
		se->vprot = min_vruntime(se->vprot, se->deadline);
	}
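
With the same hypothetical numbers (and assuming "se->load.weight" still
holds the old weight at this point), the scaling gives
(se->vprot - se->vruntime) * 2048 / 1024 = 4 ms of virtual protection,
which at the halved weight is consumed in the originally intended amount
of physical time; min_vruntime() then caps the result at the deadline.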
Best Regards,
Tao