Message-ID: <c5fe1adf-de43-41b0-bebc-9f47bb3c80c8@amd.com>
Date: Fri, 5 Dec 2025 18:19:07 +0530
From: K Prateek Nayak <kprateek.nayak@....com>
To: Peter Zijlstra <peterz@...radead.org>, Vincent Guittot
<vincent.guittot@...aro.org>
CC: <mingo@...hat.com>, <juri.lelli@...hat.com>, <dietmar.eggemann@....com>,
<rostedt@...dmis.org>, <bsegall@...gle.com>, <mgorman@...e.de>,
<vschneid@...hat.com>, <linux-kernel@...r.kernel.org>,
<pierre.gondois@....com>, <qyousef@...alina.io>, <hongyan.xia2@....com>,
<christian.loehle@....com>, <luis.machado@....com>
Subject: Re: [PATCH 4/6 v8] sched/fair: Add push task mechanism for fair

On 12/5/2025 2:29 PM, Peter Zijlstra wrote:
>>> Why not use move_queued_task()?
>>
>> double_lock_balance() can fail, which avoids blocking while waiting for
>> the new rq, whereas move_queued_task() will wait, won't it?
>>
>> Do you think move_queued_task() would be better?
>
> No, double_lock_balance() never fails; the return value indicates whether
> the currently held rq-lock (the first argument) was unlocked while
> acquiring both -- this is required when the first rq has a higher address
> than the second.
>
> double_lock_balance() also puts the wait time and hold time of the
> second inside the hold time of the first, which gets you a quadratic term
> in the rq hold times IIRC. Something that's best avoided.
>
> move_queued_task() OTOH takes the task off the runqueue you already hold
> locked, drops this lock, acquires the second, puts the task there, and
> returns with the dst rq locked.
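
Thank you for clarifying! So IIUC the flow is roughly the below -- this is
me paraphrasing move_queued_task() from kernel/sched/core.c from memory,
so the exact flags and asserts may well be off:

	static struct rq *move_queued_task(struct rq *rq, struct rq_flags *rf,
					   struct task_struct *p, int new_cpu)
	{
		/* Caller holds rq->lock and @p is queued on @rq. */
		lockdep_assert_rq_held(rq);

		/* Take @p off the rq we already hold locked. */
		deactivate_task(rq, p, DEQUEUE_NOCLOCK);
		set_task_cpu(p, new_cpu);
		rq_unlock(rq, rf);

		rq = cpu_rq(new_cpu);

		/* Take the destination lock on its own and enqueue @p there. */
		rq_lock(rq, rf);
		WARN_ON_ONCE(task_cpu(p) != new_cpu);
		activate_task(rq, p, 0);
		wakeup_preempt(rq, p, 0);

		/* Returns with the destination rq locked. */
		return rq;
	}
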
So I was experimenting with:

	/* Dequeue @p from @rq, whose lock we already hold, and retarget it. */
	deactivate_task(rq, p, 0);
	set_task_cpu(p, target_cpu);
	/* Queue @p on target_cpu's wake list; the IPI handler does the enqueue. */
	__ttwu_queue_wakelist(p, target_cpu, 0);

and nothing has screamed at me yet during the benchmark runs.

Would this be any good instead of all the lock juggling?

Since this CPU has been found to be going overloaded, pushing the task
over via an IPI rather than taking the migration overhead on it ourselves
seems to make more sense to me from an EAS standpoint.

Given that TTWU_QUEUE is disabled for PREEMPT_RT, I'm assuming this
approach might be problematic there too?
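
To cover that, something like the below is what I was picturing -- a
completely untested sketch; push_fair_task() is just a placeholder name
I've made up for wherever the push ends up living, and the flags/guards
are almost certainly incomplete:

	/*
	 * Untested sketch: called with this_rq->lock held and @p still
	 * queued on @this_rq; returns with this_rq->lock held again.
	 */
	static void push_fair_task(struct rq *this_rq, struct rq_flags *rf,
				   struct task_struct *p, int target_cpu)
	{
		struct rq *target_rq;

		if (sched_feat(TTWU_QUEUE)) {
			/*
			 * Dequeue @p locally and let target_cpu do the
			 * enqueue from its wakelist IPI, keeping the
			 * migration cost off this (overloaded) CPU.
			 */
			deactivate_task(this_rq, p, 0);
			set_task_cpu(p, target_cpu);
			__ttwu_queue_wakelist(p, target_cpu, 0);
			return;
		}

		/*
		 * TTWU_QUEUE disabled (e.g. PREEMPT_RT): migrate the task
		 * ourselves. move_queued_task() returns with the target rq
		 * locked, so hand that back and retake this_rq's lock.
		 */
		target_rq = move_queued_task(this_rq, rf, p, target_cpu);
		rq_unlock(target_rq, rf);
		rq_lock(this_rq, rf);
	}
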
--
Thanks and Regards,
Prateek