Message-ID: <CAKfTPtAABCHW7BWPN5a1nZo7R1EW701Sa_iCeTxWLm6f5hjNvQ@mail.gmail.com>
Date: Fri, 5 Dec 2025 14:36:12 +0100
From: Vincent Guittot <vincent.guittot@...aro.org>
To: K Prateek Nayak <kprateek.nayak@....com>
Cc: Peter Zijlstra <peterz@...radead.org>, mingo@...hat.com, juri.lelli@...hat.com,
dietmar.eggemann@....com, rostedt@...dmis.org, bsegall@...gle.com,
mgorman@...e.de, vschneid@...hat.com, linux-kernel@...r.kernel.org,
pierre.gondois@....com, qyousef@...alina.io, hongyan.xia2@....com,
christian.loehle@....com, luis.machado@....com
Subject: Re: [PATCH 4/6 v8] sched/fair: Add push task mechanism for fair
On Fri, 5 Dec 2025 at 13:49, K Prateek Nayak <kprateek.nayak@....com> wrote:
>
> On 12/5/2025 2:29 PM, Peter Zijlstra wrote:
> >>> Why not use move_queued_task() ?
> >>
> >> double_lock_balance() can fail, which avoids being blocked waiting
> >> for the new rq, whereas move_queued_task() will wait, won't it?
> >>
> >> Do you think move_queued_task() would be better?
> >
> > No, double_lock_balance() never fails; the return value indicates
> > whether the currently held rq lock (the first argument) was unlocked
> > while acquiring both -- this is required when the first rq has a
> > higher address than the second.
> >
> > double_lock_balance() also puts the wait time and hold time of the
> > second inside the hold time of the first, which gets you a quadratic
> > term in the rq hold times IIRC. Something that's best avoided.
> >
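For readers following along, here is a minimal sketch of the ordering
scheme Peter describes -- close to, but not exactly, the
_double_lock_balance() fallback in mainline kernel/sched/core.c:

static int double_lock_balance_sketch(struct rq *this_rq, struct rq *busiest)
{
	/* Fast path: grab the second lock without dropping anything. */
	if (raw_spin_rq_trylock(busiest))
		return 0;

	/* this_rq orders first (lower address): take busiest nested. */
	if (rq_order_less(this_rq, busiest)) {
		raw_spin_rq_lock_nested(busiest, SINGLE_DEPTH_NESTING);
		return 0;
	}

	/* busiest orders first: drop this_rq and take both in order. */
	raw_spin_rq_unlock(this_rq);
	double_rq_lock(this_rq, busiest);
	return 1;	/* this_rq was unlocked; caller must revalidate. */
}
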
> > move_queued_task() OTOH takes the task off the runqueue you already hold
> > locked, drops this lock, acquires the second, puts the task there, and
> > returns with the dst rq locked.
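
And, for contrast, a sketch of the move_queued_task() flow described
above (helper names as in mainline kernel/sched/core.c, details
elided):

static struct rq *move_queued_task_sketch(struct rq *rq, struct rq_flags *rf,
					  struct task_struct *p, int new_cpu)
{
	lockdep_assert_rq_held(rq);

	deactivate_task(rq, p, DEQUEUE_NOCLOCK);	/* off the rq we hold */
	set_task_cpu(p, new_cpu);
	rq_unlock(rq, rf);				/* drop the src lock */

	rq = cpu_rq(new_cpu);
	rq_lock(rq, rf);				/* take the dst lock */
	WARN_ON_ONCE(task_cpu(p) != new_cpu);
	activate_task(rq, p, 0);
	wakeup_preempt(rq, p, 0);

	return rq;	/* returned with the dst rq locked */
}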
>
> So I was experimenting with:
>
> deactivate_task(rq, p, 0);               /* dequeue from the rq we hold */
> set_task_cpu(p, target_cpu);             /* account the migration */
> __ttwu_queue_wakelist(p, target_cpu, 0); /* wake list + IPI to target */
>
> and nothing has screamed at me yet during the benchmark runs.
> Would this be any good instead of the whole lock juggling?
>
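For what that last call buys you: __ttwu_queue_wakelist() queues the
task on the remote CPU's wake list and kicks it with an IPI, so the
enqueue cost is paid on the target. Roughly (a sketch, not the exact
mainline source):

static void __ttwu_queue_wakelist_sketch(struct task_struct *p, int cpu,
					 int wake_flags)
{
	struct rq *rq = cpu_rq(cpu);

	p->sched_remote_wakeup = !!(wake_flags & WF_MIGRATED);

	WRITE_ONCE(rq->ttwu_pending, 1);
	/* llist enqueue + IPI; the target runs sched_ttwu_pending(). */
	__smp_call_single_queue(cpu, &p->wake_entry.llist);
}
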
> Since this CPU is found to be going overloaded, pushing via an
> IPI vs taking the overhead ourselves seems to make more sense
> to me from an EAS standpoint.

Just to make sure that we are talking about the same thing: with EAS,
overloaded and overutilized are two different things. EAS doesn't care
about, and sometimes even wants, an overloaded CPU (having more than
one task on the CPU), but EAS is disabled once the CPU becomes
overutilized.

I suppose it's worth trying the IPI on EAS and on embedded devices.
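
To make that overutilized gate concrete, find_energy_efficient_cpu()
bails out as soon as the root domain is marked overutilized -- roughly
(a sketch of the mainline check, not verbatim):

	rcu_read_lock();
	pd = rcu_dereference(rd->pd);
	if (!pd || READ_ONCE(rd->overutilized))
		goto unlock;	/* EAS placement is skipped entirely */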
>
> Given TTWU_QUEUE is disabled for PREEMPT_RT, I'm assuming this
> might be problematic too?
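
For reference, the feature default in mainline kernel/sched/features.h
is:

SCHED_FEAT(TTWU_QUEUE, !IS_ENABLED(CONFIG_PREEMPT_RT))

so the wakelist path is off by default on PREEMPT_RT, presumably to
avoid IPI-induced latency spikes, and a direct __ttwu_queue_wakelist()
call would bypass that feature check.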
>
> --
> Thanks and Regards,
> Prateek
>