Message-ID: <6667e71b-a9ef-4c01-9453-171c07b2753f@arm.com>
Date: Wed, 7 Jan 2026 16:48:38 +0100
From: Dietmar Eggemann <dietmar.eggemann@....com>
To: Shijie Huang <shijie@...eremail.onmicrosoft.com>,
Huang Shijie <shijie@...amperecomputing.com>, mingo@...hat.com,
peterz@...radead.org, juri.lelli@...hat.com, vincent.guittot@...aro.org
Cc: patches@...erecomputing.com, cl@...ux.com,
Shubhang@...amperecomputing.com, rostedt@...dmis.org, bsegall@...gle.com,
mgorman@...e.de, linux-kernel@...r.kernel.org, vschneid@...hat.com,
vineethr@...ux.ibm.com, kprateek.nayak@....com
Subject: Re: [PATCH v6 2/2] sched: update the rq->avg_idle when a task is
moved to an idle CPU
On 17.12.25 17:15, Dietmar Eggemann wrote:
> On 15.12.25 10:35, Shijie Huang wrote:
>>
>> On 12/12/2025 22:22, Dietmar Eggemann wrote:
[...]
> I think Vincent is right in saying that update_rq_avg_idle() should be
> put into put_prev_task_idle() instead.
>
> Still waiting for the DCPerf Mediawiki test results to see if this
> change fixes my 'rq->avg_idle being too big' issue.
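(For reference, the placement I tested is roughly the following;
update_rq_avg_idle() stands for the helper introduced by this patch and
the signature is the one on recent kernels, so this is only a sketch:)

	static void put_prev_task_idle(struct rq *rq, struct task_struct *prev,
				       struct task_struct *next)
	{
		/* ... existing put_prev_task_idle() body ... */

		/* suggested placement of the rq->avg_idle update */
		update_rq_avg_idle(rq);
	}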
Turns out the patch didn't fix this issue. I'm still seeing a huge number
of sched_balance_newidle() calls in which (1) the system is overloaded and
(2) this_rq->avg_idle >= sd->max_newidle_lb_cost, so there is no early
bailout, yet no task gets pulled at the end. Must be something else ...
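For context, the early bailout I mean is the check at the top of
sched_balance_newidle() in kernel/sched/fair.c; heavily simplified here
(the overload test is abbreviated, the real code checks the root_domain
overload state under RCU):

	sd = rcu_dereference_check_sched_domain(this_rq->sd);

	/*
	 * Skip the (expensive) newidle balance if the system is not
	 * overloaded or if this CPU is not expected to stay idle long
	 * enough to amortize the balance cost.
	 */
	if (!overloaded ||
	    (sd && this_rq->avg_idle < sd->max_newidle_lb_cost)) {
		/* ... update next_balance ... */
		goto out;
	}

With the system overloaded and avg_idle >= sd->max_newidle_lb_cost,
neither term is true, so we fall through, run the balance and still pull
nothing.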