Message-ID: <5cf1790a-3eed-4c0a-8a31-b3802c5d9b35@arm.com>
Date: Wed, 21 May 2025 16:02:46 +0200
From: Dietmar Eggemann <dietmar.eggemann@....com>
To: Peter Zijlstra <peterz@...radead.org>
Cc: Chris Mason <clm@...a.com>, linux-kernel@...r.kernel.org,
Ingo Molnar <mingo@...nel.org>, vschneid@...hat.com,
Juri Lelli <juri.lelli@...il.com>, Thomas Gleixner <tglx@...utronix.de>
Subject: Re: scheduler performance regression since v6.11
On 20/05/2025 21:38, Peter Zijlstra wrote:
> On Tue, May 20, 2025 at 04:38:09PM +0200, Dietmar Eggemann wrote:
>
>> 3840cbe24cf0 - sched: psi: fix bogus pressure spikes from aggregation race
>>
>> With CONFIG_PSI enabled we now call cpu_clock(cpu) multiple times (up to
>> 4 times per task switch in my setup) in:
>>
>> __schedule() -> psi_sched_switch() -> psi_task_switch() ->
>> psi_group_change().
>>
>> There seem to be one or more later v6.12-related patches which cause
>> another 4% regression that I have yet to identify.
>
> Urgh, let me add this to the pile to look at. Thanks!
Not sure how expensive 'cpu_clock(cpu)' is on bare-metal.
But I also don't get why PSI needs per-group 'now' values when we
iterate over the cgroup levels?
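
To illustrate the question, here is a standalone userspace sketch (not
kernel code; the cgroup depth of 4 and all function names are made up
for illustration) contrasting a clock read in every group change with
a single read per task switch. Any real change to psi_task_switch() /
psi_group_change() would of course have to keep the seqcount write
section introduced by 3840cbe24cf0 intact:

/*
 * Standalone model of the call pattern in question: the task-switch
 * path walks the ancestor groups, and after 3840cbe24cf0 each group
 * change reads the clock itself. The *_hoisted variant reads the
 * clock once in the caller and passes it down.
 */
#include <stdio.h>
#include <stdint.h>
#include <time.h>

#define NR_LEVELS 4	/* assumed cgroup depth: ~4 clock reads per switch */

static uint64_t clock_reads;

/* stand-in for cpu_clock(cpu) */
static uint64_t mock_cpu_clock(int cpu)
{
	struct timespec ts;

	(void)cpu;
	clock_reads++;
	clock_gettime(CLOCK_MONOTONIC, &ts);
	return (uint64_t)ts.tv_sec * 1000000000ULL + ts.tv_nsec;
}

/* current shape: every group change samples the clock on its own */
static void group_change_per_group(int cpu, int level)
{
	uint64_t now = mock_cpu_clock(cpu);

	(void)now;	/* update this group's state at 'now' */
	(void)level;
}

/* hypothetical shape: 'now' sampled once by the caller */
static void group_change_hoisted(int cpu, int level, uint64_t now)
{
	(void)cpu;
	(void)level;
	(void)now;	/* update this group's state at 'now' */
}

static void task_switch_per_group(int cpu)
{
	for (int level = 0; level < NR_LEVELS; level++)
		group_change_per_group(cpu, level);
}

static void task_switch_hoisted(int cpu)
{
	uint64_t now = mock_cpu_clock(cpu);

	for (int level = 0; level < NR_LEVELS; level++)
		group_change_hoisted(cpu, level, now);
}

int main(void)
{
	clock_reads = 0;
	task_switch_per_group(0);
	printf("per-group clock reads per switch: %llu\n",
	       (unsigned long long)clock_reads);

	clock_reads = 0;
	task_switch_hoisted(0);
	printf("hoisted clock reads per switch:   %llu\n",
	       (unsigned long long)clock_reads);
	return 0;
}

The model just counts clock reads (4 vs. 1 per switch at a depth of 4);
how much that costs per read on bare-metal is exactly what I'm unsure
about above.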