Message-ID: <a8f7aa02-e215-4d06-b6d7-f59c38137d38@arm.com>
Date: Wed, 11 Sep 2024 09:55:26 +0100
From: Luis Machado <luis.machado@....com>
To: Peter Zijlstra <peterz@...radead.org>
Cc: Dietmar Eggemann <dietmar.eggemann@....com>,
Vincent Guittot <vincent.guittot@...aro.org>,
Hongyan Xia <hongyan.xia2@....com>, mingo@...hat.com, juri.lelli@...hat.com,
rostedt@...dmis.org, bsegall@...gle.com, mgorman@...e.de,
vschneid@...hat.com, linux-kernel@...r.kernel.org, kprateek.nayak@....com,
wuyun.abel@...edance.com, youssefesmat@...omium.org, tglx@...utronix.de,
efault@....de
Subject: Re: [PATCH 10/24] sched/uclamp: Handle delayed dequeue
On 9/11/24 09:45, Peter Zijlstra wrote:
> On Wed, Sep 11, 2024 at 09:35:16AM +0100, Luis Machado wrote:
>> On 9/10/24 15:05, Peter Zijlstra wrote:
>>> On Tue, Sep 10, 2024 at 12:04:11PM +0100, Luis Machado wrote:
>>>> I gave the above patch a try on our Android workload running on the Pixel 6 with a 6.8-based kernel.
>>>>
>>>> First I'd like to confirm that Dietmar's fix that was pushed to tip:sched/core (Fix util_est
>>>> accounting for DELAY_DEQUEUE) helps bring the frequencies and power use down to more sensible levels.
>>>>
>>>> As for the above changes, unfortunately I'm seeing high frequencies and high power usage again. The
>>>> pattern looks similar to what we observed with the uclamp inc/dec imbalance.
>>>
>>> :-(
>>>
>>>> I haven't investigated this in depth yet, but I'll go stare at some traces and the code, and hopefully
>>>> something will ring a bell.
>>>
>>> So the first thing to do is trace h_nr_delayed, I suppose. In my own
>>> (limited) testing it stayed mostly within [0,1], correctly correlating
>>> with there being a delayed task on the runqueue.
>>>
>>> I'm assuming that removing the usage sites restores function?
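For reference, the kind of instrumentation meant here would look roughly
like the sketch below. The helper name and the idea that every update of
cfs_rq->h_nr_delayed funnels through one place are illustrative
assumptions, not necessarily how the patch structures it:

    /*
     * Sketch: route all h_nr_delayed updates through one helper so every
     * transition lands in the trace buffer, and warn at the first update
     * that takes the count negative. Hook points are assumed.
     */
    static void h_nr_delayed_update(struct cfs_rq *cfs_rq, long delta)
    {
            cfs_rq->h_nr_delayed += delta;
            trace_printk("cfs_rq %p h_nr_delayed %+ld -> %ld\n",
                         cfs_rq, delta, (long)cfs_rq->h_nr_delayed);
            SCHED_WARN_ON((long)cfs_rq->h_nr_delayed < 0);
    }
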
>>
>> It does restore function if we remove the usage.
>>
>> From an initial look:
>>
>> cat /sys/kernel/debug/sched/debug | grep -i delay
>> .h_nr_delayed : -4
>> .h_nr_delayed : -6
>> .h_nr_delayed : -1
>> .h_nr_delayed : -6
>> .h_nr_delayed : -1
>> .h_nr_delayed : -1
>> .h_nr_delayed : -5
>> .h_nr_delayed : -6
>>
>> So there's probably an unexpected decrement or a missing increment somewhere.
>
> Yeah, that's buggered. Ok, I'll go rebase sched/core and take this patch
> out. I'll see if I can reproduce that.
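On the inc/dec imbalance theory: the invariant is that h_nr_delayed goes up
exactly once when a dequeue gets delayed, and comes down exactly once when
the delayed task either really leaves the queue or is woken back up. A
self-contained toy model of that pairing (plain userspace C, all names
invented for the example, not the actual kernel code paths):

    #include <assert.h>
    #include <stdbool.h>
    #include <stdio.h>

    struct rq_model { int h_nr_delayed; };
    struct task_model { bool sched_delayed; };

    static void model_dequeue(struct rq_model *rq, struct task_model *p,
                              bool delay)
    {
            if (delay) {
                    p->sched_delayed = true;
                    rq->h_nr_delayed++;     /* stays queued, but delayed */
                    return;
            }
            if (p->sched_delayed) {
                    p->sched_delayed = false;
                    rq->h_nr_delayed--;     /* delayed task really leaves */
            }
    }

    static void model_enqueue(struct rq_model *rq, struct task_model *p)
    {
            /*
             * Wakeup of a delayed task: it never left the queue, so the
             * delayed count must drop here too. Missing this path, or
             * decrementing on both this and the final dequeue, is the
             * kind of imbalance that drives the count negative.
             */
            if (p->sched_delayed) {
                    p->sched_delayed = false;
                    rq->h_nr_delayed--;
            }
    }

    int main(void)
    {
            struct rq_model rq = { 0 };
            struct task_model p = { false };

            model_dequeue(&rq, &p, true);   /* dequeue delayed: count 1  */
            model_enqueue(&rq, &p);         /* wakeup cancels delay: 0   */
            model_dequeue(&rq, &p, false);  /* ordinary dequeue: still 0 */
            assert(rq.h_nr_delayed == 0);
            printf("h_nr_delayed = %d\n", rq.h_nr_delayed);
            return 0;
    }
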
I'll keep looking on my end as well. I'm trying to capture the first time it goes bad. For some
reason my SCHED_WARN_ON didn't trigger when it should've.
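One thing worth double-checking there: SCHED_WARN_ON() compiles to a no-op
without CONFIG_SCHED_DEBUG, and with it enabled it is WARN_ONCE() underneath,
so it only fires the first time its condition is true at a given site. To
catch the first underflow, the check also has to sit on the decrement itself,
along these lines (placement assumed for illustration):

    static void h_nr_delayed_dec(struct cfs_rq *cfs_rq)
    {
            /* Warn before the store, at the first decrement past zero. */
            SCHED_WARN_ON(cfs_rq->h_nr_delayed <= 0);
            cfs_rq->h_nr_delayed--;
    }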