Message-ID: <a0f03e3e-bced-4be7-8589-1e65042b39aa@arm.com>
Date: Wed, 19 Feb 2025 10:29:03 +0100
From: Dietmar Eggemann <dietmar.eggemann@....com>
To: Juri Lelli <juri.lelli@...hat.com>
Cc: Jon Hunter <jonathanh@...dia.com>,
Christian Loehle <christian.loehle@....com>,
Thierry Reding <treding@...dia.com>, Waiman Long <longman@...hat.com>,
Tejun Heo <tj@...nel.org>, Johannes Weiner <hannes@...xchg.org>,
Michal Koutny <mkoutny@...e.com>, Ingo Molnar <mingo@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
Vincent Guittot <vincent.guittot@...aro.org>,
Steven Rostedt <rostedt@...dmis.org>, Ben Segall <bsegall@...gle.com>,
Mel Gorman <mgorman@...e.de>, Valentin Schneider <vschneid@...hat.com>,
Phil Auld <pauld@...hat.com>, Qais Yousef <qyousef@...alina.io>,
Sebastian Andrzej Siewior <bigeasy@...utronix.de>,
"Joel Fernandes (Google)" <joel@...lfernandes.org>,
Suleiman Souhlal <suleiman@...gle.com>, Aashish Sharma <shraash@...gle.com>,
Shin Kawamura <kawasin@...gle.com>,
Vineeth Remanan Pillai <vineeth@...byteword.org>,
linux-kernel@...r.kernel.org, cgroups@...r.kernel.org,
"linux-tegra@...r.kernel.org" <linux-tegra@...r.kernel.org>
Subject: Re: [PATCH v2 3/2] sched/deadline: Check bandwidth overflow earlier
for hotplug
On 18/02/2025 15:18, Juri Lelli wrote:
> On 18/02/25 15:12, Dietmar Eggemann wrote:
>> On 18/02/2025 10:58, Juri Lelli wrote:
>>> Hi!
>>>
>>> On 17/02/25 17:08, Juri Lelli wrote:
>>>> On 14/02/25 10:05, Jon Hunter wrote:
[...]
>> Yeah, looks like suspend/resume behaves differently compared to CPU hotplug.
>>
>> On my Juno [L b b L L L]
>> ^^^
>> isolcpus=[2,3]
>>
>> # ps2 | grep DLN
>> 98 98 S 140 0 - DLN sugov:0
>> 99 99 S 140 0 - DLN sugov:1
>>
>> # taskset -p 98; taskset -p 99
>> pid 98's current affinity mask: 39
>> pid 99's current affinity mask: 6
>>
>>
>> [ 87.679282] partition_sched_domains() called
>> ...
>> [ 87.684013] partition_sched_domains() called
>> ...
>> [ 87.687961] partition_sched_domains() called
>> ...
>> [ 87.689419] psci: CPU3 killed (polled 0 ms)
>> [ 87.689715] __dl_bw_capacity() mask=2-5 cap=1024
>> [ 87.689739] dl_bw_cpus() cpu=6 rd->span=2-5 cpu_active_mask=0-2 cpus=1
>> [ 87.689757] dl_bw_manage: cpu=2 cap=0 fair_server_bw=52428
>> total_bw=209712 dl_bw_cpus=1 type=DEF span=2-5
>> [ 87.689775] dl_bw_cpus() cpu=6 rd->span=2-5 cpu_active_mask=0-2 cpus=1
>> [ 87.689789] dl_bw_manage() cpu=2 cap=0 overflow=1 return=-16
>> [ 87.689864] Error taking CPU2 down: -16 <-- !!!
>> ...
>> [ 87.690674] partition_sched_domains() called
>> ...
>> [ 87.691496] partition_sched_domains() called
>> ...
>> [ 87.693702] partition_sched_domains() called
>> ...
>> [ 87.695819] partition_and_rebuild_sched_domains() called
>>
>
> Ah, OK. Did you try with my last proposed change?
I did now.
Patch-wise I have:
(1) Putting the fair_server's __dl_server_[de|at]tach_root() calls under
'if (cpumask_test_cpu(rq->cpu, [old_rd->online|cpu_active_mask]))' in
rq_attach_root()
https://lkml.kernel.org/r/Z7RhNmLpOb7SLImW@jlelli-thinkpadt14gen4.remote.csb
(2) Create __dl_server_detach_root() and call it in rq_attach_root()
https://lkml.kernel.org/r/Z4fd_6M2vhSMSR0i@jlelli-thinkpadt14gen4.remote.csb
plus debug patch:
https://lkml.kernel.org/r/Z6M5fQB9P1_bDF7A@jlelli-thinkpadt14gen4.remote.csb
plus additional debug.
The suspend issue still persists.
My hunch is that it's rather an issue with having 0 CPUs left in DEF
while deactivating the last isolated CPU (CPU3), so we set overflow = 1
without calling __dl_overflow(). We end up trying to account
fair_server_bw=52428 against 0 CPUs.
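For reference, the dl_bw_req_deactivate branch in dl_bw_manage() does
roughly this (paraphrased sketch from my reading of the code, details
may be off):

	case dl_bw_req_deactivate:
		/* Do the math as if the CPU were already offline. */
		cap -= arch_scale_cpu_capacity(cpu);

		/* fair_server bw can be discounted, tasks migrate away. */
		fair_server_bw = cpu_rq(cpu)->fair_server.dl_bw;

		if (dl_b->total_bw - fair_server_bw > 0) {
			if (dl_bw_cpus(cpu) - 1)
				overflow = __dl_overflow(dl_b, cap,
							 fair_server_bw, 0);
			else
				overflow = 1;	/* <-- what we hit here */
		}
		break;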
l B B l l l
^^^
isolcpus=[3,4]
cpumask_and(mask, rd->span, cpu_active_mask)
mask = [3-5] & [0-3] = [3] -> dl_bw_cpus(3) = 1
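dl_bw_cpus() itself is roughly (simplified sketch from my tree):

	static inline int dl_bw_cpus(int i)
	{
		struct root_domain *rd = cpu_rq(i)->rd;
		int cpus;

		/* Fast path: the whole span is still active. */
		if (cpumask_subset(rd->span, cpu_active_mask))
			return cpumask_weight(rd->span);

		/* Otherwise count only the still-active CPUs of the span. */
		cpus = 0;
		for_each_cpu_and(i, rd->span, cpu_active_mask)
			cpus++;

		return cpus;
	}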
---
dl_bw_deactivate() called cpu=5
dl_bw_deactivate() called cpu=4
dl_bw_deactivate() called cpu=3
dl_bw_cpus() cpu=6 rd->span=3-5 cpu_active_mask=0-3 cpus=1 type=DEF
^^^^^^^^^^^^ ^^^^^^^^^^^^^^^^^^^
cpumask_subset(rd->span, cpu_active_mask) is false
for_each_cpu_and(i, rd->span, cpu_active_mask)
cpus++ <-- cpus is 1 !!!
dl_bw_manage: cpu=3 cap=0 fair_server_bw=52428 total_bw=104856 dl_bw_cpus=1 type=DEF span=3-5
called w/ 'req = dl_bw_req_deactivate'
dl_b->total_bw - fair_server_bw = 104856 - 52428 > 0
dl_bw_cpus(cpu) - 1 = 0
overflow = 1
So there is simply no capacity left in DEF for DL, but
'dl_b->total_bw - old_bw + new_bw' = 104856 - 52428 + 0 = 52428 > 0
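FWIW, the check that would otherwise run is (sketch, as I read it):

	static inline bool
	__dl_overflow(struct dl_bw *dl_b, unsigned long cap,
		      u64 old_bw, u64 new_bw)
	{
		return dl_b->bw != -1 &&
		       cap_scale(dl_b->bw, cap) < dl_b->total_bw - old_bw + new_bw;
	}

cap_scale(v, s) is ((v) * (s)) >> SCHED_CAPACITY_SHIFT, so with cap = 0
the left-hand side is 0 and we would get overflow here anyway.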