Message-ID: <740e5992-a5d6-9b8a-33c8-fffb7e2785b8@arm.com>
Date: Wed, 8 Apr 2020 14:26:05 +0200
From: Dietmar Eggemann <dietmar.eggemann@....com>
To: Valentin Schneider <valentin.schneider@....com>
Cc: Ingo Molnar <mingo@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
Juri Lelli <juri.lelli@...hat.com>,
Vincent Guittot <vincent.guittot@...aro.org>,
Steven Rostedt <rostedt@...dmis.org>,
Luca Abeni <luca.abeni@...tannapisa.it>,
Daniel Bristot de Oliveira <bristot@...hat.com>,
Wei Wang <wvw@...gle.com>, Quentin Perret <qperret@...gle.com>,
Alessio Balsini <balsini@...gle.com>,
Pavan Kondeti <pkondeti@...eaurora.org>,
Patrick Bellasi <patrick.bellasi@...bug.net>,
Morten Rasmussen <morten.rasmussen@....com>,
Qais Yousef <qais.yousef@....com>, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 2/4] sched/deadline: Improve admission control for
asymmetric CPU capacities
On 08.04.20 12:42, Valentin Schneider wrote:
>
> On 08/04/20 10:50, Dietmar Eggemann wrote:
>> +++ b/kernel/sched/sched.h
>> @@ -304,11 +304,14 @@ void __dl_add(struct dl_bw *dl_b, u64 tsk_bw, int cpus)
>>         __dl_update(dl_b, -((s32)tsk_bw / cpus));
>> }
>>
>> +static inline unsigned long rd_capacity(int cpu);
>> +
>> static inline
>> -bool __dl_overflow(struct dl_bw *dl_b, int cpus, u64 old_bw, u64 new_bw)
>> +bool __dl_overflow(struct dl_bw *dl_b, int cpu, u64 old_bw, u64 new_bw)
>> {
>>         return dl_b->bw != -1 &&
>> -              dl_b->bw * cpus < dl_b->total_bw - old_bw + new_bw;
>> +              cap_scale(dl_b->bw, rd_capacity(cpu)) <
>> +              dl_b->total_bw - old_bw + new_bw;
>> }
>>
>
> I don't think this is strictly equivalent to what we have now for the SMP
> case. 'cpus' used to come from dl_bw_cpus(), which is an ugly way of
> writing
>
> cpumask_weight(rd->span AND cpu_active_mask);
>
> The rd->cpu_capacity_orig field you added gets set once per domain rebuild,
> so it also happens in sched_cpu_(de)activate() but is separate from
> touching cpu_active_mask. AFAICT this means we can observe a CPU as !active
> but still see its capacity_orig accounted in a root_domain.
I see what you mean.
The

int dl_bw_cpus(int i) {
        ...
        for_each_cpu_and(i, rd->span, cpu_active_mask)
                cpus++;
        ...
}

should be there to handle the 'rd->span ⊄ cpu_active_mask' case.
We could use a similar implementation for s/cpus/capacity:
unsigned long dl_bw_capacity(int i) {
        ...
        for_each_cpu_and(i, rd->span, cpu_active_mask)
                cap += arch_scale_cpu_capacity(i);
        ...
}
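
Something like this then, just as a sketch mirroring the structure of
dl_bw_cpus() (helper name, placement and the assumption that it runs
under the same sched-RCU protection as dl_bw_cpus() are only
illustrative at this point):

static inline unsigned long dl_bw_capacity(int i)
{
        struct root_domain *rd = cpu_rq(i)->rd;
        unsigned long cap = 0;

        /* Only sum the capacity of CPUs which are in the rd *and* active */
        for_each_cpu_and(i, rd->span, cpu_active_mask)
                cap += arch_scale_cpu_capacity(i);

        return cap;
}

__dl_overflow() would then only see the capacity of currently active
CPUs, which should cover the !active case you describe above.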
[...]