Message-ID: <20200430131036.GE19464@codeaurora.org>
Date: Thu, 30 Apr 2020 18:40:37 +0530
From: Pavan Kondeti <pkondeti@...eaurora.org>
To: Dietmar Eggemann <dietmar.eggemann@....com>
Cc: Ingo Molnar <mingo@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
Juri Lelli <juri.lelli@...hat.com>,
Vincent Guittot <vincent.guittot@...aro.org>,
Steven Rostedt <rostedt@...dmis.org>,
Luca Abeni <luca.abeni@...tannapisa.it>,
Daniel Bristot de Oliveira <bristot@...hat.com>,
Wei Wang <wvw@...gle.com>, Quentin Perret <qperret@...gle.com>,
Alessio Balsini <balsini@...gle.com>,
Patrick Bellasi <patrick.bellasi@...bug.net>,
Morten Rasmussen <morten.rasmussen@....com>,
Valentin Schneider <valentin.schneider@....com>,
Qais Yousef <qais.yousef@....com>, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2 5/6] sched/deadline: Make DL capacity-aware
On Mon, Apr 27, 2020 at 10:37:08AM +0200, Dietmar Eggemann wrote:
> From: Luca Abeni <luca.abeni@...tannapisa.it>
>
> The current SCHED_DEADLINE (DL) scheduler uses a global EDF scheduling
> algorithm w/o considering CPU capacity or task utilization.
> This works well on homogeneous systems where DL tasks are guaranteed
> to have a bounded tardiness but presents issues on heterogeneous
> systems.
>
> A DL task can migrate to a CPU which does not have enough CPU capacity
> to correctly serve the task (e.g. a task w/ 70ms runtime and 100ms
> period on a CPU w/ 512 capacity).
>
> Add the DL fitness function dl_task_fits_capacity() for DL admission
> control on heterogeneous systems. A task fits onto a CPU if:
>
> CPU original capacity / 1024 >= task runtime / task deadline
>
> Use this function on heterogeneous systems to try to find a CPU which
> meets this criterion during task wakeup, push and offline migration.
>
> On homogeneous systems the original behavior of the DL admission
> control should be retained.
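
Spelling the 70ms/100ms example from above out against this criterion
(assuming an implicit deadline, i.e. dl_deadline == dl_period, since the
example quotes a period):

	runtime / deadline = 70ms / 100ms = 0.7
	0.7 * 1024 = ~717 > 512

so the task does not pass the check on a 512-capacity CPU but does on a
1024-capacity one, which matches the intent described above.
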
>
> Signed-off-by: Luca Abeni <luca.abeni@...tannapisa.it>
> Signed-off-by: Dietmar Eggemann <dietmar.eggemann@....com>
> ---
> kernel/sched/cpudeadline.c | 14 +++++++++++++-
> kernel/sched/deadline.c | 18 ++++++++++++++----
> kernel/sched/sched.h | 15 +++++++++++++++
> 3 files changed, 42 insertions(+), 5 deletions(-)
>
> diff --git a/kernel/sched/cpudeadline.c b/kernel/sched/cpudeadline.c
> index 5cc4012572ec..8630f2a40a3f 100644
> --- a/kernel/sched/cpudeadline.c
> +++ b/kernel/sched/cpudeadline.c
> @@ -121,7 +121,19 @@ int cpudl_find(struct cpudl *cp, struct task_struct *p,
>
> if (later_mask &&
> cpumask_and(later_mask, cp->free_cpus, p->cpus_ptr)) {
> - return 1;
> + int cpu;
> +
> + if (!static_branch_unlikely(&sched_asym_cpucapacity))
> + return 1;
> +
> + /* Ensure the capacity of the CPUs fits the task. */
> + for_each_cpu(cpu, later_mask) {
> + if (!dl_task_fits_capacity(p, cpu))
> + cpumask_clear_cpu(cpu, later_mask);
> + }
> +
> + if (!cpumask_empty(later_mask))
> + return 1;
> } else {
> int best_cpu = cpudl_maximum(cp);
>
> diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
> index 08ab28e1cefc..575b7d88d839 100644
> --- a/kernel/sched/deadline.c
> +++ b/kernel/sched/deadline.c
> @@ -1634,6 +1634,7 @@ static int
> select_task_rq_dl(struct task_struct *p, int cpu, int sd_flag, int flags)
> {
> struct task_struct *curr;
> + bool select_rq;
> struct rq *rq;
>
> if (sd_flag != SD_BALANCE_WAKE)
> @@ -1653,10 +1654,19 @@ select_task_rq_dl(struct task_struct *p, int cpu, int sd_flag, int flags)
> * other hand, if it has a shorter deadline, we
> * try to make it stay here, it might be important.
> */
> - if (unlikely(dl_task(curr)) &&
> - (curr->nr_cpus_allowed < 2 ||
> - !dl_entity_preempt(&p->dl, &curr->dl)) &&
> - (p->nr_cpus_allowed > 1)) {
> + select_rq = unlikely(dl_task(curr)) &&
> + (curr->nr_cpus_allowed < 2 ||
> + !dl_entity_preempt(&p->dl, &curr->dl)) &&
> + p->nr_cpus_allowed > 1;
> +
> + /*
> + * Take the capacity of the CPU into account to
> + * ensure it fits the requirement of the task.
> + */
> + if (static_branch_unlikely(&sched_asym_cpucapacity))
> + select_rq |= !dl_task_fits_capacity(p, cpu);
> +
> + if (select_rq) {
> int target = find_later_rq(p);
I see that find_later_rq() checks whether the previous CPU is part of
later_mask and returns it immediately if so. So we don't migrate the
task in the case where the previous CPU can't fit the task and there
are no idle CPUs on which the task can fit. LGTM.
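
For reference, this is roughly the prev-CPU fast path in find_later_rq()
I am referring to (paraphrased from my reading of mainline, so please
double check against the actual tree):

	int cpu = task_cpu(task);

	if (!cpudl_find(&task_rq(task)->rd->cpudl, task, later_mask))
		return -1;

	/* The task's previous CPU is the first guess (likely cache-hot) */
	if (cpumask_test_cpu(cpu, later_mask))
		return cpu;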
>
> if (target != -1 &&
> diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
> index 511edacc2282..ec0efd99497b 100644
> --- a/kernel/sched/sched.h
> +++ b/kernel/sched/sched.h
> @@ -317,6 +317,21 @@ static inline bool __dl_overflow(struct dl_bw *dl_b, unsigned long cap,
> cap_scale(dl_b->bw, cap) < dl_b->total_bw - old_bw + new_bw;
> }
>
> +/*
> + * Verify the fitness of task @p to run on @cpu taking into account the
> + * CPU original capacity and the runtime/deadline ratio of the task.
> + *
> + * The function will return true if the CPU original capacity of the
> + * @cpu scaled by SCHED_CAPACITY_SCALE >= runtime/deadline ratio of the
> + * task and false otherwise.
> + */
> +static inline bool dl_task_fits_capacity(struct task_struct *p, int cpu)
> +{
> + unsigned long cap = arch_scale_cpu_capacity(cpu);
> +
> + return cap_scale(p->dl.dl_deadline, cap) >= p->dl.dl_runtime;
> +}
> +
This is the same as:

	return p->dl.dl_bw >> (BW_SHIFT - SCHED_CAPACITY_SHIFT) <= cap;

Correct? If yes, would it be better to use this?
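
The arithmetic I have in mind, assuming dl_bw is the runtime/period
ratio in BW_SHIFT (20 bit) fixed point as set up by __setparam_dl(),
with SCHED_CAPACITY_SHIFT == 10:

	/* check in the patch, expanded */
	(dl_deadline * cap) >> 10 >= dl_runtime
	<=> cap >= (dl_runtime << 10) / dl_deadline	(modulo rounding)

	/* suggested check, expanded */
	dl_bw >> (20 - 10) <= cap
	<=> ((dl_runtime << 20) / dl_period) >> 10 <= cap
	<=> cap >= (dl_runtime << 10) / dl_period	(modulo rounding)

So the two agree (up to integer rounding) for implicit deadlines where
dl_deadline == dl_period; for constrained deadlines the check in the
patch is stricter since it uses dl_deadline rather than dl_period.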
Thanks,
Pavan
--
Qualcomm India Private Limited, on behalf of Qualcomm Innovation Center, Inc.
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, a Linux Foundation Collaborative Project.