Message-ID: <20170821134458.gocjoobaybb45egi@e106622-lin>
Date: Mon, 21 Aug 2017 14:44:58 +0100
From: Juri Lelli <juri.lelli@....com>
To: Byungchul Park <byungchul.park@....com>
Cc: peterz@...radead.org, mingo@...nel.org, joel.opensrc@...il.com,
linux-kernel@...r.kernel.org, juri.lelli@...il.com,
rostedt@...dmis.org, kernel-team@....com
Subject: Re: [PATCH v8 1/2] sched/deadline: Add support for SD_PREFER_SIBLING
on find_later_rq()
Hi,
On 18/08/17 17:21, Byungchul Park wrote:
> It would be better to try to check other siblings first if
> SD_PREFER_SIBLING is flagged when pushing tasks (migration).
>
> Signed-off-by: Byungchul Park <byungchul.park@....com>
Mmm, this looks like Peter's proposed patch; maybe add (at least) a
Suggested-by: tag for him?
https://marc.info/?l=linux-kernel&m=150176183807073
Also, I'm not sure what Peter meant with
"But still this isn't quite right, because when we consider this for SMT
(as was the intent here) we'll happily occupy a full sibling core over
finding an empty one."
since we are still using the later_mask, which should not include full
cores (unless it is the one with the latest deadline)?
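To make that concrete, here is a throwaway user-space toy (plain
unsigned int bitmasks instead of struct cpumask, all mask values
invented for illustration) showing that the selection is confined to
later_mask by construction, so a CPU on a fully busy core can only be
picked if cpudl_find() already put it there:

#include <stdio.h>

/* toy single-word cpumask; first_and() mirrors cpumask_first_and() */
static unsigned int first_and(unsigned int a, unsigned int b)
{
	unsigned int both = a & b;

	return both ? (unsigned int)__builtin_ctz(both) : 32; /* 32 ~ nr_cpu_ids */
}

int main(void)
{
	/*
	 * Invented example: SMT pairs {0,1} {2,3}; suppose cpudl_find()
	 * left only CPU 3 in later_mask because core {0,1} is fully
	 * busy with earlier deadlines.
	 */
	unsigned int later_mask = 0x08;	/* only CPU 3 */
	unsigned int sd_span    = 0x0F;	/* CPUs 0-3 */

	/* pre-patch selection: whatever sd spans, only later_mask members win */
	printf("best_cpu = %u\n", first_and(later_mask, sd_span)); /* 3 */
	return 0;
}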
> ---
> kernel/sched/deadline.c | 55 ++++++++++++++++++++++++++++++++++++++++++++++---
> 1 file changed, 52 insertions(+), 3 deletions(-)
>
> diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
> index 0223694..115250b 100644
> --- a/kernel/sched/deadline.c
> +++ b/kernel/sched/deadline.c
> @@ -1319,12 +1319,35 @@ static struct task_struct *pick_earliest_pushable_dl_task(struct rq *rq, int cpu
>
> static DEFINE_PER_CPU(cpumask_var_t, local_cpu_mask_dl);
>
> +/*
> + * Find the first cpu in: mask & sd & ~prefer
> + */
> +static int find_cpu(const struct cpumask *mask,
> + const struct sched_domain *sd,
> + const struct sched_domain *prefer)
> +{
> + const struct cpumask *sds = sched_domain_span(sd);
> + const struct cpumask *ps = prefer ? sched_domain_span(prefer) : NULL;
> + int cpu;
> +
> + for_each_cpu(cpu, mask) {
> + if (!cpumask_test_cpu(cpu, sds))
> + continue;
> + if (ps && cpumask_test_cpu(cpu, ps))
> + continue;
> + break;
> + }
> +
> + return cpu;
> +}
> +
> static int find_later_rq(struct task_struct *task)
> {
> - struct sched_domain *sd;
> + struct sched_domain *sd, *prefer = NULL;
> struct cpumask *later_mask = this_cpu_cpumask_var_ptr(local_cpu_mask_dl);
> int this_cpu = smp_processor_id();
> int cpu = task_cpu(task);
> + int fallback_cpu = -1;
>
> /* Make sure the mask is initialized first */
> if (unlikely(!later_mask))
> @@ -1376,8 +1399,7 @@ static int find_later_rq(struct task_struct *task)
> return this_cpu;
> }
>
> - best_cpu = cpumask_first_and(later_mask,
> - sched_domain_span(sd));
> + best_cpu = find_cpu(later_mask, sd, prefer);
> /*
> * Last chance: if a cpu being in both later_mask
> * and current sd span is valid, that becomes our
> @@ -1385,6 +1407,26 @@ static int find_later_rq(struct task_struct *task)
> * already under consideration through later_mask.
> */
It seems that the comment above should be updated as well.
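For reference, the helper keeps cpumask_first_and()'s "not found"
convention (the for_each_cpu() loop variable ends up at nr_cpu_ids when
the intersection is empty), which the updated comment could spell out.
A standalone toy mimicking the loop shape, with NR_CPUS and the masks
invented for illustration:

#include <stdio.h>

#define NR_CPUS 8	/* invented; plays the role of nr_cpu_ids */

/* mimics the proposed find_cpu(): first cpu in mask & sds & ~ps */
static int toy_find_cpu(unsigned int mask, unsigned int sds, unsigned int ps)
{
	int cpu;

	for (cpu = 0; cpu < NR_CPUS; cpu++) {
		if (!(mask & (1u << cpu)))
			continue;
		if (ps & (1u << cpu))
			continue;
		if (!(sds & (1u << cpu)))
			continue;
		break;
	}

	return cpu;	/* == NR_CPUS when nothing matched */
}

int main(void)
{
	/* everything eligible sits in 'prefer' -> sentinel comes back */
	printf("%d\n", toy_find_cpu(0x0C, 0x0F, 0x0C));	/* prints 8 */
	/* drop the prefer exclusion -> CPU 2 is picked */
	printf("%d\n", toy_find_cpu(0x0C, 0x0F, 0x00));	/* prints 2 */
	return 0;
}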
Thanks,
- Juri