Message-ID: <20201006155527.w6jck2rgk64t45wm@e107158-lin.cambridge.arm.com>
Date: Tue, 6 Oct 2020 16:55:27 +0100
From: Qais Yousef <qais.yousef@....com>
To: Peter Zijlstra <peterz@...radead.org>
Cc: tglx@...utronix.de, mingo@...nel.org, linux-kernel@...r.kernel.org,
bigeasy@...utronix.de, swood@...hat.com,
valentin.schneider@....com, juri.lelli@...hat.com,
vincent.guittot@...aro.org, dietmar.eggemann@....com,
rostedt@...dmis.org, bsegall@...gle.com, mgorman@...e.de,
bristot@...hat.com, vincent.donnefort@....com, tj@...nel.org
Subject: Re: [PATCH -v2 12/17] sched,rt: Use cpumask_any*_distribute()
On 10/05/20 16:57, Peter Zijlstra wrote:
> Replace a bunch of cpumask_any*() instances with
> cpumask_any*_distribute(). By injecting this little bit of randomness
> into the CPU selection, we reduce the chance that two competing
> balance operations working off the same lowest_mask pick the same CPU.
>
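Just to spell out the behaviour change for anyone following along (an
illustrative sketch, assuming lowest_mask = { 2, 5, 7 }):

	cpumask_first_and(lowest_mask, sched_domain_span(sd))
		-> 2, 2, 2, ... every caller starts the search at
		   bit 0 and picks the same CPU

	cpumask_any_and_distribute(lowest_mask, sched_domain_span(sd))
		-> e.g. 2, 5, 7, 2, ... the search starts after the
		   per-CPU distribute_cpu_mask_prev cursor, so
		   concurrent balance operations tend to spread out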
> Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>
> ---
> include/linux/cpumask.h | 6 ++++++
> kernel/sched/cpupri.c | 4 ++--
> kernel/sched/deadline.c | 2 +-
> kernel/sched/rt.c | 6 +++---
> lib/cpumask.c | 18 ++++++++++++++++++
> 5 files changed, 30 insertions(+), 6 deletions(-)
>
[...]
> --- a/kernel/sched/rt.c
> +++ b/kernel/sched/rt.c
> @@ -1752,8 +1752,8 @@ static int find_lowest_rq(struct task_st
> return this_cpu;
> }
>
> - best_cpu = cpumask_first_and(lowest_mask,
> - sched_domain_span(sd));
> + best_cpu = cpumask_any_and_distribute(lowest_mask,
> + sched_domain_span(sd));
I guess I should have done this 6 months ago and just been done with it :)

See: 20200414150556.10920-1-qais.yousef@....com
> if (best_cpu < nr_cpu_ids) {
> rcu_read_unlock();
> return best_cpu;
> @@ -1770,7 +1770,7 @@ static int find_lowest_rq(struct task_st
> if (this_cpu != -1)
> return this_cpu;
>
> - cpu = cpumask_any(lowest_mask);
> + cpu = cpumask_any_distribute(lowest_mask);
> if (cpu < nr_cpu_ids)
> return cpu;
>
> --- a/lib/cpumask.c
> +++ b/lib/cpumask.c
> @@ -267,3 +267,21 @@ int cpumask_any_and_distribute(const str
> return next;
> }
> EXPORT_SYMBOL(cpumask_any_and_distribute);
> +
> +int cpumask_any_distribute(const struct cpumask *srcp)
> +{
> + int next, prev;
> +
> + /* NOTE: our first selection will skip 0. */
> + prev = __this_cpu_read(distribute_cpu_mask_prev);
We had a discussion back then about how the __this_cpu*() variants
assume preemption is disabled, and that it'd be safer to use the
this_cpu*() variants instead. Does that still hold true here?
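IOW, something like the below (an untested sketch of what I mean;
this_cpu_read()/this_cpu_write() are preemption safe on their own, and
since distribute_cpu_mask_prev is only a distribution hint, a stale
value read just before a migration wouldn't hurt):

	int cpumask_any_distribute(const struct cpumask *srcp)
	{
		int next, prev;

		/* NOTE: our first selection will skip 0. */
		prev = this_cpu_read(distribute_cpu_mask_prev);

		next = cpumask_next(prev, srcp);
		if (next >= nr_cpu_ids)
			next = cpumask_first(srcp);

		if (next < nr_cpu_ids)
			this_cpu_write(distribute_cpu_mask_prev, next);

		return next;
	}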
Thanks
--
Qais Yousef
> +
> + next = cpumask_next(prev, srcp);
> + if (next >= nr_cpu_ids)
> + next = cpumask_first(srcp);
> +
> + if (next < nr_cpu_ids)
> + __this_cpu_write(distribute_cpu_mask_prev, next);
> +
> + return next;
> +}
> +EXPORT_SYMBOL(cpumask_any_distribute);