Message-ID: <20201006140926.GI4352@localhost.localdomain>
Date: Tue, 6 Oct 2020 16:09:26 +0200
From: Juri Lelli <juri.lelli@...hat.com>
To: Peter Zijlstra <peterz@...radead.org>
Cc: tglx@...utronix.de, mingo@...nel.org, linux-kernel@...r.kernel.org,
bigeasy@...utronix.de, qais.yousef@....com, swood@...hat.com,
valentin.schneider@....com, vincent.guittot@...aro.org,
dietmar.eggemann@....com, rostedt@...dmis.org, bsegall@...gle.com,
mgorman@...e.de, bristot@...hat.com, vincent.donnefort@....com,
tj@...nel.org
Subject: Re: [PATCH -v2 12/17] sched,rt: Use cpumask_any*_distribute()
Hi,
On 05/10/20 16:57, Peter Zijlstra wrote:
> Replace a bunch of cpumask_any*() instances with
> cpumask_any*_distribute(). By injecting this little bit of randomness
> into the CPU selection, we reduce the chance that two competing
> balance operations working off the same lowest_mask pick the same CPU.
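
To illustrate: cpumask_any() is just cpumask_first(), so every
concurrent caller gets the lowest set bit, while the *_distribute()
variants rotate a cursor so successive picks walk the mask. A toy
userspace model of the rotation, all names made up, just to show the
idea:

	/* Toy model of the rotating pick; mimics, but is not, the
	 * kernel's *_distribute() implementation. */
	#include <stdio.h>

	#define NR_CPUS 8

	static int prev;	/* stands in for the kernel's per-CPU cursor */

	/* Next set bit in @mask strictly after @n, or NR_CPUS if none. */
	static int next_cpu(unsigned int mask, int n)
	{
		int cpu;

		for (cpu = n + 1; cpu < NR_CPUS; cpu++)
			if (mask & (1u << cpu))
				return cpu;
		return NR_CPUS;
	}

	static int any_distribute(unsigned int mask)
	{
		int next = next_cpu(mask, prev);

		if (next >= NR_CPUS)
			next = next_cpu(mask, -1);	/* wrap around */
		if (next < NR_CPUS)
			prev = next;
		return next;
	}

	int main(void)
	{
		unsigned int lowest_mask = 1u << 1 | 1u << 2 | 1u << 5;
		int i;

		/* Prints CPUs 1 2 5 1 2 5: picks rotate through the
		 * mask instead of always returning the lowest bit. */
		for (i = 0; i < 6; i++)
			printf("pick %d: CPU %d\n", i, any_distribute(lowest_mask));
		return 0;
	}
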
>
> Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>
> ---
> include/linux/cpumask.h | 6 ++++++
> kernel/sched/cpupri.c | 4 ++--
> kernel/sched/deadline.c | 2 +-
> kernel/sched/rt.c | 6 +++---
> lib/cpumask.c | 18 ++++++++++++++++++
> 5 files changed, 30 insertions(+), 6 deletions(-)
>
> --- a/include/linux/cpumask.h
> +++ b/include/linux/cpumask.h
> @@ -199,6 +199,11 @@ static inline int cpumask_any_and_distri
> return cpumask_next_and(-1, src1p, src2p);
> }
>
> +static inline int cpumask_any_distribute(const struct cpumask *srcp)
> +{
> + return cpumask_first(srcp);
> +}
> +
> #define for_each_cpu(cpu, mask) \
> for ((cpu) = 0; (cpu) < 1; (cpu)++, (void)mask)
> #define for_each_cpu_not(cpu, mask) \
> @@ -252,6 +257,7 @@ int cpumask_any_but(const struct cpumask
> unsigned int cpumask_local_spread(unsigned int i, int node);
> int cpumask_any_and_distribute(const struct cpumask *src1p,
> const struct cpumask *src2p);
> +int cpumask_any_distribute(const struct cpumask *srcp);
>
> /**
> * for_each_cpu - iterate over every cpu in a mask
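
The lib/cpumask.c hunk is trimmed from the quote above. For anyone
following along, I'd expect cpumask_any_distribute() to mirror the
existing cpumask_any_and_distribute() and rotate the same per-CPU
cursor; an untested sketch of roughly what that looks like, not the
actual hunk:

	int cpumask_any_distribute(const struct cpumask *srcp)
	{
		int next, prev;

		/* NOTE: our first selection will skip 0. */
		prev = __this_cpu_read(distribute_cpu_mask_prev);

		next = cpumask_next(prev, srcp);
		if (next >= nr_cpu_ids)
			next = cpumask_first(srcp);	/* wrap around */

		if (next < nr_cpu_ids)
			__this_cpu_write(distribute_cpu_mask_prev, next);

		return next;
	}
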
> --- a/kernel/sched/deadline.c
> +++ b/kernel/sched/deadline.c
> @@ -2001,7 +2001,7 @@ static int find_later_rq(struct task_str
> if (this_cpu != -1)
> return this_cpu;
>
> - cpu = cpumask_any(later_mask);
> + cpu = cpumask_any_distribute(later_mask);
> if (cpu < nr_cpu_ids)
> return cpu;
I think we can use cpumask_any_and_distribute() with later_mask for
deadline as well, inside the for_each_domain() loop, just as you do for
rt below.
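
Something like this, mirroring the rt.c change below (untested, quoting
find_later_rq() from memory):

	-		best_cpu = cpumask_first_and(later_mask,
	-					     sched_domain_span(sd));
	+		best_cpu = cpumask_any_and_distribute(later_mask,
	+						      sched_domain_span(sd));
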
Best,
Juri
> --- a/kernel/sched/rt.c
> +++ b/kernel/sched/rt.c
> @@ -1752,8 +1752,8 @@ static int find_lowest_rq(struct task_st
> return this_cpu;
> }
>
> - best_cpu = cpumask_first_and(lowest_mask,
> - sched_domain_span(sd));
> + best_cpu = cpumask_any_and_distribute(lowest_mask,
> + sched_domain_span(sd));
> if (best_cpu < nr_cpu_ids) {
> rcu_read_unlock();
> return best_cpu;
> @@ -1770,7 +1770,7 @@ static int find_lowest_rq(struct task_st
> if (this_cpu != -1)
> return this_cpu;
>
> - cpu = cpumask_any(lowest_mask);
> + cpu = cpumask_any_distribute(lowest_mask);
> if (cpu < nr_cpu_ids)
> return cpu;
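
(Side note on why this helps: the *_distribute() cursor is per-CPU, so
two CPUs balancing at the same time walk the same lowest_mask from
independent starting points and tend to land on different targets,
whereas the old cpumask_first*()/cpumask_any() calls would both have
returned the lowest set bit.)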