Message-ID: <20200414121956.3687d6e9@gandalf.local.home>
Date: Tue, 14 Apr 2020 12:19:56 -0400
From: Steven Rostedt <rostedt@...dmis.org>
To: Qais Yousef <qais.yousef@....com>
Cc: Ingo Molnar <mingo@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
Juri Lelli <juri.lelli@...hat.com>,
Vincent Guittot <vincent.guittot@...aro.org>,
Dietmar Eggemann <dietmar.eggemann@....com>,
Ben Segall <bsegall@...gle.com>, Mel Gorman <mgorman@...e.de>,
Andrew Morton <akpm@...ux-foundation.org>,
Thomas Gleixner <tglx@...utronix.de>,
Yury Norov <yury.norov@...il.com>,
Paul Turner <pjt@...gle.com>,
Alexey Dobriyan <adobriyan@...il.com>,
Josh Don <joshdon@...gle.com>,
Pavan Kondeti <pkondeti@...eaurora.org>,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH 2/4] cpumask: Make cpumask_any() truly random
On Tue, 14 Apr 2020 16:05:54 +0100
Qais Yousef <qais.yousef@....com> wrote:
> Commit 46a87b3851f0 ("sched/core: Distribute tasks within affinity masks")
> added a new cpumask_any_and_distribute() which truly returns a random
> cpu within the mask.
>
> The previous patch renamed the function to cpumask_any_and(), so that old
> users can take advantage of the new randomness behavior.
>
> Build on that, and make cpumask_any() truly random too by reusing the
> logic from cpumask_any_and().
>
> Signed-off-by: Qais Yousef <qais.yousef@....com>
> CC: Juri Lelli <juri.lelli@...hat.com>
> CC: Vincent Guittot <vincent.guittot@...aro.org>
> CC: Dietmar Eggemann <dietmar.eggemann@....com>
> CC: Steven Rostedt <rostedt@...dmis.org>
> CC: Ben Segall <bsegall@...gle.com>
> CC: Mel Gorman <mgorman@...e.de>
> CC: Andrew Morton <akpm@...ux-foundation.org>
> CC: Thomas Gleixner <tglx@...utronix.de>
> CC: Yury Norov <yury.norov@...il.com>
> CC: Paul Turner <pjt@...gle.com>
> CC: Alexey Dobriyan <adobriyan@...il.com>
> CC: Josh Don <joshdon@...gle.com>
> CC: Pavan Kondeti <pkondeti@...eaurora.org>
> CC: linux-kernel@...r.kernel.org
> ---
> include/linux/cpumask.h | 14 ++++++--------
> lib/cpumask.c | 24 ++++++++++++++++++++++++
> 2 files changed, 30 insertions(+), 8 deletions(-)
>
> diff --git a/include/linux/cpumask.h b/include/linux/cpumask.h
> index e4d6d140a67c..7fb25d256043 100644
> --- a/include/linux/cpumask.h
> +++ b/include/linux/cpumask.h
> @@ -194,6 +194,11 @@ static inline unsigned int cpumask_local_spread(unsigned int i, int node)
> return 0;
> }
>
> +static inline int cpumask_any(const struct cpumask *src1p)
> +{
> + return 0;
> +}
> +
> static inline int cpumask_any_and(const struct cpumask *src1p,
> const struct cpumask *src2p)
> {
> @@ -251,6 +256,7 @@ static inline unsigned int cpumask_next_zero(int n, const struct cpumask *srcp)
> int cpumask_next_and(int n, const struct cpumask *, const struct cpumask *);
> int cpumask_any_but(const struct cpumask *mask, unsigned int cpu);
> unsigned int cpumask_local_spread(unsigned int i, int node);
> +int cpumask_any(const struct cpumask *srcp);
> int cpumask_any_and(const struct cpumask *src1p, const struct cpumask *src2p);
>
> /**
> @@ -600,14 +606,6 @@ static inline void cpumask_copy(struct cpumask *dstp,
> bitmap_copy(cpumask_bits(dstp), cpumask_bits(srcp), nr_cpumask_bits);
> }
>
> -/**
> - * cpumask_any - pick a "random" cpu from *srcp
> - * @srcp: the input cpumask
> - *
> - * Returns >= nr_cpu_ids if no cpus set.
> - */
> -#define cpumask_any(srcp) cpumask_first(srcp)
> -
> /**
> * cpumask_first_and - return the first cpu from *srcp1 & *srcp2
> * @src1p: the first input
> diff --git a/lib/cpumask.c b/lib/cpumask.c
> index b527a153b023..bcac63e45374 100644
> --- a/lib/cpumask.c
> +++ b/lib/cpumask.c
> @@ -259,3 +259,27 @@ int cpumask_any_and(const struct cpumask *src1p, const struct cpumask *src2p)
> return next;
> }
> EXPORT_SYMBOL(cpumask_any_and);
> +
> +/**
> + * cpumask_any - pick a "random" cpu from *srcp
> + * @srcp: the input cpumask
> + *
> + * Returns >= nr_cpu_ids if no cpus set.
> + */
> +int cpumask_any(const struct cpumask *srcp)
> +{
> + int next, prev;
> +
> + /* NOTE: our first selection will skip 0. */
> + prev = __this_cpu_read(distribute_cpu_mask_prev);
> +
> + next = cpumask_next(prev, srcp);
> + if (next >= nr_cpu_ids)
> + next = cpumask_first(srcp);
> +
> + if (next < nr_cpu_ids)
> + __this_cpu_write(distribute_cpu_mask_prev, next);
Do we care if this gets preempted and migrated to a new CPU, such that we
read "prev" from distribute_cpu_mask_prev on one CPU and write the result
back to the per-CPU variable on a different CPU?
-- Steve
> +
> + return next;
> +}
> +EXPORT_SYMBOL(cpumask_any);