Message-ID: <20200727112454.GB55660@gmail.com>
Date: Mon, 27 Jul 2020 13:24:54 +0200
From: Ingo Molnar <mingo@...nel.org>
To: vincent.donnefort@....com
Cc: mingo@...hat.com, peterz@...radead.org, vincent.guittot@...aro.org,
linux-kernel@...r.kernel.org, dietmar.eggemann@....com,
lukasz.luba@....com, valentin.schneider@....com
Subject: Re: [PATCH] sched/fair: provide u64 read for 32-bits arch helper

* vincent.donnefort@....com <vincent.donnefort@....com> wrote:

> From: Vincent Donnefort <vincent.donnefort@....com>
>
> Introduce two macro helpers, u64_32read() and u64_32read_set_copy(), to
> factor out the u64 min_vruntime and last_update_time reads on 32-bit
> architectures. These helpers encapsulate the smp_rmb() and smp_wmb()
> synchronization and therefore add a small penalty in set_task_rq_fair()
> and init_cfs_rq().
>
> The choice of a macro over an inline function is driven by the
> conditional declaration of the u64 copy variables.

Could you please explain how the "conditional declaration of the u64
copy variables" prevents us from using an inline function for this:
> +/*
> + * u64_32read() / u64_32read_set_copy()
> + *
> + * Use the copied u64 value to protect against data races. This is only
> + * applicable to 32-bit architectures.
> + */
> +#if !defined(CONFIG_64BIT) && defined(CONFIG_SMP)
> +# define u64_32read(val, copy) \
> +({ \
> +        u64 _val; \
> +        u64 _val_copy; \
> + \
> +        do { \
> +                _val_copy = copy; \
> +                /* \
> +                 * Paired with u64_32read_set_copy(), ordering access \
> +                 * to val and copy. \
> +                 */ \
> +                smp_rmb(); \
> +                _val = val; \
> +        } while (_val != _val_copy); \
> + \
> +        _val; \
> +})
> +# define u64_32read_set_copy(val, copy) \
> +do { \
> +        /* Paired with u64_32read(), ordering access to val and copy */ \
> +        smp_wmb(); \
> +        copy = val; \
> +} while (0)
> +#else
> +# define u64_32read(val, copy) (val)
> +# define u64_32read_set_copy(val, copy) do { } while (0)
> +#endif
> +
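
Just so we are talking about the same thing: below is a minimal,
completely untested sketch of what an inline-function variant could
look like for the !CONFIG_64BIT && SMP case. The pointer-based
interface and the use of READ_ONCE()/WRITE_ONCE() are my own
assumptions, not taken from your patch:

static inline u64 u64_32read(const u64 *val, const u64 *copy)
{
        u64 v, c;

        do {
                c = READ_ONCE(*copy);
                /* Paired with the smp_wmb() in u64_32read_set_copy(). */
                smp_rmb();
                v = READ_ONCE(*val);
        } while (v != c);

        return v;
}

static inline void u64_32read_set_copy(u64 val, u64 *copy)
{
        /* Paired with the smp_rmb() in u64_32read(). */
        smp_wmb();
        WRITE_ONCE(*copy, val);
}

Is the obstacle that the copy field (e.g. cfs_rq::min_vruntime_copy)
is only declared on !CONFIG_64BIT, so an inline function taking it as
an argument would not build on 64-bit without extra #ifdefs at the
call sites? If so, please spell that out in the changelog.
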
Thanks,
Ingo