Message-ID: <20200728120027.GN43129@hirez.programming.kicks-ass.net>
Date: Tue, 28 Jul 2020 14:00:27 +0200
From: peterz@...radead.org
To: Vincent Donnefort <vincent.donnefort@....com>
Cc: mingo@...hat.com, vincent.guittot@...aro.org,
linux-kernel@...r.kernel.org, dietmar.eggemann@....com,
lukasz.luba@....com, valentin.schneider@....com
Subject: Re: [PATCH] sched/fair: provide u64 read for 32-bits arch helper
On Tue, Jul 28, 2020 at 01:13:02PM +0200, peterz@...radead.org wrote:
> On Mon, Jul 27, 2020 at 04:23:03PM +0100, Vincent Donnefort wrote:
>
> > For 32-bit architectures, both min_vruntime and last_update_time use a
> > similar access pattern. This patch simply attempts to unify their usage
> > by introducing two macros to rely on when accessing them. At the same
> > time, it adds a comment regarding the barrier usage, as per kernel
> > policy. So overall this is just a clean-up without any functional
> > changes.
>
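
(For context: the open-coded pattern being unified is the usual
copy-plus-barrier trick on 32-bit. A rough sketch using the min_vruntime
case; the exact fair.c code differs in detail.)

	/* writer: publish the value, then the copy */
	cfs_rq->min_vruntime = vruntime;
	smp_wmb();
	cfs_rq->min_vruntime_copy = cfs_rq->min_vruntime;

	/* reader: retry until the copy matches the value */
	do {
		copy = cfs_rq->min_vruntime_copy;
		smp_rmb();
		val = cfs_rq->min_vruntime;
	} while (val != copy);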
> Ah, I thought there was perhaps the idea to make use of armv7-lpae
> instructions.
>
> Aside of that, I think we need to spend a little time bike-shedding the
> API/naming here:
>
> > +# define u64_32read(val, copy) (val)
> > +# define u64_32read_set_copy(val, copy) do { } while (0)
>
> How about something like:
>
> #ifdef CONFIG_64BIT
>
> #define DEFINE_U64_U32(name)		u64 name
> #define u64_u32_load(name)		name
> #define u64_u32_store(name, val)	name = val
>
> #else
>
> #define DEFINE_U64_U32(name)			\
> 	struct {				\
> 		u64 name;			\
> 		u64 name##_copy;	\
> 	}
>
> #define u64_u32_load(name)				\
> ({							\
> 	u64 val, copy;					\
> 	do {						\
> 		val = name;				\
> 		smp_rmb();				\
> 		copy = name##_copy;		\
> 	} while (val != copy);			\

Wrong order there; we should first read _copy and then the regular one,
of course.

> 	val;						\
> })
>
> #define u64_u32_store(name, val)		\
> do {						\
> 	typeof(val) __val = (val);		\
> 	name = __val;				\
> 	smp_wmb();				\
> 	name##_copy = __val;		\
> } while (0)
>
> #endif
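
With that fix folded in, the 32-bit load side would read roughly like
so (same names as above, still just a sketch):

#define u64_u32_load(name)				\
({							\
	u64 val, copy;					\
	do {						\
		/* pairs with the smp_wmb() in u64_u32_store() */ \
		copy = name##_copy;			\
		smp_rmb();				\
		val = name;				\
	} while (val != copy);				\
	val;						\
})

Usage would then look like DEFINE_U64_U32(min_vruntime) in struct
cfs_rq, with u64_u32_load(cfs_rq->min_vruntime) at the read sites.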
The other approach is making it a full type with inline functions, I
suppose.
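
Something like the below, perhaps (names made up for illustration, not
an actual proposal):

struct u64_u32 {
	u64 val;
#ifndef CONFIG_64BIT
	u64 copy;
#endif
};

static inline u64 u64_u32_read(const struct u64_u32 *p)
{
#ifdef CONFIG_64BIT
	return p->val;
#else
	u64 val, copy;

	do {
		copy = READ_ONCE(p->copy);
		smp_rmb();	/* pairs with smp_wmb() in u64_u32_write() */
		val = READ_ONCE(p->val);
	} while (val != copy);

	return val;
#endif
}

static inline void u64_u32_write(struct u64_u32 *p, u64 val)
{
	WRITE_ONCE(p->val, val);
#ifndef CONFIG_64BIT
	smp_wmb();
	WRITE_ONCE(p->copy, val);
#endif
}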