Message-ID: <YUuFF8+H2PE9m4wy@hirez.programming.kicks-ass.net>
Date: Wed, 22 Sep 2021 21:33:43 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: "Paul E. McKenney" <paulmck@...nel.org>
Cc: gor@...ux.ibm.com, jpoimboe@...hat.com, jikos@...nel.org,
mbenes@...e.cz, pmladek@...e.com, mingo@...nel.org,
linux-kernel@...r.kernel.org, joe.lawrence@...hat.com,
fweisbec@...il.com, tglx@...utronix.de, hca@...ux.ibm.com,
svens@...ux.ibm.com, sumanthk@...ux.ibm.com,
live-patching@...r.kernel.org
Subject: Re: [RFC][PATCH 6/7] context_tracking: Provide SMP ordering using RCU
On Wed, Sep 22, 2021 at 08:17:21AM -0700, Paul E. McKenney wrote:
> On Wed, Sep 22, 2021 at 01:05:12PM +0200, Peter Zijlstra wrote:
> > Use rcu_user_{enter,exit}() calls to provide SMP ordering on context
> > tracking state stores:
> >
> >   __context_tracking_exit()
> >     __this_cpu_write(context_tracking.state, CONTEXT_KERNEL)
> >     rcu_user_exit()
> >       rcu_eqs_exit()
> >         rcu_dynticks_eqs_exit()
> >           rcu_dynticks_inc()
> >             atomic_add_return() /* smp_mb */
> >
> >   __context_tracking_enter()
> >     rcu_user_enter()
> >       rcu_eqs_enter()
> >         rcu_dynticks_eqs_enter()
> >           rcu_dynticks_inc()
> >             atomic_add_return() /* smp_mb */
> >     __this_cpu_write(context_tracking.state, state)
> >
> > This separates the USER/KERNEL state stores with an smp_mb() on each
> > side; therefore, a user of context_tracking_state_cpu() knows the CPU
> > must pass through an smp_mb() before its state can change.
> >
> > Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>
>
> For the transformation to negative errno return value and name change
> from an RCU perspective:
>
> Acked-by: Paul E. McKenney <paulmck@...nel.org>
Thanks!
> For the sampling of nohz_full userspace state:
>
> Another approach is for the rcu_data structure's ->dynticks variable to
> use the lower two bits to differentiate between idle, nohz_full userspace
> and kernel. In theory, inlining should make this zero cost for idle
> transitions, and should allow you to safely sample nohz_full userspace
> state with a load and a couple of memory barriers instead of an IPI.
That's what I do now; it's like:

	<user code>
	state = KERNEL
	smp_mb()
	<kernel code>
	smp_mb()
	state = USER
	<user code>
vs
	<patch kernel code>
	smp_mb()
	if (state == USER)
		// then we're guaranteed any subsequent kernel code
		// execution will see the modified kernel code
more-or-less
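Concretely, the check could look something like this (a sketch only:
context_tracking_state_cpu() is the accessor from this series, while
text_poke(), do_sync_core() and the IPI fallback are just stand-ins for
whatever the patching site actually does):

	/* Writer side: patch the text, then sample the remote state. */
	text_poke(addr, opcode, len);	/* modify kernel code */
	smp_mb();	/* pairs with the smp_mb()s around the state stores */
	if (context_tracking_state_cpu(cpu) == CONTEXT_USER) {
		/*
		 * CPU is in userspace; it must execute the entry
		 * smp_mb() before running kernel code again, so it is
		 * guaranteed to observe the new text. No IPI needed.
		 */
	} else {
		/* CPU may be running kernel code; IPI to serialize. */
		smp_call_function_single(cpu, do_sync_core, NULL, 1);
	}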
> To make this work nicely, the low-order bits have to be 00 for kernel,
> and (say) 01 for idle and 10 for nohz_full userspace. 11 would be an
> error.
>
> The trick would be for rcu_user_enter() and rcu_user_exit() to atomically
> increment ->dynticks by 2, for rcu_nmi_exit() to increment by 1 and
> rcu_nmi_enter() to increment by 3. The state sampling would need to
> change accordingly.
>
> Does this make sense, or am I missing something?
Why doesn't the proposed patch work? Also, ISTR sampling of remote
context state coming up before. And as it stands, it's a weird mix
between context_tracking and RCU.
AFAICT there is very little useful in context_tracking as is, but it's
also very weird to have to ask RCU about this. Is there any way to
slice this code differently? Perhaps move some of the state RCU now
keeps into context_tracking?
Anyway, lemme see if I get your proposal; let's say the counter starts
at 0 with the CPU in kernel space (the parenthesised value being the
counter's low two bits):

	0x00(0) - kernel
	0x02(2) - user
	0x04(0) - kernel
So far so simple; an NMI on top of that goes:

	0x00(0) - kernel
	0x03(3) - kernel + nmi
	0x04(0) - kernel
	0x06(2) - user
	0x09(1) - user + nmi
	0x0a(2) - user
Which then gives us:
	(0) := kernel
	(1) := nmi-from-user
	(2) := user
	(3) := nmi-from-kernel
Which should work, I suppose. But like I said above, I'd be happier if
this counter lived in context_tracking rather than RCU.
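For reference, a minimal sketch of the sampling side under that
encoding (illustrative only: rcu_data/->dynticks as in
kernel/rcu/tree.c, but the mask, enum and helper are made up here):

	#define CT_STATE_MASK	0x3

	enum {
		CT_KERNEL	= 0,	/* (0) kernel          */
		CT_NMI_USER	= 1,	/* (1) nmi-from-user   */
		CT_USER		= 2,	/* (2) user            */
		CT_NMI_KERNEL	= 3,	/* (3) nmi-from-kernel */
	};

	/* Sample a remote CPU's state from the counter's low two bits. */
	static int dynticks_state_cpu(int cpu)
	{
		struct rcu_data *rdp = per_cpu_ptr(&rcu_data, cpu);

		return atomic_read(&rdp->dynticks) & CT_STATE_MASK;
	}

with rcu_user_{enter,exit}() adding 2, rcu_nmi_enter() adding 3 and
rcu_nmi_exit() adding 1, so the low bits cycle exactly as in the table
above.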