Message-ID: <20200324165255.GA242454@google.com>
Date: Tue, 24 Mar 2020 12:52:55 -0400
From: Joel Fernandes <joel@...lfernandes.org>
To: "Paul E. McKenney" <paulmck@...nel.org>
Cc: Steven Rostedt <rostedt@...dmis.org>, rcu@...r.kernel.org,
linux-kernel@...r.kernel.org, kernel-team@...com, mingo@...nel.org,
jiangshanlai@...il.com, dipankar@...ibm.com,
akpm@...ux-foundation.org, mathieu.desnoyers@...icios.com,
josh@...htriplett.org, tglx@...utronix.de, peterz@...radead.org,
dhowells@...hat.com, edumazet@...gle.com, fweisbec@...il.com,
oleg@...hat.com, Ingo Molnar <mingo@...hat.com>,
Juri Lelli <juri.lelli@...hat.com>,
Vincent Guittot <vincent.guittot@...aro.org>,
Dietmar Eggemann <dietmar.eggemann@....com>,
Ben Segall <bsegall@...gle.com>, Mel Gorman <mgorman@...e.de>,
vpillai@...italocean.com
Subject: Re: [PATCH RFC v2 tip/core/rcu 01/22] sched/core: Add function to
sample state of locked-down task
On Tue, Mar 24, 2020 at 08:48:22AM -0700, Paul E. McKenney wrote:
[..]
> >
> > > diff --git a/kernel/rcu/tree.h b/kernel/rcu/tree.h
> > > index 44edd0a..43991a4 100644
> > > --- a/kernel/rcu/tree.h
> > > +++ b/kernel/rcu/tree.h
> > > @@ -455,6 +455,8 @@ static void rcu_bind_gp_kthread(void);
> > > static bool rcu_nohz_full_cpu(void);
> > > static void rcu_dynticks_task_enter(void);
> > > static void rcu_dynticks_task_exit(void);
> > > +static void rcu_dynticks_task_trace_enter(void);
> > > +static void rcu_dynticks_task_trace_exit(void);
> > >
> > > /* Forward declarations for tree_stall.h */
> > > static void record_gp_stall_check_time(void);
> > > diff --git a/kernel/rcu/tree_plugin.h b/kernel/rcu/tree_plugin.h
> > > index 9355536..f4a344e 100644
> > > --- a/kernel/rcu/tree_plugin.h
> > > +++ b/kernel/rcu/tree_plugin.h
> > > @@ -2553,3 +2553,21 @@ static void rcu_dynticks_task_exit(void)
> > > WRITE_ONCE(current->rcu_tasks_idle_cpu, -1);
> > > #endif /* #if defined(CONFIG_TASKS_RCU) && defined(CONFIG_NO_HZ_FULL) */
> > > }
> > > +
> > > +/* Turn on heavyweight RCU tasks trace readers on idle/user entry. */
> > > +static void rcu_dynticks_task_trace_enter(void)
> > > +{
> > > +#ifdef CONFIG_TASKS_RCU_TRACE
> > > + if (IS_ENABLED(CONFIG_TASKS_TRACE_RCU_READ_MB))
> > > + current->trc_reader_special.b.need_mb = true;
> >
> > If this is ever called from the middle of a reader section (that is, we
> > transition from IPI mode to using heavier reader sections), then is a memory
> > barrier needed here just to protect the reader section that already started?
>
> That memory barrier is provided by the memory ordering in the callers
> of rcu_dynticks_task_trace_enter() and rcu_dynticks_task_trace_exit(),
> namely, those callers' atomic_add_return() invocations. These barriers
> pair with the pair of smp_rmb() calls in rcu_dynticks_zero_in_eqs(),
> which is in turn invoked from the function formerly known as
> trc_inspect_reader_notrunning(), AKA trc_inspect_reader().
>
> This same pair of smp_rmb() calls also pair with the conditional smp_mb()
> calls in rcu_read_lock_trace() and rcu_read_unlock_trace().
>
> In your scenario, the calls in rcu_read_lock_trace() and
> rcu_read_unlock_trace() wouldn't happen, but in that case the ordering
> from atomic_add_return() would suffice.
>
> Does that work? Or is there an ordering bug in there somewhere?
Thanks for explaining. Could the following scenario cause a problem?
If we consider the litmus test:
{
	int x = 1;
	int *y = &x;
	int z = 1;
}

P0(int *x, int *z, int **y)
{
	int *r0;
	int r1;

	dynticks_eqs_trace_enter();
	rcu_read_lock();
	r0 = rcu_dereference(*y);
	dynticks_eqs_trace_exit(); // cut-off reader's mb wings :)
	r1 = READ_ONCE(*r0); // Reordering of this beyond the unlock() is bad.
	rcu_read_unlock();
}

P1(int *x, int *z, int **y)
{
	rcu_assign_pointer(*y, z);
	synchronize_rcu();
	WRITE_ONCE(*x, 0);
}

exists (0:r0=x /\ 0:r1=0)
Then could the following situation happen?
READER                                  UPDATER

                                        y = &z;
eqs_enter();       // full-mb
rcu_read_lock();   // full-mb
// r0 = x;
                                        // GP-start
                                        // ..zero_in_eqs() notices eqs, no IPI
eqs_exit();        // full-mb
// actual r1 = *x but will reorder
rcu_read_unlock(); // no-mb
                                        // GP-finish as notices nesting = 0
                                        x = 0;
// reordered r1 = *x = 0;
Basically r0=x /\ r1=0 happened because the r1 = *x load got reordered past
the barrier-less rcu_read_unlock(). Or did I miss something that prevents it?
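[As a cross-check: dropping the two hypothetical dynticks_eqs_trace_*() calls
turns the above into an ordinary LKMM litmus test that herd7 can run, where
the plain RCU guarantee should forbid the exists clause; the question is
precisely whether the EQS transitions reopen it. A sketch, assuming the
standard tools/memory-model litmus format:]

```
C eqs-trace-reorder-baseline

{
	int x = 1;
	int z = 1;
	int *y = &x;
}

P0(int *x, int *z, int **y)
{
	int *r0;
	int r1;

	rcu_read_lock();
	r0 = rcu_dereference(*y);
	r1 = READ_ONCE(*r0);
	rcu_read_unlock();
}

P1(int *x, int *z, int **y)
{
	rcu_assign_pointer(*y, z);
	synchronize_rcu();
	WRITE_ONCE(*x, 0);
}

exists (0:r0=x /\ 0:r1=0)
```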
thanks,
- Joel
> > thanks,
> >
> > - Joel
> >
> >
> > > +#endif /* #ifdef CONFIG_TASKS_RCU_TRACE */
> > > +}
> > > +
> > > +/* Turn off heavyweight RCU tasks trace readers on idle/user exit. */
> > > +static void rcu_dynticks_task_trace_exit(void)
> > > +{
> > > +#ifdef CONFIG_TASKS_RCU_TRACE
> > > + if (IS_ENABLED(CONFIG_TASKS_TRACE_RCU_READ_MB))
> > > + current->trc_reader_special.b.need_mb = false;
> > > +#endif /* #ifdef CONFIG_TASKS_RCU_TRACE */
> > > +}