Message-ID: <87jzcpfbc6.fsf@oracle.com>
Date: Tue, 26 Nov 2024 22:19:05 -0800
From: Ankur Arora <ankur.a.arora@...cle.com>
To: Ankur Arora <ankur.a.arora@...cle.com>
Cc: Frederic Weisbecker <frederic@...nel.org>, linux-kernel@...r.kernel.org,
peterz@...radead.org, tglx@...utronix.de, paulmck@...nel.org,
mingo@...nel.org, bigeasy@...utronix.de, juri.lelli@...hat.com,
vincent.guittot@...aro.org, dietmar.eggemann@....com,
rostedt@...dmis.org, bsegall@...gle.com, mgorman@...e.de,
vschneid@...hat.com, efault@....de, sshegde@...ux.ibm.com,
boris.ostrovsky@...cle.com
Subject: Re: [PATCH v2 3/6] rcu: limit PREEMPT_RCU configurations
Ankur Arora <ankur.a.arora@...cle.com> writes:
> Frederic Weisbecker <frederic@...nel.org> writes:
>
>> On Mon, Nov 25, 2024 at 01:40:39PM -0800, Ankur Arora wrote:
>>>
>>> Frederic Weisbecker <frederic@...nel.org> writes:
>>>
>>> > On Wed, Nov 06, 2024 at 12:17:55PM -0800, Ankur Arora wrote:
>>> >> PREEMPT_LAZY can be enabled stand-alone or alongside PREEMPT_DYNAMIC
>>> >> which allows for dynamic switching of preemption models.
>>> >>
>>> >> The choice of PREEMPT_RCU or not, however, is fixed at compile time.
>>> >>
>>> >> Given that PREEMPT_RCU makes some trade-offs to optimize for latency
>>> >> as opposed to throughput, configurations with limited preemption
>>> >> might prefer the stronger forward-progress guarantees of PREEMPT_RCU=n.
>>> >>
>>> >> Accordingly, explicitly limit PREEMPT_RCU=y to the latency oriented
>>> >> preemption models: PREEMPT, PREEMPT_RT, and the runtime configurable
>>> >> model PREEMPT_DYNAMIC.
>>> >>
>>> >> This means the throughput oriented models, PREEMPT_NONE,
>>> >> PREEMPT_VOLUNTARY and PREEMPT_LAZY will run with PREEMPT_RCU=n.
>>> >>
>>> >> Cc: Paul E. McKenney <paulmck@...nel.org>
>>> >> Cc: Peter Zijlstra <peterz@...radead.org>
>>> >> Signed-off-by: Ankur Arora <ankur.a.arora@...cle.com>
>>> >> ---
>>> >> kernel/rcu/Kconfig | 2 +-
>>> >> 1 file changed, 1 insertion(+), 1 deletion(-)
>>> >>
>>> >> diff --git a/kernel/rcu/Kconfig b/kernel/rcu/Kconfig
>>> >> index 5a7ff5e1cdcb..9d52f87fac27 100644
>>> >> --- a/kernel/rcu/Kconfig
>>> >> +++ b/kernel/rcu/Kconfig
>>> >> @@ -18,7 +18,7 @@ config TREE_RCU
>>> >>
>>> >> config PREEMPT_RCU
>>> >> bool
>>> >> - default y if PREEMPTION
>>> >> + default y if (PREEMPT || PREEMPT_RT || PREEMPT_DYNAMIC)
>>> >> select TREE_RCU
>>> >> help
>>> >> This option selects the RCU implementation that is
>>> >
>>> > Reviewed-by: Frederic Weisbecker <frederic@...nel.org>
>>> >
>>> > But looking at the !CONFIG_PREEMPT_RCU code in tree_plugin.h, I see
>>> > some issues now that the code can be preemptible. Well, I think
>>> > it has always been preemptible, but PREEMPTION && !PREEMPT_RCU
>>> > has seldom been exercised (or was it even possible?).
>>> >
>>> > For example, rcu_read_unlock_strict() can be called with preemption
>>> > enabled, so we need the following; otherwise the rdp is unstable, the
>>> > norm value becomes racy (though automagically fixed in
>>> > rcu_report_qs_rdp()), and rcu_report_qs_rdp() might warn.
>>> >
>>> > diff --git a/include/linux/rcupdate.h b/include/linux/rcupdate.h
>>> > index 58d84c59f3dd..368f00267d4e 100644
>>> > --- a/include/linux/rcupdate.h
>>> > +++ b/include/linux/rcupdate.h
>>> > @@ -95,9 +95,9 @@ static inline void __rcu_read_lock(void)
>>> >
>>> > static inline void __rcu_read_unlock(void)
>>> > {
>>> > - preempt_enable();
>>> > if (IS_ENABLED(CONFIG_RCU_STRICT_GRACE_PERIOD))
>>> > rcu_read_unlock_strict();
>>> > + preempt_enable();
>>> > }
>>> >
>>> > static inline int rcu_preempt_depth(void)
>>>
>>> Based on the discussion on the thread, how about keeping this and
>>> changing the preempt_count check in rcu_read_unlock_strict() instead?
>>>
>>> diff --git a/kernel/rcu/tree_plugin.h b/kernel/rcu/tree_plugin.h
>>> index 1c7cbd145d5e..8fc67639d3a7 100644
>>> --- a/kernel/rcu/tree_plugin.h
>>> +++ b/kernel/rcu/tree_plugin.h
>>> @@ -831,8 +831,15 @@ dump_blkd_tasks(struct rcu_node *rnp, int ncheck)
>>> void rcu_read_unlock_strict(void)
>>> {
>>> struct rcu_data *rdp;
>>> + int pc = ((preempt_count() & PREEMPT_MASK) >> PREEMPT_SHIFT);
>>
>> This should be in_atomic_preempt_off(), otherwise softirqs and IRQs are
>> spuriously accounted as quiescent states.
>
> Not sure I got that. Won't ((preempt_count() & PREEMPT_MASK) >> PREEMPT_SHIFT)
> give us the task-only preempt count?
Oh wait. I see your point now. My check is too narrow: masking with
PREEMPT_MASK keeps only the task-level preempt count and drops the
softirq/hardirq bits, so those contexts would be counted as quiescent
states.

Great. This'll work:

- if (irqs_disabled() || preempt_count() || !rcu_state.gp_kthread)
+ if (irqs_disabled() || in_atomic_preempt_off() || !rcu_state.gp_kthread)
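
(For reference: if I'm reading include/linux/preempt.h right,
in_atomic_preempt_off() compares the whole preempt_count(), softirq and
hardirq bits included, against the single preempt_disable() level that
__rcu_read_lock() leaves us holding at this point. Paraphrasing the
definitions:

  /* include/linux/preempt.h, paraphrased */
  #define PREEMPT_DISABLE_OFFSET  PREEMPT_OFFSET  /* == 1 with CONFIG_PREEMPT_COUNT */
  #define in_atomic_preempt_off() (preempt_count() != PREEMPT_DISABLE_OFFSET)

So anything on top of that one level, whether a nested preempt_disable(),
softirq, or hardirq context, makes us bail out instead of reporting a
quiescent state.)
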
Thanks
Ankur
> And, given that the preempt_count is at least 1, the (pc > 1) check below
> would ensure we have a stable rdp and call rcu_report_qs_rdp() before
> dropping the last preempt-count.
>
>>>
>>> - if (irqs_disabled() || preempt_count() || !rcu_state.gp_kthread)
>>> + /*
>>> + * rcu_report_qs_rdp() can only be invoked with a stable rdp
>>> + * and from the local CPU.
>>> + * With CONFIG_PREEMPTION=y, do this while holding the last
>>> + * preempt_count which gets dropped after __rcu_read_unlock().
>>> + */
>>> + if (irqs_disabled() || pc > 1 || !rcu_state.gp_kthread)
>>> return;
>>> rdp = this_cpu_ptr(&rcu_data);
>>> rdp->cpu_no_qs.b.norm = false;
>
> Thanks
--
ankur