Message-ID: <20181108174655.mnm3cr4wn2hrrtep@linutronix.de>
Date: Thu, 8 Nov 2018 18:46:55 +0100
From: Sebastian Andrzej Siewior <bigeasy@...utronix.de>
To: "Paul E. McKenney" <paulmck@...ux.ibm.com>, tglx@...utronix.de
Cc: linux-rt-users@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: srcu: use cpu_online() instead custom check
On 2018-11-08 09:10:24 [-0800], Paul E. McKenney wrote:
> > Is this again a hidden RCU detail that preempt_disable() on CPU4 is
> > enough to ensure that CPU2 does not get marked offline between?
>
> The call_rcu_sched parameter to synchronize_rcu_mult() makes this work.
> This synchronize_rcu_mult() call is in sched_cpu_deactivate(), so it
> is a hidden sched/RCU detail, I guess.
>
> Or am I missing the point of your question?
No, this answers it.
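Just to spell out the pattern we are talking about, here is a sketch
(not the exact srcutree.c code; the helper and argument names are mine):

	#include <linux/cpu.h>
	#include <linux/cpumask.h>
	#include <linux/preempt.h>
	#include <linux/workqueue.h>

	/* Sketch: queue delayed work on @cpu if it is online, else anywhere. */
	static void sketch_queue_dwork_on(int cpu, struct workqueue_struct *wq,
					  struct delayed_work *dwork,
					  unsigned long delay)
	{
		preempt_disable();	/* sched-RCU read-side critical section */
		if (cpu_online(cpu))
			queue_delayed_work_on(cpu, wq, dwork, delay);
		else
			queue_delayed_work(wq, dwork, delay);
		preempt_enable();
	}

The call_rcu_sched leg of synchronize_rcu_mult() in sched_cpu_deactivate()
waits for every such preempt-disabled region, so the offline path cannot
get past that point while we sit in one.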
> > > Or is getting rid of that preempt_disable region the real reason for
> > > this change?
> >
> > Well, that preempt_disable() + queue_(delayed_)work() does not work on -RT.
> > But looking further, that preempt_disable() while looking at online CPUs
> > didn't look good.
>
> That is why it is invoked from the very early CPU-hotplug notifier. That
> early in the process, the preempt_disable() does prevent the current CPU
> from being taken offline, twice over: once due to synchronize_rcu_mult(),
> and once due to the stop-machine call.
:)
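For my own notes, the offline side orders roughly like this (paraphrased
from kernel/sched/core.c, so treat it as a sketch rather than a quote):

	int sched_cpu_deactivate(unsigned int cpu)
	{
		set_cpu_active(cpu, false);
		/*
		 * Wait for all preempt-disabled and RCU users of the old
		 * cpu_active() state; this is the call_rcu_sched leg
		 * mentioned above.
		 */
		synchronize_rcu_mult(call_rcu, call_rcu_sched);
		/* ... cpuset / scheduler updates, error handling ... */
		return 0;
	}

and only much later does takedown_cpu() run take_cpu_down() via the
stopper thread, which cannot preempt a task that has preemption disabled.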
> > The description is not up-to-date. There was this hunk:
> > |@@ -4236,8 +4232,6 @@ void __init rcu_init(void)
> > | for_each_online_cpu(cpu) {
> > | rcutree_prepare_cpu(cpu);
> > | rcu_cpu_starting(cpu);
> > |- if (IS_ENABLED(CONFIG_TREE_SRCU))
> > |- srcu_online_cpu(cpu);
> > | }
> > | }
> >
> > which got removed in v4.16.
>
> Ah! Here is the current rcu_init() code:
>
> for_each_online_cpu(cpu) {
> rcutree_prepare_cpu(cpu);
> rcu_cpu_starting(cpu);
> rcutree_online_cpu(cpu);
> }
>
> And rcutree_online_cpu() calls srcu_online_cpu() when CONFIG_TREE_SRCU
> is enabled, so no need for the direct call from rcu_init().
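For reference, the relevant part of rcutree_online_cpu() now looks
roughly like this (from my reading of the tree, so a sketch):

	int rcutree_online_cpu(unsigned int cpu)
	{
		/* ... per-CPU / rcu_node bookkeeping ... */
		if (IS_ENABLED(CONFIG_TREE_SRCU))
			srcu_online_cpu(cpu);
		/* ... */
		return 0;
	}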
So if a CPU goes down, the timer gets migrated to another CPU. If the
CPU is already offline, the timer can still be programmed on it and
nothing happens. If add_timer_on() returned an error in that case, we
could have fallback code.
Looking at the users of queue_delayed_work_on(), only two really use it
(the others just pass smp_processor_id()), and one of those uses
get_online_cpus().
It does not look like a lot of users are affected. Would it be reasonable
to avoid adding timers to offlined CPUs?
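To make the question concrete: a caller that really wants a specific CPU
could use the get_online_cpus() pattern, something like this (purely a
sketch, the helper name is made up):

	#include <linux/cpu.h>
	#include <linux/cpumask.h>
	#include <linux/workqueue.h>

	/* Hypothetical helper: queue on @cpu while hotplug is blocked,
	 * falling back to any online CPU if @cpu is already gone.
	 */
	static bool queue_dwork_on_cpu_or_any(int cpu, struct workqueue_struct *wq,
					      struct delayed_work *dwork,
					      unsigned long delay)
	{
		bool queued;

		get_online_cpus();		/* block CPU hotplug */
		if (!cpu_online(cpu))
			cpu = cpumask_any(cpu_online_mask);
		queued = queue_delayed_work_on(cpu, wq, dwork, delay);
		put_online_cpus();
		return queued;
	}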
> Thanx, Paul
Sebastian