Message-ID: <4f2ada96-234f-31d8-664a-c84f5b461385@quicinc.com>
Date: Mon, 24 Jan 2022 19:32:01 +0530
From: Mukesh Ojha <quic_mojha@...cinc.com>
To: <paulmck@...nel.org>, Tejun Heo <tj@...nel.org>
CC: lkml <linux-kernel@...r.kernel.org>,
Thomas Gleixner <tglx@...utronix.de>, <jiangshanlai@...il.com>
Subject: Re: synchronize_rcu_expedited gets stuck in hotplug path
On 1/19/2022 3:11 AM, Paul E. McKenney wrote:
> On Tue, Jan 18, 2022 at 10:11:34AM -1000, Tejun Heo wrote:
>> Hello,
>>
>> On Tue, Jan 18, 2022 at 12:06:46PM -0800, Paul E. McKenney wrote:
>>> Interesting. Adding Tejun and Lai on CC for their perspective.
>>>
>>> As you say, the incoming CPU invoked synchronize_rcu_expedited(),
>>> which in turn invoked queue_work(). By default, queue_work() places
>>> the work item on the current CPU. But in this case, the CPU's bit is
>>> not yet set in cpu_active_mask. Thus, a work item queued on the
>>> incoming CPU won't be executed until that CPU reaches CPUHP_AP_ACTIVE,
>>> which won't be reached until after the grace period ends, which in
>>> turn cannot happen until the work handler is invoked: a deadlock.
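>>>
>>> To make the cycle concrete, here is a minimal sketch (illustrative
>>> only; exp_work/exp_handler are hypothetical names, and this is not
>>> the actual RCU code, which waits on an event rather than flushing):
>>>
>>>	#include <linux/workqueue.h>
>>>
>>>	static void exp_handler(struct work_struct *unused)
>>>	{
>>>		/* Would report completion and let the grace period end. */
>>>	}
>>>	static DECLARE_WORK(exp_work, exp_handler);
>>>
>>>	/* Called on the incoming CPU, before CPUHP_AP_ACTIVE: */
>>>	static void incoming_cpu_path(void)
>>>	{
>>>		queue_work(system_wq, &exp_work); /* lands on this
>>>						   * still-inactive CPU */
>>>		flush_work(&exp_work);	/* stuck: this CPU's worker pool
>>>					 * won't run items until
>>>					 * CPUHP_AP_ACTIVE */
>>>	}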
>>>
>>> I could imagine doing something as shown in the (untested) patch
>>> below, but first, does this help?
>>>
>>> If it does help, would this sort of check be appropriate here or
>>> should it instead go into workqueues?
>> Maybe it can be solved by rearranging the hotplug sequence, but it's
>> fragile to schedule per-cpu work items from hotplug paths. Maybe the
>> whole issue can be side-stepped by making synchronize_rcu_expedited()
>> use an unbound workqueue instead? Does it really need to be per-CPU?
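>>
>> For example (just a sketch of the idea, assuming the expedited work
>> item has no real per-CPU requirement), the queueing could go through
>> the kernel's existing unbound workqueue:
>>
>>	queue_work(system_unbound_wq, &rew.rew_work);
>>
>> Unbound work items aren't pinned to the submitting CPU, so they can
>> run elsewhere even while the submitter is still coming online.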
> Good point!
>
> And now that you mention it, RCU expedited grace periods already avoid
> using workqueues during early boot. The (again untested) patch below
> extends that approach to incoming CPUs.
>
> Thoughts?
Hi Paul,
We are no longer seeing the issue with this patch applied.
Can this patch be merged?
-Mukesh
>
> Thanx, Paul
>
> ------------------------------------------------------------------------
>
> diff --git a/kernel/rcu/tree_exp.h b/kernel/rcu/tree_exp.h
> index 60197ea24ceb9..1a45667402260 100644
> --- a/kernel/rcu/tree_exp.h
> +++ b/kernel/rcu/tree_exp.h
> @@ -816,7 +816,7 @@ static int rcu_print_task_exp_stall(struct rcu_node *rnp)
>   */
>  void synchronize_rcu_expedited(void)
>  {
> -	bool boottime = (rcu_scheduler_active == RCU_SCHEDULER_INIT);
> +	bool no_wq;
>  	struct rcu_exp_work rew;
>  	struct rcu_node *rnp;
>  	unsigned long s;
> @@ -841,9 +841,15 @@ void synchronize_rcu_expedited(void)
>  	if (exp_funnel_lock(s))
>  		return;  /* Someone else did our work for us. */
> 
> +	/* Don't use workqueue during boot or from an incoming CPU. */
> +	preempt_disable();
> +	no_wq = rcu_scheduler_active == RCU_SCHEDULER_INIT ||
> +		!cpumask_test_cpu(smp_processor_id(), cpu_active_mask);
> +	preempt_enable();
> +
>  	/* Ensure that load happens before action based on it. */
> -	if (unlikely(boottime)) {
> -		/* Direct call during scheduler init and early_initcalls(). */
> +	if (unlikely(no_wq)) {
> +		/* Direct call for scheduler init, early_initcall()s, and incoming CPUs. */
>  		rcu_exp_sel_wait_wake(s);
>  	} else {
>  		/* Marshall arguments & schedule the expedited grace period. */
> @@ -861,7 +867,7 @@ void synchronize_rcu_expedited(void)
>  	/* Let the next expedited grace period start. */
>  	mutex_unlock(&rcu_state.exp_mutex);
> 
> -	if (likely(!boottime))
> +	if (likely(!no_wq))
>  		destroy_work_on_stack(&rew.rew_work);
>  }
>  EXPORT_SYMBOL_GPL(synchronize_rcu_expedited);
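>
> For context, the sort of caller this covers would be a hotplug
> callback running on the incoming CPU before it is marked active
> (hypothetical example with a made-up callback name, not taken from
> the report):
>
>	/* Runs on the incoming CPU before CPUHP_AP_ACTIVE, so
>	 * cpu_active_mask does not yet contain this CPU and the
>	 * new no_wq path takes the direct call. */
>	static int my_online_cb(unsigned int cpu)
>	{
>		synchronize_rcu_expedited();
>		return 0;
>	}
>
>	/* Registered via, e.g.:
>	 * cpuhp_setup_state(CPUHP_AP_ONLINE_DYN, "x:online",
>	 *		     my_online_cb, NULL);
>	 */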