Date:   Tue, 25 Jan 2022 12:21:38 -0800
From:   "Paul E. McKenney" <paulmck@...nel.org>
To:     Mukesh Ojha <quic_mojha@...cinc.com>
Cc:     Tejun Heo <tj@...nel.org>, lkml <linux-kernel@...r.kernel.org>,
        Thomas Gleixner <tglx@...utronix.de>, jiangshanlai@...il.com
Subject: Re: synchronize_rcu_expedited gets stuck in hotplug path

On Mon, Jan 24, 2022 at 10:28:28PM +0530, Mukesh Ojha wrote:
> 
> On 1/24/2022 10:14 PM, Paul E. McKenney wrote:
> > On Mon, Jan 24, 2022 at 07:32:01PM +0530, Mukesh Ojha wrote:
> > > On 1/19/2022 3:11 AM, Paul E. McKenney wrote:
> > > > On Tue, Jan 18, 2022 at 10:11:34AM -1000, Tejun Heo wrote:
> > > > > Hello,
> > > > > 
> > > > > On Tue, Jan 18, 2022 at 12:06:46PM -0800, Paul E. McKenney wrote:
> > > > > > Interesting.  Adding Tejun and Lai on CC for their perspective.
> > > > > > 
> > > > > > As you say, the incoming CPU invoked synchronize_rcu_expedited() which
> > > > > > in turn invoked queue_work().  By default, workqueues will of course
> > > > > > queue that work on the current CPU.  But in this case, the CPU's bit
> > > > > > is not yet set in the cpu_active_mask.  Thus, a workqueue scheduled on
> > > > > > the incoming CPU won't be invoked until CPUHP_AP_ACTIVE, which won't
> > > > > > be reached until after the grace period ends, which cannot happen until
> > > > > > the workqueue handler is invoked.
> > > > > > 
> > > > > > I could imagine doing something as shown in the (untested) patch below,
> > > > > > but first does this help?
> > > > > > 
> > > > > > If it does help, would this sort of check be appropriate here or
> > > > > > should it instead go into workqueues?
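[A minimal hypothetical reproducer of the hang described above might look like the
untested sketch below. The demo_* names are made up for illustration; the cpuhp API
and synchronize_rcu_expedited() are real. A CPUHP_AP_ONLINE_DYN callback runs on the
incoming CPU, which at that point in the hotplug sequence is online but not yet in
cpu_active_mask, so any work item that synchronize_rcu_expedited() queues there
cannot run until CPUHP_AP_ACTIVE.]

#include <linux/cpuhotplug.h>
#include <linux/module.h>
#include <linux/rcupdate.h>

/* Runs on the incoming CPU: online, but not yet in cpu_active_mask. */
static int demo_cpu_online(unsigned int cpu)
{
	synchronize_rcu_expedited();	/* hung here before the fix */
	return 0;
}

static int __init demo_init(void)
{
	int ret;

	/* Dynamic AP states sit below CPUHP_AP_ACTIVE in the hotplug order. */
	ret = cpuhp_setup_state(CPUHP_AP_ONLINE_DYN, "demo:online",
				demo_cpu_online, NULL);
	return ret < 0 ? ret : 0;
}
module_init(demo_init);

MODULE_LICENSE("GPL");

[Note that the hang would show up on a subsequent CPU-online operation rather than
at module load, since cpuhp_setup_state() invokes the callback immediately on CPUs
that are already active.]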
> > > > > Maybe it can be solved by rearranging the hotplug sequence, but it's fragile
> > > > > to schedule per-cpu work items from hotplug paths. Maybe the whole issue can
> > > > > be side-stepped by making synchronize_rcu_expedited() use an unbound workqueue
> > > > > instead? Does it need to be per-cpu?
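[As an untested sketch of the alternative Tejun floats, assuming struct rcu_exp_work,
wait_rcu_exp_gp(), and rcu_gp_wq as they appear in kernel/rcu/tree_exp.h at this point,
the queueing step in synchronize_rcu_expedited() could target an unbound workqueue so
the handler may run on any active CPU instead of being pinned to the not-yet-active
incoming one:]

	/* Marshall arguments & schedule the expedited grace period. */
	rew.rew_s = s;
	INIT_WORK_ONSTACK(&rew.rew_work, wait_rcu_exp_gp);
	/* system_unbound_wq: not bound to the submitting CPU. */
	queue_work(system_unbound_wq, &rew.rew_work);

[One caveat: if expedited grace periods must make progress under memory pressure, a
dedicated WQ_UNBOUND | WQ_MEM_RECLAIM workqueue would likely be needed rather than
system_unbound_wq.]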
> > > > Good point!
> > > > 
> > > > And now that you mention it, RCU expedited grace periods already avoid
> > > > using workqueues during early boot.  The (again untested) patch below
> > > > extends that approach to incoming CPUs.
> > > > 
> > > > Thoughts?
> > > Hi Paul,
> > > 
> > > We are no longer seeing the issue with this patch applied.
> > > Can this patch be merged?
> > It is currently in -rcu and should also be in -next shortly.  Left to
> > myself, and assuming further testing and reviews all go well, I would
> > submit it during the upcoming v5.18 merge window.
> > 
> > Does that work for you?  Or do you need it in mainline sooner?
> 
> Before reporting this issue, we had seen only one instance of it.
> We also tested this fix with the same set of test cases and have not
> observed any issue so far.
> 
> I would be happy to get a mail once it clears all the testing and is
> merged to -next. I will then cherry-pick it into the android-5.10 branch.

It is in -next as of next-20220125.

							Thanx, Paul

> -Mukesh
> 
> > 
> > 							Thanx, Paul
> > 
> > > -Mukesh
> > > 
> > > > 							Thanx, Paul
> > > > 
> > > > ------------------------------------------------------------------------
> > > > 
> > > > diff --git a/kernel/rcu/tree_exp.h b/kernel/rcu/tree_exp.h
> > > > index 60197ea24ceb9..1a45667402260 100644
> > > > --- a/kernel/rcu/tree_exp.h
> > > > +++ b/kernel/rcu/tree_exp.h
> > > > @@ -816,7 +816,7 @@ static int rcu_print_task_exp_stall(struct rcu_node *rnp)
> > > >   */
> > > >  void synchronize_rcu_expedited(void)
> > > >  {
> > > > -	bool boottime = (rcu_scheduler_active == RCU_SCHEDULER_INIT);
> > > > +	bool no_wq;
> > > >  	struct rcu_exp_work rew;
> > > >  	struct rcu_node *rnp;
> > > >  	unsigned long s;
> > > > @@ -841,9 +841,15 @@
> > > >  	if (exp_funnel_lock(s))
> > > >  		return;  /* Someone else did our work for us. */
> > > > 
> > > > +	/* Don't use workqueue during boot or from an incoming CPU. */
> > > > +	preempt_disable();
> > > > +	no_wq = rcu_scheduler_active == RCU_SCHEDULER_INIT ||
> > > > +		!cpumask_test_cpu(smp_processor_id(), cpu_active_mask);
> > > > +	preempt_enable();
> > > > +
> > > >  	/* Ensure that load happens before action based on it. */
> > > > -	if (unlikely(boottime)) {
> > > > -		/* Direct call during scheduler init and early_initcalls(). */
> > > > +	if (unlikely(no_wq)) {
> > > > +		/* Direct call for scheduler init, early_initcall()s, and incoming CPUs. */
> > > >  		rcu_exp_sel_wait_wake(s);
> > > >  	} else {
> > > >  		/* Marshall arguments & schedule the expedited grace period. */
> > > > @@ -861,7 +867,7 @@
> > > >  	/* Let the next expedited grace period start. */
> > > >  	mutex_unlock(&rcu_state.exp_mutex);
> > > > 
> > > > -	if (likely(!boottime))
> > > > +	if (likely(!no_wq))
> > > >  		destroy_work_on_stack(&rew.rew_work);
> > > >  }
> > > >  EXPORT_SYMBOL_GPL(synchronize_rcu_expedited);
