lists.openwall.net   lists  /  announce  owl-users  owl-dev  john-users  john-dev  passwdqc-users  yescrypt  popa3d-users  /  oss-security  kernel-hardening  musl  sabotage  tlsify  passwords  /  crypt-dev  xvendor  /  Bugtraq  Full-Disclosure  linux-kernel  linux-netdev  linux-ext4  linux-hardening  linux-cve-announce  PHC 
Open Source and information security mailing list archives
 
Hash Suite for Android: free password hash cracker in your pocket
[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-ID: <20220903151022.GV6159@paulmck-ThinkPad-P17-Gen-1>
Date:   Sat, 3 Sep 2022 08:10:22 -0700
From:   "Paul E. McKenney" <paulmck@...nel.org>
To:     Joel Fernandes <joel@...lfernandes.org>
Cc:     Frederic Weisbecker <frederic@...nel.org>, rcu@...r.kernel.org,
        linux-kernel@...r.kernel.org, rushikesh.s.kadam@...el.com,
        urezki@...il.com, neeraj.iitr10@...il.com, rostedt@...dmis.org,
        vineeth@...byteword.org, boqun.feng@...il.com
Subject: Re: [PATCH v5 04/18] rcu: Fix late wakeup when flush of bypass
 cblist happens

On Fri, Sep 02, 2022 at 07:58:42PM -0400, Joel Fernandes wrote:
> On 9/2/2022 7:35 AM, Frederic Weisbecker wrote:
> > On Thu, Sep 01, 2022 at 10:17:06PM +0000, Joel Fernandes (Google) wrote:
> >> When the bypass cblist gets too big or its timeout has occurred, it is
> >> flushed into the main cblist. However, the bypass timer is still running
> >> and the behavior is that it would eventually expire and wake the GP
> >> thread.
> >>
> >> Since we are going to use the bypass cblist for lazy CBs, do the wakeup
> >> soon as the flush happens. Otherwise, the lazy-timer will go off much
> >> later and the now-non-lazy cblist CBs can get stranded for the duration
> >> of the timer.
> >>
> >> This is a good thing to do anyway (regardless of this series), since it
> >> makes the behavior consistent with behavior of other code paths where queueing
> >> something into the ->cblist makes the GP kthread in a non-sleeping state
> >> quickly.
> >>
> >> Signed-off-by: Joel Fernandes (Google) <joel@...lfernandes.org>
> >> ---
> >>  kernel/rcu/tree_nocb.h | 8 +++++++-
> >>  1 file changed, 7 insertions(+), 1 deletion(-)
> >>
> >> diff --git a/kernel/rcu/tree_nocb.h b/kernel/rcu/tree_nocb.h
> >> index 0a5f0ef41484..31068dd31315 100644
> >> --- a/kernel/rcu/tree_nocb.h
> >> +++ b/kernel/rcu/tree_nocb.h
> >> @@ -447,7 +447,13 @@ static bool rcu_nocb_try_bypass(struct rcu_data *rdp, struct rcu_head *rhp,
> >>  			rcu_advance_cbs_nowake(rdp->mynode, rdp);
> >>  			rdp->nocb_gp_adv_time = j;
> >>  		}
> >> -		rcu_nocb_unlock_irqrestore(rdp, flags);
> >> +
> >> +		// The flush succeeded and we moved CBs into the ->cblist.
> >> +		// However, the bypass timer might still be running. Wakeup the
> >> +		// GP thread by calling a helper with was_all_done set so that
> >> +		// wake up happens (needed if main CB list was empty before).
> >> +		__call_rcu_nocb_wake(rdp, true, flags)
> >> +
> > 
> > Ok so there are two different changes here:
> > 
> > 1) wake up nocb_gp as we just flushed the bypass list. Indeed if the regular
> >    callback list was empty before flushing, we rather want to immediately wake
> >    up nocb_gp instead of waiting for the bypass timer to process them.
> > 
> > 2) wake up nocb_gp unconditionally (ie: even if the regular queue was not empty
> >    before bypass flushing) so that nocb_gp_wait() is forced through another loop
> >    starting with cancelling the bypass timer (I suggest you put such explanation
> >    in the comment btw because that process may not be obvious for mortals).
> > 
> > The change 1) looks like a good idea to me.
> > 
> > The change 2) has unclear motivation. It forces nocb_gp_wait() through another
> > costly loop even though the timer might have been cancelled into some near
> > future, eventually avoiding that extra costly loop. Also it abuses the
> > was_alldone stuff and we may get rcu_nocb_wake with incoherent meanings
> > (WakeEmpty/WakeEmptyIsDeferred) when it's actually not empty.
> 
> Yes #2 can be optimized as follows I think on top of this patch, good point:

I am holding off on this for the moment.

							Thanx, Paul

> =============
> diff --git a/kernel/rcu/tree_nocb.h b/kernel/rcu/tree_nocb.h
> index ee5924ba2f3b..24aabd723abd 100644
> --- a/kernel/rcu/tree_nocb.h
> +++ b/kernel/rcu/tree_nocb.h
> @@ -514,12 +514,13 @@ static bool rcu_nocb_try_bypass(struct rcu_data *rdp,
> struct rcu_head *rhp,
>             ncbs >= qhimark) {
>                 rcu_nocb_lock(rdp);
> 
> +               *was_alldone = !rcu_segcblist_pend_cbs(&rdp->cblist);
> +
>                 rcu_cblist_set_flush(&rdp->nocb_bypass,
>                                 lazy ? BIT(CB_DEBUG_BYPASS_LAZY_FLUSHED) :
> BIT(CB_DEBUG_BYPASS_FLUSHED),
>                                 (j - READ_ONCE(cb_debug_jiffies_first)));
> 
>                 if (!rcu_nocb_flush_bypass(rdp, rhp, j, lazy, false)) {
> -                       *was_alldone = !rcu_segcblist_pend_cbs(&rdp->cblist);
>                         if (*was_alldone)
>                                 trace_rcu_nocb_wake(rcu_state.name, rdp->cpu,
>                                                     TPS("FirstQ"));
> @@ -537,7 +538,7 @@ static bool rcu_nocb_try_bypass(struct rcu_data *rdp, struct
> rcu_head *rhp,
>                 // However, the bypass timer might still be running. Wakeup the
>                 // GP thread by calling a helper with was_all_done set so that
>                 // wake up happens (needed if main CB list was empty before).
> -               __call_rcu_nocb_wake(rdp, true, flags)
> +               __call_rcu_nocb_wake(rdp, *was_all_done, flags)
> 
>                 return true; // Callback already enqueued.
>         }
> =============
> 
> > So you may need to clarify the purpose. And I would suggest to make two patches
> > here.
> I guess this change only #2 is no longer a concern? And splitting is not needed
> then as it is only #1.
> 
> Thanks,
> 
>  - Joel
> 
> 
> 
> 
> 

Powered by blists - more mailing lists

Powered by Openwall GNU/*/Linux Powered by OpenVZ