Message-ID: <YzCQYW6yRRBWWMXk@google.com>
Date: Sun, 25 Sep 2022 17:31:13 +0000
From: Joel Fernandes <joel@...lfernandes.org>
To: "Paul E. McKenney" <paulmck@...nel.org>
Cc: rcu@...r.kernel.org, linux-kernel@...r.kernel.org,
rushikesh.s.kadam@...el.com, urezki@...il.com,
neeraj.iitr10@...il.com, frederic@...nel.org, rostedt@...dmis.org
Subject: Re: [PATCH v6 1/4] rcu: Make call_rcu() lazy to save power
Hi Paul,
Back to Mutt for this one ;-)
Replies below:
On Sat, Sep 24, 2022 at 02:11:32PM -0700, Paul E. McKenney wrote:
[...]
> > >> + */
> > >> + if (lazy && rhp) {
> > >> + rcu_cblist_flush_enqueue(&rcl, &rdp->nocb_bypass, NULL);
> > >> + rcu_cblist_enqueue(&rcl, rhp);
> > >
> > > Would it make sense to enqueue rhp onto ->nocb_bypass first, NULL out
> > > rhp, then let the rcu_cblist_flush_enqueue() be common code? Or did this
> > > function grow a later use of rhp that I missed?
> >
> > No, that could be done, but I prefer to keep it this way because rhp is
> > a function parameter, and I prefer not to modify those since it could add
> > a bug in the future where the rhp passed by the user is suddenly NULL
> > halfway through the function.
>
> I agree that changing a function parameter is bad practice.
>
> So the question becomes whether introducing a local would outweigh
> consolidating this code. Could you please at least give it a shot?
>
> > >> + WRITE_ONCE(rdp->lazy_len, 0);
> > >> + } else {
> > >> + rcu_cblist_flush_enqueue(&rcl, &rdp->nocb_bypass, rhp);
> > >> + WRITE_ONCE(rdp->lazy_len, 0);
> > >
> > > This WRITE_ONCE() can be dropped out of the "if" statement, correct?
> >
> > Yes will update.
>
> Thank you!
>
> > > If so, this could be an "if" statement with two statements in its "then"
> > > clause, no "else" clause, and two statements following the "if" statement.
> >
> > I don’t think we can get rid of the else part but I’ll see what it looks like.
>
> In the function header, s/rhp/rhp_in/, then:
>
> struct rcu_head *rhp = rhp_in;
>
> And then:
>
> if (lazy && rhp) {
> rcu_cblist_enqueue(&rdp->nocb_bypass, rhp);
> rhp = NULL;
This enqueues onto the bypass list, whereas if lazy && rhp, I want to queue
the new rhp onto the main cblist. So the pseudocode in my patch is:

if (lazy and rhp) then
	1. flush bypass CBs onto the main list.
	2. queue the new CB onto the main list.
else
	1. flush bypass CBs onto the main list.
	2. queue the new CB onto the bypass list.
> }
> rcu_cblist_flush_enqueue(&rcl, &rdp->nocb_bypass, rhp);
> WRITE_ONCE(rdp->lazy_len, 0);
>
> Or did I mess something up?
So the rcu_cblist_flush_enqueue() has to happen before the
rcu_cblist_enqueue() to preserve the ordering for the "if" case: flush the
bypass into the main list first, then queue the new CB onto the main list.
Whereas in your snippet, that order is reversed.
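Concretely, the ordering I want in the lazy case is the following (just a
sketch of the effect on the local rcl, using the same helpers as the patch):

	/* Move the already-bypassed CBs onto the local list first... */
	rcu_cblist_flush_enqueue(&rcl, &rdp->nocb_bypass, NULL);
	/* ...then queue the new lazy CB so it lands behind them on rcl. */
	rcu_cblist_enqueue(&rcl, rhp);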
If I consolidate it, it looks like the diff at the bottom of this mail;
however, that is a bit less readable. I could instead just take the
WRITE_ONCE() out of both the if and the else and move it after the if/else
(see the sketch below), which would be the cleanest. Does that sound good to
you? Thanks!
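Roughly like this (only a sketch, keeping both branches exactly as in the
current patch and just hoisting the common WRITE_ONCE()):

	if (lazy && rhp) {
		/* Lazy CB: flush bypass to the main list, then queue the new CB. */
		rcu_cblist_flush_enqueue(&rcl, &rdp->nocb_bypass, NULL);
		rcu_cblist_enqueue(&rcl, rhp);
	} else {
		/* Flush bypass; the new CB (if any) becomes the sole bypass entry. */
		rcu_cblist_flush_enqueue(&rcl, &rdp->nocb_bypass, rhp);
	}
	/* Identical in both branches, so hoisted out of the if/else. */
	WRITE_ONCE(rdp->lazy_len, 0);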
---8<-----------------------
diff --git a/kernel/rcu/tree_nocb.h b/kernel/rcu/tree_nocb.h
index 1a182b9c4f6c..bd3f54d314e8 100644
--- a/kernel/rcu/tree_nocb.h
+++ b/kernel/rcu/tree_nocb.h
@@ -327,10 +327,11 @@ static void wake_nocb_gp_defer(struct rcu_data *rdp, int waketype,
*
* Note that this function always returns true if rhp is NULL.
*/
-static bool rcu_nocb_do_flush_bypass(struct rcu_data *rdp, struct rcu_head *rhp,
+static bool rcu_nocb_do_flush_bypass(struct rcu_data *rdp, struct rcu_head *rhp_in,
unsigned long j, unsigned long flush_flags)
{
struct rcu_cblist rcl;
+ struct rcu_head *rhp = rhp_in;
bool lazy = flush_flags & FLUSH_BP_LAZY;
WARN_ON_ONCE(!rcu_rdp_is_offloaded(rdp));
@@ -348,14 +349,13 @@ static bool rcu_nocb_do_flush_bypass(struct rcu_data *rdp, struct rcu_head *rhp,
* If the new CB requested was a lazy one, queue it onto the main
* ->cblist so we can take advantage of a sooner grade period.
*/
- if (lazy && rhp) {
- rcu_cblist_flush_enqueue(&rcl, &rdp->nocb_bypass, NULL);
- rcu_cblist_enqueue(&rcl, rhp);
- WRITE_ONCE(rdp->lazy_len, 0);
- } else {
- rcu_cblist_flush_enqueue(&rcl, &rdp->nocb_bypass, rhp);
- WRITE_ONCE(rdp->lazy_len, 0);
- }
+ if (lazy && rhp)
+ rhp = NULL;
+ rcu_cblist_flush_enqueue(&rcl, &rdp->nocb_bypass, rhp);
+ if (lazy && rhp_in)
+ rcu_cblist_enqueue(&rcl, rhp_in);
+
+ WRITE_ONCE(rdp->lazy_len, 0);
rcu_segcblist_insert_pend_cbs(&rdp->cblist, &rcl);
WRITE_ONCE(rdp->nocb_bypass_first, j);