Message-ID: <ZT6ilzH0HQNFLb2Y@boqun-archlinux>
Date: Sun, 29 Oct 2023 11:21:11 -0700
From: Boqun Feng <boqun.feng@...il.com>
To: Uladzislau Rezki <urezki@...il.com>
Cc: "Paul E . McKenney" <paulmck@...nel.org>,
RCU <rcu@...r.kernel.org>,
Neeraj upadhyay <Neeraj.Upadhyay@....com>,
Hillf Danton <hdanton@...a.com>,
Joel Fernandes <joel@...lfernandes.org>,
LKML <linux-kernel@...r.kernel.org>,
Oleksiy Avramchenko <oleksiy.avramchenko@...y.com>,
Frederic Weisbecker <frederic@...nel.org>
Subject: Re: [PATCH 1/3] rcu: Reduce synchronize_rcu() waiting time
On Thu, Oct 26, 2023 at 03:09:02PM +0200, Uladzislau Rezki wrote:
[...]
> > Late to the party, but I kinda wonder whether we can resolve it by:
> >
> > 1) either introducing a separate seglist that only contains callbacks
> > queued by call_rcu_hurry(), so that whenever a GP completes and
> > callbacks are ready, the call_rcu_hurry() callbacks are invoked first,
> >
> > 2) or making call_rcu_hurry() callbacks always be inserted at the head
> > of the NEXT list instead of the tail, e.g. (untested code):
> >
> > diff --git a/kernel/rcu/rcu_segcblist.c b/kernel/rcu/rcu_segcblist.c
> > index f71fac422c8f..89a875f8ecc7 100644
> > --- a/kernel/rcu/rcu_segcblist.c
> > +++ b/kernel/rcu/rcu_segcblist.c
> > @@ -338,13 +338,21 @@ bool rcu_segcblist_nextgp(struct rcu_segcblist *rsclp, unsigned long *lp)
> >   * absolutely not OK for it to ever miss posting a callback.
> >   */
> >  void rcu_segcblist_enqueue(struct rcu_segcblist *rsclp,
> > -			   struct rcu_head *rhp)
> > +			   struct rcu_head *rhp,
> > +			   bool is_lazy)
> >  {
> >  	rcu_segcblist_inc_len(rsclp);
> >  	rcu_segcblist_inc_seglen(rsclp, RCU_NEXT_TAIL);
> > -	rhp->next = NULL;
> > -	WRITE_ONCE(*rsclp->tails[RCU_NEXT_TAIL], rhp);
> > -	WRITE_ONCE(rsclp->tails[RCU_NEXT_TAIL], &rhp->next);
> > +	/* Hurry callback and RCU_NEXT segment non-empty: queue at the front. */
> > +	if (!is_lazy && rcu_segcblist_get_seglen(rsclp, RCU_NEXT_TAIL) > 1) {
> > +		/* Insert before the current first RCU_NEXT callback. */
> > +		rhp->next = READ_ONCE(*rsclp->tails[RCU_NEXT_READY_TAIL]);
> > +		WRITE_ONCE(*rsclp->tails[RCU_NEXT_READY_TAIL], rhp);
> > +	} else {
> > +		rhp->next = NULL;
> > +		WRITE_ONCE(*rsclp->tails[RCU_NEXT_TAIL], rhp);
> > +		WRITE_ONCE(rsclp->tails[RCU_NEXT_TAIL], &rhp->next);
> > +	}
> >  }
> >
> >  /*
> > diff --git a/kernel/rcu/rcu_segcblist.h b/kernel/rcu/rcu_segcblist.h
> > index 4fe877f5f654..459475bb8df9 100644
> > --- a/kernel/rcu/rcu_segcblist.h
> > +++ b/kernel/rcu/rcu_segcblist.h
> > @@ -136,7 +136,8 @@ struct rcu_head *rcu_segcblist_first_cb(struct rcu_segcblist *rsclp);
> >  struct rcu_head *rcu_segcblist_first_pend_cb(struct rcu_segcblist *rsclp);
> >  bool rcu_segcblist_nextgp(struct rcu_segcblist *rsclp, unsigned long *lp);
> >  void rcu_segcblist_enqueue(struct rcu_segcblist *rsclp,
> > -			   struct rcu_head *rhp);
> > +			   struct rcu_head *rhp,
> > +			   bool is_lazy);
> >  bool rcu_segcblist_entrain(struct rcu_segcblist *rsclp,
> >  			   struct rcu_head *rhp);
> >  void rcu_segcblist_extract_done_cbs(struct rcu_segcblist *rsclp,
> > diff --git a/kernel/rcu/srcutree.c b/kernel/rcu/srcutree.c
> > index 20d7a238d675..53adf5ab9c9f 100644
> > --- a/kernel/rcu/srcutree.c
> > +++ b/kernel/rcu/srcutree.c
> > @@ -1241,7 +1241,7 @@ static unsigned long srcu_gp_start_if_needed(struct srcu_struct *ssp,
> >  	sdp = raw_cpu_ptr(ssp->sda);
> >  	spin_lock_irqsave_sdp_contention(sdp, &flags);
> >  	if (rhp)
> > -		rcu_segcblist_enqueue(&sdp->srcu_cblist, rhp);
> > +		rcu_segcblist_enqueue(&sdp->srcu_cblist, rhp, true);
> >  	rcu_segcblist_advance(&sdp->srcu_cblist,
> >  			      rcu_seq_current(&ssp->srcu_sup->srcu_gp_seq));
> >  	s = rcu_seq_snap(&ssp->srcu_sup->srcu_gp_seq);
> > diff --git a/kernel/rcu/tasks.h b/kernel/rcu/tasks.h
> > index 8d65f7d576a3..7dec7c68f88f 100644
> > --- a/kernel/rcu/tasks.h
> > +++ b/kernel/rcu/tasks.h
> > @@ -362,7 +362,7 @@ static void call_rcu_tasks_generic(struct rcu_head *rhp, rcu_callback_t func,
> >  	}
> >  	if (needwake)
> >  		rtpcp->urgent_gp = 3;
> > -	rcu_segcblist_enqueue(&rtpcp->cblist, rhp);
> > +	rcu_segcblist_enqueue(&rtpcp->cblist, rhp, true);
> >  	raw_spin_unlock_irqrestore_rcu_node(rtpcp, flags);
> >  	if (unlikely(needadjust)) {
> >  		raw_spin_lock_irqsave(&rtp->cbs_gbl_lock, flags);
> > diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
> > index cb1caefa8bd0..e05cbff40dc7 100644
> > --- a/kernel/rcu/tree.c
> > +++ b/kernel/rcu/tree.c
> > @@ -2670,7 +2670,7 @@ __call_rcu_common(struct rcu_head *head, rcu_callback_t func, bool lazy_in)
> >  	if (rcu_nocb_try_bypass(rdp, head, &was_alldone, flags, lazy))
> >  		return; // Enqueued onto ->nocb_bypass, so just leave.
> >  	// If no-CBs CPU gets here, rcu_nocb_try_bypass() acquired ->nocb_lock.
> > -	rcu_segcblist_enqueue(&rdp->cblist, head);
> > +	rcu_segcblist_enqueue(&rdp->cblist, head, lazy_in);
> >  	if (__is_kvfree_rcu_offset((unsigned long)func))
> >  		trace_rcu_kvfree_callback(rcu_state.name, head,
> >  					  (unsigned long)func,
> >
Surprisingly, this survives a whole rcutorture run ;-)
> > Sure, there may be some corner cases I'm missing, but I think overall
> > this is better than (sorta) duplicating the logic of seglist (the llist
> > in sr_normal_state) or the logic of wake_rcu_gp()
> > (synchronize_rcu_normal).
> >
> > Anyway, these are just if-you-have-time-to-try options ;-)
> >
> Hm.. You still mix callbacks, and there is a dependency on the order of
> execution. The callback processing time might also vary from one
> callback to another.
>
> If you have many *_hurry() calls, we end up in the same situation. Apart
I plan to resolve that by only putting the call_rcu_hurry(wakeme_after_gp)
callback at the front of the list; see the sketch below.
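
Roughly something like this (an untested sketch; it assumes
wakeme_after_gp() were made visible to rcu_segcblist.c, which it
currently isn't since it's static in kernel/rcu/update.c, and it drops
the is_lazy parameter from the diff above):

void rcu_segcblist_enqueue(struct rcu_segcblist *rsclp,
			   struct rcu_head *rhp)
{
	/* Only synchronize_rcu()'s own wakeup callback jumps the queue. */
	bool front = rhp->func == wakeme_after_gp &&
		     rcu_segcblist_get_seglen(rsclp, RCU_NEXT_TAIL) > 0;

	rcu_segcblist_inc_len(rsclp);
	rcu_segcblist_inc_seglen(rsclp, RCU_NEXT_TAIL);
	if (front) {
		/* Queue at the head of the RCU_NEXT segment. */
		rhp->next = READ_ONCE(*rsclp->tails[RCU_NEXT_READY_TAIL]);
		WRITE_ONCE(*rsclp->tails[RCU_NEXT_READY_TAIL], rhp);
	} else {
		/* Normal path: append at the tail as before. */
		rhp->next = NULL;
		WRITE_ONCE(*rsclp->tails[RCU_NEXT_TAIL], rhp);
		WRITE_ONCE(rsclp->tails[RCU_NEXT_TAIL], &rhp->next);
	}
}

That way a flood of other *_hurry() callbacks cannot reorder each other;
only the grace-period wakeup gets the fast path.
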
> from that, we also have the !CONFIG_RCU_NOCB_CPU path, which is also
> covered by the patch in question.
I don't see why the above approach doesn't work for
!CONFIG_RCU_NOCB_CPU, but maybe I'm missing something here.
Do you have a benchmark I can try out to see whether my diff achieves a
similar result? Thanks!
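
Otherwise I can hack up something crude along these lines (a
hypothetical out-of-tree module, untested, name made up) just to get a
first-order number:

#include <linux/module.h>
#include <linux/ktime.h>
#include <linux/rcupdate.h>

/* Time N back-to-back synchronize_rcu() calls from process context. */
static int __init sr_bench_init(void)
{
	ktime_t start = ktime_get();
	int i;

	for (i = 0; i < 1000; i++)
		synchronize_rcu();
	pr_info("sr_bench: 1000 synchronize_rcu() took %lld us\n",
		ktime_us_delta(ktime_get(), start));
	return -EAGAIN;	/* Measurement done; no need to stay loaded. */
}
module_init(sr_bench_init);

MODULE_LICENSE("GPL");
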
Regards,
Boqun
>
> --
> Uladzislau Rezki