Message-ID: <CAEXW_YRquk15oGMCrYXLTKWtMzfPJhEJpjENM_rqt4qjwtAt+g@mail.gmail.com>
Date: Wed, 2 Nov 2022 13:29:17 -0400
From: Joel Fernandes <joel@...lfernandes.org>
To: Uladzislau Rezki <urezki@...il.com>
Cc: "Paul E. McKenney" <paulmck@...nel.org>, rcu@...r.kernel.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH RFC] rcu/kfree: Do not request RCU when not needed
On Wed, Nov 2, 2022 at 1:24 PM Uladzislau Rezki <urezki@...il.com> wrote:
>
> On Wed, Nov 02, 2022 at 09:35:44AM -0700, Paul E. McKenney wrote:
> > On Wed, Nov 02, 2022 at 12:13:17PM -0400, Joel Fernandes wrote:
> > > On Wed, Nov 2, 2022 at 8:37 AM Uladzislau Rezki <urezki@...il.com> wrote:
> > > >
> > > > On Sat, Oct 29, 2022 at 01:28:56PM +0000, Joel Fernandes (Google) wrote:
> > > > > On ChromeOS, I am (almost) always seeing the optimization trigger.
> > > > > Tested boot-up, trace_printk'ing how often it triggers.
> > > > >
> > > > > Signed-off-by: Joel Fernandes (Google) <joel@...lfernandes.org>
> > > > > ---
> > > > > kernel/rcu/tree.c | 18 +++++++++++++++++-
> > > > > 1 file changed, 17 insertions(+), 1 deletion(-)
> > > > >
> > > > > diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
> > > > > index 591187b6352e..3e4c50b9fd33 100644
> > > > > --- a/kernel/rcu/tree.c
> > > > > +++ b/kernel/rcu/tree.c
> > > > > @@ -2935,6 +2935,7 @@ struct kfree_rcu_cpu_work {
> > > > >
> > > > > /**
> > > > > * struct kfree_rcu_cpu - batch up kfree_rcu() requests for RCU grace period
> > > > > + * @rdp: The rdp of the CPU that this kfree_rcu corresponds to.
> > > > > * @head: List of kfree_rcu() objects not yet waiting for a grace period
> > > > > * @bkvhead: Bulk-List of kvfree_rcu() objects not yet waiting for a grace period
> > > > > * @krw_arr: Array of batches of kfree_rcu() objects waiting for a grace period
> > > > > @@ -2964,6 +2965,8 @@ struct kfree_rcu_cpu {
> > > > > struct kfree_rcu_cpu_work krw_arr[KFREE_N_BATCHES];
> > > > > raw_spinlock_t lock;
> > > > > struct delayed_work monitor_work;
> > > > > + struct rcu_data *rdp;
> > > > > + unsigned long last_gp_seq;
> > > > > bool initialized;
> > > > > int count;
> > > > >
> > > > > @@ -3167,6 +3170,7 @@ schedule_delayed_monitor_work(struct kfree_rcu_cpu *krcp)
> > > > > mod_delayed_work(system_wq, &krcp->monitor_work, delay);
> > > > > return;
> > > > > }
> > > > > + krcp->last_gp_seq = krcp->rdp->gp_seq;
> > > > > queue_delayed_work(system_wq, &krcp->monitor_work, delay);
> > > > > }
> > > > >
> > > > > @@ -3217,7 +3221,17 @@ static void kfree_rcu_monitor(struct work_struct *work)
> > > > > // be that the work is in the pending state when
> > > > > // channels have been detached following by each
> > > > > // other.
> > > > > - queue_rcu_work(system_wq, &krwp->rcu_work);
> > > > > + //
> > > > > + // NOTE about gp_seq wrap: In case of gp_seq overflow,
> > > > > + // it is possible for rdp->gp_seq to be less than
> > > > > + // krcp->last_gp_seq even though a GP might be over. In
> > > > > + // this rare case, we would just have one extra GP.
> > > > > + if (krcp->last_gp_seq &&
> > > > >
> > > > This check can be eliminated, I think. A kfree_rcu_cpu is defined as
> > > > static, so by default last_gp_seq is set to zero.
> > >
> > > Ack.
> > >
> > > > > @@ -4802,6 +4816,8 @@ static void __init kfree_rcu_batch_init(void)
> > > > > for_each_possible_cpu(cpu) {
> > > > > struct kfree_rcu_cpu *krcp = per_cpu_ptr(&krc, cpu);
> > > > >
> > > > > + krcp->rdp = per_cpu_ptr(&rcu_data, cpu);
> > > > > + krcp->last_gp_seq = 0;
> > > > >
> > > > Yep. This one can be just dropped.
> > > >
> > > > But all the rest looks good :) I will give it a try from a test point of
> > > > view. It is also interesting from the memory footprint point of view.
> > >
> > > Ack. Thanks. Even though we should not sample rdp->gp_seq, I think it
> > > is still worth a test.
> >
> > Just for completeness, the main purpose of rdp->gp_seq is to reject
> > quiescent states that were seen during already-completed grace periods.
> >
> So it means that instead of reading gp_seq we should take a snapshot
> of the current state:
>
> snp = get_state_synchronize_rcu();
>
> and later on do a:
>
> cond_synchronize_rcu(snp);
>
> to wait for a GP.
This can't be called from the timer IRQ handler though (monitor)
> Or check poll_state_synchronize_rcu(oldstate) != 0 before doing a
> queue_rcu_work().
But something like this should be possible (maybe)
> Sorry for a description using the RCU API functions name :)
I believe you will have to call rcu_poll_gp_seq_start() as well if you
are using the polled API. I am planning to look at this more properly
soon. Right now I am going to write up the rcutop doc and share it with
you guys.
(Maybe RCU polling is the right thing to do, as we would reuse all the
infrastructure and any corner cases it handles.)
thanks,
- Joel
> --
> Uladzislau Rezki