Message-ID: <20210510142018.GA2350@pc638.lan>
Date: Mon, 10 May 2021 16:20:18 +0200
From: Uladzislau Rezki <urezki@...il.com>
To: "Paul E. McKenney" <paulmck@...nel.org>,
Andrew Morton <akpm@...ux-foundation.org>
Cc: Uladzislau Rezki <urezki@...il.com>,
Andrew Morton <akpm@...ux-foundation.org>,
LKML <linux-kernel@...r.kernel.org>, RCU <rcu@...r.kernel.org>,
Michal Hocko <mhocko@...e.com>, Daniel Axtens <dja@...ens.net>,
Frederic Weisbecker <frederic@...nel.org>,
Neeraj Upadhyay <neeraju@...eaurora.org>,
Joel Fernandes <joel@...lfernandes.org>,
Peter Zijlstra <peterz@...radead.org>,
Thomas Gleixner <tglx@...utronix.de>,
"Theodore Y . Ts'o" <tytso@....edu>,
Sebastian Andrzej Siewior <bigeasy@...utronix.de>,
Oleksiy Avramchenko <oleksiy.avramchenko@...ymobile.com>
Subject: Re: [PATCH v1 4/5] kvfree_rcu: Refactor kfree_rcu_monitor() function
On Mon, May 10, 2021 at 07:01:43AM -0700, Paul E. McKenney wrote:
> On Mon, May 10, 2021 at 12:09:01PM +0200, Uladzislau Rezki wrote:
> > On Sun, May 09, 2021 at 04:59:54PM -0700, Andrew Morton wrote:
> > > On Wed, 28 Apr 2021 15:44:21 +0200 "Uladzislau Rezki (Sony)" <urezki@...il.com> wrote:
> > >
> > > > Rearm the monitor work directly from its own function that
> > > > is kfree_rcu_monitor(). So this patch puts the invocation
> > > > timing control in one place.
> > > >
> > > > ...
> > > >
> > > > --- a/kernel/rcu/tree.c
> > > > +++ b/kernel/rcu/tree.c
> > > > @@ -3415,37 +3415,44 @@ static inline bool queue_kfree_rcu_work(struct kfree_rcu_cpu *krcp)
> > > > return !repeat;
> > > > }
> > > >
> > > > -static inline void kfree_rcu_drain_unlock(struct kfree_rcu_cpu *krcp,
> > > > - unsigned long flags)
> > > > +/*
> > > > + * This function queues a new batch. It returns 1 on success,
> > > > + * or if there is nothing to drain. Otherwise 0 is returned,
> > > > + * indicating that a reclaim kthread has not yet processed a
> > > > + * previous batch.
> > > > + */
> > > > +static inline int kfree_rcu_drain(struct kfree_rcu_cpu *krcp)
> > > > {
> > > > + unsigned long flags;
> > > > + int ret;
> > > > +
> > > > + raw_spin_lock_irqsave(&krcp->lock, flags);
> > > > +
> > > > // Attempt to start a new batch.
> > > > - if (queue_kfree_rcu_work(krcp)) {
> > > > + ret = queue_kfree_rcu_work(krcp);
> > >
> > > This code has changed slightly in mainline. Can you please redo,
> > > retest and resend?
> > >
> > > > + if (ret)
> > > > // Success! Our job is done here.
> > > > krcp->monitor_todo = false;
> > > > - raw_spin_unlock_irqrestore(&krcp->lock, flags);
> > > > - return;
> > > > - }
> > >
> > > It's conventional to retain the braces here, otherwise the code looks
> > > weird. Unless you're a python programmer ;)
> > >
> > >
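Agreed on keeping the braces. Just to illustrate (a sketch against the
hunk quoted above only, not the final code), the branch would then read:

	if (ret) {
		// Success! Our job is done here.
		krcp->monitor_todo = false;
	}
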
> > Hello, Andrew.
> >
> > This refactoring is out of date and obsolete; instead, we have done a
> > bigger rework of kfree_rcu_monitor(). It is located here:
> >
> > https://kernel.googlesource.com/pub/scm/linux/kernel/git/paulmck/linux-rcu/+/2349a35d39e7af5eef9064cbd0e42309040551da%5E%21/#F0
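
For reference, here is a rough sketch of the shape that rework takes. It is
a sketch only, not the code behind the link: queue_kfree_rcu_work(),
krcp->lock and monitor_todo come from the hunk quoted above, while
monitor_work and KFREE_DRAIN_JIFFIES are the names used in that era's
kernel/rcu/tree.c and may differ in the actual commit. The point is that
the drain attempt and the decision to rearm the delayed work both live in
kfree_rcu_monitor(), so the invocation timing is controlled in one place:

static void kfree_rcu_monitor(struct work_struct *work)
{
	struct kfree_rcu_cpu *krcp = container_of(work,
			struct kfree_rcu_cpu, monitor_work.work);
	unsigned long flags;

	raw_spin_lock_irqsave(&krcp->lock, flags);

	// Attempt to queue a new batch for the reclaim kthread.
	if (queue_kfree_rcu_work(krcp)) {
		// Success! Nothing more to track for now.
		krcp->monitor_todo = false;
	} else {
		// A previous batch has not been processed yet; come back
		// later and retry from this same place.
		schedule_delayed_work(&krcp->monitor_work,
				KFREE_DRAIN_JIFFIES);
	}

	raw_spin_unlock_irqrestore(&krcp->lock, flags);
}

With that shape, no separate drain-and-rearm helper is needed; everything
that decides when the monitor runs again sits in the monitor itself.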
>
> If Andrew would like to start taking these kvfree_rcu() patches,
> that would be all to the good. For example, there is likely much
> more opportunity for optimization by bringing them closer to the
> sl*b allocators. Yes, they will need some privileged access to RCU
> internals, but not that much. And at some point, they should move from
> their current home in kernel/rcu/tree.c to somewhere in mm.
>
Yes, changing its home is the plan :)
> To that end, here is the list in -rcu against current mainline, from
> youngest to oldest:
>
> b5691dd1cd7a kvfree_rcu: Fix comments according to current code
> 2349a35d39e7 kvfree_rcu: Refactor kfree_rcu_monitor()
> bfa15885893f kvfree_rcu: Release a page cache under memory pressure
> de9d86c3b0b7 kvfree_rcu: Use [READ/WRITE]_ONCE() macros to access to nr_bkv_objs
> 54a0393340f7 kvfree_rcu: Add a bulk-list check when a scheduler is run
> 7490789de1ac kvfree_rcu: Update "monitor_todo" once a batch is started
> 28e690ce0347 kvfree_rcu: Use kfree_rcu_monitor() instead of open-coded variant
>
> Please let me know how you would like to proceed.
>
> Thanx, Paul