Message-Id: <20210428134422.1894-4-urezki@gmail.com>
Date: Wed, 28 Apr 2021 15:44:21 +0200
From: "Uladzislau Rezki (Sony)" <urezki@...il.com>
To: LKML <linux-kernel@...r.kernel.org>, RCU <rcu@...r.kernel.org>,
"Paul E . McKenney" <paulmck@...nel.org>
Cc: Michal Hocko <mhocko@...e.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Daniel Axtens <dja@...ens.net>,
Frederic Weisbecker <frederic@...nel.org>,
Neeraj Upadhyay <neeraju@...eaurora.org>,
Joel Fernandes <joel@...lfernandes.org>,
Peter Zijlstra <peterz@...radead.org>,
Thomas Gleixner <tglx@...utronix.de>,
"Theodore Y . Ts'o" <tytso@....edu>,
Sebastian Andrzej Siewior <bigeasy@...utronix.de>,
Uladzislau Rezki <urezki@...il.com>,
Oleksiy Avramchenko <oleksiy.avramchenko@...ymobile.com>
Subject: [PATCH v1 4/5] kvfree_rcu: Refactor kfree_rcu_monitor() function

Rearm the monitor work directly from kfree_rcu_monitor() itself,
so that control over the invocation timing lives in one place.

Signed-off-by: Uladzislau Rezki (Sony) <urezki@...il.com>
---
kernel/rcu/tree.c | 35 +++++++++++++++++++++--------------
1 file changed, 21 insertions(+), 14 deletions(-)
diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index e44d6f8c56f0..229e909ad437 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -3415,37 +3415,44 @@ static inline bool queue_kfree_rcu_work(struct kfree_rcu_cpu *krcp)
         return !repeat;
 }
 
-static inline void kfree_rcu_drain_unlock(struct kfree_rcu_cpu *krcp,
-                                          unsigned long flags)
+/*
+ * This function queues a new batch. It returns 1 on success
+ * or if there was nothing to drain. Otherwise 0 is returned,
+ * meaning a reclaim kthread has not yet processed a previous batch.
+ */
+static inline int kfree_rcu_drain(struct kfree_rcu_cpu *krcp)
 {
+        unsigned long flags;
+        int ret;
+
+        raw_spin_lock_irqsave(&krcp->lock, flags);
+
         // Attempt to start a new batch.
-        if (queue_kfree_rcu_work(krcp)) {
+        ret = queue_kfree_rcu_work(krcp);
+        if (ret)
                 // Success! Our job is done here.
                 krcp->monitor_todo = false;
-                raw_spin_unlock_irqrestore(&krcp->lock, flags);
-                return;
-        }
 
         // Previous RCU batch still in progress, try again later.
-        schedule_delayed_work(&krcp->monitor_work, KFREE_DRAIN_JIFFIES);
         raw_spin_unlock_irqrestore(&krcp->lock, flags);
+        return ret;
 }
 
 /*
  * This function is invoked after the KFREE_DRAIN_JIFFIES timeout.
- * It invokes kfree_rcu_drain_unlock() to attempt to start another batch.
+ * It invokes kfree_rcu_drain() to attempt to start another batch.
  */
 static void kfree_rcu_monitor(struct work_struct *work)
 {
-        unsigned long flags;
         struct kfree_rcu_cpu *krcp = container_of(work, struct kfree_rcu_cpu,
                                                   monitor_work.work);
 
-        raw_spin_lock_irqsave(&krcp->lock, flags);
-        if (krcp->monitor_todo)
-                kfree_rcu_drain_unlock(krcp, flags);
-        else
-                raw_spin_unlock_irqrestore(&krcp->lock, flags);
+        if (kfree_rcu_drain(krcp))
+                return;
+
+        // No success. A previous batch is still in progress.
+        // Rearm the monitor work to retry queuing another batch.
+        schedule_delayed_work(&krcp->monitor_work, KFREE_DRAIN_JIFFIES);
 }
 
 static enum hrtimer_restart
--
2.20.1