Message-Id: <20200323113621.12048-4-urezki@gmail.com>
Date: Mon, 23 Mar 2020 12:36:17 +0100
From: "Uladzislau Rezki (Sony)" <urezki@...il.com>
To: LKML <linux-kernel@...r.kernel.org>,
"Paul E . McKenney" <paulmck@...nel.org>,
Joel Fernandes <joel@...lfernandes.org>
Cc: RCU <rcu@...r.kernel.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Uladzislau Rezki <urezki@...il.com>,
Steven Rostedt <rostedt@...dmis.org>,
Oleksiy Avramchenko <oleksiy.avramchenko@...ymobile.com>
Subject: [PATCH 3/7] rcu/tree: introduce expedited_drain flag
It is set to true when the bulk array cannot be maintained, which
happens under low-memory conditions and memory pressure.

In that case the drain work is scheduled right away instead of after
KFREE_DRAIN_JIFFIES, which tends to speed up the reclaim path. On the
other hand, there are no data showing the difference yet.
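
For illustration, the intended flow is roughly the following (a
simplified sketch based on the patch below, not the patch itself):

	/*
	 * Sketch: if the object cannot be placed into the bulk array
	 * (page allocation failed under memory pressure), fall back to
	 * the rcu_head path and request an expedited drain.
	 */
	if (!kvfree_call_rcu_add_ptr_to_bulk(krcp, ptr))
		expedited_drain = true;

	/* Drain immediately instead of waiting KFREE_DRAIN_JIFFIES. */
	schedule_delayed_work(&krcp->monitor_work,
			      expedited_drain ? 0 : KFREE_DRAIN_JIFFIES);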
Signed-off-by: Uladzislau Rezki (Sony) <urezki@...il.com>
---
kernel/rcu/tree.c | 20 ++++++++++++++++----
1 file changed, 16 insertions(+), 4 deletions(-)
diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index 20d08eca7006..869a72e25d38 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -3061,14 +3061,16 @@ kvfree_call_rcu_add_ptr_to_bulk(struct kfree_rcu_cpu *krcp, void *ptr)
* due to memory pressure.
*
* Each kvfree_call_rcu() request is added to a batch. The batch will be drained
- * every KFREE_DRAIN_JIFFIES number of jiffies. All the objects in the batch will
- * be free'd in workqueue context. This allows us to: batch requests together to
- * reduce the number of grace periods during heavy kfree_rcu()/kvfree_rcu() load.
+ * every KFREE_DRAIN_JIFFIES number of jiffies, or right away if low memory is
+ * detected. All the objects in the batch will be free'd in workqueue context.
+ * This allows us to: batch requests together to reduce the number of grace
+ * periods during heavy kfree_rcu()/kvfree_rcu() load.
*/
void kvfree_call_rcu(struct rcu_head *head, rcu_callback_t func)
{
unsigned long flags;
struct kfree_rcu_cpu *krcp;
+ bool expedited_drain = false;
void *ptr;
local_irq_save(flags); // For safely calling this_cpu_ptr().
@@ -3094,6 +3096,14 @@ void kvfree_call_rcu(struct rcu_head *head, rcu_callback_t func)
head->func = func;
head->next = krcp->head;
krcp->head = head;
+
+	/*
+	 * The pointer could not be placed directly into the
+	 * array due to memory pressure. Initiate an expedited
+	 * drain instead of waiting for the lazy invocation of
+	 * the appropriate free calls.
+	 */
+ expedited_drain = true;
}
WRITE_ONCE(krcp->count, krcp->count + 1);
@@ -3102,7 +3112,9 @@ void kvfree_call_rcu(struct rcu_head *head, rcu_callback_t func)
if (rcu_scheduler_active == RCU_SCHEDULER_RUNNING &&
!krcp->monitor_todo) {
krcp->monitor_todo = true;
- schedule_delayed_work(&krcp->monitor_work, KFREE_DRAIN_JIFFIES);
+
+ schedule_delayed_work(&krcp->monitor_work,
+			expedited_drain ? 0 : KFREE_DRAIN_JIFFIES);
}
unlock_return:
--
2.20.1