Message-Id: <20200918194817.48921-4-urezki@gmail.com>
Date: Fri, 18 Sep 2020 21:48:16 +0200
From: "Uladzislau Rezki (Sony)" <urezki@...il.com>
To: LKML <linux-kernel@...r.kernel.org>, RCU <rcu@...r.kernel.org>,
linux-mm@...ck.org, Andrew Morton <akpm@...ux-foundation.org>,
"Paul E . McKenney" <paulmck@...nel.org>
Cc: Peter Zijlstra <peterz@...radead.org>,
Michal Hocko <mhocko@...e.com>,
Vlastimil Babka <vbabka@...e.cz>,
Thomas Gleixner <tglx@...utronix.de>,
"Theodore Y . Ts'o" <tytso@....edu>,
Joel Fernandes <joel@...lfernandes.org>,
Sebastian Andrzej Siewior <bigeasy@...utronix.de>,
Uladzislau Rezki <urezki@...il.com>,
Oleksiy Avramchenko <oleksiy.avramchenko@...ymobile.com>
Subject: [PATCH 3/4] rcu/tree: use __rcu_alloc_page_lockless() func.
Use the newly introduced __rcu_alloc_page_lockless() function
directly in the k[v]free_rcu() path: a new pointer-array page can
be obtained on demand, which reduces the memory footprint, and the
allocation completes immediately, without any delay.
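
For context, a rough sketch of the contract this patch assumes for
that helper (the real implementation is introduced by an earlier
patch in this series and is not part of this diff; the body and the
internal helper below are illustrative only):

    /*
     * Illustrative sketch, not the actual implementation. Assumed
     * semantics: pop one order-0 page from the current CPU's
     * "pcplist" without taking zone->lock or entering the buddy
     * allocator, so it is safe to call from any context. Return
     * the page's virtual address, or 0 on a cache miss.
     */
    unsigned long __rcu_alloc_page_lockless(void)
    {
        struct page *page;

        /* Consult only the per-cpu-list cache; no fallback path. */
        page = pcplist_pop_page();  /* hypothetical helper */
        return page ? (unsigned long) page_address(page) : 0;
    }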
Please note, we still keep the worker approach introduced earlier,
because the lock-less page allocation uses a per-cpu-list cache that
can be depleted, which is absolutely normal behaviour. When that
happens, the worker requests a new page through the regular
allocator; as a side effect, that also prefetches a specified number
of elements from the buddy allocator, repopulating the "pcplist"
with fresh pages.
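
To make that interplay concrete, below is a simplified sketch of the
assumed worker body, modelled on the page-cache worker added earlier
in this series (a rough reconstruction, not part of this diff):

    /*
     * Simplified sketch of the assumed refill worker. A regular
     * GFP_KERNEL allocation goes through the buddy allocator,
     * which refills the per-cpu "pcplist" in batches as a side
     * effect, so subsequent lock-less attempts can succeed again.
     */
    static void fill_page_cache_func(struct work_struct *work)
    {
        struct kfree_rcu_cpu *krcp =
            container_of(work, struct kfree_rcu_cpu, page_cache_work);
        struct kvfree_rcu_bulk_data *bnode;
        unsigned long flags;
        bool pushed;

        bnode = (struct kvfree_rcu_bulk_data *)
            __get_free_page(GFP_KERNEL | __GFP_NOWARN);
        if (!bnode)
            return;

        raw_spin_lock_irqsave(&krcp->lock, flags);
        pushed = put_cached_bnode(krcp, bnode);
        raw_spin_unlock_irqrestore(&krcp->lock, flags);

        /* The cache is already full, return the page back. */
        if (!pushed)
            free_page((unsigned long) bnode);
    }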
The number of pre-fetched elements can be controlled via a sysctl
knob; please see /proc/sys/vm/percpu_pagelist_fraction.
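
For example, a value of 8 allows each per-cpu page list to hold up
to 1/8th of the pages in its zone; the batch size used for refills
is derived from that limit.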
Signed-off-by: Uladzislau Rezki (Sony) <urezki@...il.com>
---
kernel/rcu/tree.c | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index 4bfc46a1e9d1..d51209343029 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -3401,6 +3401,10 @@ kvfree_call_rcu_add_ptr_to_bulk(struct kfree_rcu_cpu *krcp, void *ptr)
if (!krcp->bkvhead[idx] ||
krcp->bkvhead[idx]->nr_records == KVFREE_BULK_MAX_ENTR) {
bnode = get_cached_bnode(krcp);
+ if (!bnode)
+ bnode = (struct kvfree_rcu_bulk_data *)
+ __rcu_alloc_page_lockless();
+
/* Switch to emergency path. */
if (!bnode)
return false;
--
2.20.1