Message-Id: <20230408142517.800549-1-qiang1.zhang@intel.com>
Date: Sat, 8 Apr 2023 22:25:17 +0800
From: Zqiang <qiang1.zhang@...el.com>
To: urezki@...il.com, paulmck@...nel.org, frederic@...nel.org,
joel@...lfernandes.org, qiang1.zhang@...el.com
Cc: qiang.zhang1211@...il.com, rcu@...r.kernel.org,
linux-kernel@...r.kernel.org
Subject: [PATCH v3] rcu/kvfree: Prevents cache growing when the backoff_page_cache_fill is set

Currently, in kfree_rcu_shrink_scan(), drain_page_cache() is executed
before kfree_rcu_monitor() in order to drain the page cache. However,
if the bnode structure's ->gp_snap grace period has already elapsed,
kvfree_rcu_bulk() will simply refill the page cache from within
kfree_rcu_monitor(), undoing the drain. This commit adds a check of the
krcp structure's ->backoff_page_cache_fill in put_cached_bnode(): if
->backoff_page_cache_fill is set, the page cache is prevented from
growing, and fill_page_cache_func() is kept from allocating any pages
while the backoff is in effect.
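
For context, the refill that this patch suppresses happens when a
drained bnode is handed back to the per-CPU cache. Roughly (a condensed
sketch of the kvfree_rcu_bulk() tail, not the literal tree.c code; the
surrounding locking context and the object-freeing loop are omitted):

	raw_spin_lock_irqsave(&krcp->lock, flags);
	// With this patch, put_cached_bnode() now also fails whenever
	// ->backoff_page_cache_fill is set, so a shrinker-initiated
	// drain is not immediately undone by the monitor path.
	if (put_cached_bnode(krcp, bnode))
		bnode = NULL;	// page kept in the per-CPU ->bkvcache
	raw_spin_unlock_irqrestore(&krcp->lock, flags);

	if (bnode)
		free_page((unsigned long) bnode);	// cache refused the page
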
Signed-off-by: Zqiang <qiang1.zhang@...el.com>
---
 kernel/rcu/tree.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index cc34d13be181..9d9d3772cc45 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -2908,6 +2908,8 @@ static inline bool
 put_cached_bnode(struct kfree_rcu_cpu *krcp,
 	struct kvfree_rcu_bulk_data *bnode)
 {
+	if (atomic_read(&krcp->backoff_page_cache_fill))
+		return false;
 	// Check the limit.
 	if (krcp->nr_bkv_objs >= rcu_min_cached_objs)
 		return false;
@@ -3221,7 +3223,7 @@ static void fill_page_cache_func(struct work_struct *work)
 	int i;

 	nr_pages = atomic_read(&krcp->backoff_page_cache_fill) ?
-		1 : rcu_min_cached_objs;
+		0 : rcu_min_cached_objs;

 	for (i = 0; i < nr_pages; i++) {
 		bnode = (struct kvfree_rcu_bulk_data *)
--
2.32.0