Message-Id: <20241013201704.49576-16-Julia.Lawall@inria.fr>
Date: Sun, 13 Oct 2024 22:17:02 +0200
From: Julia Lawall <Julia.Lawall@...ia.fr>
To: Pablo Neira Ayuso <pablo@...filter.org>
Cc: kernel-janitors@...r.kernel.org,
vbabka@...e.cz,
paulmck@...nel.org,
Jozsef Kadlecsik <kadlec@...filter.org>,
"David S. Miller" <davem@...emloft.net>,
Eric Dumazet <edumazet@...gle.com>,
Jakub Kicinski <kuba@...nel.org>,
Paolo Abeni <pabeni@...hat.com>,
netfilter-devel@...r.kernel.org,
coreteam@...filter.org,
netdev@...r.kernel.org,
linux-kernel@...r.kernel.org
Subject: [PATCH 15/17] netfilter: nf_conncount: replace call_rcu by kfree_rcu for simple kmem_cache_free callback
Since SLOB was removed and since
commit 6c6c47b063b5 ("mm, slab: call kvfree_rcu_barrier() from kmem_cache_destroy()"),
it is no longer necessary to use call_rcu() when the callback only performs
kmem_cache_free(): kfree() now works on objects allocated from any
kmem_cache, and kmem_cache_destroy() flushes pending kvfree_rcu()
requests before the cache is destroyed. Use kfree_rcu() directly.
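For illustration, a minimal sketch of the pattern, using hypothetical
names (struct foo, foo_cachep, foo_release, foo_cache_destroy) that are
not part of the nf_conncount code: the object only needs to embed a
struct rcu_head, kfree_rcu() returns it to its kmem_cache after a grace
period, and no hand-written RCU callback is required.

#include <linux/rcupdate.h>
#include <linux/slab.h>

/* Hypothetical example of the pattern; not taken from this file. */
struct foo {
	int data;
	struct rcu_head rcu_head;	/* required by kfree_rcu() */
};

static struct kmem_cache *foo_cachep;

static void foo_release(struct foo *f)
{
	/*
	 * Before: call_rcu(&f->rcu_head, cb) with a callback that only
	 * did kmem_cache_free(foo_cachep, f).  kfree_rcu() returns the
	 * object to its cache after a grace period instead.
	 */
	kfree_rcu(f, rcu_head);
}

static void foo_cache_destroy(void)
{
	/*
	 * Safe even with kfree_rcu() requests still in flight:
	 * kmem_cache_destroy() calls kvfree_rcu_barrier() first.
	 */
	kmem_cache_destroy(foo_cachep);
}

The same reasoning applies to conncount_rb_cachep below: __tree_nodes_free()
only wrapped kmem_cache_free(), so it can be dropped entirely.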
The changes were made using Coccinelle.
Signed-off-by: Julia Lawall <Julia.Lawall@...ia.fr>
---
net/netfilter/nf_conncount.c | 10 +---------
1 file changed, 1 insertion(+), 9 deletions(-)
diff --git a/net/netfilter/nf_conncount.c b/net/netfilter/nf_conncount.c
index 4890af4dc263..6a7a6c2d6ebc 100644
--- a/net/netfilter/nf_conncount.c
+++ b/net/netfilter/nf_conncount.c
@@ -275,14 +275,6 @@ bool nf_conncount_gc_list(struct net *net,
 }
 EXPORT_SYMBOL_GPL(nf_conncount_gc_list);
 
-static void __tree_nodes_free(struct rcu_head *h)
-{
-	struct nf_conncount_rb *rbconn;
-
-	rbconn = container_of(h, struct nf_conncount_rb, rcu_head);
-	kmem_cache_free(conncount_rb_cachep, rbconn);
-}
-
 /* caller must hold tree nf_conncount_locks[] lock */
 static void tree_nodes_free(struct rb_root *root,
 			    struct nf_conncount_rb *gc_nodes[],
@@ -295,7 +287,7 @@ static void tree_nodes_free(struct rb_root *root,
 		spin_lock(&rbconn->list.list_lock);
 		if (!rbconn->list.count) {
 			rb_erase(&rbconn->node, root);
-			call_rcu(&rbconn->rcu_head, __tree_nodes_free);
+			kfree_rcu(rbconn, rcu_head);
 		}
 		spin_unlock(&rbconn->list.list_lock);
 	}