Message-ID: <1506356054.6617.13.camel@edumazet-glaptop3.roam.corp.google.com>
Date: Mon, 25 Sep 2017 09:14:14 -0700
From: Eric Dumazet <eric.dumazet@...il.com>
To: David Miller <davem@...emloft.net>
Cc: netdev <netdev@...r.kernel.org>
Subject: [PATCH net-next] inetpeer: speed up inetpeer_invalidate_tree()
From: Eric Dumazet <edumazet@...gle.com>
As measured in my prior patch ("sch_netem: faster rb tree removal"),
rbtree_postorder_for_each_entry_safe() is nice looking but much slower
than using rb_next() directly, except when the tree is small enough
to fit in CPU caches (then the cost is the same).
Signed-off-by: Eric Dumazet <edumazet@...gle.com>
---
net/ipv4/inetpeer.c | 11 +++++++----
1 file changed, 7 insertions(+), 4 deletions(-)
diff --git a/net/ipv4/inetpeer.c b/net/ipv4/inetpeer.c
index e7eb590c86ce2b33654c17c61619de74ff07bfd1..6e5626cc366c150b13fd44a8e36169c2fb54476d 100644
--- a/net/ipv4/inetpeer.c
+++ b/net/ipv4/inetpeer.c
@@ -284,14 +284,17 @@ EXPORT_SYMBOL(inet_peer_xrlim_allow);
 
 void inetpeer_invalidate_tree(struct inet_peer_base *base)
 {
-	struct inet_peer *p, *n;
+	struct rb_node *p = rb_first(&base->rb_root);
 
-	rbtree_postorder_for_each_entry_safe(p, n, &base->rb_root, rb_node) {
-		inet_putpeer(p);
+	while (p) {
+		struct inet_peer *peer = rb_entry(p, struct inet_peer, rb_node);
+
+		p = rb_next(p);
+		rb_erase(&peer->rb_node, &base->rb_root);
+		inet_putpeer(peer);
 		cond_resched();
 	}
 
-	base->rb_root = RB_ROOT;
 	base->total = 0;
 }
 EXPORT_SYMBOL(inetpeer_invalidate_tree);
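
For readers unfamiliar with the two traversal styles compared above, here
is a minimal sketch (not part of the patch) contrasting them on a
hypothetical tree of "struct item" nodes. It assumes kernel context and
the generic <linux/rbtree.h> API; "struct item" and the flush_*() helpers
are made-up names used only for illustration.

#include <linux/rbtree.h>
#include <linux/sched.h>	/* cond_resched() */
#include <linux/slab.h>		/* kfree() */

struct item {
	struct rb_node node;
	int key;
};

/* Postorder walk: never rebalances, but descends to the leftmost leaf
 * of every subtree, which touches cold nodes once the tree no longer
 * fits in CPU caches (the case the changelog measures).
 */
static void flush_postorder(struct rb_root *root)
{
	struct item *it, *n;

	rbtree_postorder_for_each_entry_safe(it, n, root, node)
		kfree(it);
	*root = RB_ROOT;	/* nodes freed in place; reset root by hand */
}

/* rb_first()/rb_next() walk, as in the patch above: fetch the successor
 * before erasing the current node, so rb_erase() leaves the tree valid
 * at every step and no final root reset is needed.
 */
static void flush_inorder(struct rb_root *root)
{
	struct rb_node *p = rb_first(root);

	while (p) {
		struct item *it = rb_entry(p, struct item, node);

		p = rb_next(p);
		rb_erase(&it->node, root);
		kfree(it);
		cond_resched();
	}
}

Note the ordering in flush_inorder(): p is advanced with rb_next() before
rb_erase() runs, and since the leftmost node has no left child, erasing it
cannot invalidate the saved successor pointer.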