Message-ID: <4122BAE8-E48F-4C3B-9505-D0E033342416@drivenets.com>
Date: Mon, 23 Sep 2024 07:46:48 +0000
From: Gilad Naaman <gnaaman@...venets.com>
To: "netdev@...r.kernel.org" <netdev@...r.kernel.org>
Subject: Adding net_device->neighbours pointers
Hello,
We're required to support a massive number of VLANs over a single link.
In one of the flows we tested, setting the carrier link down took
tens of seconds, with the rtnl_lock held for the entire time.
While profiling, I realized that a significant amount of time is spent
iterating the neighbour tables in order to flush the neighbours of the
VLANs [0] (~50% of 40s, for 4000 VLANs and 50K neighbours in each of
the IPv4 and IPv6 tables).
We managed to mostly eliminate this time by adding a few more
pointers into the mix:
struct neighbour {
-	struct neighbour __rcu	*next;
+	struct hlist_node __rcu	list;
+	struct hlist_node __rcu	dev_list;
	...
};

struct net_device {
+	struct hlist_head	neighbours[NEIGH_NR_TABLES];
	...
};
The cost is that every neighbour is now 3 pointers larger,
and that every net_device is either 3 pointers larger,
or, if decnet is removed in the future, 2 pointers larger.
In return, we are able to iterate only the neighbours owned by a given
device, if any exist, instead of the entire table.
I can say that we're willing to pay this price in memory,
but I'm uncertain whether this trade-off is right for the mainstream
kernel user. I would love to see this patch upstreamed in some form
or another, and would appreciate any advice.
Thank you,
Gilad
[0] perf-Flamegraph: https://gist.github.com/gnaaman-dn/eff753141e65b31a34cd14d14b942747