Message-Id: <20210220110356.84399-1-redsky110@gmail.com>
Date: Sat, 20 Feb 2021 19:03:56 +0800
From: Honglei Wang <redsky110@...il.com>
To: davem@...emloft.net, edumazet@...gle.com
Cc: netdev@...r.kernel.org, redsky110@...il.com
Subject: [PATCH] tcp: avoid unnecessary loop if even ports are used up
Ports for connect() are currently taken from the even range first, which
leaves bind() users more available slots among the odd ports. But this
becomes a problem once the even ports are used up, which happens under a
flood of short-lived connections. In that scenario the search eventually
falls back to the odd range, but every request still has to walk all of
the even ports and their hash buckets (likely finding nothing until the
workload pressure is gone) before moving on to the odd ports. This makes
the code path __inet_hash_connect()->__inet_check_established() and the
locks taken there very hot.
This patch improves the strategy so the search goes faster when the even
range is exhausted. It records whether the last port obtained was odd or
even. If it was odd, no even port was available last time and we are
unlikely to get one this time either, so we walk only 1/16 of the even
ports. If we do find one that way, even ports have probably become
available again, so the next connect() goes back to the old strategy and
walks all of them. If the 1/16 scan also fails, we move straight to the
odd ports and avoid the unnecessary loop.
Signed-off-by: Honglei Wang <redsky110@...il.com>
---
net/ipv4/inet_hashtables.c | 21 +++++++++++++++++++--
1 file changed, 19 insertions(+), 2 deletions(-)
diff --git a/net/ipv4/inet_hashtables.c b/net/ipv4/inet_hashtables.c
index 45fb450b4522..c95bf5cf9323 100644
--- a/net/ipv4/inet_hashtables.c
+++ b/net/ipv4/inet_hashtables.c
@@ -721,9 +721,10 @@ int __inet_hash_connect(struct inet_timewait_death_row *death_row,
struct net *net = sock_net(sk);
struct inet_bind_bucket *tb;
u32 remaining, offset;
- int ret, i, low, high;
+ int ret, i, low, high, span;
static u32 hint;
int l3mdev;
+ static bool last_port_is_odd;
if (port) {
head = &hinfo->bhash[inet_bhashfn(net, port,
@@ -756,8 +757,19 @@ int __inet_hash_connect(struct inet_timewait_death_row *death_row,
*/
offset &= ~1U;
other_parity_scan:
+ /* If the last obtained port is odd, we
+ * walked all of the even ports last time
+ * and got nothing, so the even range is
+ * too busy to yield a free port. In this
+ * case, we can go a bit faster.
+ */
+ if (last_port_is_odd && !(offset & 1) && remaining > 32)
+ span = 32;
+ else
+ span = 2;
+
port = low + offset;
- for (i = 0; i < remaining; i += 2, port += 2) {
+ for (i = 0; i < remaining; i += span, port += span) {
if (unlikely(port >= high))
port -= remaining;
if (inet_is_local_reserved_port(net, port))
@@ -806,6 +818,11 @@ int __inet_hash_connect(struct inet_timewait_death_row *death_row,
ok:
hint += i + 2;
+ if (offset & 1)
+ last_port_is_odd = true;
+ else
+ last_port_is_odd = false;
+
/* Head lock still held and bh's disabled */
inet_bind_hash(sk, tb, port);
if (sk_unhashed(sk)) {
--
2.14.1