Message-Id: <20211105014953.972946-2-dima@arista.com>
Date: Fri, 5 Nov 2021 01:49:49 +0000
From: Dmitry Safonov <dima@...sta.com>
To: linux-kernel@...r.kernel.org
Cc: Dmitry Safonov <0x7f454c46@...il.com>,
Dmitry Safonov <dima@...sta.com>,
Andy Lutomirski <luto@...capital.net>,
David Ahern <dsahern@...nel.org>,
"David S. Miller" <davem@...emloft.net>,
Eric Dumazet <edumazet@...gle.com>,
Francesco Ruggeri <fruggeri@...sta.com>,
Jakub Kicinski <kuba@...nel.org>,
Herbert Xu <herbert@...dor.apana.org.au>,
Hideaki YOSHIFUJI <yoshfuji@...ux-ipv6.org>,
Leonard Crestez <cdleonard@...il.com>,
linux-crypto@...r.kernel.org, netdev@...r.kernel.org
Subject: [PATCH 1/5] tcp/md5: Don't BUG_ON() failed kmemdup()
static_branch_unlikely(&tcp_md5_needed) is enabled by
tcp_alloc_md5sig_pool(), so as long as the code doesn't change,
tcp_md5sig_pool has already been populated by the time this code
runs.
If the tcptw->tw_md5_key allocation fails, there is no reason to crash
the kernel: tcp_{v4,v6}_send_ack() will send an unsigned segment and the
connection won't be established. That is bad enough, but in an OOM
situation it is acceptable, and certainly better than a kernel crash.
Introduce a tcp_md5sig_pool_ready() helper.
Using tcp_alloc_md5sig_pool() is intentionally avoided here: this is a
fast path, and the call is a sanity check rather than the point of
actual pool allocation. That will later allow a generic slow-path
allocator for the TCP crypto pool.
Signed-off-by: Dmitry Safonov <dima@...sta.com>
---
 include/net/tcp.h        | 1 +
 net/ipv4/tcp.c           | 5 +++++
 net/ipv4/tcp_minisocks.c | 5 +++--
 3 files changed, 9 insertions(+), 2 deletions(-)
diff --git a/include/net/tcp.h b/include/net/tcp.h
index 4da22b41bde6..3e5423a10a74 100644
--- a/include/net/tcp.h
+++ b/include/net/tcp.h
@@ -1672,6 +1672,7 @@ tcp_md5_do_lookup(const struct sock *sk, int l3index,
#endif
bool tcp_alloc_md5sig_pool(void);
+bool tcp_md5sig_pool_ready(void);
struct tcp_md5sig_pool *tcp_get_md5sig_pool(void);
static inline void tcp_put_md5sig_pool(void)
diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
index b7796b4cf0a0..c0856a6af9f5 100644
--- a/net/ipv4/tcp.c
+++ b/net/ipv4/tcp.c
@@ -4314,6 +4314,11 @@ bool tcp_alloc_md5sig_pool(void)
}
EXPORT_SYMBOL(tcp_alloc_md5sig_pool);
+bool tcp_md5sig_pool_ready(void)
+{
+ return tcp_md5sig_pool_populated;
+}
+EXPORT_SYMBOL(tcp_md5sig_pool_ready);
/**
* tcp_get_md5sig_pool - get md5sig_pool for this user
diff --git a/net/ipv4/tcp_minisocks.c b/net/ipv4/tcp_minisocks.c
index cf913a66df17..c99cdb529902 100644
--- a/net/ipv4/tcp_minisocks.c
+++ b/net/ipv4/tcp_minisocks.c
@@ -293,11 +293,12 @@ void tcp_time_wait(struct sock *sk, int state, int timeo)
tcptw->tw_md5_key = NULL;
if (static_branch_unlikely(&tcp_md5_needed)) {
struct tcp_md5sig_key *key;
+ bool err = WARN_ON(!tcp_md5sig_pool_ready());
key = tp->af_specific->md5_lookup(sk, sk);
- if (key) {
+ if (key && !err) {
tcptw->tw_md5_key = kmemdup(key, sizeof(*key), GFP_ATOMIC);
- BUG_ON(tcptw->tw_md5_key && !tcp_alloc_md5sig_pool());
+ WARN_ON_ONCE(tcptw->tw_md5_key == NULL);
}
}
} while (0);
--
2.33.1