Message-ID: <20221012103844.1095777-1-luwei32@huawei.com>
Date: Wed, 12 Oct 2022 18:38:44 +0800
From: Lu Wei <luwei32@...wei.com>
To: <davem@...emloft.net>, <edumazet@...gle.com>, <kuba@...nel.org>,
<pabeni@...hat.com>, <yoshfuji@...ux-ipv6.org>,
<dsahern@...nel.org>, <netdev@...r.kernel.org>,
<linux-kernel@...r.kernel.org>
Subject: [PATCH -next] tcp: fix a signed-integer-overflow bug in tcp_add_backlog()

The type of sk_rcvbuf and sk_sndbuf in struct sock is int, and in
tcp_add_backlog() the variable limit is calculated by adding sk_rcvbuf,
sk_sndbuf and 64 * 1024. This sum may exceed the maximum value of u32
and be truncated, so change the type of limit to u64 to avoid a
potential signed-integer overflow, which would otherwise cause
sk_rcvqueues_full() to return the opposite result.
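
For illustration, here is a minimal userspace sketch (not part of the
patch; the buffer values are hypothetical extremes, and the casts to
u32 model the truncated result rather than the in-kernel int addition)
showing how the wrapped limit flips the qsize > limit comparison:

#include <stdio.h>
#include <stdint.h>
#include <limits.h>

int main(void)
{
	int rcvbuf = INT_MAX;	/* hypothetical extreme sk_rcvbuf */
	int sndbuf = INT_MAX;	/* hypothetical extreme sk_sndbuf */

	/* Summed in 32 bits, the result wraps modulo 2^32 ... */
	uint32_t limit32 = (uint32_t)rcvbuf + (uint32_t)sndbuf + 64 * 1024;
	/* ... while widening to 64 bits first keeps the real value. */
	uint64_t limit64 = (uint64_t)rcvbuf + (uint64_t)sndbuf + 64 * 1024;

	unsigned int qsize = 1024 * 1024;	/* far below the intended limit */

	printf("u32 limit = %u\n", limit32);	/* 65534, wrapped */
	printf("u64 limit = %llu\n", (unsigned long long)limit64);
	printf("qsize > u32 limit: %d\n", qsize > limit32);	/* 1: queue wrongly "full" */
	printf("qsize > u64 limit: %d\n", qsize > limit64);	/* 0: correct */
	return 0;
}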
Signed-off-by: Lu Wei <luwei32@...wei.com>
---
include/net/sock.h | 4 ++--
net/ipv4/tcp_ipv4.c | 6 ++++--
2 files changed, 6 insertions(+), 4 deletions(-)

diff --git a/include/net/sock.h b/include/net/sock.h
index 08038a385ef2..fc0fa29d8865 100644
--- a/include/net/sock.h
+++ b/include/net/sock.h
@@ -1069,7 +1069,7 @@ static inline void __sk_add_backlog(struct sock *sk, struct sk_buff *skb)
* Do not take into account this skb truesize,
* to allow even a single big packet to come.
*/
-static inline bool sk_rcvqueues_full(const struct sock *sk, unsigned int limit)
+static inline bool sk_rcvqueues_full(const struct sock *sk, u64 limit)
{
unsigned int qsize = sk->sk_backlog.len + atomic_read(&sk->sk_rmem_alloc);

@@ -1078,7 +1078,7 @@ static inline bool sk_rcvqueues_full(const struct sock *sk, unsigned int limit)

/* The per-socket spinlock must be held here. */
static inline __must_check int sk_add_backlog(struct sock *sk, struct sk_buff *skb,
- unsigned int limit)
+ u64 limit)
{
if (sk_rcvqueues_full(sk, limit))
return -ENOBUFS;
diff --git a/net/ipv4/tcp_ipv4.c b/net/ipv4/tcp_ipv4.c
index 6376ad915765..3d4f9ac64165 100644
--- a/net/ipv4/tcp_ipv4.c
+++ b/net/ipv4/tcp_ipv4.c
@@ -1769,7 +1769,8 @@ int tcp_v4_early_demux(struct sk_buff *skb)
bool tcp_add_backlog(struct sock *sk, struct sk_buff *skb,
enum skb_drop_reason *reason)
{
- u32 limit, tail_gso_size, tail_gso_segs;
+ u32 tail_gso_size, tail_gso_segs;
+ u64 limit;
struct skb_shared_info *shinfo;
const struct tcphdr *th;
struct tcphdr *thtail;
@@ -1878,7 +1879,8 @@ bool tcp_add_backlog(struct sock *sk, struct sk_buff *skb,
* to reduce memory overhead, so add a little headroom here.
* Few sockets backlog are possibly concurrently non empty.
*/
- limit = READ_ONCE(sk->sk_rcvbuf) + READ_ONCE(sk->sk_sndbuf) + 64*1024;
+ limit = (u64)READ_ONCE(sk->sk_rcvbuf) +
+ (u64)READ_ONCE(sk->sk_sndbuf) + 64*1024;
if (unlikely(sk_add_backlog(sk, skb, limit))) {
bh_unlock_sock(sk);
--
2.31.1