Message-Id: <20220214092514.420537078@linuxfoundation.org>
Date: Mon, 14 Feb 2022 10:26:07 +0100
From: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
To: linux-kernel@...r.kernel.org
Cc: Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
stable@...r.kernel.org, Eric Dumazet <edumazet@...gle.com>,
syzbot <syzkaller@...glegroups.com>,
Talal Ahmad <talalahmad@...gle.com>,
Arjun Roy <arjunroy@...gle.com>,
Willem de Bruijn <willemb@...gle.com>,
Soheil Hassas Yeganeh <soheil@...gle.com>,
Jakub Kicinski <kuba@...nel.org>,
Sasha Levin <sashal@...nel.org>
Subject: [PATCH 5.16 123/203] tcp: take care of mixed splice()/sendmsg(MSG_ZEROCOPY) case
From: Eric Dumazet <edumazet@...gle.com>
[ Upstream commit f8d9d938514f46c4892aff6bfe32f425e84d81cc ]
syzbot found that mixing sendpage() and sendmsg(MSG_ZEROCOPY)
calls over the same TCP socket would again trigger the
infamous warning in inet_sock_destruct()
WARN_ON(sk_forward_alloc_get(sk));
While Talal's earlier fix took into account a mix of regular
copied data and MSG_ZEROCOPY data in the same skb, the
sendpage() path was forgotten.
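For illustration, a minimal userspace sketch of the mix described
above, assuming a connected TCP socket fd; it is not the syzbot
reproducer, error handling is omitted, and the MSG_ZEROCOPY
completion notifications are not drained:

/* Hypothetical sketch, not the syzbot reproducer: queue pure
 * MSG_ZEROCOPY data, then splice() pipe data into the same TCP
 * socket so the pipe pages go through the sendpage() path and
 * may be appended to the still-open zerocopy skb.
 */
#define _GNU_SOURCE
#include <fcntl.h>
#include <sys/socket.h>
#include <unistd.h>

#ifndef SO_ZEROCOPY
#define SO_ZEROCOPY 60
#endif
#ifndef MSG_ZEROCOPY
#define MSG_ZEROCOPY 0x4000000
#endif

static void mix_zerocopy_and_splice(int fd)	/* fd: connected TCP socket */
{
	static char buf[4096];
	int one = 1;
	int pfd[2];

	/* opt in to MSG_ZEROCOPY on this socket */
	setsockopt(fd, SOL_SOCKET, SO_ZEROCOPY, &one, sizeof(one));

	/* pure zerocopy skb at the tail of the write queue; MSG_MORE
	 * keeps it open for coalescing with later data
	 */
	send(fd, buf, sizeof(buf), MSG_ZEROCOPY | MSG_MORE);

	/* pipe pages pushed through splice() -> sendpage() */
	pipe(pfd);
	write(pfd[1], buf, sizeof(buf));
	splice(pfd[0], NULL, fd, NULL, sizeof(buf), 0);
}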
We want the charging to happen for sendpage(), because
pages could be coming from a pipe. What is missing is the
downgrading of pure zerocopy status to make sure
sk_forward_alloc will stay synced.
Add tcp_downgrade_zcopy_pure() helper so that we can
use it from the two callers.
Fixes: 9b65b17db723 ("net: avoid double accounting for pure zerocopy skbs")
Signed-off-by: Eric Dumazet <edumazet@...gle.com>
Reported-by: syzbot <syzkaller@...glegroups.com>
Cc: Talal Ahmad <talalahmad@...gle.com>
Cc: Arjun Roy <arjunroy@...gle.com>
Cc: Willem de Bruijn <willemb@...gle.com>
Acked-by: Soheil Hassas Yeganeh <soheil@...gle.com>
Link: https://lore.kernel.org/r/20220203225547.665114-1-eric.dumazet@gmail.com
Signed-off-by: Jakub Kicinski <kuba@...nel.org>
Signed-off-by: Sasha Levin <sashal@...nel.org>
---
net/ipv4/tcp.c | 33 +++++++++++++++++++--------------
1 file changed, 19 insertions(+), 14 deletions(-)
diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
index 94cbba9fb12b1..28abb0bb1c515 100644
--- a/net/ipv4/tcp.c
+++ b/net/ipv4/tcp.c
@@ -936,6 +936,22 @@ void tcp_remove_empty_skb(struct sock *sk)
 	}
 }
 
+/* skb changing from pure zc to mixed, must charge zc */
+static int tcp_downgrade_zcopy_pure(struct sock *sk, struct sk_buff *skb)
+{
+	if (unlikely(skb_zcopy_pure(skb))) {
+		u32 extra = skb->truesize -
+			    SKB_TRUESIZE(skb_end_offset(skb));
+
+		if (!sk_wmem_schedule(sk, extra))
+			return -ENOMEM;
+
+		sk_mem_charge(sk, extra);
+		skb_shinfo(skb)->flags &= ~SKBFL_PURE_ZEROCOPY;
+	}
+	return 0;
+}
+
 static struct sk_buff *tcp_build_frag(struct sock *sk, int size_goal, int flags,
 				      struct page *page, int offset, size_t *size)
 {
@@ -971,7 +987,7 @@ static struct sk_buff *tcp_build_frag(struct sock *sk, int size_goal, int flags,
 		tcp_mark_push(tp, skb);
 		goto new_segment;
 	}
-	if (!sk_wmem_schedule(sk, copy))
+	if (tcp_downgrade_zcopy_pure(sk, skb) || !sk_wmem_schedule(sk, copy))
 		return NULL;
 
 	if (can_coalesce) {
@@ -1319,19 +1335,8 @@ int tcp_sendmsg_locked(struct sock *sk, struct msghdr *msg, size_t size)
 
 			copy = min_t(int, copy, pfrag->size - pfrag->offset);
 
-			/* skb changing from pure zc to mixed, must charge zc */
-			if (unlikely(skb_zcopy_pure(skb))) {
-				u32 extra = skb->truesize -
-					    SKB_TRUESIZE(skb_end_offset(skb));
-
-				if (!sk_wmem_schedule(sk, extra))
-					goto wait_for_space;
-
-				sk_mem_charge(sk, extra);
-				skb_shinfo(skb)->flags &= ~SKBFL_PURE_ZEROCOPY;
-			}
-
-			if (!sk_wmem_schedule(sk, copy))
+			if (tcp_downgrade_zcopy_pure(sk, skb) ||
+			    !sk_wmem_schedule(sk, copy))
 				goto wait_for_space;
 
 			err = skb_copy_to_page_nocache(sk, &msg->msg_iter, skb,
--
2.34.1