Message-ID: <1666941754-10216-1-git-send-email-wangyufen@huawei.com>
Date: Fri, 28 Oct 2022 15:22:34 +0800
From: Wang Yufen <wangyufen@...wei.com>
To: <bpf@...r.kernel.org>, <netdev@...r.kernel.org>
CC: <john.fastabend@...il.com>, <jakub@...udflare.com>
Subject: [PATCH net] bpf, sockmap: fix the sk->sk_forward_alloc warning of sk_stream_kill_queues()
When running the `test_sockmap` selftests, the following warning is reported:
WARNING: CPU: 2 PID: 197 at net/core/stream.c:205 sk_stream_kill_queues+0xd3/0xf0
Call Trace:
<TASK>
inet_csk_destroy_sock+0x55/0x110
tcp_rcv_state_process+0xd28/0x1380
? tcp_v4_do_rcv+0x77/0x2c0
tcp_v4_do_rcv+0x77/0x2c0
__release_sock+0x106/0x130
__tcp_close+0x1a7/0x4e0
tcp_close+0x20/0x70
inet_release+0x3c/0x80
__sock_release+0x3a/0xb0
sock_close+0x14/0x20
__fput+0xa3/0x260
task_work_run+0x59/0xb0
exit_to_user_mode_prepare+0x1b3/0x1c0
syscall_exit_to_user_mode+0x19/0x50
do_syscall_64+0x48/0x90
entry_SYSCALL_64_after_hwframe+0x44/0xae
The root cause is that in commit 84472b436e76 ("bpf, sockmap: Fix more
uncharged while msg has more_data") I bluntly used msg->sg.size to
replace tosend, which breaks the

	if (msg->apply_bytes && msg->apply_bytes < tosend)

case.
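To illustrate, here is a simplified before/after sketch of the
forward_alloc accounting around the redirect. It is not the exact
tcp_bpf_send_verdict() flow; the origsize/sent bookkeeping is what this
patch adds:

	/* before: return (uncharge) the whole queued size, even though
	 * only the apply_bytes-clamped tosend can be redirected
	 */
	sk_msg_return(sk, msg, msg->sg.size);
	ret = tcp_bpf_sendmsg_redir(sk_redir, msg, tosend, flags);
	if (eval == __SK_REDIRECT)
		sk_mem_charge(sk, msg->sg.size);

	/* after: return only tosend, then charge back whatever was
	 * returned but not actually consumed by the redirect, so the
	 * net uncharge matches the bytes that really left the socket
	 */
	sk_msg_return(sk, msg, tosend);
	origsize = msg->sg.size;
	ret = tcp_bpf_sendmsg_redir(sk_redir, msg, tosend, flags);
	sent = origsize - msg->sg.size;
	if (eval == __SK_REDIRECT && tosend > sent)
		sk_mem_charge(sk, tosend - sent);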
Fixes: 84472b436e76 ("bpf, sockmap: Fix more uncharged while msg has more_data")
Reported-by: Jakub Sitnicki <jakub@...udflare.com>
Signed-off-by: Wang Yufen <wangyufen@...wei.com>
---
net/ipv4/tcp_bpf.c | 10 ++++++----
1 file changed, 6 insertions(+), 4 deletions(-)
diff --git a/net/ipv4/tcp_bpf.c b/net/ipv4/tcp_bpf.c
index a1626af..38d4735 100644
--- a/net/ipv4/tcp_bpf.c
+++ b/net/ipv4/tcp_bpf.c
@@ -278,7 +278,7 @@ static int tcp_bpf_send_verdict(struct sock *sk, struct sk_psock *psock,
{
bool cork = false, enospc = sk_msg_full(msg);
struct sock *sk_redir;
- u32 tosend, delta = 0;
+ u32 tosend, origsize, sent, delta = 0;
u32 eval = __SK_NONE;
int ret;
@@ -333,10 +333,12 @@ static int tcp_bpf_send_verdict(struct sock *sk, struct sk_psock *psock,
cork = true;
psock->cork = NULL;
}
- sk_msg_return(sk, msg, msg->sg.size);
+ sk_msg_return(sk, msg, tosend);
release_sock(sk);
+ origsize = msg->sg.size;
ret = tcp_bpf_sendmsg_redir(sk_redir, msg, tosend, flags);
+ sent = origsize - msg->sg.size;
if (eval == __SK_REDIRECT)
sock_put(sk_redir);
@@ -374,8 +376,8 @@ static int tcp_bpf_send_verdict(struct sock *sk, struct sk_psock *psock,
if (msg &&
msg->sg.data[msg->sg.start].page_link &&
msg->sg.data[msg->sg.start].length) {
- if (eval == __SK_REDIRECT)
- sk_mem_charge(sk, msg->sg.size);
+ if (eval == __SK_REDIRECT && tosend > sent)
+ sk_mem_charge(sk, tosend - sent);
goto more_data;
}
}
--
1.8.3.1