Message-ID: <20230529113804.GA20300@didi-ThinkCentre-M920t-N000>
Date: Mon, 29 May 2023 19:38:42 +0800
From: fuyuanli <fuyuanli@...iglobal.com>
To: Eric Dumazet <edumazet@...gle.com>, "David S. Miller" <davem@...emloft.net>,
	David Ahern <dsahern@...nel.org>, Jakub Kicinski <kuba@...nel.org>,
	Paolo Abeni <pabeni@...hat.com>, Neal Cardwell <ncardwell@...gle.com>
CC: <netdev@...r.kernel.org>, Jason Xing <kerneljasonxing@...il.com>,
	zhangweiping <zhangweiping@...iglobal.com>, tiozhang <tiozhang@...iglobal.com>,
	<linux-kernel@...r.kernel.org>, <bpf@...r.kernel.org>
Subject: [PATCH net] tcp: introduce a compack timer handler in sack compression

We've hit several issues when sending a compressed ack is deferred to the
release phase because the socket is owned by another user:

1. the compressed ack is never sent, because the ICSK_ACK_TIMER flag is
   not set.
2. the tp->compressed_ack counter is not decremented by 1 as it should be.
3. the timeout check in tcp_delack_timer_handler() fails, so the delack
   timer is not reset.
4. the LINUX_MIB_DELAYEDACKS counter is incremented even though no
   delayed ack is involved.

These happen because we previously reused the delayed-ack logic when
handling sack compression. With this patch applied, the sack compression
logic goes through its own function (tcp_compack_timer_handler()) whether
or not sending the ack is deferred, so the issues above are easily solved.

More details on the old logic:

When sack compression is triggered in tcp_compressed_ack_kick() and the
sock is owned by user, it sets TCP_DELACK_TIMER_DEFERRED and defers the
work to the release_cb phase. Later, once the user releases the sock,
tcp_delack_timer_handler() is expected to send an ack, which, however,
cannot happen due to the lack of the ICSK_ACK_TIMER flag. The receiver
therefore does not send an ack until the sender's retransmission timer
expires, which adds unnecessary latency. This issue happens rarely in the
production environment.
I used kprobe to hook some key functions (tcp_compressed_ack_kick,
tcp_release_cb, tcp_delack_timer_handler) and found that when
tcp_delack_timer_handler() was called, the value of icsk_ack.pending was 1,
which means only the ICSK_ACK_SCHED flag was set, not ICSK_ACK_TIMER. That
was against our expectations.

In conclusion, we chose to separate the sack compression from the delayed
ack logic to solve the issues that only happen when the handler is
deferred.

Fixes: 5d9f4262b7ea ("tcp: add SACK compression")
Signed-off-by: fuyuanli <fuyuanli@...iglobal.com>
Signed-off-by: Jason Xing <kerneljasonxing@...il.com>
---
 include/linux/tcp.h   |  2 ++
 include/net/tcp.h     |  1 +
 net/ipv4/tcp_output.c |  4 ++++
 net/ipv4/tcp_timer.c  | 28 +++++++++++++++++++---------
 4 files changed, 26 insertions(+), 9 deletions(-)

diff --git a/include/linux/tcp.h b/include/linux/tcp.h
index b4c08ac86983..cd15a9972c48 100644
--- a/include/linux/tcp.h
+++ b/include/linux/tcp.h
@@ -461,6 +461,7 @@ enum tsq_enum {
 	TCP_MTU_REDUCED_DEFERRED, /* tcp_v{4|6}_err() could not call
				   * tcp_v{4|6}_mtu_reduced()
				   */
+	TCP_COMPACK_TIMER_DEFERRED, /* tcp_compressed_ack_kick() found socket was owned */
 };
 
 enum tsq_flags {
@@ -470,6 +471,7 @@ enum tsq_flags {
 	TCPF_WRITE_TIMER_DEFERRED = (1UL << TCP_WRITE_TIMER_DEFERRED),
 	TCPF_DELACK_TIMER_DEFERRED = (1UL << TCP_DELACK_TIMER_DEFERRED),
 	TCPF_MTU_REDUCED_DEFERRED = (1UL << TCP_MTU_REDUCED_DEFERRED),
+	TCPF_COMPACK_TIMER_DEFERRED = (1UL << TCP_COMPACK_TIMER_DEFERRED),
 };
 
 #define tcp_sk(ptr) container_of_const(ptr, struct tcp_sock, inet_conn.icsk_inet.sk)
diff --git a/include/net/tcp.h b/include/net/tcp.h
index 18a038d16434..e310d7bf400c 100644
--- a/include/net/tcp.h
+++ b/include/net/tcp.h
@@ -342,6 +342,7 @@ void tcp_release_cb(struct sock *sk);
 void tcp_wfree(struct sk_buff *skb);
 void tcp_write_timer_handler(struct sock *sk);
 void tcp_delack_timer_handler(struct sock *sk);
+void tcp_compack_timer_handler(struct sock *sk);
 int tcp_ioctl(struct sock *sk, int cmd, unsigned long arg);
 int tcp_rcv_state_process(struct sock *sk, struct sk_buff *skb);
 void tcp_rcv_established(struct sock *sk, struct sk_buff *skb);
diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c
index cfe128b81a01..1703caab6632 100644
--- a/net/ipv4/tcp_output.c
+++ b/net/ipv4/tcp_output.c
@@ -1110,6 +1110,10 @@ void tcp_release_cb(struct sock *sk)
 		tcp_delack_timer_handler(sk);
 		__sock_put(sk);
 	}
+	if (flags & TCPF_COMPACK_TIMER_DEFERRED) {
+		tcp_compack_timer_handler(sk);
+		__sock_put(sk);
+	}
 	if (flags & TCPF_MTU_REDUCED_DEFERRED) {
 		inet_csk(sk)->icsk_af_ops->mtu_reduced(sk);
 		__sock_put(sk);
diff --git a/net/ipv4/tcp_timer.c b/net/ipv4/tcp_timer.c
index b839c2f91292..069f6442069b 100644
--- a/net/ipv4/tcp_timer.c
+++ b/net/ipv4/tcp_timer.c
@@ -318,6 +318,23 @@ void tcp_delack_timer_handler(struct sock *sk)
 	}
 }
 
+/* Called with BH disabled */
+void tcp_compack_timer_handler(struct sock *sk)
+{
+	struct tcp_sock *tp = tcp_sk(sk);
+
+	if ((1 << sk->sk_state) & (TCPF_CLOSE | TCPF_LISTEN))
+		return;
+
+	if (tp->compressed_ack) {
+		/* Since we have to send one ack finally,
+		 * subtract one from tp->compressed_ack to keep
+		 * LINUX_MIB_TCPACKCOMPRESSED accurate.
+		 */
+		tp->compressed_ack--;
+		tcp_send_ack(sk);
+	}
+}
 
 /**
  * tcp_delack_timer() - The TCP delayed ACK timeout handler
@@ -757,16 +774,9 @@ static enum hrtimer_restart tcp_compressed_ack_kick(struct hrtimer *timer)
 
 	bh_lock_sock(sk);
 	if (!sock_owned_by_user(sk)) {
-		if (tp->compressed_ack) {
-			/* Since we have to send one ack finally,
-			 * subtract one from tp->compressed_ack to keep
-			 * LINUX_MIB_TCPACKCOMPRESSED accurate.
-			 */
-			tp->compressed_ack--;
-			tcp_send_ack(sk);
-		}
+		tcp_compack_timer_handler(sk);
 	} else {
-		if (!test_and_set_bit(TCP_DELACK_TIMER_DEFERRED,
+		if (!test_and_set_bit(TCP_COMPACK_TIMER_DEFERRED,
 				      &sk->sk_tsq_flags))
 			sock_hold(sk);
 	}
-- 
2.17.1