Message-ID: <CADVnQy=BMM2e8VokChK1E3bV61-LzWYuKe92eqHKgwvS28qCcg@mail.gmail.com>
Date: Mon, 13 Jan 2025 11:48:09 -0500
From: Neal Cardwell <ncardwell@...gle.com>
To: Eric Dumazet <edumazet@...gle.com>
Cc: "David S . Miller" <davem@...emloft.net>, Jakub Kicinski <kuba@...nel.org>,
Paolo Abeni <pabeni@...hat.com>, netdev@...r.kernel.org, Simon Horman <horms@...nel.org>,
Kuniyuki Iwashima <kuniyu@...zon.com>, Jason Xing <kerneljasonxing@...il.com>, eric.dumazet@...il.com
Subject: Re: [PATCH v2 net-next 3/3] tcp: add LINUX_MIB_PAWS_OLD_ACK SNMP counter
On Mon, Jan 13, 2025 at 9:28 AM Neal Cardwell <ncardwell@...gle.com> wrote:
>
> On Mon, Jan 13, 2025 at 8:56 AM Eric Dumazet <edumazet@...gle.com> wrote:
> >
> > Prior patch in the series added TCP_RFC7323_PAWS_ACK drop reason.
> >
> > This patch adds the corresponding SNMP counter, for folks
> > using nstat instead of tracing for TCP diagnostics.
> >
> > nstat -az | grep PAWSOldAck
> >
> > Suggested-by: Neal Cardwell <ncardwell@...gle.com>
> > Signed-off-by: Eric Dumazet <edumazet@...gle.com>
> > ---
> >  include/net/dropreason-core.h | 1 +
> >  include/uapi/linux/snmp.h     | 1 +
> >  net/ipv4/proc.c               | 1 +
> >  net/ipv4/tcp_input.c          | 7 ++++---
> >  4 files changed, 7 insertions(+), 3 deletions(-)
>
> Looks great to me. Thanks, Eric!
>
> Reviewed-by: Neal Cardwell <ncardwell@...gle.com>
Wrote a little packetdrill test for this, and it passes, as expected...
Tested-by: Neal Cardwell <ncardwell@...gle.com>
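For anyone finding this in the archives: modulo exact context, the
shape of the change is roughly the sketch below (not the literal
diff; see Eric's patch for the real hunks):

  /* include/uapi/linux/snmp.h: new enum entry */
  LINUX_MIB_PAWS_OLD_ACK,                 /* PAWSOldAck */

  /* net/ipv4/proc.c: expose it to nstat as TcpExtPAWSOldAck */
  SNMP_MIB_ITEM("PAWSOldAck", LINUX_MIB_PAWS_OLD_ACK),

  /* net/ipv4/tcp_input.c: on the TCP_RFC7323_PAWS_ACK drop path
   * (a pure ACK with an old sequence number that fails PAWS),
   * bump the counter and discard without sending a dupack:
   */
  NET_INC_STATS(sock_net(sk), LINUX_MIB_PAWS_OLD_ACK);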
neal
ps: the packetdrill test, which I'm tentatively calling
gtests/net/tcp/paws/paws-disordered-ack-old-seq-discard.pkt:
---
// Test PAWS processing for reordered pure ACK packets with old sequence
// numbers. These are common in practice, for the following reason:
// ACKs are often generated on a different CPU than data segments,
// and thus transmitted on a different tx queue than the data segments,
// which can reorder older ACKs behind newer data segments
// if the ACKs end up in a longer queue than the data segments.
// Since this 2024 commit, we simply discard such packets:
//   tcp: add TCP_RFC7323_PAWS_ACK drop reason
// Check outgoing TCP timestamp values.
--tcp_ts_tick_usecs=1000
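// (1000 usecs per timestamp clock tick, so the expected outgoing TS val
// deltas below line up with the script's +.010/+.300 time deltas.)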
// Set up config.
`../common/defaults.sh`
// Establish a connection.
0 socket(..., SOCK_STREAM, IPPROTO_TCP) = 3
+0 setsockopt(3, SOL_SOCKET, SO_REUSEADDR, [1], 4) = 0
+0 bind(3, ..., ...) = 0
+0 listen(3, 1) = 0
+0 < S 0:0(0) win 32792 <mss 1012,sackOK,TS val 2000 ecr 0,nop,wscale 7>
+0 > S. 0:0(0) ack 1 <mss 1460,sackOK,TS val 1000 ecr 2000,nop,wscale 8>
+.010 < . 1:1(0) ack 1 win 257 <nop, nop, TS val 2010 ecr 1000>
+0 accept(3, ..., ...) = 4
+0 %{ TCP_INFINITE_SSTHRESH = 0x7fffffff }%
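// (The %{ ... }% blocks are Python; the tcpi_* names used below are
// values packetdrill captures from TCP_INFO on the test socket.)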
// Send a request.
+0 write(4, ..., 1000) = 1000
+0 > P. 1:1001(1000) ack 1 <nop, nop, TS val 1010 ecr 2010>
+0 %{ assert tcpi_ca_state == TCP_CA_Open }%
+0 %{ assert tcpi_snd_cwnd == 10, tcpi_snd_cwnd }%
+0 %{ assert tcpi_snd_ssthresh == TCP_INFINITE_SSTHRESH, tcpi_snd_ssthresh }%
// The peer received our request and ACKed it immediately, and then
// sent a data segment in reply. However, the ACK is reordered
// behind the reply data segment.
// First we receive a response packet.
+.010 < . 1:1001(1000) ack 1001 win 257 <nop, nop, TS val 2022 ecr 1010>
+0 > . 1001:1001(0) ack 1001 <nop, nop, TS val 1020 ecr 2022>
// Reset nstat counters, in case we're not in a new network namespace:
+0 `nstat -n`
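// At this point our ts_recent is 2022, from the in-order data segment
// above, so the older TS val 2020 on the reordered ACK below will fail
// the PAWS check.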
// Then we receive a disordered pure ACK packet with an old sequence number.
+0 < . 1:1(0) ack 1001 win 257 <nop, nop, TS val 2020 ecr 1010>
// Because it fails PAWS but is an expected kind of pure ACK with
// an old sequence number, we don't send a dupack.
// But verify that we increment TcpExtPAWSOldAck.
+0 `nstat | grep TcpExtPAWSOldAck | grep -q " 1 "`
// Wait a while to verify we don't send a dupack for that disordered ACK.
// Then expect another data transmit, so that packetdrill sniffs that
// packet and would see any erroneous dupack first, if we had sent one.
+.300 write(4, ..., 1000) = 1000
+0 > P. 1001:2001(1000) ack 1001 <nop, nop, TS val 1320 ecr 2022>
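
(To try it, assuming a packetdrill checkout with the usual gtests/
layout, running something like
"packetdrill ./paws-disordered-ack-old-seq-discard.pkt" from the
script's directory, with the packetdrill binary on your PATH, should
do it.)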