Message-ID: <a7a89aa2-7354-42c7-8219-99a3cafd3b33@redhat.com>
Date: Tue, 15 Jul 2025 10:25:31 +0200
From: Paolo Abeni <pabeni@...hat.com>
To: Eric Dumazet <edumazet@...gle.com>, Neal Cardwell <ncardwell@...gle.com>,
 "Matthieu Baerts (NGI0)" <matttbe@...nel.org>
Cc: Simon Horman <horms@...nel.org>, Kuniyuki Iwashima <kuniyu@...gle.com>,
 Willem de Bruijn <willemb@...gle.com>, netdev@...r.kernel.org,
 eric.dumazet@...il.com, "David S . Miller" <davem@...emloft.net>,
 Jakub Kicinski <kuba@...nel.org>
Subject: Re: [PATCH net-next 0/8] tcp: receiver changes

On 7/11/25 1:39 PM, Eric Dumazet wrote:
> Before accepting an incoming packet:
> 
> - Make sure to not accept a packet beyond advertized RWIN.
>   If not, increment a new SNMP counter (LINUX_MIB_BEYOND_WINDOW)
> 
> - ooo packets should update rcv_mss and tp->scaling_ratio.
> 
> - Make sure to not accept packet beyond sk_rcvbuf limit.
> 
> This series includes three associated packetdrill tests.

I suspect this series is causing packetdrill failures for the
tcp_rcv_big_endseq.pkt test case:

# selftests: net/packetdrill: tcp_rcv_big_endseq.pkt
# TAP version 13
# 1..2
# tcp_rcv_big_endseq.pkt:41: error handling packet: timing error:
expected outbound packet at 1.347964 sec but happened at 1.307939 sec;
tolerance 0.014000 sec
# script packet:  1.347964 . 1:1(0) ack 54001 win 0
# actual packet:  1.307939 . 1:1(0) ack 54001 win 0
# not ok 1 ipv4
# tcp_rcv_big_endseq.pkt:41: error handling packet: timing error:
expected outbound packet at 1.354946 sec but happened at 1.314923 sec;
tolerance 0.014000 sec
# script packet:  1.354946 . 1:1(0) ack 54001 win 0
# actual packet:  1.314923 . 1:1(0) ack 54001 win 0
# not ok 2 ipv6
# # Totals: pass:0 fail:2 xfail:0 xpass:0 skip:0 error:0

The event is happening _before_ the expected time, so I suspect this is
a functional issue rather than merely a timing one.

I also suspect this series is causing flakes in the mptcp self-tests, e.g.:

# INFO: disconnect
# 63 ns1 MPTCP -> ns1 (10.0.1.1:20001      ) MPTCP     (duration
227ms) [ OK ]
# 64 ns1 MPTCP -> ns1 (10.0.1.1:20002      ) TCP       (duration
96ms) [ OK ]
# 65 ns1 TCP   -> ns1 (10.0.1.1:20003      ) MPTCP     copyfd_io_poll:
poll timed out (events: POLLIN 0, POLLOUT 4)
# copyfd_io_poll: poll timed out (events: POLLIN 1, POLLOUT 0)
# (duration 30318ms) [FAIL] client exit code 2, server 0
#
# netns ns1-VslcTV (listener) socket stat for 20003:
# Netid State      Recv-Q Send-Q Local Address:Port  Peer Address:Port

# tcp   FIN-WAIT-2 0      0           10.0.1.1:20003     10.0.1.1:60698
timer:(timewait,59sec,0) ino:0 sk:1012
#
# tcp   TIME-WAIT  0      0           10.0.1.1:20003     10.0.1.1:60696
timer:(timewait,29sec,0) ino:0 sk:1013
#
# TcpActiveOpens                  3                  0.0
# TcpPassiveOpens                 3                  0.0
# TcpInSegs                       1472               0.0
# TcpOutSegs                      1471               0.0
# TcpRetransSegs                  3                  0.0
# TcpExtPruneCalled               4                  0.0
# TcpExtRcvPruned                 3                  0.0
# TcpExtTW                        3                  0.0
# TcpExtBeyondWindow              7                  0.0
# TcpExtTCPHPHits                 34                 0.0
# TcpExtTCPPureAcks               386                0.0
# TcpExtTCPHPAcks                 33                 0.0
# TcpExtTCPSackRecovery           1                  0.0
# TcpExtTCPFastRetrans            1                  0.0
# TcpExtTCPLossProbes             2                  0.0
# TcpExtTCPLossProbeRecovery      1                  0.0
# TcpExtTCPRcvCollapsed           3                  0.0
# TcpExtTCPBacklogCoalesce        261                0.0
# TcpExtTCPSackShiftFallback      1                  0.0
# TcpExtTCPRcvCoalesce            500                0.0
# TcpExtTCPOFOQueue               1                  0.0
# TcpExtTCPFromZeroWindowAdv      60                 0.0
# TcpExtTCPToZeroWindowAdv        58                 0.0
# TcpExtTCPWantZeroWindowAdv      296                0.0
# TcpExtTCPOrigDataSent           1038               0.0
# TcpExtTCPHystartTrainDetect     1                  0.0
# TcpExtTCPHystartTrainCwnd       16                 0.0
# TcpExtTCPACKSkippedSeq          1                  0.0
# TcpExtTCPWinProbe               7                  0.0
# TcpExtTCPDelivered              1041               0.0
# TcpExtTCPRcvQDrop               2                  0.0
#
# netns ns1-VslcTV (connector) socket stat for 20003:
# Failed to find cgroup2 mount
# Failed to find cgroup2 mount
# Netid State     Recv-Q Send-Q  Local Address:Port  Peer Address:Port

# tcp   TIME-WAIT 0      0            10.0.1.1:60684     10.0.1.1:20003
timer:(timewait,29sec,0) ino:0 sk:11
#
# tcp   LAST-ACK  0      1735147      10.0.1.1:60698     10.0.1.1:20003
timer:(persist,22sec,0) ino:0 sk:12 cgroup:unreachable:1 ---
#  skmem:(r0,rb361100,t0,tb2626560,f2838,w1758442,o0,bl0,d61) ts sack
cubic wscale:7,7 rto:201 backoff:7 rtt:0.12/0.215 ato:40 mss:65483
pmtu:65535 rcvmss:65483 advmss:65483 cwnd:7 ssthresh:7
bytes_sent:1738187 bytes_retrans:65461 bytes_acked:1672727
bytes_received:7659224 segs_out:180 segs_in:243 data_segs_out:103
data_segs_in:221 send 30558733333bps lastsnd:30125 lastrcv:30322
lastack:3693 pacing_rate 36480477512bps delivery_rate 196449000000bps
delivered:103 app_limited busy:30351ms rwnd_limited:30350ms(100.0%)
retrans:0/1 rcv_rtt:0.005 rcv_space:289974 rcv_ssthresh:324480
notsent:1735147 minrtt:0.001 rcv_wnd:324480

@Matttbe: can you reproduce the flakes locally? If so, does reverting
the series stop them? (Not that I'm planning a revert, just to validate
my guess.)

Thanks,

Paolo

