Message-ID: <bbefd183-83be-a165-6a82-53100b5ace70@drivenets.com>
Date: Mon, 26 Apr 2021 05:32:46 +0300
From: Leonard Crestez <lcrestez@...venets.com>
To: Matt Mathis <mattmathis@...gle.com>
Cc: Willem de Bruijn <willemb@...gle.com>,
Neal Cardwell <ncardwell@...gle.com>,
Ilya Lesokhin <ilyal@...lanox.com>,
"David S. Miller" <davem@...emloft.net>,
Eric Dumazet <edumazet@...gle.com>,
Jakub Kicinski <kuba@...nel.org>,
Hideaki YOSHIFUJI <yoshfuji@...ux-ipv6.org>,
David Ahern <dsahern@...nel.org>,
Soheil Hassas Yeganeh <soheil@...gle.com>,
Roopa Prabhu <roopa@...ulusnetworks.com>,
netdev <netdev@...r.kernel.org>, linux-kernel@...r.kernel.org,
Yuchung Cheng <ycheng@...gle.com>,
John Heffner <johnwheffner@...il.com>
Subject: Re: Fwd: [RFC] tcp: Delay sending non-probes for RFC4821 mtu probing
On 4/21/21 7:45 PM, Matt Mathis wrote:
> (Resending in plain text mode)
>
> Surely there is a way to adapt tcp_tso_should_defer(), it is trying to
> solve a similar problem.
>
> If I were to implement PLPMTUD today, I would more deeply entwine it
> into TCP's support for TSO. e.g. successfully deferring segments
> sometimes enables TSO and sometimes enables PLPMTUD.
The mechanisms for delaying sending are difficult to understand; this
RFC just added a brand-new, unrelated timer. Intertwining it with the
existing mechanisms would indeed be better. On a closer look it seems
that those mechanisms are not actually based on a timer but on other
heuristics.
It seems that tcp_sendmsg will call tcp_push_one once the skb at the
head of the queue reaches tcp_xmit_size_goal, and tcp_xmit_size_goal
does not take mtu probing into account. In practice this means that
application-limited streams won't perform mtu probing unless a single
write is at least 5*mss + probe_size (1*mss over size_needed).
I sent a different RFC which tries to modify tcp_xmit_size_goal.
> But there is a deeper question: John Heffner and I invested a huge
> amount of energy in trying to make PLPMTUD work for opportunistic
> Jumbo discovery, only to discover that we had moved the problem down
> to the device driver/nic, where it isn't so readily solvable.
>
> The driver needs to carve nic buffer memory before it can communicate
> with a switch (to either ask or measure the MTU), and once it has done
> that it needs to either re-carve the memory or run with suboptimal
> carving. Both of these are problematic.
>
> There is also a problem that many link technologies will
> non-deterministically deliver jumbo frames at greatly increased error
> rates. This issue requires a long conversation on its own.
I'm looking to improve this for tunnels that don't correctly send ICMP
packet-too-big messages, the hardware is assumed to be fine.
--
Regards,
Leonard