Message-ID: <20250617144017.82931-13-maxim@isovalent.com>
Date: Tue, 17 Jun 2025 16:40:11 +0200
From: Maxim Mikityanskiy <maxtram95@...il.com>
To: Daniel Borkmann <daniel@...earbox.net>,
"David S. Miller" <davem@...emloft.net>,
Eric Dumazet <edumazet@...gle.com>,
Jakub Kicinski <kuba@...nel.org>,
Paolo Abeni <pabeni@...hat.com>,
Willem de Bruijn <willemdebruijn.kernel@...il.com>,
David Ahern <dsahern@...nel.org>,
Nikolay Aleksandrov <razor@...ckwall.org>
Cc: netdev@...r.kernel.org,
Maxim Mikityanskiy <maxim@...valent.com>
Subject: [PATCH RFC net-next 12/17] net: Enable BIG TCP with partial GSO
From: Maxim Mikityanskiy <maxim@...valent.com>

skb_segment is called for partial GSO when netif_needs_gso returns
true in validate_xmit_skb. Partial GSO is needed, for example, when
segmentation of tunneled traffic is offloaded to a NIC that only
supports inner checksum offload.
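
As a reminder, the relevant logic looks roughly like this (a
simplified sketch of validate_xmit_skb in net/core/dev.c, not the
verbatim code; the error label is illustrative only):

	if (netif_needs_gso(skb, features)) {
		struct sk_buff *segs;

		/* Software-segment the skb; with GSO partial this
		 * produces large multi-MSS segments, and the call
		 * chain ends up in skb_segment().
		 */
		segs = skb_gso_segment(skb, features);
		if (IS_ERR(segs))
			goto out_kfree_skb;	/* sketch-only label */
		if (segs)
			skb = segs;
	}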

Currently, skb_segment clamps the length used for segmentation to
65534 bytes, because gso_size == 65535 is the special value
GSO_BY_FRAGS, and we don't want mss *= partial_segs to accidentally
produce mss == 65535, as it would then hit the GSO_BY_FRAGS check
further down in the function.
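
To see the hazard the clamp guards against, consider this standalone
userspace demo (the mss and len values are made up, chosen so that
the unclamped product lands exactly on 65535):

	#include <stdio.h>

	#define GSO_BY_FRAGS 65535U

	int main(void)
	{
		unsigned int mss = 4369;	/* hypothetical gso_size */
		unsigned int len = 66000;	/* > 65534, as with BIG TCP */
		unsigned int partial_segs;

		/* Old code: clamp len, so mss * partial_segs < 65535 */
		partial_segs = (len < GSO_BY_FRAGS - 1 ?
				len : GSO_BY_FRAGS - 1) / mss;
		printf("clamped:   mss = %u\n", mss * partial_segs);

		/* Without the clamp: 4369 * 15 == 65535 == GSO_BY_FRAGS,
		 * so later checks would misread mss as the special value.
		 */
		partial_segs = len / mss;
		printf("unclamped: mss = %u\n", mss * partial_segs);
		return 0;
	}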

This clamp, however, artificially rejects len > 65534, which is
possible since the introduction of BIG TCP. To allow bigger lengths
and avoid resegmenting BIG TCP packets, latch the GSO_BY_FRAGS
condition into a boolean at the beginning of the function, and stop
using the special value of mss for this check after mss has been
modified.
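
In a nutshell, the pattern is the following (standalone sketch, not
the kernel code; scale_mss is a made-up helper):

	#include <stdbool.h>

	#define GSO_BY_FRAGS 65535U

	static unsigned int scale_mss(unsigned int mss, unsigned int len)
	{
		/* Latch the special value before mss is scaled */
		bool gso_by_frags = mss == GSO_BY_FRAGS;

		if (!gso_by_frags) {
			unsigned int partial_segs = len / mss;

			if (partial_segs > 1)
				mss *= partial_segs; /* may exceed 65534 */
		}
		/* Later code must test gso_by_frags, never
		 * mss == GSO_BY_FRAGS.
		 */
		return mss;
	}
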
Signed-off-by: Maxim Mikityanskiy <maxim@...valent.com>
---
net/core/skbuff.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index 85fc82f72d26..43b6d638a702 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -4696,6 +4696,7 @@ struct sk_buff *skb_segment(struct sk_buff *head_skb,
 	struct sk_buff *tail = NULL;
 	struct sk_buff *list_skb = skb_shinfo(head_skb)->frag_list;
 	unsigned int mss = skb_shinfo(head_skb)->gso_size;
+	bool gso_by_frags = mss == GSO_BY_FRAGS;
 	unsigned int doffset = head_skb->data - skb_mac_header(head_skb);
 	unsigned int offset = doffset;
 	unsigned int tnl_hlen = skb_tnl_header_len(head_skb);
@@ -4711,7 +4712,7 @@ struct sk_buff *skb_segment(struct sk_buff *head_skb,
 	int nfrags, pos;
 
 	if ((skb_shinfo(head_skb)->gso_type & SKB_GSO_DODGY) &&
-	    mss != GSO_BY_FRAGS && mss != skb_headlen(head_skb)) {
+	    !gso_by_frags && mss != skb_headlen(head_skb)) {
 		struct sk_buff *check_skb;
 
 		for (check_skb = list_skb; check_skb; check_skb = check_skb->next) {
@@ -4739,7 +4740,7 @@ struct sk_buff *skb_segment(struct sk_buff *head_skb,
 	sg = !!(features & NETIF_F_SG);
 	csum = !!can_checksum_protocol(features, proto);
 
-	if (sg && csum && (mss != GSO_BY_FRAGS)) {
+	if (sg && csum && !gso_by_frags) {
 		if (!(features & NETIF_F_GSO_PARTIAL)) {
 			struct sk_buff *iter;
 			unsigned int frag_len;
@@ -4773,9 +4774,8 @@ struct sk_buff *skb_segment(struct sk_buff *head_skb,
 		/* GSO partial only requires that we trim off any excess that
 		 * doesn't fit into an MSS sized block, so take care of that
 		 * now.
-		 * Cap len to not accidentally hit GSO_BY_FRAGS.
 		 */
-		partial_segs = min(len, GSO_BY_FRAGS - 1) / mss;
+		partial_segs = len / mss;
 		if (partial_segs > 1)
 			mss *= partial_segs;
 		else
@@ -4799,7 +4799,7 @@ struct sk_buff *skb_segment(struct sk_buff *head_skb,
 		int hsize;
 		int size;
 
-		if (unlikely(mss == GSO_BY_FRAGS)) {
+		if (unlikely(gso_by_frags)) {
 			len = list_skb->len;
 		} else {
 			len = head_skb->len - offset;
--
2.49.0