Message-ID: <CANn89iKSrG40FKLpE3-qbftdXs9Goo61JfkmfXX_1=R5XV-=eQ@mail.gmail.com>
Date: Wed, 29 Jan 2025 13:06:49 +0100
From: Eric Dumazet <edumazet@...gle.com>
To: Thomas Bogendoerfer <tbogendoerfer@...e.de>
Cc: Paolo Abeni <pabeni@...hat.com>, "David S. Miller" <davem@...emloft.net>,
Jakub Kicinski <kuba@...nel.org>, Simon Horman <horms@...nel.org>, netdev@...r.kernel.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2 net] gro_cells: Avoid packet re-ordering for cloned skbs
On Wed, Jan 29, 2025 at 12:57 PM Thomas Bogendoerfer
<tbogendoerfer@...e.de> wrote:
>
> On Wed, 29 Jan 2025 12:31:29 +0100
> Thomas Bogendoerfer <tbogendoerfer@...e.de> wrote:
>
> > My test scenario is simple:
> >
> > TCP Sender in namespace A -> ip6_tunnel -> ipvlan -> ipvlan -> ip6_tunnel -> TCP receiver
>
> sorry, messed it up. It looks like this
>
> <- Namespace A -> <- Namespace B ->
> TCP Sender -> ip6_tunnel -> ipvlan -> ipvlan -> ip6_tunnel -> TCP Receiver
>
We are trying to avoid adding costs in the GRO layer (a critical piece of
software for high-speed flows) for a doubtful, possibly obsolete, use case.
BTW, I am still unsure about the skb_cloned() test vs. skb_header_cloned(),
which would solve this case just fine, because the TCP sender does not mind
if some layer wants to change the headers, thanks to the
__skb_header_release() call from tcp_skb_entail().
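Something like this (untested sketch, surrounding context abbreviated, not
the actual patch under discussion):

int gro_cells_receive(struct gro_cells *gcells, struct sk_buff *skb)
{
        struct net_device *dev = skb->dev;
        ...
        /* Bypass the per-cpu GRO cell only when the headers are still
         * shared.  A TCP sender clone is fine: tcp_skb_entail() called
         * __skb_header_release(), so skb_header_cloned() returns false
         * even though skb_cloned() is true, and the packet keeps going
         * through the cell, avoiding re-ordering.
         */
        if (!gcells->cells || skb_header_cloned(skb) ||
            netif_elide_gro(dev))
                return netif_rx(skb);
        ...
}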
"TCP Sender in namespace A -> ip6_tunnel -> ipvlan -> ipvlan ->
ip6_tunnel -> TCP receiver"
or
" TCP Sender -> ip6_tunnel -> ipvlan -> ipvlan -> ip6_tunnel -> TCP Receiver"
In this case, GRO in ip6_tunnel is not needed at all, since proper TSO
packets should already be cooked by the TCP sender and carried to the
receiver as plain GRO packets.
gro_cells was added at a time when the GRO layer only supported native
(non-encapsulated) flows: IPv4 + TCP or IPv6 + TCP.
Nowadays, GRO supports encapsulated traffic just fine, and the same goes
for TSO packets encapsulated in ip6_tunnel.
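For example, net/ipv6/ip6_offload.c already registers GRO/GSO hooks for
IPv6-in-IPv6 and IPv4-in-IPv6; from memory it looks roughly like this
(see the file for the exact callback names):

static const struct net_offload ip6ip6_offload = {
        .callbacks = {
                .gso_segment    = ip6ip6_gso_segment,
                .gro_receive    = sit_ip6ip6_gro_receive,
                .gro_complete   = ip6ip6_gro_complete,
        },
};

        /* from ipv6_offload_init() */
        inet6_add_offload(&ip4ip6_offload, IPPROTO_IPIP);
        inet6_add_offload(&ip6ip6_offload, IPPROTO_IPV6);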
Maybe it is time to remove gro_cells from net/ipv6/ip6_tunnel.c:
diff --git a/net/ipv6/ip6_tunnel.c b/net/ipv6/ip6_tunnel.c
index 48fd53b9897265338086136e96ea8e8c6ec3cac..b91c253dc4f1998f8df74251a93e29d00c03db5 100644
--- a/net/ipv6/ip6_tunnel.c
+++ b/net/ipv6/ip6_tunnel.c
@@ -246,7 +246,6 @@ static void ip6_dev_free(struct net_device *dev)
 {
        struct ip6_tnl *t = netdev_priv(dev);
 
-       gro_cells_destroy(&t->gro_cells);
        dst_cache_destroy(&t->dst_cache);
 }
 
@@ -877,7 +876,7 @@ static int __ip6_tnl_rcv(struct ip6_tnl *tunnel, struct sk_buff *skb,
        if (tun_dst)
                skb_dst_set(skb, (struct dst_entry *)tun_dst);
 
-       gro_cells_receive(&tunnel->gro_cells, skb);
+       netif_rx(skb);
        return 0;
 
 drop:
@@ -1884,10 +1883,6 @@ ip6_tnl_dev_init_gen(struct net_device *dev)
        if (ret)
                return ret;
 
-       ret = gro_cells_init(&t->gro_cells, dev);
-       if (ret)
-               goto destroy_dst;
-
        t->tun_hlen = 0;
        t->hlen = t->encap_hlen + t->tun_hlen;
        t_hlen = t->hlen + sizeof(struct ipv6hdr);
@@ -1902,11 +1897,6 @@ ip6_tnl_dev_init_gen(struct net_device *dev)
        netdev_hold(dev, &t->dev_tracker, GFP_KERNEL);
        netdev_lockdep_set_classes(dev);
        return 0;
-
-destroy_dst:
-       dst_cache_destroy(&t->dst_cache);
-
-       return ret;
 }
 
 /**