Message-ID: <6440a277-536e-402f-a47e-43ee182b22c7@redhat.com>
Date: Tue, 3 Jun 2025 11:27:18 +0200
From: Paolo Abeni <pabeni@...hat.com>
To: Simon Horman <horms@...nel.org>, Ronak Doshi <ronak.doshi@...adcom.com>
Cc: netdev@...r.kernel.org, Guolin Yang <guolin.yang@...adcom.com>,
Broadcom internal kernel review list
<bcm-kernel-feedback-list@...adcom.com>, Andrew Lunn
<andrew+netdev@...n.ch>, "David S. Miller" <davem@...emloft.net>,
Eric Dumazet <edumazet@...gle.com>, Jakub Kicinski <kuba@...nel.org>,
open list <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH net v4] vmxnet3: correctly report gso type for UDP tunnels
On 6/3/25 9:23 AM, Simon Horman wrote:
> On Fri, May 30, 2025 at 03:27:00PM +0000, Ronak Doshi wrote:
>> Commit 3d010c8031e3 ("udp: do not accept non-tunnel GSO skbs landing
>> in a tunnel") added checks in the Linux stack to reject non-tunnel
>> GRO packets landing in a tunnel. This exposed an issue in vmxnet3,
>> which was not correctly reporting the GSO type for tunnel packets.
>>
>> This patch fixes this issue by setting correct GSO type for the
>> tunnel packets.
>>
>> Currently, vmxnet3 does not support reporting inner fields for LRO
>> tunnel packets. The issue is not seen for egress drivers that do not
>> use skb inner fields. The workaround is to enable tnl-segmentation
>> offload on the egress interfaces if the driver supports it. This
>> problem pre-exists this patch fix and can be addressed as a separate
>> future patch.
>>
>> Fixes: dacce2be3312 ("vmxnet3: add geneve and vxlan tunnel offload support")
>> Signed-off-by: Ronak Doshi <ronak.doshi@...adcom.com>
>> Acked-by: Guolin Yang <guolin.yang@...adcom.com>
>>
>> Changes v1-->v2:
>> Do not set encapsulation bit as inner fields are not updated
>> Changes v2-->v3:
>> Update the commit message explaining the next steps to address
>> segmentation issues that pre-exist this patch fix.
>> Changes v3->v4:
>> Update the commit message to clarify the workaround.
>> ---
>> drivers/net/vmxnet3/vmxnet3_drv.c | 26 ++++++++++++++++++++++++++
>> 1 file changed, 26 insertions(+)
>>
>> diff --git a/drivers/net/vmxnet3/vmxnet3_drv.c b/drivers/net/vmxnet3/vmxnet3_drv.c
>> index c676979c7ab9..287b7c20c0d6 100644
>> --- a/drivers/net/vmxnet3/vmxnet3_drv.c
>> +++ b/drivers/net/vmxnet3/vmxnet3_drv.c
>> @@ -1568,6 +1568,30 @@ vmxnet3_get_hdr_len(struct vmxnet3_adapter *adapter, struct sk_buff *skb,
>> return (hlen + (hdr.tcp->doff << 2));
>> }
>>
>> +static void
>> +vmxnet3_lro_tunnel(struct sk_buff *skb, __be16 ip_proto)
>> +{
>> + struct udphdr *uh = NULL;
>> +
>> + if (ip_proto == htons(ETH_P_IP)) {
>> + struct iphdr *iph = (struct iphdr *)skb->data;
>> +
>> + if (iph->protocol == IPPROTO_UDP)
>> + uh = (struct udphdr *)(iph + 1);
>> + } else {
>> + struct ipv6hdr *iph = (struct ipv6hdr *)skb->data;
>> +
>> + if (iph->nexthdr == IPPROTO_UDP)
>> + uh = (struct udphdr *)(iph + 1);
>> + }
>
> Hi Ronak,
>
> Possibly a naive question, but does skb->data always contain an iphdr
> or ipv6hdr? Or perhaps more to the point, is it safe to assume IPv6
> if ip_proto is not ETH_P_IP?
I think it's safe, or at least the guest can assume that. Otherwise
there is a bug in the hypervisor cooking the descriptor, and the guest
can do little to nothing in such a scenario.
/P