Message-ID: <d8061b19ec2a8123d7cf69dad03f1250a5b03220.camel@redhat.com>
Date: Fri, 02 Jul 2021 16:21:34 +0200
From: Paolo Abeni <pabeni@...hat.com>
To: Matthias Treydte <mt@...dheinz.de>,
Willem de Bruijn <willemdebruijn.kernel@...il.com>
Cc: David Ahern <dsahern@...il.com>, stable@...r.kernel.org,
netdev@...r.kernel.org, regressions@...ts.linux.dev,
davem@...emloft.net, yoshfuji@...ux-ipv6.org, dsahern@...nel.org
Subject: Re: [regression] UDP recv data corruption
Hi,
On Fri, 2021-07-02 at 16:06 +0200, Paolo Abeni wrote:
> On Fri, 2021-07-02 at 14:36 +0200, Matthias Treydte wrote:
> > And to answer Paolo's questions from his mail to the list (@Paolo: I'm
> > not subscribed, please also send to me directly so I don't miss your mail)
>
> (yup, that is what I did?!?)
>
> > > Could you please:
> > > - tell how frequent is the pkt corruption, even a rough estimate of the
> > > frequency.
> >
> > # journalctl --since "5min ago" | grep "Packet corrupt" | wc -l
> > 167
> >
> > So there are 167 detected failures in 5 minutes, while the system is receiving
> > at a moderate rate of about 900 pkts/s (according to Prometheus' node exporter
> > at least, but it seems about right).
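(For scale: 5 minutes at ~900 pkts/s is roughly 270,000 packets, so 167
corruptions is in the ballpark of 1 packet in 1,600, i.e. ~0.06% of the
traffic.)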
I'm sorry for the high-frequency spamming.

Could you please try the following patch? (only compile-tested)

I fear some packets may hang in the GRO engine for no reason.
---
diff --git a/net/ipv4/udp_offload.c b/net/ipv4/udp_offload.c
index 54e06b88af69..458c888337a5 100644
--- a/net/ipv4/udp_offload.c
+++ b/net/ipv4/udp_offload.c
@@ -526,6 +526,8 @@ struct sk_buff *udp_gro_receive(struct list_head *head, struct sk_buff *skb,
 
 		if ((!sk && (skb->dev->features & NETIF_F_GRO_UDP_FWD)) ||
 		    (sk && udp_sk(sk)->gro_enabled) || NAPI_GRO_CB(skb)->is_flist)
 			pp = call_gro_receive(udp_gro_receive_segment, head, skb);
+		else
+			goto out;
 		return pp;
 	}
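
For reference, here is my understanding of what the new "goto out" lands
on, as a simplified sketch of the udp_gro_receive() exit path around this
kernel version (not the verbatim source, details elided):

	/* Simplified sketch, assuming the usual udp_gro_receive() epilogue;
	 * on the new branch `pp` is still NULL and `flush` was initialized
	 * to 1 on function entry.
	 */
out:
	skb_gro_flush_final(skb, pp, flush);	/* sets NAPI_GRO_CB(skb)->flush */
	return pp;				/* NULL: no aggregation for this skb */

With the flush flag set, the GRO core (dev_gro_receive()) should deliver
the packet up the stack immediately instead of parking it on the GRO list,
where, if I read the report correctly, it could otherwise linger and be
coalesced with later, unrelated packets, which would match the observed
corruption.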