Message-ID: <CAA85sZsHKb3Wtsa5ktSAPJsjLrcmahtgaemPhN5dTeTxEBWaqw@mail.gmail.com>
Date:   Fri, 7 Jul 2023 00:32:30 +0200
From:   Ian Kumlien <ian.kumlien@...il.com>
To:     Paolo Abeni <pabeni@...hat.com>
Cc:     Eric Dumazet <edumazet@...gle.com>,
        Willem de Bruijn <willemb@...gle.com>,
        Alexander Lobakin <aleksander.lobakin@...el.com>,
        intel-wired-lan <intel-wired-lan@...ts.osuosl.org>,
        Jakub Kicinski <kuba@...nel.org>,
        "netdev@...r.kernel.org" <netdev@...r.kernel.org>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: [Intel-wired-lan] bug with rx-udp-gro-forwarding offloading?

On Thu, Jul 6, 2023 at 7:10 PM Paolo Abeni <pabeni@...hat.com> wrote:
> On Thu, 2023-07-06 at 18:17 +0200, Ian Kumlien wrote:
> > On Thu, Jul 6, 2023 at 4:04 PM Paolo Abeni <pabeni@...hat.com> wrote:
> > >
> > > On Thu, 2023-07-06 at 15:56 +0200, Eric Dumazet wrote:
> > > > On Thu, Jul 6, 2023 at 3:02 PM Paolo Abeni <pabeni@...hat.com> wrote:
> > > > >
> > > > > On Thu, 2023-07-06 at 13:27 +0200, Ian Kumlien wrote:
> > > > > > On Thu, Jul 6, 2023 at 10:42 AM Paolo Abeni <pabeni@...hat.com> wrote:
> > > > > > > On Wed, 2023-07-05 at 15:58 +0200, Ian Kumlien wrote:
> > > > > > > > On Wed, Jul 5, 2023 at 3:29 PM Paolo Abeni <pabeni@...hat.com> wrote:
> > > > > > > > >
> > > > > > > > > On Wed, 2023-07-05 at 13:32 +0200, Ian Kumlien wrote:
> > > > > > > > > > On Wed, Jul 5, 2023 at 12:28 PM Paolo Abeni <pabeni@...hat.com> wrote:
> > > > > > > > > > >
> > > > > > > > > > > On Tue, 2023-07-04 at 16:27 +0200, Ian Kumlien wrote:
> > > > > > > > > > > > More stacktraces.. =)
> > > > > > > > > > > >
> > > > > > > > > > > > cat bug.txt | ./scripts/decode_stacktrace.sh vmlinux
> > > > > > > > > > > > [  411.413767] ------------[ cut here ]------------
> > > > > > > > > > > > [  411.413792] WARNING: CPU: 9 PID: 942 at include/net/udp.h:509
> > > > > > > > > > > > udpv6_queue_rcv_skb (./include/net/udp.h:509 net/ipv6/udp.c:800
> > > > > > > > > > > > net/ipv6/udp.c:787)
> > > > > > > > > > >
> > > > > > > > > > > I'm really running out of ideas here...
> > > > > > > > > > >
> > > > > > > > > > > This is:
> > > > > > > > > > >
> > > > > > > > > > >         WARN_ON_ONCE(UDP_SKB_CB(skb)->partial_cov);
> > > > > > > > > > >
> > > > > > > > > > > sort of hints at the skb being shared (skb->users > 1) while enqueued
> > > > > > > > > > > in multiple places (bridge local input and br forward/flood to tun
> > > > > > > > > > > device). I audited the bridge mc flooding code, and I could not find
> > > > > > > > > > > how a shared skb could land in the local input path.
> > > > > > > > > > >
> > > > > > > > > > > Anyway the other splats reported here and in later emails are
> > > > > > > > > > > compatible with shared skbs.
> > > > > > > > > > >
> > > > > > > > > > > The above leads to another bunch of questions:
> > > > > > > > > > > * can you reproduce the issue after disabling 'rx-gro-list' on the
> > > > > > > > > > > ingress device? (while keeping 'rx-udp-gro-forwarding' on).
> > > > > > > > > >
> > > > > > > > > > With rx-gro-list off, as in never turned on, everything seems to run fine
> > > > > > > > > >
> > > > > > > > > > > * do you have by chance qdiscs on top of the VM tun devices?
> > > > > > > > > >
> > > > > > > > > > default qdisc is fq
> > > > > > > > >
> > > > > > > > > IIRC libvirt could reset the qdisc to noqueue for the owned tun
> > > > > > > > > devices.
> > > > > > > > >
> > > > > > > > > Could you please report the output of:
> > > > > > > > >
> > > > > > > > > tc -d -s qdisc show dev <tun dev name>
> > > > > > > >
> > > > > > > > I don't have these set:
> > > > > > > > CONFIG_NET_SCH_INGRESS
> > > > > > > > CONFIG_NET_SCHED
> > > > > > > >
> > > > > > > > so tc just gives an error...
> > > > > > >
> > > > > > > The above is confusing. As CONFIG_NET_SCH_DEFAULT depends on
> > > > > > > CONFIG_NET_SCHED, you should not have a default qdisc either ;)
> > > > > >
> > > > > > Well it's still set in sysctl - dunno if it fails
> > > > > >
> > > > > > > Could you please share your kernel config?
> > > > > >
> > > > > > Sure...
> > > > > >
> > > > > > As a side note, it hasn't crashed - no traces since we did the last change
> > > > >
> > > > > It sounds like an encouraging sign! (famous last words...). I'll wait 1
> > > > > more day, then I'll submit formally...
> > > > >
> > > > > > For reference, this is git diff on the running kernels source tree:
> > > > > > diff --git a/net/core/skbuff.c b/net/core/skbuff.c
> > > > > > index cea28d30abb5..1b2394ebaf33 100644
> > > > > > --- a/net/core/skbuff.c
> > > > > > +++ b/net/core/skbuff.c
> > > > > > @@ -4270,6 +4270,17 @@ struct sk_buff *skb_segment_list(struct sk_buff *skb,
> > > > > >
> > > > > >         skb_push(skb, -skb_network_offset(skb) + offset);
> > > > > >
> > > > > > +       if (WARN_ON_ONCE(skb_shared(skb))) {
> > > > > > +               skb = skb_share_check(skb, GFP_ATOMIC);
> > > > > > +               if (!skb)
> > > > > > +                       goto err_linearize;
> > > > > > +       }
> > > > > > +
> > > > > > +       /* later code will clear the gso area in the shared info */
> > > > > > +       err = skb_header_unclone(skb, GFP_ATOMIC);
> > > > > > +       if (err)
> > > > > > +               goto err_linearize;
> > > > > > +
> > > > > >         skb_shinfo(skb)->frag_list = NULL;
> > > > > >
> > > > > >         while (list_skb) {
> > > > >
> > > > > ...the above check only, as the other 2 should only catch side
> > > > > effects of the lack of this one. In any case the above addresses a real
> > > > > issue, so we likely want it no matter what.
> > > > >
> > > >
> > > > Interesting, I wonder if this could also fix some syzbot reports
> > > > Willem and I are investigating.
> > > >
> > > > Any idea of when the bug was 'added' or 'revealed' ?
> > >
> > > The issue specifically addressed above should be present since the
> > > frag_list introduction in commit 3a1296a38d0c ("net: Support GRO/GSO
> > > fraglist chaining."). AFAICS triggering it requires a non-trivial setup -
> > > mcast rx on a bridge with frag-list enabled and forwarding to multiple
> > > ports - so perhaps syzkaller found it later due to improvements on its
> > > side ?!?
> >
> > I'm also a bit afraid that we just haven't triggered it - I don't see
> > any warnings or anything... :/
>
> Let me try to clarify: I hope/think that this chunk alone:
>
> +       /* later code will clear the gso area in the shared info */
> +       err = skb_header_unclone(skb, GFP_ATOMIC);
> +       if (err)
> +               goto err_linearize;
> +
>         skb_shinfo(skb)->frag_list = NULL;
>
>         while (list_skb) {
>
> does the magic/avoids the skb corruptions -> if everything goes well,
> you should not see any warnings at all. Running 'nstat' in the DUT
> should give some hints about reaching the relevant code paths.

Sorry about the html mail... but...

I was fully expecting a warning from:
 if (WARN_ON_ONCE(skb_shared(skb))) {

But I could be completely wrong about things =)
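
(For context, my understanding is that skb_shared() is just a refcount
check - roughly the following, paraphrased from include/linux/skbuff.h
from memory, so treat the exact shape as a sketch:

	/* an skb counts as shared once more than one reference is held */
	static inline bool skb_shared(const struct sk_buff *skb)
	{
		return refcount_read(&skb->users) > 1;
	}

so the WARN should only trigger if the segmentation path actually sees a
second user holding the skb.)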

Which fields would I be looking at in nstat?
nstat
#kernel
IpInReceives                    11076              0.0
IpForwDatagrams                 2384               0.0
IpInDelivers                    5107               0.0
IpOutRequests                   3478               0.0
IcmpInMsgs                      42                 0.0
IcmpInDestUnreachs              9                  0.0
IcmpInEchos                     32                 0.0
IcmpInEchoReps                  1                  0.0
IcmpOutMsgs                     49                 0.0
IcmpOutDestUnreachs             15                 0.0
IcmpOutEchos                    2                  0.0
IcmpOutEchoReps                 32                 0.0
IcmpMsgInType0                  1                  0.0
IcmpMsgInType3                  9                  0.0
IcmpMsgInType8                  32                 0.0
IcmpMsgOutType0                 32                 0.0
IcmpMsgOutType3                 15                 0.0
IcmpMsgOutType8                 2                  0.0
TcpInSegs                       220                0.0
TcpOutSegs                      381                0.0
UdpInDatagrams                  4893               0.0
UdpInErrors                     5                  0.0
UdpOutDatagrams                 655                0.0
UdpRcvbufErrors                 5                  0.0
UdpIgnoredMulti                 86                 0.0
Ip6InReceives                   7155               0.0
Ip6InDelivers                   7139               0.0
Ip6OutRequests                  136                0.0
Ip6OutNoRoutes                  8                  0.0
Ip6InMcastPkts                  7146               0.0
Ip6OutMcastPkts                 130                0.0
Ip6InOctets                     1062180            0.0
Ip6OutOctets                    41215              0.0
Ip6InMcastOctets                1061292            0.0
Ip6OutMcastOctets               40807              0.0
Ip6InNoECTPkts                  7845               0.0
Icmp6InMsgs                     44                 0.0
Icmp6OutMsgs                    21                 0.0
Icmp6InGroupMembQueries         8                  0.0
Icmp6InRouterAdvertisements     4                  0.0
Icmp6InNeighborSolicits         6                  0.0
Icmp6InNeighborAdvertisements   26                 0.0
Icmp6OutNeighborSolicits        3                  0.0
Icmp6OutNeighborAdvertisements  6                  0.0
Icmp6OutMLDv2Reports            12                 0.0
Icmp6InType130                  8                  0.0
Icmp6InType134                  4                  0.0
Icmp6InType135                  6                  0.0
Icmp6InType136                  26                 0.0
Icmp6OutType135                 3                  0.0
Icmp6OutType136                 6                  0.0
Icmp6OutType143                 12                 0.0
Udp6InDatagrams                 6537               0.0
Udp6InErrors                    1248               0.0
Udp6OutDatagrams                115                0.0
Udp6RcvbufErrors                1248               0.0
TcpExtTCPHPAcks                 200                0.0
TcpExtTCPBacklogCoalesce        3                  0.0
TcpExtIPReversePathFilter       89                 0.0
TcpExtTCPAutoCorking            4                  0.0
TcpExtTCPOrigDataSent           381                0.0
TcpExtTCPDelivered              381                0.0
IpExtInMcastPkts                4174               0.0
IpExtOutMcastPkts               68                 0.0
IpExtInBcastPkts                86                 0.0
IpExtOutBcastPkts               4                  0.0
IpExtInOctets                   1866664            0.0
IpExtOutOctets                  1715287            0.0
IpExtInMcastOctets              539751             0.0
IpExtOutMcastOctets             25636              0.0
IpExtInBcastOctets              7131               0.0
IpExtOutBcastOctets             304                0.0
IpExtInNoECTPkts                12158              0.0

But we do have an extreme uptime for this test:
 00:31:44 up 1 day, 10:55,  2 users,  load average: 0,77, 0,75, 0,82
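
Judging by the output above, Udp6InErrors and Udp6RcvbufErrors already
moved (1248 each), so at least some drops are accounted for. If it helps,
I can keep sampling just the UDP counters while traffic flows - something
like this, assuming I'm reading nstat's pattern arguments right:

 # print deltas for these counters every 10s (names taken from the list above)
 watch -n 10 'nstat Udp6InDatagrams Udp6InErrors Udp6RcvbufErrors'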

> Cheers,
>
> Paolo
>
