Message-ID: <CAJEV1ijyteQ9BxS1xtythC3O0y5+mdostL7-RKQhnkCf93iufg@mail.gmail.com>
Date: Wed, 7 Feb 2024 18:07:21 +0200
From: Pavel Vazharov <pavel@...e.net>
To: Magnus Karlsson <magnus.karlsson@...il.com>
Cc: Toke Høiland-Jørgensen <toke@...nel.org>,
Jakub Kicinski <kuba@...nel.org>, netdev@...r.kernel.org,
"Fijalkowski, Maciej" <maciej.fijalkowski@...el.com>
Subject: Re: Need of advice for XDP sockets on top of the interfaces behind a
Linux bonding device
On Wed, Feb 7, 2024 at 5:49 PM Pavel Vazharov <pavel@...e.net> wrote:
>
> On Mon, Feb 5, 2024 at 9:07 AM Magnus Karlsson
> <magnus.karlsson@...il.com> wrote:
> >
> > On Tue, 30 Jan 2024 at 15:54, Toke Høiland-Jørgensen <toke@...nel.org> wrote:
> > >
> > > Pavel Vazharov <pavel@...e.net> writes:
> > >
> > > > On Tue, Jan 30, 2024 at 4:32 PM Toke Høiland-Jørgensen <toke@...nel.org> wrote:
> > > >>
> > > >> Pavel Vazharov <pavel@...e.net> writes:
> > > >>
> > > >> >> On Sat, Jan 27, 2024 at 7:08 AM Pavel Vazharov <pavel@...e.net> wrote:
> > > >> >>>
> > > >> >>> On Sat, Jan 27, 2024 at 6:39 AM Jakub Kicinski <kuba@...nel.org> wrote:
> > > >> >>> >
> > > >> >>> > On Sat, 27 Jan 2024 05:58:55 +0200 Pavel Vazharov wrote:
> > > >> >>> > > > Well, it will be up to your application to ensure that it is not. The
> > > >> >>> > > > XDP program will run before the stack sees the LACP management traffic,
> > > >> >>> > > > so you will have to take some measure to ensure that any such management
> > > >> >>> > > > traffic gets routed to the stack instead of to the DPDK application. My
> > > >> >>> > > > immediate guess would be that this is the cause of those warnings?
> > > >> >>> > >
> > > >> >>> > > Thank you for the response.
> > > >> >>> > > I already checked the XDP program.
> > > >> >>> > > It redirects particular pools of IPv4 (TCP or UDP) traffic to the application.
> > > >> >>> > > Everything else is passed to the Linux kernel.
> > > >> >>> > > However, I'll check it again. Just to be sure.
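The redirect policy described above (only selected pools of IPv4 TCP/UDP traffic go to the application, everything else is passed to the kernel) can be sketched in plain C. This is an illustrative model of the classification only, not the actual XDP program: the pool address, constants, and function names are made up, and the real program would do the lookup in a BPF map and return the verdict from an XDP hook.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Illustrative verdicts mirroring XDP_PASS / XDP_REDIRECT from
 * <linux/bpf.h>; redefined here to keep the sketch self-contained. */
enum { ACT_PASS = 2, ACT_REDIRECT = 4 };

#define ETH_P_IP     0x0800
#define IPPROTO_TCP  6
#define IPPROTO_UDP  17

/* Hypothetical "pool" of destination addresses handed to the DPDK
 * application; the real program would consult a BPF map instead. */
static const uint32_t redirect_dst[] = { 0xc0a80001u };  /* 192.168.0.1 */

/* Decide what to do with one Ethernet frame.  Note that anything that
 * is not IPv4 TCP/UDP - including LACP frames (EtherType 0x8809) -
 * falls through to the kernel. */
static int classify(const uint8_t *frame, size_t len)
{
    if (len < 14 + 20)
        return ACT_PASS;                 /* too short for eth + IPv4 */
    uint16_t ethertype = (uint16_t)(frame[12] << 8 | frame[13]);
    if (ethertype != ETH_P_IP)
        return ACT_PASS;                 /* LACP (0x8809) etc. pass */
    const uint8_t *ip = frame + 14;
    if ((ip[0] >> 4) != 4)
        return ACT_PASS;
    uint8_t proto = ip[9];
    if (proto != IPPROTO_TCP && proto != IPPROTO_UDP)
        return ACT_PASS;
    uint32_t daddr = (uint32_t)ip[16] << 24 | (uint32_t)ip[17] << 16 |
                     (uint32_t)ip[18] << 8 | (uint32_t)ip[19];
    for (size_t i = 0; i < sizeof(redirect_dst) / sizeof(redirect_dst[0]); i++)
        if (daddr == redirect_dst[i])
            return ACT_REDIRECT;         /* hand to the AF_XDP socket */
    return ACT_PASS;
}
```

With a policy shaped like this, LACPDUs should always reach the kernel, which is why the later observations point at the pass-through path rather than the program itself.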
> > > >> >>> >
> > > >> >>> > What device driver are you using, if you don't mind sharing?
> > > >> >>> > The pass thru code path may be much less well tested in AF_XDP
> > > >> >>> > drivers.
> > > >> >>> These are the kernel version and the drivers for the 3 ports in the
> > > >> >>> above bonding.
> > > >> >>> ~# uname -a
> > > >> >>> Linux 6.3.2 #1 SMP Wed May 17 08:17:50 UTC 2023 x86_64 GNU/Linux
> > > >> >>> ~# lspci -v | grep -A 16 -e 1b:00.0 -e 3b:00.0 -e 5e:00.0
> > > >> >>> 1b:00.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit
> > > >> >>> SFI/SFP+ Network Connection (rev 01)
> > > >> >>> ...
> > > >> >>> Kernel driver in use: ixgbe
> > > >> >>> --
> > > >> >>> 3b:00.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit
> > > >> >>> SFI/SFP+ Network Connection (rev 01)
> > > >> >>> ...
> > > >> >>> Kernel driver in use: ixgbe
> > > >> >>> --
> > > >> >>> 5e:00.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit
> > > >> >>> SFI/SFP+ Network Connection (rev 01)
> > > >> >>> ...
> > > >> >>> Kernel driver in use: ixgbe
> > > >> >>>
> > > >> >>> I think they should be well supported, right?
> > > >> >>> So far, it seems that the present usage scenario should work and the
> > > >> >>> problem is somewhere in my code.
> > > >> >>> I'll double check it again and try to simplify everything in order to
> > > >> >>> pinpoint the problem.
> > > >> > I've managed to pinpoint that forcing the copying of the packets
> > > >> > between the kernel and the user space
> > > >> > (XDP_COPY) fixes the issue with the malformed LACPDUs and the not
> > > >> > working bonding.
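For context, "forcing the copying" refers to the XDP_COPY bind flag of the AF_XDP socket. If the socket were created directly rather than through the DPDK af_xdp PMD, copy mode would be selected roughly as below; the struct and flag values mirror <linux/if_xdp.h> and are redefined here only to keep the sketch self-contained (a real program should include the header and call bind(2) with this address).

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Bind flags as defined in <linux/if_xdp.h>. */
#ifndef XDP_COPY
#define XDP_SHARED_UMEM (1 << 0)
#define XDP_COPY        (1 << 1)  /* force copy mode */
#define XDP_ZEROCOPY    (1 << 2)  /* force zero-copy; bind fails if unsupported */
#endif

/* Mirrors struct sockaddr_xdp from <linux/if_xdp.h>. */
struct sockaddr_xdp_sketch {
    uint16_t sxdp_family;
    uint16_t sxdp_flags;
    uint32_t sxdp_ifindex;
    uint32_t sxdp_queue_id;
    uint32_t sxdp_shared_umem_fd;
};

static void fill_bind_addr(struct sockaddr_xdp_sketch *sxdp,
                           uint32_t ifindex, uint32_t queue, int force_copy)
{
    memset(sxdp, 0, sizeof(*sxdp));
    sxdp->sxdp_family   = 44;            /* AF_XDP */
    sxdp->sxdp_ifindex  = ifindex;       /* physical port, not the bond */
    sxdp->sxdp_queue_id = queue;
    /* With XDP_COPY the kernel copies every frame between the driver
     * and the umem, bypassing the driver's zero-copy path entirely -
     * which is why this flag masks a zero-copy driver bug. */
    sxdp->sxdp_flags = force_copy ? XDP_COPY : 0;
}
```

If neither XDP_COPY nor XDP_ZEROCOPY is set, the kernel prefers zero-copy when the driver supports it and silently falls back to copy mode otherwise, which matches the behavior being compared here.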
> > > >>
> > > >> (+Magnus)
> > > >>
> > > >> Right, okay, that seems to suggest a bug in the internal kernel copying
> > > >> that happens on XDP_PASS in zero-copy mode. Which would be a driver bug;
> > > >> any chance you could test with a different driver and see if the same
> > > >> issue appears there?
> > > >>
> > > >> -Toke
> > > > No, sorry.
> > > > We have only servers with Intel 82599ES with ixgbe drivers.
> > > > And one lab machine with an Intel 82540EM with the igb driver,
> > > > but we can't set up bonding there, and the problem is not
> > > > reproducible there.
> > >
> > > Right, okay. Another thing that may be of some use is to try to capture
> > > the packets on the physical devices using tcpdump. That should (I think)
> > > show you the LACPDU packets as they come in, before they hit the bonding
> > > device, but after they are copied from the XDP frame. If it's a packet
> > > corruption issue, that should be visible in the captured packet; you can
> > > compare with an xdpdump capture to see if there are any differences...
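When diffing the two captures, a small checker for the fixed parts of a LACPDU makes corruption easy to spot. This is a hedged sketch, not a full 802.1AX parser; the field offsets follow the standard LACPDU layout (Slow Protocols EtherType 0x8809, LACP subtype 1, Actor Information TLV type 0x01 / length 0x14), and the function name is made up.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Rough sanity check for one captured LACPDU frame (Ethernet header
 * included).  Returns 1 if the invariant header fields look intact,
 * 0 if the frame appears truncated or corrupted. */
static int lacpdu_looks_sane(const uint8_t *f, size_t len)
{
    if (len < 14 + 4 + 2)
        return 0;
    /* Destination must be the Slow Protocols multicast address. */
    static const uint8_t dst[6] = { 0x01, 0x80, 0xc2, 0x00, 0x00, 0x02 };
    for (int i = 0; i < 6; i++)
        if (f[i] != dst[i])
            return 0;
    if (f[12] != 0x88 || f[13] != 0x09)
        return 0;                        /* EtherType: Slow Protocols */
    if (f[14] != 0x01)
        return 0;                        /* subtype: LACP */
    if (f[15] != 0x01)
        return 0;                        /* LACP version 1 */
    if (f[16] != 0x01 || f[17] != 0x14)
        return 0;                        /* Actor Information TLV */
    return 1;
}
```

Running each frame from the tcpdump capture and the xdpdump capture through a check like this (and then byte-diffing the ones that fail) would localize exactly which fields get mangled on the XDP_PASS path.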
> >
> > Pavel,
> >
> > Sounds like an issue with the driver in zero-copy mode as it works
> > fine in copy mode. Maciej and I will take a look at it.
> >
> > > -Toke
> > >
>
> First I want to apologize for not responding for such a long time.
> I was on different tasks the previous week and went back to this issue this week.
> I had to modify the code of the af_xdp driver inside the DPDK so that it loads
> the XDP program in a way which is compatible with the xdp-dispatcher.
> Finally, I was able to run our application with the XDP sockets and the xdpdump
> at the same time.
>
> Back to the issue.
> I just want to say again that we are not binding the XDP sockets to
> the bonding device.
> We are binding the sockets to the queues of the physical interfaces
> "below" the bonding device.
> My further observation this time is that when the issue happens and
> the remote device reports the LACP error, there is no incoming LACP
> traffic on the corresponding local port, as seen by xdpdump.
> The tcpdump at the same time sees only outgoing LACP packets and
> nothing incoming.
> For example:
> Remote device                                          Local Server
> TrunkName=Eth-Trunk20, PortName=XGigabitEthernet0/0/12 <---> eth0
> TrunkName=Eth-Trunk20, PortName=XGigabitEthernet0/0/13 <---> eth2
> TrunkName=Eth-Trunk20, PortName=XGigabitEthernet0/0/14 <---> eth4
> And when the remote device reports "received an abnormal LACPDU"
> for PortName=XGigabitEthernet0/0/14, I can see via xdpdump that there
> is no incoming LACP traffic on eth4, while there is incoming LACP
> traffic on eth0 and eth2.
> At the same time, according to dmesg, the kernel sees all of the
> interfaces as "link status definitely up, 10000 Mbps full duplex".
> The issue goes away if I stop the application, even without removing
> the XDP programs from the interfaces - the running xdpdump starts
> showing the incoming LACP traffic immediately.
> The issue also goes away if I do "ip link set down eth4 && ip link set up eth4".
> However, I'm not sure what happens with the bound XDP sockets in this
> case because I haven't tested further.
>
> It seems to me that something racy happens when the interfaces go
> down and back up (visible in the dmesg) while the XDP sockets are
> bound to their queues.
> I mean, I'm not sure why the interfaces go down and up at all, but
> setting only the XDP programs on the interfaces doesn't cause this
> behavior. So, I assume it's caused by the binding of the XDP sockets.
> It could be that the issue is not related to the XDP sockets
> themselves but just to the down/up actions of the interfaces.
> On the other hand, I'm not sure why the issue is easily reproducible
> when the zero-copy mode is enabled (4 out of 5 tests reproduced it),
> while with zero copy disabled it doesn't happen (I tried 10 times in
> a row).
>
> Pavel.
My thoughts at the end are not correct. I forgot that we tested with
traffic too.
Even when the bonding/LACP looked OK after the application start, it
started breaking later once traffic was started, in the zero-copy case.
However, it worked OK when zero copy was disabled.
Pavel.