Message-ID: <32234f4c5b4adcaf2560098a01b1544d8d8d3c2c.camel@mellanox.com>
Date: Mon, 2 Mar 2020 19:10:03 +0000
From: Saeed Mahameed <saeedm@...lanox.com>
To: Roi Dayan <roid@...lanox.com>,
"ian.kumlien@...il.com" <ian.kumlien@...il.com>,
"netdev@...r.kernel.org" <netdev@...r.kernel.org>
CC: Yevgeny Kliteynik <kliteyn@...lanox.com>,
Leon Romanovsky <leonro@...lanox.com>
Subject: Re: [VXLAN] [MLX5] Lost traffic and issues
On Fri, 2020-02-28 at 16:02 +0100, Ian Kumlien wrote:
> Hi,
>
> Including netdev - to see if someone else has a clue.
>
> We have a few machines in a cloud, and when upgrading from 4.16.7 ->
> 5.4.15 we ran into unexpected and intermittent problems.
> (I have tested 5.5.6 and the problems persist)
>
> What we saw, using several monitoring points, is that traffic
> disappears after the last point where we can still see it, which is a
> tcpdump on "bond0"
>
> We had tcpdump running on:
> 1, DHCP nodes (local tap interfaces)
> 2, Router instances on L3 node
> 3, Local node (where the VM runs) (tap, bridge and eventually tap
> interface dumping VXLAN traffic - see the capture sketch below)
> 4, Using port mirroring on the 100gbit switch to see what ended up on
> the physical wire.
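For reference, a capture along these lines - the filter is just the
default VXLAN UDP port 4789, and the tap name is a placeholder - should
show whether the inner frame makes it into the encapsulation on the
bond:

  # VXLAN-encapsulated traffic leaving via the bond
  tcpdump -ni bond0 udp port 4789

  # unencapsulated view on the VM-facing tap (name is a placeholder)
  tcpdump -nei tapXXXX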
>
> What we can see is that, of the four-step DHCP handshake, only two
> steps work; the fourth step is dropped "on the nic".
>
> We can see it go out on bond0, VLAN-tagged and inside a VXLAN packet -
> however the switch never sees it.
>
Hi,
Have you seen the packets actually going out on one of the mlx5 100gbit
legs?
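For illustration, capturing on each physical leg in parallel, together
with the LACP state, should show on which leg (if any) the packet goes
out; the second port name below is a guess based on the dmesg further
down:

  # VXLAN traffic on each bond slave
  tcpdump -ni enp11s0f0 udp port 4789
  tcpdump -ni enp11s0f1 udp port 4789

  # aggregator / LACP state of both slaves
  cat /proc/net/bonding/bond0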
> There have been a few mlx5 changes wrt VXLAN which could be the
> culprit, but it's really hard to judge.
>
> dmesg |grep mlx
> [ 2.231399] mlx5_core 0000:0b:00.0: firmware version: 16.26.1040
> [ 2.912595] mlx5_core 0000:0b:00.0: Rate limit: 127 rates are
> supported, range: 0Mbps to 97656Mbps
> [ 2.935012] mlx5_core 0000:0b:00.0: Port module event: module 0,
> Cable plugged
> [ 2.949528] mlx5_core 0000:0b:00.1: firmware version: 16.26.1040
> [ 3.638647] mlx5_core 0000:0b:00.1: Rate limit: 127 rates are
> supported, range: 0Mbps to 97656Mbps
> [ 3.661206] mlx5_core 0000:0b:00.1: Port module event: module 1,
> Cable plugged
> [ 3.675562] mlx5_core 0000:0b:00.0: MLX5E: StrdRq(1) RqSz(8)
> StrdSz(64) RxCqeCmprss(0)
> [ 3.846149] mlx5_core 0000:0b:00.1: MLX5E: StrdRq(1) RqSz(8)
> StrdSz(64) RxCqeCmprss(0)
> [ 4.021738] mlx5_core 0000:0b:00.0 enp11s0f0: renamed from eth0
> [ 4.021962] mlx5_ib: Mellanox Connect-IB Infiniband driver v5.0-0
>
> I have tried turning all offloads off, but the problem persists -
> it's really weird that only some packets seem to be affected.
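For reference, the tunnel-related offloads can be checked and toggled
per feature with ethtool (the feature names below are the generic
kernel names; whether they apply to this setup is an assumption):

  # list the current offload state
  ethtool -k enp11s0f0

  # turn off only the UDP-tunnel (VXLAN) segmentation offloads
  ethtool -K enp11s0f0 tx-udp_tnl-segmentation off \
                       tx-udp_tnl-csum-segmentation off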
>
> To be clear, the bond0 interface is 2*100gbit, using 802.3ad (LACP)
> with layer2+3 hashing.
> This seems to be offloaded into the NIC (can it be turned off?), and
> messages about modifying the "lag map" were quite frequent until we
> did a firmware upgrade - even with upgraded firmware, it continued,
> but to a lesser extent.
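For reference, the bonding mode and hash policy can be confirmed from
sysfs, and the mlx5 lag remap messages can be pulled from the kernel
log (the exact wording of those messages varies between driver
versions):

  cat /sys/class/net/bond0/bonding/mode
  cat /sys/class/net/bond0/bonding/xmit_hash_policy
  dmesg | grep -i 'lag map'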
>
> With 5.5.7 approaching, we would like a path forward to handle
> this...
What type of mlx5 configuration do you have (native PV virtualization?
SR-IOV? legacy mode or switchdev mode?)
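For reference, assuming devlink is available, the eswitch mode of each
PF can be checked with the commands below (PCI addresses taken from the
dmesg above); a non-zero sriov_numvfs means SR-IOV VFs are enabled:

  devlink dev eswitch show pci/0000:0b:00.0
  devlink dev eswitch show pci/0000:0b:00.1
  cat /sys/class/net/enp11s0f0/device/sriov_numvfs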
The only change that I can think of is the lag multi-path support we
added. Roi, can you please take a look at this?
Thanks,
Saeed.