Message-ID: <7bd2baab-56a7-95d1-e63b-74dc92da936b@01019freenet.de>
Date: Fri, 8 Dec 2017 13:45:38 +0100
From: Andreas Hartmann <andihartmann@...19freenet.de>
To: Michal Kubecek <mkubecek@...e.cz>
Cc: Jason Wang <jasowang@...hat.com>,
David Miller <davem@...emloft.net>, netdev@...r.kernel.org
Subject: Re: Linux 4.14 - regression: broken tun/tap / bridge network with
virtio - bisected
On 12/08/2017 at 12:40 PM Michal Kubecek wrote:
> On Fri, Dec 08, 2017 at 11:31:50AM +0100, Andreas Hartmann wrote:
>> On 12/08/2017 at 09:47 AM Michal Kubecek wrote:
>>> On Fri, Dec 08, 2017 at 08:21:16AM +0100, Andreas Hartmann wrote:
>>>>
>>>> All my VMs are using virtio_net. BTW: I couldn't see the problems
>>>> (sometimes, the VM couldn't be stopped at all) when all my VMs used
>>>> e1000 as the interface instead.
>>>>
>>>> This finding matches up well with the UDP packet responsible for the
>>>> stall, which I already mentioned here [2].
>>>>
>>>> To prove it, I reverted the following patches from the series "[PATCH
>>>> v2 RFC 0/13] Remove UDP Fragmentation Offload support" [3]:
>>>>
>>>> 11/13 [v2,RFC,11/13] net: Remove all references to SKB_GSO_UDP. [4]
>>>> 12/13 [v2,RFC,12/13] inet: Remove software UFO fragmenting code. [5]
>>>> 13/13 [v2,RFC,13/13] net: Kill NETIF_F_UFO and SKB_GSO_UDP. [6]
>>>>
>>>> and applied the reverts to Linux 4.14.4. It compiled fine and is
>>>> running fine. The vnet doesn't die anymore. I can't yet say whether
>>>> the qemu stop hangs are gone, too.
>>>>
>>>> Obviously, there is something broken with the new UDP handling. Could
>>>> you please analyze this problem? I could test some more patches ...
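(For reference, the revert was done roughly as follows. This is only a
sketch: the commit IDs are placeholders and have to be looked up in the
tree first, e.g. with git log --oneline v4.14 -- net | grep -i ufo.)

git checkout -b ufo-revert v4.14.4
# Revert the three patches in reverse order of application; <sha-13/13>
# etc. are placeholders for the corresponding mainline commit IDs.
git revert <sha-13/13>   # net: Kill NETIF_F_UFO and SKB_GSO_UDP.
git revert <sha-12/13>   # inet: Remove software UFO fragmenting code.
git revert <sha-11/13>   # net: Remove all references to SKB_GSO_UDP.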
>>>
>>> Any chance your VMs were live migrated from a pre-4.14 host kernel?
>>
>> No - the VMs are not live migrated. They are always running on the same
>> host - either with kernel < 4.14 or with kernel 4.14.x.
>
> This is disturbing... unless I'm mistaken, it shouldn't be possible to
> have UFO enabled on a virtio device in a VM booted on a host with 4.14
> kernel.
It is on by default; I have to explicitly switch it off, as described below:
host:
# rebooted to kernel 4.14.x
uname -r
4.14.4-2.1-default
# just checked: bridges on host have disabled ufo w/ 4.14 per default.
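# For completeness, the check was simply this (br0 stands in for my
# bridge name; ethtool here reports the feature as fixed off):
ethtool -k br0 | grep fragm
udp-fragmentation-offload: off [fixed]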
guest:
uname -r
4.9.63-1.2-default # same with 3.10.x
lsmod | grep -e e1000 -e virtio_net
virtio_net             32768  0
virtio                 16384  4 virtio_net,virtio_balloon,virtio_pci,virtio_scsi
virtio_ring            24576  4 virtio_net,virtio_balloon,virtio_pci,virtio_scsi
lspci -vs 00:03.0
00:03.0 Ethernet controller: Red Hat, Inc Virtio network device
        Subsystem: Red Hat, Inc Device 0001
        Physical Slot: 3
        Flags: bus master, fast devsel, latency 0, IRQ 10
        I/O ports at c060 [size=32]
        Memory at febf1000 (32-bit, non-prefetchable) [size=4K]
        Expansion ROM at feb80000 [disabled] [size=256K]
        Capabilities: [40] MSI-X: Enable+ Count=3 Masked-
        Kernel driver in use: virtio-pci
        Kernel modules: virtio_pci
# after ufo was manually turned off on VM boot:
ethtool -k eth0 | grep fragm
udp-fragmentation-offload: off
ethtool -K eth0 ufo on
ethtool -k eth0 | grep fragm
udp-fragmentation-offload: on
ethtool -K eth0 ufo off
ethtool -k eth0 | grep fragm
udp-fragmentation-offload: off
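
For what it's worth, the negotiated feature bits can be inspected from
the guest as well. A sketch, assuming the NIC sits on virtio0 (check
/sys/bus/virtio/devices/); bits 10 and 14 should be
VIRTIO_NET_F_GUEST_UFO and VIRTIO_NET_F_HOST_UFO per the virtio spec:

# The features file is a string of '0'/'1' characters, feature bit 0 first.
cat /sys/bus/virtio/devices/virtio0/features
# Characters 11 and 15 (1-based) are bits 10 (GUEST_UFO) and 14 (HOST_UFO):
cut -c11,15 /sys/bus/virtio/devices/virtio0/features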
>
>>> If this is the case, you should try commit 0c19f846d582 ("net:
>>> accept UFO datagrams from tuntap and packet").
>>
>> It doesn't apply cleanly to 4.14.4.
>>
>>> Or disabling UFO in the guest should
>>> work around the issue.
>>
>> I.e. ethtool -K ethX ufo off for each device / bridge in the VM.
>>
>> Yes, this seems to work. I'll wait and see whether the problem of qemu
>> not being stoppable on shutdown remains.
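
(For the record: what I run in the guest is roughly the following
sketch, which simply walks every interface, bridges included, and
ignores devices that don't support the flag:)

for dev in /sys/class/net/*; do
        ethtool -K "${dev##*/}" ufo off 2>/dev/null || true
done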
>>
>> When will there be a fix for 4.14? It is clearly a regression. Is it
>> possible / a good idea to simply revert the complete patch series
>> "Remove UDP Fragmentation Offload support"?
>
> I cannot give an exact date, but the patch is queued for stable
> (see http://patchwork.ozlabs.org/bundle/davem/stable/?state=* ), so it
> should land in stable 4.14 in the near future (weeks at most).
Which one is it? I couldn't find any patch related to this problem at
first glance.
Thanks,
Andreas