Date:   Fri, 8 Dec 2017 21:44:24 +0100
From:   Andreas Hartmann <andihartmann@...19freenet.de>
To:     Willem de Bruijn <willemdebruijn.kernel@...il.com>,
        Michal Kubecek <mkubecek@...e.cz>
Cc:     Jason Wang <jasowang@...hat.com>,
        David Miller <davem@...emloft.net>,
        Network Development <netdev@...r.kernel.org>
Subject: Re: Linux 4.14 - regression: broken tun/tap / bridge network with
 virtio - bisected

On 12/08/2017 at 09:11 PM Andreas Hartmann wrote:
> On 12/08/2017 at 05:04 PM Willem de Bruijn wrote:
>> On Fri, Dec 8, 2017 at 6:40 AM, Michal Kubecek <mkubecek@...e.cz> wrote:
>>> On Fri, Dec 08, 2017 at 11:31:50AM +0100, Andreas Hartmann wrote:
>>>> On 12/08/2017 at 09:47 AM Michal Kubecek wrote:
>>>>> On Fri, Dec 08, 2017 at 08:21:16AM +0100, Andreas Hartmann wrote:
>>>>>>
>>>>>> All my VMs are using virtio_net. BTW: I couldn't see the problems
>>>>>> (sometimes, the VM couldn't be stopped at all) when all my VMs used
>>>>>> e1000 as the interface instead.
>>>>>>
>>>>>> This finding pretty much matches the responsible UDP packet which
>>>>>> caused the stall. I already mentioned it here [2].
>>>>>>
>>>>>> To prove it, I reverted the following patches from the series
>>>>>> "[PATCH v2 RFC 0/13] Remove UDP Fragmentation Offload support" [3]:
>>>>>>
>>>>>> 11/13 [v2,RFC,11/13] net: Remove all references to SKB_GSO_UDP. [4]
>>>>>> 12/13 [v2,RFC,12/13] inet: Remove software UFO fragmenting code. [5]
>>>>>> 13/13 [v2,RFC,13/13] net: Kill NETIF_F_UFO and SKB_GSO_UDP. [6]
>>>>>>
>>>>>> and applied them to Linux 4.14.4. It compiled fine and is running
>>>>>> fine. The vnet doesn't die anymore. Yet, I can't say whether the
>>>>>> qemu stop hangs are gone, too.
>>>>>>
>>>>>> Obviously, there is something broken with the new UDP handling.
>>>>>> Could you please analyze this problem? I could test some more
>>>>>> patches ...
>>>>>
>>>>> Any chance your VMs were live migrated from a pre-4.14 host kernel?
>>>>
>>>> No - the VMs are not live migrated. They are always running on the same
>>>> host - either with kernel < 4.14 or with kernel 4.14.x.
>>>
>>> This is disturbing... unless I'm mistaken, it shouldn't be possible to
>>> have UFO enabled on a virtio device in a VM booted on a host with a
>>> 4.14 kernel.
>>
>> Indeed. When working on that revert patch I verified that UFO in
>> the guest virtio_net was off before the revert patch and on after it.
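(For reference, one way to check the guest-side state is ethtool; the
interface name eth0 here is just an example:

    ethtool -k eth0 | grep udp-fragmentation-offload

It reports whether UFO is currently enabled on the virtio NIC.)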
>>
>> Qemu should check host support with tap_probe_has_ufo
>> before advertising support to the guest. Indeed, this check is exactly
>> what broke live migration in virtio_net_load_device at
>>
>>     if (qemu_get_byte(f) && !peer_has_ufo(n)) {
>>         error_report("virtio-net: saved image requires TUN_F_UFO support");
>>         return -1;
>>     }
>>
>> Which follows
>>
>>    peer_has_ufo
>>      qemu_has_ufo
>>        tap_has_ufo
>>          s->has_ufo
>>
>> where s->has_ufo was set by tap_probe_has_ufo in net_tap_fd_init.
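(For reference, tap_probe_has_ufo essentially tries to enable UFO on the
tap fd and reports whether the host kernel accepts it. Roughly,
paraphrased from qemu's net/tap-linux.c, so details may differ by
version:

    /* Probe host UFO support: ask the tap device to enable it. */
    int tap_probe_has_ufo(int fd)
    {
        unsigned offload = TUN_F_CSUM | TUN_F_UFO;

        /* A 4.14 host should reject TUN_F_UFO here. */
        if (ioctl(fd, TUNSETOFFLOAD, offload) < 0)
            return 0;

        return 1;
    }

So on a 4.14 host s->has_ufo should end up 0 and the guest should never
see UFO advertised.)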
>>
>> Now, checking my qemu git branch, I ran a pretty old 2.7.0-rc3. But this
>> codepath does not seem to have changed between then and 2.10.1.
>>
>> I cherry-picked the revert onto 4.14.3. It did not apply cleanly, but the
>> fix-up wasn't too hard. Compiled and booted, but untested otherwise. At
>>
>>   https://github.com/wdebruij/linux/commits/v4.14.3-aargh-ufo
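(In case anyone else wants to test the same branch, it can be fetched
directly into an existing kernel tree, e.g.:

    git fetch https://github.com/wdebruij/linux.git v4.14.3-aargh-ufo
    git checkout FETCH_HEAD
)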
> 
> I'm just running it at the moment. I haven't faced any network hang so
> far - although the critical UDP packets have gone through.
> Therefore: looks nice.

Well, the patch does not fix the hanging VMs, which have been shut down
but can't be killed any more.
Because of the stack trace

[<ffffffffc0d0e3c5>] vhost_net_ubuf_put_and_wait+0x35/0x60 [vhost_net]
[<ffffffffc0d0f264>] vhost_net_ioctl+0x304/0x870 [vhost_net]
[<ffffffff9b25460f>] do_vfs_ioctl+0x8f/0x5c0
[<ffffffff9b254bb4>] SyS_ioctl+0x74/0x80
[<ffffffff9b00365b>] do_syscall_64+0x5b/0x100
[<ffffffff9b78e7ab>] entry_SYSCALL64_slow_path+0x25/0x25
[<ffffffffffffffff>] 0xffffffffffffffff

I was hoping that the problems could be related - but that seems not to
be the case.
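
For context: if I read the code correctly, vhost_net_ubuf_put_and_wait
blocks until the last reference on the outstanding zerocopy buffers has
been dropped. Roughly, paraphrased from drivers/vhost/net.c (details may
differ by kernel version):

    static void vhost_net_ubuf_put_and_wait(struct vhost_net_ubuf_ref *ubufs)
    {
        /* Drop our own reference ... */
        vhost_net_ubuf_put(ubufs);
        /* ... and sleep until every in-flight zerocopy skb has
         * signalled completion and dropped its reference, too. */
        wait_event(ubufs->wait, !atomic_read(&ubufs->refcount));
    }

So if a zerocopy skb never completes - e.g. because it is stuck somewhere
in the host stack - the refcount never reaches zero and the ioctl blocks
forever, which would explain the unkillable qemu process.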

Does anybody have any idea what happened here and how to analyze / fix it?


Thanks,
Andreas
