Date:   Wed, 30 Jun 2021 12:39:03 +0800
From:   Jason Wang <jasowang@...hat.com>
To:     David Woodhouse <dwmw2@...radead.org>, netdev@...r.kernel.org
Cc:     Eugenio Pérez <eperezma@...hat.com>,
        Willem de Bruijn <willemb@...gle.com>,
        "Michael S.Tsirkin" <mst@...hat.com>
Subject: Re: [PATCH v3 3/5] vhost_net: remove virtio_net_hdr validation, let
 tun/tap do it themselves


On 2021/6/29 6:49 PM, David Woodhouse wrote:
> On Tue, 2021-06-29 at 11:43 +0800, Jason Wang wrote:
>>> The kernel on a c5.metal can transmit (AES128-SHA1) ESP at about
>>> 1.2Gb/s from iperf, as it seems to be doing it all from the iperf
>>> thread.
>>>
>>> Before I started messing with OpenConnect, it could transmit 1.6Gb/s.
>>>
>>> When I pull in the 'stitched' AES+SHA code from OpenSSL instead of
>>> doing the encryption and the HMAC in separate passes, I get to 2.1Gb/s.
>>>
>>> Adding vhost support on top of that takes me to 2.46Gb/s, which is a
>>> decent enough win.
>>
>> Interesting, I think the latency should be improved as well in this
>> case.
> I tried using 'ping -i 0.1' to get an idea of latency for the
> interesting VoIP-like case of packets where we have to wake up each
> time.
>
> For the *inbound* case, RX on the tun device followed by TX of the
> replies, I see results like this:
>
>       --- 172.16.0.2 ping statistics ---
>       141 packets transmitted, 141 received, 0% packet loss, time 14557ms
>       rtt min/avg/max/mdev = 0.380/0.419/0.461/0.024 ms
>
>
> The opposite direction (tun TX then RX) is similar:
>
>       --- 172.16.0.1 ping statistics ---
>       295 packets transmitted, 295 received, 0% packet loss, time 30573ms
>       rtt min/avg/max/mdev = 0.454/0.545/0.718/0.024 ms
>
>
> Using vhost-net (and TUNSNDBUF of INT_MAX-1 just to avoid XDP), it
> looks like this. Inbound:
>
>       --- 172.16.0.2 ping statistics ---
>       139 packets transmitted, 139 received, 0% packet loss, time 14350ms
>       rtt min/avg/max/mdev = 0.432/0.578/0.658/0.058 ms
>
> Outbound:
>
>       --- 172.16.0.1 ping statistics ---
>       149 packets transmitted, 149 received, 0% packet loss, time 15391ms
>       rtt min/avg/max/mdev = 0.496/0.682/0.935/0.036 ms
>
>
> So as I expected, the throughput is better with vhost-net once I get to
> the point of 100% CPU usage in my main thread, because it offloads the
> kernel←→user copies. But latency is somewhat worse.
>
> I'm still using select() instead of epoll() which would give me a
> little back — but only a little, as I only poll on 3-4 fds, and more to
> the point it'll give me just as much win in the non-vhost case too, so
> it won't make much difference to the vhost vs. non-vhost comparison.
>
> Perhaps I really should look into that trick of "if the vhost TX ring
> is already stopped and would need a kick, and I only have a few packets
> in the batch, just write them directly to /dev/net/tun".


That should work for low throughput.
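
For concreteness, a rough userspace sketch of that fallback. This is only
illustrative: queue_on_tx_ring(), SMALL_BATCH_LIMIT and the ring/fd
arguments are assumed names, not OpenConnect or kernel APIs, and it
assumes the TX ring is currently idle so ordering is not a concern.

#include <stdint.h>
#include <stddef.h>
#include <unistd.h>
#include <linux/virtio_ring.h>  /* struct vring_used, VRING_USED_F_NO_NOTIFY */

#define SMALL_BATCH_LIMIT 4     /* assumed threshold */

struct pkt { const void *buf; size_t len; };

/* Assumed helper: place the packets on the vhost TX vring. */
static void queue_on_tx_ring(struct pkt *pkts, int n) { (void)pkts; (void)n; }

static void send_batch(int tun_fd, int kick_fd,
                       const struct vring_used *used,
                       struct pkt *pkts, int n)
{
        /* If vhost would need a kick anyway and the batch is tiny, the
         * wakeup costs more than the copy: write straight to the tun fd.
         */
        if (n <= SMALL_BATCH_LIMIT && !(used->flags & VRING_USED_F_NO_NOTIFY)) {
                for (int i = 0; i < n; i++)
                        write(tun_fd, pkts[i].buf, pkts[i].len);
                return;
        }

        /* Otherwise queue on the TX vring and kick the vhost worker. */
        queue_on_tx_ring(pkts, n);
        uint64_t one = 1;
        write(kick_fd, &one, sizeof(one));
}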


>
> I'm wondering how that optimisation would translate to actual guests,
> which presumably have the same problem. Perhaps it would be an
> operation on the vhost fd, which ends up processing the ring right
> there in the context of *that* process instead of doing a wakeup?


It might improve the latency in an ideal case, but there are several possible issues:

1) this will block the vCPU from running until the send is done
2) copy_from_user() may sleep, which will block the vCPU thread further
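
Purely for illustration, the kind of operation being floated might look
like this from the VMM side. VHOST_NET_PROCESS_TX is invented here for
the sketch; no such ioctl exists, and the issues above would apply to
the time spent inside it.

#include <stdint.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/vhost.h>

/* Hypothetical: "process the TX ring synchronously in the caller's
 * context instead of waking the vhost worker".
 */
#define VHOST_NET_PROCESS_TX _IO(VHOST_VIRTIO, 0x7f)

static void kick_or_process(int vhost_fd, int kick_fd, int small_batch)
{
        if (small_batch) {
                /* The calling thread (e.g. the vCPU thread) is blocked for
                 * the duration, including any copy_from_user() that sleeps.
                 */
                ioctl(vhost_fd, VHOST_NET_PROCESS_TX);
        } else {
                /* Normal path: signal the kick eventfd and let the vhost
                 * worker thread do the processing asynchronously.
                 */
                uint64_t one = 1;
                write(kick_fd, &one, sizeof(one));
        }
}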


>
> FWIW if I pull in my kernel patches and stop working around those bugs,
> enabling the TX XDP path and dropping the virtio-net header that I
> don't need, I get some of that latency back:
>
>       --- 172.16.0.2 ping statistics ---
>       151 packets transmitted, 151 received, 0% packet loss, time 15599ms
>       rtt min/avg/max/mdev = 0.372/0.550/0.661/0.061 ms
>
>       --- 172.16.0.1 ping statistics ---
>       214 packets transmitted, 214 received, 0% packet loss, time 22151ms
>       rtt min/avg/max/mdev = 0.453/0.626/0.765/0.049 ms
>
> My bandwidth tests go up from 2.46Gb/s with the workarounds, to
> 2.50Gb/s once I enable XDP, and 2.52Gb/s when I drop the virtio-net
> header. But there's no way for userspace to *detect* that those bugs
> are fixed, which makes it hard to ship that version.


Yes, that's sad. One possible way would be to advertise a VHOST_NET_TUN flag via 
VHOST_GET_BACKEND_FEATURES?
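
A rough sketch of how userspace could probe for it, assuming a
hypothetical VHOST_BACKEND_F_NET_TUN_FIXED bit (the flag is only being
proposed here; only the VHOST_GET_BACKEND_FEATURES ioctl itself exists
today):

#include <stdint.h>
#include <stdio.h>
#include <fcntl.h>
#include <sys/ioctl.h>
#include <linux/vhost.h>

/* Hypothetical feature bit, not in the UAPI. */
#define VHOST_BACKEND_F_NET_TUN_FIXED 63

int main(void)
{
        int fd = open("/dev/vhost-net", O_RDWR);
        if (fd < 0) {
                perror("open /dev/vhost-net");
                return 1;
        }

        uint64_t features = 0;
        if (ioctl(fd, VHOST_GET_BACKEND_FEATURES, &features) < 0) {
                perror("VHOST_GET_BACKEND_FEATURES");
                return 1;
        }

        if (features & (1ULL << VHOST_BACKEND_F_NET_TUN_FIXED))
                printf("fixed tun TX path advertised, drop the workarounds\n");
        else
                printf("fall back to the workarounds\n");
        return 0;
}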

Thanks


>
