Message-ID: <5db593687d2adbecc2f084d17de6d3d3c7deaef5.camel@infradead.org>
Date: Tue, 29 Jun 2021 14:15:45 +0100
From: David Woodhouse <dwmw2@...radead.org>
To: Jason Wang <jasowang@...hat.com>, netdev@...r.kernel.org
Cc: Eugenio Pérez <eperezma@...hat.com>,
Willem de Bruijn <willemb@...gle.com>,
"Michael S.Tsirkin" <mst@...hat.com>
Subject: Re: [PATCH v3 3/5] vhost_net: remove virtio_net_hdr validation, let
tun/tap do it themselves
On Tue, 2021-06-29 at 11:49 +0100, David Woodhouse wrote:
> On Tue, 2021-06-29 at 11:43 +0800, Jason Wang wrote:
> > > The kernel on a c5.metal can transmit (AES128-SHA1) ESP at about
> > > 1.2Gb/s from iperf, as it seems to be doing it all from the iperf
> > > thread.
> > >
> > > Before I started messing with OpenConnect, it could transmit 1.6Gb/s.
> > >
> > > When I pull in the 'stitched' AES+SHA code from OpenSSL instead of
> > > doing the encryption and the HMAC in separate passes, I get to 2.1Gb/s.
> > >
> > > Adding vhost support on top of that takes me to 2.46Gb/s, which is a
> > > decent enough win.
> >
> >
> > Interesting, I think the latency should be improved as well in this
> > case.
>
> I tried using 'ping -i 0.1' to get an idea of latency for the
> interesting VoIP-like case of packets where we have to wake up each
> time.
>
> For the *inbound* case, RX on the tun device followed by TX of the
> replies, I see results like this:
>
> --- 172.16.0.2 ping statistics ---
> 141 packets transmitted, 141 received, 0% packet loss, time 14557ms
> rtt min/avg/max/mdev = 0.380/0.419/0.461/0.024 ms
>
>
> The opposite direction (tun TX then RX) is similar:
>
> --- 172.16.0.1 ping statistics ---
> 295 packets transmitted, 295 received, 0% packet loss, time 30573ms
> rtt min/avg/max/mdev = 0.454/0.545/0.718/0.024 ms
>
>
> Using vhost-net (and TUNSNDBUF of INT_MAX-1 just to avoid XDP), it
> looks like this. Inbound:
>
> --- 172.16.0.2 ping statistics ---
> 139 packets transmitted, 139 received, 0% packet loss, time 14350ms
> rtt min/avg/max/mdev = 0.432/0.578/0.658/0.058 ms
>
> Outbound:
>
> --- 172.16.0.1 ping statistics ---
> 149 packets transmitted, 149 received, 0% packet loss, time 15391ms
> rtt min/avg/max/mdev = 0.496/0.682/0.935/0.036 ms
>
>
> So as I expected, the throughput is better with vhost-net once I get to
> the point of 100% CPU usage in my main thread, because it offloads the
> kernel←→user copies. But latency is somewhat worse.
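
(For reference, the TUNSNDBUF part above is just the standard sndbuf
ioctl on the tun fd. A minimal sketch, not the actual OpenConnect code,
with error handling omitted:)

    #include <limits.h>
    #include <sys/ioctl.h>
    #include <linux/if_tun.h>

    /* Cap the tun sndbuf at INT_MAX - 1 so that, as noted above, the
     * XDP path is not used.  Sketch only. */
    static int tun_limit_sndbuf(int tun_fd)
    {
            int sndbuf = INT_MAX - 1;

            return ioctl(tun_fd, TUNSETSNDBUF, &sndbuf);
    }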
>
> I'm still using select() instead of epoll() which would give me a
> little back — but only a little, as I only poll on 3-4 fds, and more to
> the point it'll give me just as much win in the non-vhost case too, so
> it won't make much difference to the vhost vs. non-vhost comparison.
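
(An epoll version of that loop would just be something along these
lines; fds[]/nfds, timeout_ms and handle_fd() are placeholders, not
OpenConnect's real mainloop:)

    #include <sys/epoll.h>

    /* Poll a handful of fds with epoll instead of select().
     * All of the names here are placeholders. */
    static void mainloop_epoll(const int *fds, int nfds, int timeout_ms,
                               void (*handle_fd)(int fd))
    {
            struct epoll_event ev = { .events = EPOLLIN }, evs[8];
            int i, n, ep = epoll_create1(0);

            for (i = 0; i < nfds; i++) {
                    ev.data.fd = fds[i];    /* tun fd, DTLS fd, eventfds... */
                    epoll_ctl(ep, EPOLL_CTL_ADD, fds[i], &ev);
            }

            for (;;) {
                    n = epoll_wait(ep, evs, 8, timeout_ms);
                    for (i = 0; i < n; i++)
                            handle_fd(evs[i].data.fd);
            }
    }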
>
> Perhaps I really should look into that trick of "if the vhost TX ring
> is already stopped and would need a kick, and I only have a few packets
> in the batch, just write them directly to /dev/net/tun".
>
> I'm wondering how that optimisation would translate to actual guests,
> which presumably have the same problem. Perhaps it would be an
> operation on the vhost fd, which ends up processing the ring right
> there in the context of *that* process instead of doing a wakeup?
That turns out to be fairly trivial:
https://gitlab.com/openconnect/openconnect/-/commit/668ff1399541be927
It gives me back about half the latency I lost by moving to vhost-net:
--- 172.16.0.2 ping statistics ---
133 packets transmitted, 133 received, 0% packet loss, time 13725ms
rtt min/avg/max/mdev = 0.437/0.510/0.621/0.035 ms
--- 172.16.0.1 ping statistics ---
133 packets transmitted, 133 received, 0% packet loss, time 13728ms
rtt min/avg/max/mdev = 0.541/0.605/0.658/0.022 ms
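
For reference, the shape of that userspace change is roughly as below
(a sketch of the approach only, not the commit itself; struct pkt,
SMALL_BATCH and the tx_ring_*() helpers are made-up names, see the
commit for the real thing):

    #include <unistd.h>

    /* If the vhost TX ring would need a kick anyway and the batch is
     * tiny, skip the ring and write the packets straight to the tun
     * fd.  pkt->data is assumed to start with the virtio_net_hdr the
     * tun fd expects. */
    static void send_batch(int tun_fd, struct pkt **pkts, int npkts)
    {
            int i;

            if (npkts <= SMALL_BATCH && tx_ring_needs_kick()) {
                    for (i = 0; i < npkts; i++)
                            write(tun_fd, pkts[i]->data, pkts[i]->len);
                    return;
            }

            for (i = 0; i < npkts; i++)
                    tx_ring_queue(pkts[i]);
            if (tx_ring_needs_kick())
                    tx_ring_kick();
    }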
I think it's definitely worth looking at whether we can/should do
something roughly equivalent for actual guests.