Message-ID: <CAGF4SLhvhdO4UXOwmv-65VJ0suid8xFm819_tZa+caUCuv+HRQ@mail.gmail.com>
Date: Sun, 4 Nov 2018 22:40:34 -0500
From: Vitaly Mayatskih <v.mayatskih@...il.com>
To: Jason Wang <jasowang@...hat.com>
Cc: "Michael S . Tsirkin" <mst@...hat.com>, kvm@...r.kernel.org,
virtualization@...ts.linux-foundation.org, netdev@...r.kernel.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH 0/1] vhost: parallel virtqueue handling
On Sun, Nov 4, 2018 at 9:52 PM Jason Wang <jasowang@...hat.com> wrote:
> Thanks a lot for the patches. Here are some thoughts:
>
> - This is not the first attempt at parallelizing vhost workers,
> so we need a comparison among them.
>
> 1) Multiple vhost workers from Anthony,
> https://www.spinics.net/lists/netdev/msg189432.html
>
> 2) ELVIS from IBM, http://www.mulix.org/pubs/eli/elvis-h319.pdf
>
> 3) CMWQ from Bandan,
> http://www.linux-kvm.org/images/5/52/02x08-Aspen-Bandan_Das-vhost-sharing_is_better.pdf
>
> - vhost-net uses a different multiqueue model. Each vhost device on the
> host deals only with a specific queue pair instead of a whole device.
> This allows great flexibility, and multiqueue could be implemented
> without touching the vhost code.
I'm in no way a network expert, but I think this is because it follows
the combined queue model of the NIC. Having a TX/RX queue pair looks
like a natural choice for this case.
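
(To make sure I understand that model, here's a minimal sketch of what
userspace does today, assuming the standard vhost ioctls and leaving out
the memory-table/vring setup and error handling. Each queue pair gets its
own /dev/vhost-net instance, hence its own worker, with no changes to
vhost itself.)

#include <fcntl.h>
#include <sys/ioctl.h>
#include <linux/vhost.h>

/* One vhost-net device (and thus one worker thread) per TX/RX pair. */
static int setup_queue_pair(int tap_fd)
{
        int vhost_fd = open("/dev/vhost-net", O_RDWR);
        struct vhost_vring_file backend;
        unsigned int i;

        /* Creates the per-device vhost worker owned by this process. */
        ioctl(vhost_fd, VHOST_SET_OWNER, NULL);

        /* Each device only ever sees vq 0 (RX) and vq 1 (TX). */
        for (i = 0; i < 2; i++) {
                backend.index = i;
                backend.fd = tap_fd;
                ioctl(vhost_fd, VHOST_NET_SET_BACKEND, &backend);
        }
        return vhost_fd;
}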
> - The current vhost-net implementation depends heavily on the assumption
> of a single-thread model, especially its busy polling code. It would be
> broken by this attempt. If we decide to go this way, this needs to be
> fixed. And we do need performance results for networking.
Thanks for noting that; I'm missing a lot of the historical background.
Will check that out.
> - Having more threads is not necessarily a win; at the least we need a
> module parameter or some other means to control the number of threads,
> I believe.
I agree I didn't think fully about other cases, but for disk it is
already under control via QEMU's num-queues disk parameter.
There's a certain saturation point beyond which adding more threads does
not yield much more performance. For my environment it's about 12
queues.
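
For illustration only (the drive name d0 is made up):

  -device virtio-blk-pci,drive=d0,num-queues=12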
So, how does this sound: the default behaviour is 1 worker per vhost
device, and if the user needs a per-vq worker, they issue a new
VHOST_SET_ ioctl?
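
Something along these lines, purely as a sketch of the shape of the
interface (the ioctl name, number and struct below are made up; nothing
like them exists in linux/vhost.h today):

#include <linux/vhost.h>        /* for VHOST_VIRTIO */

/* Hypothetical opt-in: give one virtqueue its own worker thread. */
struct vhost_vring_worker {
        unsigned int index;     /* virtqueue to detach into its own worker */
};

#define VHOST_SET_VRING_WORKER \
        _IOW(VHOST_VIRTIO, 0x80, struct vhost_vring_worker)

QEMU (or whoever owns the vhost fd) would call it once per vq it wants
detached; without the call, behaviour stays exactly as it is now.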
--
wbr, Vitaly