Message-ID: <baca6444-0ef8-2b3b-4ff9-84737f4ecd32@redhat.com>
Date:   Mon, 5 Nov 2018 10:51:52 +0800
From:   Jason Wang <jasowang@...hat.com>
To:     Vitaly Mayatskikh <v.mayatskih@...il.com>,
        "Michael S . Tsirkin" <mst@...hat.com>
Cc:     kvm@...r.kernel.org, virtualization@...ts.linux-foundation.org,
        netdev@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 0/1] vhost: parallel virtqueue handling


On 2018/11/3 12:07 AM, Vitaly Mayatskikh wrote:
> Hi,
>
> I stumbled across poor performance of virtio-blk while working on a
> high-performance network storage protocol. Moving virtio-blk's host
> side to kernel did increase single queue IOPS, but multiqueue disk
> still was not scaling well. It turned out that vhost handles events
> from all virtio queues in one helper thread, and that's pretty much a
> big serialization point.
>
> The following patch enables events handling in per-queue thread and
> increases IO concurrency, see IOPS numbers:


Thanks a lot for the patches. Here are some thoughts:

- This is not the first attempt to parallelize vhost workers, so we 
need a comparison among them:

1) Multiple vhost workers from Anthony, 
https://www.spinics.net/lists/netdev/msg189432.html

2) ELVIS from IBM, http://www.mulix.org/pubs/eli/elvis-h319.pdf

3) CMWQ from Bandan, 
http://www.linux-kvm.org/images/5/52/02x08-Aspen-Bandan_Das-vhost-sharing_is_better.pdf

- vhost-net uses a different multiqueue model: each vhost device on the 
host deals with only a specific queue pair instead of a whole device. 
This allows great flexibility, and multiqueue could be implemented 
without touching vhost code.
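
For concreteness, a minimal userspace sketch of that model, assuming 
the standard ioctls from <linux/vhost.h>; error handling, memory-table 
and vring setup are elided, and the helper name is made up here:

#include <fcntl.h>
#include <sys/ioctl.h>
#include <linux/vhost.h>

/* Sketch only: open one /dev/vhost-net instance per queue pair, so each
 * kernel worker owns exactly one RX/TX pair. A real setup also needs
 * VHOST_SET_MEM_TABLE and the VHOST_SET_VRING_* ioctls for each ring. */
static int vhost_net_open_pair(int tap_fd)
{
	int vhost_fd = open("/dev/vhost-net", O_RDWR);
	struct vhost_vring_file rx = { .index = 0, .fd = tap_fd };
	struct vhost_vring_file tx = { .index = 1, .fd = tap_fd };

	ioctl(vhost_fd, VHOST_SET_OWNER);   /* spawns this device's worker */
	/* ... VHOST_SET_MEM_TABLE, VHOST_SET_VRING_* elided ... */
	ioctl(vhost_fd, VHOST_NET_SET_BACKEND, &rx);
	ioctl(vhost_fd, VHOST_NET_SET_BACKEND, &tx);
	return vhost_fd;                    /* one fd == one queue pair */
}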

- The current vhost-net implementation depends heavily on the 
single-thread model, especially in its busy-polling code, which would 
be broken by this attempt. If we decide to go this way, that needs to 
be fixed. And we do need performance results for networking.
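
To see where that assumption lives, here is the dispatch path, 
simplified from drivers/vhost/vhost.c around v4.19 (quoted from memory, 
so check the tree for the exact form):

/* Work from every vq of a device is funneled into the one dev->work_list
 * drained by the single dev->worker kthread: the serialization point the
 * cover letter describes, and the single-consumer assumption that the
 * busy-polling code relies on. */
void vhost_work_queue(struct vhost_dev *dev, struct vhost_work *work)
{
	if (!test_and_set_bit(VHOST_WORK_QUEUED, &work->flags)) {
		/* Lockless add is safe: only one consumer, the worker. */
		llist_add(&work->node, &dev->work_list);
		wake_up_process(dev->worker);
	}
}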

- Having more threads is not necessarily a win; at the least I believe 
we need a module parameter or some other mechanism to control the 
number of threads.
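
Such a knob could be as small as the following sketch, using the 
standard module_param() machinery; the parameter name and default are 
illustrative, not from the patch:

/* Hypothetical knob; name and default are made up for illustration. */
static int workers_per_dev = 1;
module_param(workers_per_dev, int, 0444);
MODULE_PARM_DESC(workers_per_dev,
		 "Number of worker threads per vhost device");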


Thanks


>
> num-queues  bare metal  virtio-blk  vhost-blk   (IOPS)
>
> 1           171k        148k        195k
> 2           328k        249k        349k
> 3           479k        179k        501k
> 4           622k        143k        620k
> 5           755k        136k        737k
> 6           887k        131k        830k
> 7           1004k       126k        926k
> 8           1099k       117k        1001k
> 9           1194k       115k        1055k
> 10          1278k       109k        1130k
> 11          1345k       110k        1119k
> 12          1411k       104k        1201k
> 13          1466k       106k        1260k
> 14          1517k       103k        1296k
> 15          1552k       102k        1322k
> 16          1480k       101k        1346k
>
> Vitaly Mayatskikh (1):
>    vhost: add per-vq worker thread
>
>   drivers/vhost/vhost.c | 123 +++++++++++++++++++++++++++++++-----------
>   drivers/vhost/vhost.h |  11 +++-
>   2 files changed, 100 insertions(+), 34 deletions(-)
>
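
For readers following along in the archive, a minimal sketch of the 
per-vq direction the patch takes, mirroring vhost_work_queue() above; 
the function and field names (vq->work_list, vq->worker) are 
hypothetical, so see the actual diff for the real ones:

/* Hypothetical per-vq dispatch: one work list and one kthread per
 * virtqueue instead of per device. Names are illustrative only. */
void vhost_vq_work_queue(struct vhost_virtqueue *vq, struct vhost_work *work)
{
	if (!test_and_set_bit(VHOST_WORK_QUEUED, &work->flags)) {
		llist_add(&work->node, &vq->work_list);
		wake_up_process(vq->worker); /* per-vq thread, not per-device */
	}
}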
