Open Source and information security mailing list archives
 
Message-Id: <c042686f-db95-514d-8bd8-92b72e4e087a@de.ibm.com>
Date:   Mon, 5 Nov 2018 16:48:13 +0100
From:   Christian Borntraeger <borntraeger@...ibm.com>
To:     Jason Wang <jasowang@...hat.com>,
        Vitaly Mayatskikh <v.mayatskih@...il.com>,
        "Michael S . Tsirkin" <mst@...hat.com>
Cc:     Paolo Bonzini <pbonzini@...hat.com>,
        Stefan Hajnoczi <stefanha@...hat.com>,
        linux-kernel@...r.kernel.org, kvm@...r.kernel.org,
        virtualization@...ts.linux-foundation.org
Subject: Re: [PATCH 0/1] vhost: add vhost_blk driver



On 11/05/2018 04:00 AM, Jason Wang wrote:
> 
> On 2018/11/3 at 2:21 AM, Vitaly Mayatskikh wrote:
>> vhost_blk is a host-side kernel mode accelerator for virtio-blk. The
>> driver allows VM to reach a near bare-metal disk performance. See IOPS
>> numbers below (fio --rw=randread --bs=4k).
>>
>> This implementation uses kiocb interface. It is slightly slower than
>> going directly through bio, but is simpler and also works with disk
>> images placed on a file system.
>>
>> # fio num-jobs
>> # A: bare metal over block
>> # B: bare metal over file
>> # C: virtio-blk over block
>> # D: virtio-blk over file
>> # E: vhost-blk bio over block
>> # F: vhost-blk kiocb over block
>> # G: vhost-blk kiocb over file
>> #
>> #  A     B     C    D    E     F     G
>>
>> 1  171k  151k  148k 151k 195k  187k  175k
>> 2  328k  302k  249k 241k 349k  334k  296k
>> 3  479k  437k  179k 174k 501k  464k  404k
>> 4  622k  568k  143k 183k 620k  580k  492k
>> 5  755k  697k  136k 128k 737k  693k  579k
>> 6  887k  808k  131k 120k 830k  782k  640k
>> 7  1004k 926k  126k 131k 926k  863k  693k
>> 8  1099k 1015k 117k 115k 1001k 931k  712k
>> 9  1194k 1119k 115k 111k 1055k 991k  711k
>> 10 1278k 1207k 109k 114k 1130k 1046k 695k
>> 11 1345k 1280k 110k 108k 1119k 1091k 663k
>> 12 1411k 1356k 104k 106k 1201k 1142k 629k
>> 13 1466k 1423k 106k 106k 1260k 1170k 607k
>> 14 1517k 1486k 103k 106k 1296k 1179k 589k
>> 15 1552k 1543k 102k 102k 1322k 1191k 571k
>> 16 1480k 1506k 101k 102k 1346k 1202k 566k
>>
>> Vitaly Mayatskikh (1):
>>    Add vhost_blk driver
>>
>>   drivers/vhost/Kconfig  |  13 ++
>>   drivers/vhost/Makefile |   3 +
>>   drivers/vhost/blk.c    | 510 +++++++++++++++++++++++++++++++++++++++++
>>   3 files changed, 526 insertions(+)
>>   create mode 100644 drivers/vhost/blk.c
>>
> 
> Hi:
> 
> Thanks for the patches.
> 
> This is not the first attempt for having vhost-blk:
> 
> - Badari's version: https://lwn.net/Articles/379864/
> 
> - Asias' version: https://lwn.net/Articles/519880/
> 
> It would be better to describe the differences (kiocb vs bio? performance?). E.g., if my memory is correct, Asias said it didn't give much improvement compared with userspace qemu.
> 
> More importantly, I believe we tend to use virtio-scsi nowadays. So what are the advantages of vhost-blk over vhost-scsi?


For the record, we still use virtio-blk a lot. Given that new features like discard/write
zeroes support keep being added, it seems that others do as well.
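For context, the benchmark quoted above (fio --rw=randread --bs=4k, with the job count swept from 1 to 16) can be expressed as a fio job file. The following is a plausible reconstruction, not the exact configuration used: the target device, I/O engine, queue depth, and runtime are all assumptions, since the original post only specifies the rw and bs parameters.

```ini
; Hypothetical fio job file approximating the quoted benchmark.
; Only rw=randread and bs=4k come from the original post;
; everything else here is an assumed, typical setting.
[global]
rw=randread
bs=4k
direct=1             ; bypass the page cache, usual for device benchmarks
ioengine=libaio      ; assumed; io_uring is another common choice
iodepth=32           ; assumed queue depth
runtime=60
time_based=1
group_reporting=1

[randread-job]
filename=/dev/vdb    ; assumed virtio-blk device inside the guest
numjobs=1            ; swept from 1 to 16 to produce the table above
```

Columns A/B would point filename at the host's block device or image file directly; C through G would run the same job inside the guest against the virtio-blk or vhost-blk device.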
