Message-ID: <20181106154048.GB31579@stefanha-x1.localdomain>
Date:   Tue, 6 Nov 2018 15:40:48 +0000
From:   Stefan Hajnoczi <stefanha@...il.com>
To:     Vitaly Mayatskikh <v.mayatskih@...il.com>
Cc:     Jason Wang <jasowang@...hat.com>,
        Paolo Bonzini <pbonzini@...hat.com>, kvm@...r.kernel.org,
        virtualization@...ts.linux-foundation.org,
        linux-kernel@...r.kernel.org, Kevin Wolf <kwolf@...hat.com>,
        "Michael S. Tsirkin" <mst@...hat.com>, den@...tuozzo.com
Subject: Re: [PATCH 0/1] vhost: add vhost_blk driver

On Fri, Nov 02, 2018 at 02:26:00PM -0400, Michael S. Tsirkin wrote:
> On Fri, Nov 02, 2018 at 06:21:22PM +0000, Vitaly Mayatskikh wrote:
> > vhost_blk is a host-side kernel-mode accelerator for virtio-blk. The
> > driver allows a VM to reach near bare-metal disk performance. See IOPS
> > numbers below (fio --rw=randread --bs=4k).
> > 
> > This implementation uses the kiocb interface. It is slightly slower than
> > going directly through bio, but is simpler and also works with disk
> > images placed on a file system.
> > 
> > # fio num-jobs
> > # A: bare metal over block
> > # B: bare metal over file
> > # C: virtio-blk over block
> > # D: virtio-blk over file
> > # E: vhost-blk bio over block
> > # F: vhost-blk kiocb over block
> > # G: vhost-blk kiocb over file
> > #
> > #  A     B     C    D    E     F     G
> > 
> > 1  171k  151k  148k 151k 195k  187k  175k
> > 2  328k  302k  249k 241k 349k  334k  296k
> > 3  479k  437k  179k 174k 501k  464k  404k
> > 4  622k  568k  143k 183k 620k  580k  492k
> > 5  755k  697k  136k 128k 737k  693k  579k
> > 6  887k  808k  131k 120k 830k  782k  640k
> > 7  1004k 926k  126k 131k 926k  863k  693k
> > 8  1099k 1015k 117k 115k 1001k 931k  712k
> > 9  1194k 1119k 115k 111k 1055k 991k  711k
> > 10 1278k 1207k 109k 114k 1130k 1046k 695k
> > 11 1345k 1280k 110k 108k 1119k 1091k 663k
> > 12 1411k 1356k 104k 106k 1201k 1142k 629k
> > 13 1466k 1423k 106k 106k 1260k 1170k 607k
> > 14 1517k 1486k 103k 106k 1296k 1179k 589k
> > 15 1552k 1543k 102k 102k 1322k 1191k 571k
> > 16 1480k 1506k 101k 102k 1346k 1202k 566k
> > 
> > Vitaly Mayatskikh (1):
> >   Add vhost_blk driver
> 
> 
> Thanks!
> Before merging this, I'd like to get some acks from userspace that it's
> actually going to be used - e.g. QEMU block maintainers.
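The benchmark command quoted above ("fio --rw=randread --bs=4k") can be expanded into a full invocation along these lines; the target device, I/O engine, queue depth, and run time are illustrative assumptions, not taken from the original posting:

```shell
# Hypothetical reconstruction of the 4k random-read benchmark.
# /dev/vdb, iodepth, and runtime are placeholders -- the posting only
# specifies --rw=randread --bs=4k and a varying number of jobs.
fio --name=randread-test \
    --filename=/dev/vdb \
    --rw=randread --bs=4k \
    --ioengine=libaio --direct=1 \
    --iodepth=32 \
    --numjobs=4 --group_reporting \
    --time_based --runtime=30
```

The job count (--numjobs) corresponds to the first column of the table above.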

I have CCed Kevin, who is the overall QEMU block layer maintainer.

Also CCing Denis since I think someone was working on a QEMU userspace
multiqueue virtio-blk device for maximum performance.

Previous vhost_blk.ko implementations were essentially the same thing as
QEMU's x-data-plane=on (a dedicated thread using Linux AIO), except that
they used a kernel thread and, in some cases, submitted bios directly.

The performance differences weren't convincing enough to justify
maintaining another code path that loses live migration, I/O throttling,
image file formats, etc. (all the things that QEMU's block layer
supports).

Two changes since then:

1. x-data-plane=on has been replaced with a full trip down QEMU's block
layer (-object iothread,id=iothread0 -device
virtio-blk-pci,iothread=iothread0,...).  It's slower and not truly
multiqueue (yet!).

So from this perspective vhost_blk.ko might be more attractive again, at
least until further QEMU block layer work eliminates the multiqueue and
performance overheads.
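For reference, a minimal QEMU command line using the iothread configuration mentioned above might look like the following; the memory/CPU sizing, image path, and drive options are assumptions for illustration:

```shell
# Sketch of virtio-blk routed through a dedicated QEMU iothread
# (the successor to x-data-plane=on). Paths and sizes are placeholders.
qemu-system-x86_64 \
    -m 4G -smp 4 -enable-kvm \
    -object iothread,id=iothread0 \
    -drive if=none,id=drive0,file=/path/to/disk.img,format=raw,aio=native,cache=none \
    -device virtio-blk-pci,drive=drive0,iothread=iothread0
```

With this setup the device's I/O runs in iothread0 instead of the main loop, but all requests still traverse the full QEMU block layer.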

2. SPDK has become available for users who want the best I/O performance
and are willing to sacrifice CPU cores for polling.

If you want better performance and don't care about QEMU block layer
features, could you use SPDK?  People who are the target market for
vhost_blk.ko would probably be willing to use SPDK and it already
exists...
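For comparison, an SPDK vhost-user-blk setup typically looks like the following; exact RPC command names and paths vary across SPDK versions, so treat this as a sketch of SPDK's documented workflow rather than a recipe:

```shell
# Start the SPDK vhost target (binary path and socket dir are assumptions).
./app/vhost/vhost -S /var/tmp &

# Create an in-memory test bdev and expose it as a vhost-user-blk device.
./scripts/rpc.py bdev_malloc_create -b Malloc0 64 512
./scripts/rpc.py vhost_create_blk_controller vhost.0 Malloc0

# Guests connect over vhost-user, which requires shareable guest memory:
qemu-system-x86_64 \
    -m 1G -enable-kvm \
    -object memory-backend-file,id=mem0,size=1G,mem-path=/dev/shm,share=on \
    -numa node,memdev=mem0 \
    -chardev socket,id=char0,path=/var/tmp/vhost.0 \
    -device vhost-user-blk-pci,chardev=char0
```

The polling cost mentioned above comes from the vhost target, which busy-polls its queues on dedicated host cores.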

From the QEMU userspace perspective, I think the best way to integrate
vhost_blk.ko is to transparently switch to it when possible.  If the
user enables QEMU block layer features that are incompatible with
vhost_blk.ko, then it should fall back to the QEMU block layer
transparently.
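That fallback policy can be sketched as a simple capability check; the feature names and the single can_use_vhost_blk() gate below are hypothetical, purely to illustrate the decision, and do not correspond to actual QEMU options:

```shell
#!/bin/sh
# Hypothetical policy check: fall back to the QEMU block layer whenever a
# requested feature is outside what a vhost_blk.ko fast path could offer.
can_use_vhost_blk() {
    for feature in "$@"; do
        case "$feature" in
            # Features the in-kernel path would not provide:
            live-migration|io-throttling|qcow2|snapshots) return 1 ;;
        esac
    done
    return 0
}

select_backend() {
    if can_use_vhost_blk "$@"; then
        echo "vhost_blk"      # accelerate transparently
    else
        echo "qemu-block"     # fall back, also transparently
    fi
}

select_backend raw            # prints: vhost_blk
select_backend raw qcow2      # prints: qemu-block
```

The point is that the user never chooses a backend explicitly; enabling an incompatible feature silently selects the QEMU block layer.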

I'm not keen on yet another code path with its own set of limitations
and having to educate users about how to make the choice.  But if it can
be integrated transparently as an "accelerator", then it could be
valuable.

Stefan

