Date:   Sun, 04 Nov 2018 13:57:37 +0200
From:   Maxim Levitsky <mlevitsk@...hat.com>
To:     Vitaly Mayatskikh <v.mayatskih@...il.com>,
        "Michael S . Tsirkin" <mst@...hat.com>
Cc:     Jason Wang <jasowang@...hat.com>,
        Paolo Bonzini <pbonzini@...hat.com>, kvm@...r.kernel.org,
        virtualization@...ts.linux-foundation.org,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH 0/1] vhost: add vhost_blk driver

On Fri, 2018-11-02 at 18:21 +0000, Vitaly Mayatskikh wrote:
> vhost_blk is a host-side kernel mode accelerator for virtio-blk. The
> driver allows VM to reach a near bare-metal disk performance. See IOPS
> numbers below (fio --rw=randread --bs=4k).
> 
> This implementation uses kiocb interface. It is slightly slower than
> going directly through bio, but is simpler and also works with disk
> images placed on a file system.
> 
> # fio num-jobs
> # A: bare metal over block
> # B: bare metal over file
> # C: virtio-blk over block
> # D: virtio-blk over file
> # E: vhost-blk bio over block
> # F: vhost-blk kiocb over block
> # G: vhost-blk kiocb over file
> #
> #  A     B     C    D    E     F     G
> 

Hi!
I am also working in this area, so I am very interested in this driver.
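
For my own understanding: I assume the kiocb submission path looks
roughly like the loop driver's aio path in drivers/block/loop.c, i.e.
something like the sketch below. This is just my guess at the shape of
it; the struct and function names here are made up by me, and I am
using the current 3-argument ki_complete signature:

	#include <linux/fs.h>
	#include <linux/uio.h>

	struct vhost_blk_req {
		struct kiocb iocb;	/* embedded kiocb for the in-flight I/O */
		struct iov_iter iter;	/* guest buffers wrapped in an iterator */
		/* ... */
	};

	/* runs when the backing file completes the I/O */
	static void vhost_blk_io_done(struct kiocb *iocb, long ret, long ret2)
	{
		struct vhost_blk_req *req =
			container_of(iocb, struct vhost_blk_req, iocb);
		/* complete the virtio request and signal the guest here */
	}

	static ssize_t vhost_blk_submit(struct file *file,
					struct vhost_blk_req *req,
					loff_t pos, bool is_write)
	{
		req->iocb.ki_filp = file;
		req->iocb.ki_pos = pos;
		req->iocb.ki_flags = iocb_flags(file);
		req->iocb.ki_complete = vhost_blk_io_done;	/* makes it async */

		return is_write ?
			call_write_iter(file, &req->iocb, &req->iter) :
			call_read_iter(file, &req->iocb, &req->iter);
	}

Is that more or less what the driver does?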

> 1  171k  151k  148k 151k 195k  187k  175k
If I understand correctly, this is with fio --numjobs=1?
It looks like you are getting better-than-bare-metal performance in
E, F, and G (the vhost-blk cases, in fact). Is this correct?

Could you share the full fio command line you have used?
Which IO device did you use for the test? NVMe?
Which system (CPU model, number of cores, etc.) did you test on?
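
Just so I can try to reproduce the numbers, my naive guess at the full
command line would be something like the following (the ioengine,
iodepth, and target device here are pure guesses on my part):

	fio --name=randread --rw=randread --bs=4k --numjobs=1 \
	    --ioengine=libaio --direct=1 --iodepth=32 \
	    --filename=/dev/nvme0n1 --runtime=60 --time_based

but having the exact invocation would be very helpful.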

Best regards,
      Maxim Levitsky

