Message-ID: <20190321161352.GA21682@stefanha-x1.localdomain>
Date: Thu, 21 Mar 2019 16:13:52 +0000
From: Stefan Hajnoczi <stefanha@...il.com>
To: Maxim Levitsky <mlevitsk@...hat.com>
Cc: linux-nvme@...ts.infradead.org, linux-kernel@...r.kernel.org,
kvm@...r.kernel.org, Jens Axboe <axboe@...com>,
Alex Williamson <alex.williamson@...hat.com>,
Keith Busch <keith.busch@...el.com>,
Christoph Hellwig <hch@....de>,
Sagi Grimberg <sagi@...mberg.me>,
Kirti Wankhede <kwankhede@...dia.com>,
"David S . Miller" <davem@...emloft.net>,
Mauro Carvalho Chehab <mchehab+samsung@...nel.org>,
Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
Wolfram Sang <wsa@...-dreams.de>,
Nicolas Ferre <nicolas.ferre@...rochip.com>,
"Paul E . McKenney " <paulmck@...ux.ibm.com>,
Paolo Bonzini <pbonzini@...hat.com>,
Liang Cunming <cunming.liang@...el.com>,
Liu Changpeng <changpeng.liu@...el.com>,
Fam Zheng <fam@...hon.net>, Amnon Ilan <ailan@...hat.com>,
John Ferlan <jferlan@...hat.com>
Subject: Re: your mail
On Tue, Mar 19, 2019 at 04:41:07PM +0200, Maxim Levitsky wrote:
> Date: Tue, 19 Mar 2019 14:45:45 +0200
> Subject: [PATCH 0/9] RFC: NVME VFIO mediated device
>
> Hi everyone!
>
> In this patch series, I would like to introduce my take on the problem of
> making storage virtualization as fast as possible, with an emphasis on low
> latency.
>
> In this patch series I implemented a kernel VFIO-based mediated device that
> allows the user to pass through a partition and/or a whole namespace to a
> guest.
>
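> For illustration, here is a minimal, hypothetical sketch of how such a
> driver can register with the mediated device (mdev) framework. The
> nvme_mdev_* names below are placeholders, not the ones used in the
> actual patches; error handling and the mandatory supported_type_groups
> attribute groups are omitted for brevity:
>
> #include <linux/device.h>
> #include <linux/mdev.h>
> #include <linux/module.h>
>
> /* Per-instance setup: bind a namespace or a partition to the new
>  * mediated device here. */
> static int nvme_mdev_create(struct kobject *kobj, struct mdev_device *mdev)
> {
>         return 0;
> }
>
> /* Per-instance teardown. */
> static int nvme_mdev_remove(struct mdev_device *mdev)
> {
>         return 0;
> }
>
> static const struct mdev_parent_ops nvme_mdev_ops = {
>         .owner  = THIS_MODULE,
>         .create = nvme_mdev_create,
>         .remove = nvme_mdev_remove,
>         /* .read/.write/.ioctl/.mmap would implement the vfio device
>          * model (controller registers, doorbells, etc.). */
> };
>
> /* Called against the NVMe controller's struct device, e.g. from the
>  * host driver's probe path. */
> int nvme_mdev_register(struct device *dev)
> {
>         return mdev_register_device(dev, &nvme_mdev_ops);
> }
>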
> The idea behind this driver is based on the paper you can find at
> https://www.usenix.org/conference/atc18/presentation/peng,
>
> although note that I started the development independently, prior to
> reading this paper.
>
> In addition, the implementation is not based on the code used in the paper,
> as I was not able to obtain its source at that time.
>
> ***Key points about the implementation:***
>
> * A polling kernel thread is used. The polling is stopped after a
> predefined idle timeout (1/2 second by default); a sketch of such a
> loop follows this list.
> Support for a fully interrupt-driven mode is planned, and it already
> shows promising results.
>
> * The guest sees a standard NVMe device - this allows running guests
> with unmodified drivers, for example Windows guests.
>
> * The NVMe device is shared between host and guest.
> That means that even a single namespace can be split between host
> and guest based on different partitions.
>
> * Simple configuration
>
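> As mentioned above, a rough sketch of what the polling loop looks like
> conceptually. This is illustrative only - poll_queues() is a
> hypothetical stand-in for draining the guest submission queues and the
> device completion queues, not a function from the actual patches:
>
> #include <linux/jiffies.h>
> #include <linux/kthread.h>
> #include <linux/sched.h>
>
> /* Stop polling after this much idle time (1/2 second by default). */
> #define POLL_IDLE_TIMEOUT (HZ / 2)
>
> /* Hypothetical: drain the guest submission queues and the device
>  * completion queues; return true if any work was done. */
> static bool poll_queues(void *ctx)
> {
>         return false; /* placeholder */
> }
>
> static int nvme_mdev_poll_thread(void *ctx)
> {
>         unsigned long last_work = jiffies;
>
>         while (!kthread_should_stop()) {
>                 if (poll_queues(ctx)) {
>                         last_work = jiffies;
>                 } else if (time_after(jiffies,
>                                       last_work + POLL_IDLE_TIMEOUT)) {
>                         /* Idle for too long: sleep until the next
>                          * guest doorbell write wakes the thread. */
>                         set_current_state(TASK_INTERRUPTIBLE);
>                         schedule();
>                         last_work = jiffies;
>                 }
>                 cond_resched();
>         }
>         return 0;
> }
>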
> *** Performance ***
>
> Performance was tested on an Intel DC P3700 with a Xeon E5-2620 v2,
> and both latency and throughput are very similar to SPDK.
>
> Soon I will test this on a better server and NVMe device and provide
> more formal performance numbers.
>
> Latency numbers:
> ~80ms - SPDK with fio plugin on the host
> ~84ms - NVMe driver on the host
> ~87ms - mdev-nvme + NVMe driver in the guest

You mentioned the SPDK numbers are with vhost-user-nvme. Have you
measured SPDK's vhost-user-blk?
Stefan