Message-ID: <1cc7efd1852f298b01f09955f2c4bf3b20cead13.camel@redhat.com>
Date:   Mon, 06 May 2019 11:31:27 +0300
From:   Maxim Levitsky <mlevitsk@...hat.com>
To:     Christoph Hellwig <hch@....de>, Max Gurtovoy <maxg@...lanox.com>
Cc:     Fam Zheng <fam@...hon.net>, kvm@...r.kernel.org,
        Wolfram Sang <wsa@...-dreams.de>,
        linux-nvme@...ts.infradead.org, linux-kernel@...r.kernel.org,
        Keith Busch <keith.busch@...el.com>,
        Kirti Wankhede <kwankhede@...dia.com>,
        Mauro Carvalho Chehab <mchehab+samsung@...nel.org>,
        "Paul E . McKenney" <paulmck@...ux.ibm.com>,
        Sagi Grimberg <sagi@...mberg.me>,
        Christoph Hellwig <hch@...radead.org>,
        Liang Cunming <cunming.liang@...el.com>,
        Jens Axboe <axboe@...com>,
        Alex Williamson <alex.williamson@...hat.com>,
        John Ferlan <jferlan@...hat.com>,
        Liu Changpeng <changpeng.liu@...el.com>,
        Jens Axboe <axboe@...nel.dk>,
        Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
        Nicolas Ferre <nicolas.ferre@...rochip.com>,
        Paolo Bonzini <pbonzini@...hat.com>,
        Amnon Ilan <ailan@...hat.com>,
        "David S . Miller" <davem@...emloft.net>
Subject: Re: [PATCH v2 06/10] nvme/core: add mdev interfaces

On Sat, 2019-05-04 at 08:49 +0200, Christoph Hellwig wrote:
> On Fri, May 03, 2019 at 10:00:54PM +0300, Max Gurtovoy wrote:
> > I don't see a big difference between taking an NVMe queue and
> > namespace/partition to a guest OS or to P2P, since the IO is issued by an
> > external entity and polled outside the PCI driver.
> 
> We are not going to set the queue aside either way.  That is what the
> last patch in this series is already working towards, and which would be
> the sensible vhost model to start with.

Why are you saying that? I actually prefer to use a separate queue per software
nvme controller, both because of the lower overhead (about half of what going
through the block layer costs) and because it is better for QoS: a separate
queue (or even a few queues if needed) gives the guest a mostly guaranteed
slice of the device's bandwidth.
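
To make the QoS argument concrete, here is a toy user-space sketch (plain C,
not kernel code, and not taken from the patches; the names sw_ctrl, NGUESTS,
QUEUE_DEPTH and all the numbers are made up for illustration). It models a
device that arbitrates round-robin across per-controller submission queues, so
a noisy guest can only consume its own queue depth per arbitration round and
cannot starve the others:

/*
 * Toy model of per-guest dedicated submission queues (illustrative only).
 * Each software controller owns one queue; the "device" services the queues
 * round-robin, at most QUEUE_DEPTH commands per queue per round.
 */
#include <stdio.h>

#define NGUESTS     3
#define QUEUE_DEPTH 32
#define ROUNDS      4

struct sw_ctrl {
    int pending;    /* commands the guest wants to issue       */
    int completed;  /* commands the device has serviced for it */
};

int main(void)
{
    /* Guest 0 is the noisy neighbour; the others submit modestly. */
    struct sw_ctrl guests[NGUESTS] = {
        { .pending = 1000 }, { .pending = 40 }, { .pending = 40 },
    };

    for (int round = 0; round < ROUNDS; round++) {
        /* Round-robin arbitration across the dedicated queues. */
        for (int i = 0; i < NGUESTS; i++) {
            int batch = guests[i].pending < QUEUE_DEPTH ?
                        guests[i].pending : QUEUE_DEPTH;
            guests[i].pending   -= batch;
            guests[i].completed += batch;
        }
    }

    for (int i = 0; i < NGUESTS; i++)
        printf("guest %d: %d serviced, %d pending\n",
               i, guests[i].completed, guests[i].pending);
    return 0;
}

After four rounds the two quiet guests are fully serviced while the noisy one
still has most of its commands queued, which is the "guaranteed slice"
behaviour described above; with everything funnelled through one shared path
the quiet guests would have to wait behind the noisy one.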

The only drawback of this is some code duplication, but that can be addressed
with some changes in the block layer.

The last patch in my series was done with two purposes in mind: to measure the
overhead, and possibly to use that path as a fallback for non-NVMe devices.

Best regards,
	Maxim Levitsky
