Date:   Wed, 20 Mar 2019 08:28:06 -0700
From:   Bart Van Assche <bvanassche@....org>
To:     Maxim Levitsky <mlevitsk@...hat.com>,
        linux-nvme@...ts.infradead.org
Cc:     Fam Zheng <fam@...hon.net>, Keith Busch <keith.busch@...el.com>,
        Sagi Grimberg <sagi@...mberg.me>, kvm@...r.kernel.org,
        "David S . Miller" <davem@...emloft.net>,
        Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
        Liang Cunming <cunming.liang@...el.com>,
        Wolfram Sang <wsa@...-dreams.de>, linux-kernel@...r.kernel.org,
        Kirti Wankhede <kwankhede@...dia.com>,
        Jens Axboe <axboe@...com>,
        Alex Williamson <alex.williamson@...hat.com>,
        John Ferlan <jferlan@...hat.com>,
        Mauro Carvalho Chehab <mchehab+samsung@...nel.org>,
        Paolo Bonzini <pbonzini@...hat.com>,
        Liu Changpeng <changpeng.liu@...el.com>,
        "Paul E . McKenney" <paulmck@...ux.ibm.com>,
        Amnon Ilan <ailan@...hat.com>, Christoph Hellwig <hch@....de>,
        Nicolas Ferre <nicolas.ferre@...rochip.com>
Subject: Re: [PATCH 0/9] RFC: NVME VFIO mediated device

On Tue, 2019-03-19 at 16:41 +0200, Maxim Levitsky wrote:
> *  All guest memory is mapped into the physical NVMe device,
>    though not 1:1 as vfio-pci would do it.
>    This allows very efficient DMA.
>    To support this, patch 2 adds the ability for an mdev device to
>    listen for the guest's memory map events.
>    Any such memory is immediately pinned and then DMA mapped.
>    (Support for fabric drivers where this is not possible exists too,
>     in which case the fabric driver will do its own DMA mapping.)
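
For readers following along: the flow described above corresponds roughly
to the vfio notifier and page-pinning interfaces that already exist in
mainline. A minimal sketch of that flow, using only vfio_register_notifier(),
vfio_pin_pages() and dma_map_page(); the nvme_mdev_* names and the state
structure are hypothetical, not taken from the patch set, and the real API
added by patch 2 may differ:

#include <linux/vfio.h>
#include <linux/iommu.h>
#include <linux/notifier.h>
#include <linux/dma-mapping.h>

struct nvme_mdev_state {
	struct device *dev;		/* the mdev device */
	struct notifier_block nb;
	/* ... tracking of pinned/DMA-mapped regions ... */
};

/* Called by vfio when the guest unmaps an IOVA range. */
static int nvme_mdev_iommu_notify(struct notifier_block *nb,
				  unsigned long action, void *data)
{
	if (action == VFIO_IOMMU_NOTIFY_DMA_UNMAP) {
		struct vfio_iommu_type1_dma_unmap *unmap = data;

		/* unpin and dma_unmap [unmap->iova, unmap->iova + unmap->size) */
	}
	return NOTIFY_OK;
}

static int nvme_mdev_register(struct nvme_mdev_state *state)
{
	unsigned long events = VFIO_IOMMU_NOTIFY_DMA_UNMAP;

	state->nb.notifier_call = nvme_mdev_iommu_notify;
	return vfio_register_notifier(state->dev, VFIO_IOMMU_NOTIFY,
				      &events, &state->nb);
}

/* Pin one guest page and DMA map it for the physical NVMe device. */
static int nvme_mdev_map_page(struct nvme_mdev_state *state,
			      unsigned long gfn, struct device *hw_dev,
			      dma_addr_t *dma)
{
	unsigned long pfn;
	int ret;

	ret = vfio_pin_pages(state->dev, &gfn, 1,
			     IOMMU_READ | IOMMU_WRITE, &pfn);
	if (ret != 1)
		return ret < 0 ? ret : -EFAULT;

	*dma = dma_map_page(hw_dev, pfn_to_page(pfn), 0, PAGE_SIZE,
			    DMA_BIDIRECTIONAL);
	if (dma_mapping_error(hw_dev, *dma)) {
		vfio_unpin_pages(state->dev, &gfn, 1);
		return -ENOMEM;
	}
	return 0;
}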

Does this mean that all guest memory is pinned all the time? If so, are you
sure that's acceptable?

Additionally, what is the performance overhead of the IOMMU notifier added
by patch 8/9? How often was that notifier called per second in your tests
and how much time was spent per call in the notifier callbacks?
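
For concreteness, those numbers could be gathered by instrumenting the
callback itself. A sketch, reusing the hypothetical nvme_mdev_iommu_notify()
from the earlier snippet (needs <linux/ktime.h> and <linux/atomic.h>):

static atomic64_t notify_calls = ATOMIC64_INIT(0);
static atomic64_t notify_ns = ATOMIC64_INIT(0);

static int nvme_mdev_iommu_notify_timed(struct notifier_block *nb,
					unsigned long action, void *data)
{
	ktime_t start = ktime_get();
	int ret = nvme_mdev_iommu_notify(nb, action, data);

	/* Running totals; calls/sec and ns/call follow from sampling these. */
	atomic64_inc(&notify_calls);
	atomic64_add(ktime_to_ns(ktime_sub(ktime_get(), start)),
		     &notify_ns);
	return ret;
}

Alternatively, ftrace's function_graph tracer on the callback reports
per-call durations without patching the driver.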

Thanks,

Bart.
