Message-ID: <20170623041743.GC3936@pxdev.xzpeter.org>
Date:   Fri, 23 Jun 2017 12:17:43 +0800
From:   Peter Xu <peterx@...hat.com>
To:     Alex Williamson <alex.williamson@...hat.com>
Cc:     Nitin Saxena <nitin.lnx@...il.com>, linux-kernel@...r.kernel.org,
        qemu-devel <qemu-devel@...gnu.org>
Subject: Re: Query on VFIO in Virtual machine

On Thu, Jun 22, 2017 at 11:27:09AM -0600, Alex Williamson wrote:
> On Thu, 22 Jun 2017 22:42:19 +0530
> Nitin Saxena <nitin.lnx@...il.com> wrote:
> 
> > Thanks Alex.
> > 
> > >> Without an iommu in the VM, you'd be limited to no-iommu support for VM userspace,  
> > So are you trying to say VFIO NO-IOMMU should work inside the VM?
> > Does that mean VFIO NO-IOMMU in the VM and VFIO IOMMU in the host
> > for the same device is a legitimate configuration? I did try this
> > configuration, and the application (in the VM) gets the
> > container_fd, group_fd and device_fd successfully, but after the
> > VFIO_DEVICE_RESET ioctl the PCI link breaks from the VM as well as
> > from the host. This could be specific to the PCI endpoint device,
> > which I can dig into.
> > 
> > I will be happy if VFIO NO-IOMMU in the VM and VFIO IOMMU in the
> > host for the same device is a legitimate configuration.
> 
> Using no-iommu in the guest should work in that configuration;
> however, there's no isolation between the user and the rest of VM
> memory, so the VM kernel will be tainted.  Host memory does have
> IOMMU isolation.  The PCI link breaking on device reset from VM
> userspace sounds like another bug to investigate.  Thanks,
> 
> Alex

Besides what Alex has mentioned, there is a wiki page for this usage.
The QEMU command line is slightly different from the one used without
a vIOMMU:

  http://wiki.qemu.org/Features/VT-d#With_Assigned_Devices
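
For reference, the key pieces on that page look roughly like the
following; the machine options and the device's host address below
are placeholders, so check the wiki for the current syntax:

  qemu-system-x86_64 -M q35,accel=kvm,kernel-irqchip=split \
      -device intel-iommu,intremap=on,caching-mode=on \
      -device vfio-pci,host=02:00.0 \
      ...

Here caching-mode=on is what allows QEMU to shadow the guest's IOMMU
mappings for the assigned device, and kernel-irqchip=split is needed
for the interrupt remapping.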

One more thing to mention: when vfio-pci devices in the guest are
used with the emulated VT-d, expect a large performance degradation
for dynamic mappings, at least for now, since every guest map/unmap
has to be trapped and shadowed by QEMU.  For mostly static mappings
(as DPDK uses) the performance should be nearly the same as no-IOMMU
mode.  This is just a hint on performance; for your own case it
should mostly depend on how the application manages its DMA
map/unmaps.
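
As an illustration of the static pattern, the type1 userspace flow
below maps one buffer up front and reuses the same IOVA for the
lifetime of the device, so the expensive guest map/unmap path is only
taken once.  This is only a rough sketch (the group number and device
address are placeholders, and error checking is omitted):

  #include <fcntl.h>
  #include <stdint.h>
  #include <sys/ioctl.h>
  #include <sys/mman.h>
  #include <unistd.h>
  #include <linux/vfio.h>

  int main(void)
  {
      int container = open("/dev/vfio/vfio", O_RDWR);
      int group = open("/dev/vfio/26", O_RDWR);  /* placeholder group */

      /* Attach the group to the container, pick the type1 backend. */
      ioctl(group, VFIO_GROUP_SET_CONTAINER, &container);
      ioctl(container, VFIO_SET_IOMMU, VFIO_TYPE1_IOMMU);

      /* Map a DMA buffer once; the device then reuses IOVA 0 for its
       * lifetime instead of triggering guest IOMMU invalidations on
       * every I/O. */
      void *buf = mmap(NULL, 1 << 20, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
      struct vfio_iommu_dma_map map = {
          .argsz = sizeof(map),
          .flags = VFIO_DMA_MAP_FLAG_READ | VFIO_DMA_MAP_FLAG_WRITE,
          .vaddr = (uint64_t)(uintptr_t)buf,
          .iova  = 0,
          .size  = 1 << 20,
      };
      ioctl(container, VFIO_IOMMU_MAP_DMA, &map);

      /* Placeholder BDF; program the device to DMA to IOVA 0 here. */
      int device = ioctl(group, VFIO_GROUP_GET_DEVICE_FD, "0000:00:05.0");

      close(device);
      close(group);
      close(container);
      return 0;
  }

DPDK's memory setup does essentially this once at startup, which is
why it stays close to no-IOMMU performance.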

Thanks,

-- 
Peter Xu
