Message-ID: <1447963356.4697.184.camel@redhat.com>
Date:	Thu, 19 Nov 2015 13:02:36 -0700
From:	Alex Williamson <alex.williamson@...hat.com>
To:	"Tian, Kevin" <kevin.tian@...el.com>
Cc:	"Song, Jike" <jike.song@...el.com>,
	"xen-devel@...ts.xen.org" <xen-devel@...ts.xen.org>,
	"igvt-g@...1.01.org" <igvt-g@...1.01.org>,
	"intel-gfx@...ts.freedesktop.org" <intel-gfx@...ts.freedesktop.org>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	"White, Michael L" <michael.l.white@...el.com>,
	"Dong, Eddie" <eddie.dong@...el.com>,
	"Li, Susie" <susie.li@...el.com>,
	"Cowperthwaite, David J" <david.j.cowperthwaite@...el.com>,
	"Reddy, Raghuveer" <raghuveer.reddy@...el.com>,
	"Zhu, Libo" <libo.zhu@...el.com>,
	"Zhou, Chao" <chao.zhou@...el.com>,
	"Wang, Hongbo" <hongbo.wang@...el.com>,
	"Lv, Zhiyuan" <zhiyuan.lv@...el.com>,
	qemu-devel <qemu-devel@...gnu.org>,
	Paolo Bonzini <pbonzini@...hat.com>,
	Gerd Hoffmann <kraxel@...hat.com>
Subject: Re: [Intel-gfx] [Announcement] 2015-Q3 release of XenGT - a
 Mediated Graphics Passthrough Solution from Intel

Hi Kevin,

On Thu, 2015-11-19 at 04:06 +0000, Tian, Kevin wrote:
> > From: Alex Williamson [mailto:alex.williamson@...hat.com]
> > Sent: Thursday, November 19, 2015 2:12 AM
> > 
> > [cc +qemu-devel, +paolo, +gerd]
> > 
> > On Tue, 2015-10-27 at 17:25 +0800, Jike Song wrote:
> > > Hi all,
> > >
> > > We are pleased to announce another update of Intel GVT-g for Xen.
> > >
> > > Intel GVT-g is a full GPU virtualization solution with mediated
> > > pass-through, starting from 4th generation Intel Core(TM) processors
> > > with Intel Processor Graphics. A virtual GPU instance is maintained
> > > for each VM, with part of the performance-critical resources directly
> > > assigned. The capability of running a native graphics driver inside a
> > > VM, without hypervisor intervention in performance-critical paths,
> > > achieves a good balance among performance, features, and sharing
> > > capability. Xen is currently supported on Intel Processor Graphics
> > > (a.k.a. XenGT), and the core logic can be easily ported to other
> > > hypervisors.
> > >
> > >
> > > Repositories
> > >
> > >      Kernel: https://github.com/01org/igvtg-kernel (2015q3-3.18.0 branch)
> > >      Xen: https://github.com/01org/igvtg-xen (2015q3-4.5 branch)
> > >      Qemu: https://github.com/01org/igvtg-qemu (xengt_public2015q3 branch)
> > >
> > >
> > > This update consists of:
> > >
> > >      - XenGT is now merged with KVMGT in unified repositories (kernel
> > >        and qemu), but currently with different branches for qemu.
> > >        XenGT and KVMGT share the same iGVT-g core logic.
> > 
> > Hi!
> > 
> > At Red Hat we've been thinking about how to support vGPUs from multiple
> > vendors in a common way within QEMU.  We want to enable code sharing
> > between vendors and give new vendors an easy path to add their own
> > support.  We also have the complication that not all vGPU vendors are as
> > open source friendly as Intel, so being able to abstract the device
> > mediation and access outside of QEMU is a big advantage.
> > 
> > The proposal I'd like to make is that a vGPU, whether it is from Intel
> > or another vendor, is predominantly a PCI(e) device.  We have an
> > interface in QEMU already for exposing arbitrary PCI devices, vfio-pci.
> > Currently vfio-pci uses the VFIO API to interact with "physical" devices
> > and system IOMMUs.  I highlight /physical/ there because some of these
> > physical devices are SR-IOV VFs, which is somewhat of a fuzzy concept,
> > somewhere between fixed hardware and a virtual device implemented in
> > software.  That software just happens to be running on the physical
> > endpoint.
> 
> Agree. 
> 
> One clarification for the rest of the discussion: we're talking about the
> GVT-g vGPU here, which is a pure software GPU virtualization technique.
> GVT-d (note some use of the term in the text) refers to passing through
> the whole GPU or a specific VF. GVT-d already falls into the existing VFIO
> APIs nicely (though there is some ongoing effort to remove Intel-specific
> platform stickiness from the gfx driver). :-)
> 
> > 
> > vGPUs are similar, with the virtual device created at a different point,
> > in host software.  They also rely on different IOMMU constructs, making use
> > of the MMU capabilities of the GPU (GTTs and such), but really having
> > similar requirements.
> 
> There is one important difference between the system IOMMU and the GPU
> MMU here. The system IOMMU is very much about translation from a DMA
> target (IOVA on native, or GPA in the virtualization case) to HPA.
> However, the GPU's internal MMU translates from a Graphics Memory
> Address (GMA) to a DMA target (HPA if the system IOMMU is disabled, or
> IOVA/GPA if the system IOMMU is enabled). GMA is an internal address
> space within the GPU, not exposed to Qemu and fully managed by the
> GVT-g device model. Since it's not a standard PCI-defined resource, we
> don't need to abstract this capability in the VFIO interface.
> 
> > 
> > The proposal is therefore that GPU vendors can expose vGPUs to
> > userspace, and thus to QEMU, using the VFIO API.  For instance, vfio
> > supports modular bus drivers and IOMMU drivers.  An intel-vfio-gvt-d
> > module (or extension of i915) can register as a vfio bus driver, create
> > a struct device per vGPU, create an IOMMU group for that device, and
> > register that device with the vfio-core.  Since we don't rely on the
> > system IOMMU for GVT-d vGPU assignment, another vGPU vendor driver (or
> > extension of the same module) can register a "type1" compliant IOMMU
> > driver into vfio-core.  From the perspective of QEMU then, all of the
> > existing vfio-pci code is re-used, QEMU remains largely unaware of any
> > specifics of the vGPU being assigned, and the only necessary change so
> > far is how QEMU traverses sysfs to find the device and thus the IOMMU
> > group leading to the vfio group.
> 
> GVT-g requires pinning guest memory and querying GPA->HPA information,
> upon which the shadow GTTs will be updated accordingly from (GMA->GPA)
> to (GMA->HPA). So yes, a dummy or simple "type1"-compliant IOMMU can be
> introduced here just for this requirement.
> 
> However, there's one tricky point where I'm not sure whether the
> overall VFIO concept would be violated. GVT-g doesn't require the
> system IOMMU to function, but the host may enable the system IOMMU just
> for hardening purposes. This means two levels of translation exist
> (GMA->IOVA->HPA), so the dummy IOMMU driver has to request the system
> IOMMU driver to allocate IOVAs for the VMs and then set up the
> IOVA->HPA mappings in the IOMMU page table. In this case, multiple VMs'
> translations are multiplexed in one IOMMU page table.
> 
> We might need to create some group/sub-group or parent/child concepts
> among those IOMMUs for thorough permission control.

My thought here is that this is all abstracted through the vGPU IOMMU
and device vfio backends.  It's the GPU driver itself, or some vfio
extension of that driver, that mediates access to the device and decides
when to configure GPU MMU mappings.  That driver has access to the GPA
to HVA translations thanks to the type1-compliant IOMMU it implements
and can pin pages as needed to create GPA to HPA mappings.  That should
give it all the pieces it needs to fully set up mappings for the vGPU.
Whether or not there's a system IOMMU is simply an exercise for that
driver.  It needs to do a DMA mapping operation through the system IOMMU
the same way for a vGPU as it would for itself, because they are in
fact one and the same.  The GMA to IOVA mapping seems like an internal
detail.  I assume the IOVA is some sort of GPA, and the GMA is managed
through mediation of the device.
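
To make that concrete, below is a minimal sketch of such a "type1-compliant"
IOMMU backend, assuming the in-kernel vfio_register_iommu_driver() /
vfio_iommu_driver_ops interface.  It only records GPA->HVA ranges at map
time and defers pinning until the device model actually programs a DMA; the
vgpu_* names and the deferred-pinning policy are hypothetical illustration,
not the actual GVT-g or vfio-type1 code.

/*
 * Minimal sketch of a "type1-compliant" vfio IOMMU backend that only
 * records GPA (iova) -> HVA (vaddr) ranges instead of pinning and
 * mapping them up front.  All vgpu_* names are hypothetical; only the
 * vfio_* interfaces and ioctls are real.  argsz/flags validation and
 * unmap handling are omitted for brevity.
 */
#include <linux/err.h>
#include <linux/list.h>
#include <linux/module.h>
#include <linux/mutex.h>
#include <linux/slab.h>
#include <linux/uaccess.h>
#include <linux/vfio.h>

struct vgpu_dma_range {
        struct list_head next;
        u64 iova;               /* GPA as seen by the guest */
        u64 vaddr;              /* HVA in the QEMU process */
        u64 size;
};

struct vgpu_iommu {
        struct list_head dma_list;      /* GPA -> HVA tracking only */
        struct mutex lock;
};

static void *vgpu_iommu_open(unsigned long arg)
{
        struct vgpu_iommu *iommu = kzalloc(sizeof(*iommu), GFP_KERNEL);

        if (!iommu)
                return ERR_PTR(-ENOMEM);
        INIT_LIST_HEAD(&iommu->dma_list);
        mutex_init(&iommu->lock);
        return iommu;
}

static long vgpu_iommu_ioctl(void *iommu_data, unsigned int cmd,
                             unsigned long arg)
{
        struct vgpu_iommu *iommu = iommu_data;

        if (cmd == VFIO_CHECK_EXTENSION)
                return arg == VFIO_TYPE1_IOMMU;

        if (cmd == VFIO_IOMMU_MAP_DMA) {
                struct vfio_iommu_type1_dma_map map;
                struct vgpu_dma_range *range;

                if (copy_from_user(&map, (void __user *)arg, sizeof(map)))
                        return -EFAULT;

                range = kzalloc(sizeof(*range), GFP_KERNEL);
                if (!range)
                        return -ENOMEM;
                range->iova = map.iova;
                range->vaddr = map.vaddr;
                range->size = map.size;

                mutex_lock(&iommu->lock);
                list_add(&range->next, &iommu->dma_list);
                mutex_unlock(&iommu->lock);
                /* Pages are pinned only when the vGPU programs a DMA. */
                return 0;
        }
        return -ENOTTY;
}

static const struct vfio_iommu_driver_ops vgpu_iommu_ops = {
        .name   = "vfio-iommu-vgpu",    /* hypothetical */
        .owner  = THIS_MODULE,
        .open   = vgpu_iommu_open,
        .ioctl  = vgpu_iommu_ioctl,
        /* .release, .attach_group, .detach_group omitted for brevity */
};

static int __init vgpu_iommu_init(void)
{
        return vfio_register_iommu_driver(&vgpu_iommu_ops);
}
module_init(vgpu_iommu_init);
MODULE_LICENSE("GPL");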


> > There are a few areas where we know we'll need to extend the VFIO API to
> > make this work, but it seems like they can all be done generically.  One
> > is that PCI BARs are described through the VFIO API as regions and each
> > region has a single flag describing whether mmap (ie. direct mapping) of
> > that region is possible.  We expect that vGPUs likely need finer
> > granularity, enabling some areas within a BAR to be trapped and forwarded
> > as a read or write access for the vGPU-vfio-device module to emulate,
> > while other regions, like framebuffers or texture regions, are directly
> > mapped.  I have prototype code to enable this already.
> 
> Yes, in GVT-g one BAR resource might be partitioned among multiple vGPUs.
> If VFIO can support such partial resource assignment, that would be great.
> A similar parent/child concept might also be required here, so that any
> resource enumerated on a vGPU doesn't break limitations enforced on the
> physical device.

To be clear, I'm talking about partitioning of the BAR exposed to the
guest.  Partitioning of the physical BAR would be managed by the vGPU
vfio device driver.  For instance when the guest mmap's a section of the
virtual BAR, the vGPU device driver would map that to a portion of the
physical device BAR.
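
As an illustration of the finer-granularity idea (purely hypothetical
layout, not existing uapi, and not necessarily what my prototype does),
the region info for a BAR could carry a list of windows that are safe to
mmap, with everything else trapped:

/*
 * Hypothetical only: a possible extension of the information returned
 * by VFIO_DEVICE_GET_REGION_INFO, listing which windows of a guest BAR
 * region may be mmap'ed directly.  None of these structures exist in
 * the current uapi.
 */
struct vgpu_region_mmap_area {          /* hypothetical */
        __u64 offset;   /* offset into the region (guest BAR) */
        __u64 size;     /* length of the directly mappable window */
};

struct vgpu_region_mmap_layout {        /* hypothetical */
        __u32 nr_areas;
        struct vgpu_region_mmap_area areas[];
};

/*
 * QEMU would mmap() only the listed windows (e.g. framebuffer or
 * texture apertures) and fall back to read()/write() trap-and-emulate
 * through the vfio device fd for everything else.
 */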

> One unique requirement for GVT-g here, though, is that the vGPU device model
> needs to know the guest BAR configuration for proper emulation (e.g. to
> register an I/O emulation handler with KVM). The same applies to the guest
> MSI vector for virtual interrupt injection. I'm not sure how this can fit
> into the common VFIO model. Does VFIO allow vendor-specific extensions today?

As a vfio device driver, all config accesses and interrupt configuration
would be forwarded to you, so I don't see this being a problem.
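
A rough sketch of what that forwarding could look like on the vendor side,
assuming a device backend built on the in-kernel vfio_device_ops interface.
The offset-to-region encoding and the vgpu_emulate_*() hooks are
hypothetical stand-ins for the vendor device model:

#include <linux/vfio.h>
#include <linux/uaccess.h>

/* Hypothetical vendor device-model hooks. */
extern ssize_t vgpu_emulate_cfg_write(void *vdev, u64 pos, u32 val,
                                      size_t count);
extern ssize_t vgpu_emulate_bar_write(void *vdev, unsigned int index,
                                      u64 pos, u32 val, size_t count);

/* Hypothetical region-index encoding in the file offset. */
#define VGPU_OFFSET_TO_INDEX(off)       ((off) >> 40)
#define VGPU_OFFSET_MASK                ((1ULL << 40) - 1)

static ssize_t vgpu_dev_write(void *device_data, const char __user *buf,
                              size_t count, loff_t *ppos)
{
        unsigned int index = VGPU_OFFSET_TO_INDEX(*ppos);
        u64 pos = *ppos & VGPU_OFFSET_MASK;
        u32 val = 0;

        if (count > sizeof(val) || copy_from_user(&val, buf, count))
                return -EFAULT;

        if (index == VFIO_PCI_CONFIG_REGION_INDEX)
                /*
                 * Config-space writes land here, so the device model can
                 * e.g. observe guest BAR programming and register the
                 * matching MMIO emulation ranges with KVM.
                 */
                return vgpu_emulate_cfg_write(device_data, pos, val, count);

        /* BAR regions: trap-and-emulate or pass through as appropriate. */
        return vgpu_emulate_bar_write(device_data, index, pos, val, count);
}

static const struct vfio_device_ops vgpu_dev_ops = {
        .name  = "vfio-vgpu",           /* hypothetical */
        .write = vgpu_dev_write,
        /* .open, .release, .read, .ioctl, .mmap omitted for brevity */
};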

> > 
> > Another area is that we really don't want to proliferate each vGPU
> > needing a new IOMMU type within vfio.  The existing type1 IOMMU provides
> > potentially the most simple mapping and unmapping interface possible.
> > We'd therefore need to allow multiple "type1" IOMMU drivers for vfio,
> > making type1 be more of an interface specification rather than a single
> > implementation.  This is a trivial change to make within vfio and one
> > that I believe is compatible with the existing API.  Note that
> > implementing a type1-compliant vfio IOMMU does not imply pinning and
> > mapping every registered page.  A vGPU, with mediated device access, may
> > use this only to track the current HVA to GPA mappings for a VM.  Only
> > when a DMA is enabled for the vGPU instance is that HVA pinned and an
> > HPA to GPA translation programmed into the GPU MMU.
> > 
> > Another area of extension is how to expose a framebuffer to QEMU for
> > seamless integration into a SPICE/VNC channel.  For this I believe we
> > could use a new region, much like we've done to expose VGA access
> > through a vfio device file descriptor.  An area within this new
> > framebuffer region could be directly mappable in QEMU while a
> > non-mappable page, at a standard location with standardized format,
> > provides a description of framebuffer and potentially even a
> > communication channel to synchronize framebuffer captures.  This would
> > be new code for QEMU, but something we could share among all vGPU
> > implementations.
> 
> GVT-g already provides an interface to decode framebuffer information,
> with the assumption that the framebuffer will be further composited via
> OpenGL APIs. So the format is defined according to the OpenGL definition.
> Does that meet the SPICE requirements?
> 
> One more thing to add: framebuffers are frequently switched in reality,
> so either Qemu needs to poll or a notification mechanism is required.
> And since it's dynamic, having the framebuffer page directly exposed in
> the new region might be tricky. We could just expose the framebuffer
> information (including base, format, etc.) and let Qemu map it
> separately, outside the VFIO interface.

Sure, we'll need to work out that interface, but it's also possible that
the framebuffer region is simply remapped to another area of the device
(ie. multiple interfaces mapping the same thing) by the vfio device
driver.  Whether it's easier to do that or make the framebuffer region
reference another region is something we'll need to see.
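
To give that shape (purely illustrative, no such uapi exists today), the
non-mappable descriptor page at the start of the framebuffer region might
carry something like the following; all field names are assumptions:

#include <linux/types.h>

/*
 * Hypothetical layout of the standardized, non-mappable descriptor
 * page at the start of a vfio "framebuffer" region, sketching the
 * information QEMU would need to feed a SPICE/VNC channel.
 */
struct vgpu_fb_desc {                   /* hypothetical */
        __u32 version;
        __u32 flags;
        __u64 offset;           /* pixel data offset within the region */
        __u32 width;
        __u32 height;
        __u32 stride;           /* bytes per scanline */
        __u32 format;           /* e.g. a DRM fourcc describing the pixels */
        __u32 pad;
        __u64 update_seq;       /* bumped when the guest switches
                                 * framebuffers, so QEMU can poll or be
                                 * notified and remap */
};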

> And... this works fine with the vGPU model, since software knows all the
> details about the framebuffer. However, in the pass-through case, who do
> you expect to provide that information? Is it OK to introduce
> vGPU-specific APIs in VFIO?

Yes, a vGPU may have additional features, like a framebuffer area, that
aren't present or are optional for direct assignment.  Obviously we already
support direct assignment of GPUs for some vendors without this feature.

> > Another obvious area to be standardized would be how to discover,
> > create, and destroy vGPU instances.  SR-IOV has a standard mechanism to
> > create VFs in sysfs and I would propose that vGPU vendors try to
> > standardize on similar interfaces to enable libvirt to easily discover
> > the vGPU capabilities of a given GPU and manage the lifecycle of a vGPU
> > instance.
> 
> Right now there is no standard. We expose vGPU lifecycle management APIs
> through sysfs (under the i915 node), which is very Intel-specific. In
> reality different vendors have quite different capabilities for their own
> vGPUs, so I'm not sure how standard such a mechanism can be. But this
> code should be minor to maintain in libvirt.

Every difference is a barrier.  I imagine we can come up with some basic
interfaces that everyone could use, even if they don't allow fine tuning
every detail specific to a vendor.
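
For instance (hypothetical naming, loosely modeled on the SR-IOV
sriov_numvfs pattern), a common "create" knob on the physical GPU's sysfs
node could look like this on the driver side, with vendor-specific tuning
layered elsewhere; the attribute name, the "type" token and the
vgpu_instance_create() helper are all made up for illustration:

#include <linux/device.h>
#include <linux/kernel.h>

/* Hypothetical vendor hook that instantiates a vGPU of a given type. */
extern int vgpu_instance_create(struct device *gpu, const char *type);

/* Usage from userspace (also hypothetical):
 *   echo "type0" > /sys/bus/pci/devices/0000:00:02.0/vgpu_create
 */
static ssize_t vgpu_create_store(struct device *dev,
                                 struct device_attribute *attr,
                                 const char *buf, size_t count)
{
        int ret = vgpu_instance_create(dev, buf);

        return ret ? ret : count;
}
static DEVICE_ATTR_WO(vgpu_create);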

> > This is obviously a lot to digest, but I'd certainly be interested in
> > hearing feedback on this proposal as well as try to clarify anything
> > I've left out or misrepresented above.  Another benefit to this
> > mechanism is that direct GPU assignment and vGPU assignment use the same
> > code within QEMU and same API to the kernel, which should make debugging
> > and code support between the two easier.  I'd really like to start a
> > discussion around this proposal, and of course the first open source
> > implementation of this sort of model will really help to drive the
> > direction it takes.  Thanks!
> > 
> 
> Thanks for starting this discussion. Intel will definitely work with the
> community on this. Based on the earlier comments, I'm not sure whether we
> can use exactly the same code for direct GPU assignment and vGPU
> assignment, since even if we extend VFIO, some interfaces might be
> vGPU-specific. Does this approach still achieve your end goal?

The backends will certainly be different for vGPU vs direct assignment,
but hopefully the QEMU code is almost entirely reused, modulo some
features like framebuffers that are likely only to be seen on vGPU.
Thanks,

Alex

