Message-ID: <MWHPR11MB164539B8FDE63D5CBDA300E18CE30@MWHPR11MB1645.namprd11.prod.outlook.com>
Date: Mon, 16 Nov 2020 07:31:49 +0000
From: "Tian, Kevin" <kevin.tian@...el.com>
To: "Raj, Ashok" <ashok.raj@...el.com>,
Thomas Gleixner <tglx@...utronix.de>
CC: Christoph Hellwig <hch@...radead.org>,
"Wilk, Konrad" <konrad.wilk@...cle.com>,
Jason Gunthorpe <jgg@...dia.com>,
"Williams, Dan J" <dan.j.williams@...el.com>,
"Jiang, Dave" <dave.jiang@...el.com>,
"Bjorn Helgaas" <helgaas@...nel.org>,
"vkoul@...nel.org" <vkoul@...nel.org>,
"Dey, Megha" <megha.dey@...el.com>,
"maz@...nel.org" <maz@...nel.org>,
"bhelgaas@...gle.com" <bhelgaas@...gle.com>,
"alex.williamson@...hat.com" <alex.williamson@...hat.com>,
"Pan, Jacob jun" <jacob.jun.pan@...el.com>,
"Liu, Yi L" <yi.l.liu@...el.com>, "Lu, Baolu" <baolu.lu@...el.com>,
"Kumar, Sanjay K" <sanjay.k.kumar@...el.com>,
"Luck, Tony" <tony.luck@...el.com>,
"kwankhede@...dia.com" <kwankhede@...dia.com>,
"eric.auger@...hat.com" <eric.auger@...hat.com>,
"parav@...lanox.com" <parav@...lanox.com>,
"rafael@...nel.org" <rafael@...nel.org>,
"netanelg@...lanox.com" <netanelg@...lanox.com>,
"shahafs@...lanox.com" <shahafs@...lanox.com>,
"yan.y.zhao@...ux.intel.com" <yan.y.zhao@...ux.intel.com>,
"pbonzini@...hat.com" <pbonzini@...hat.com>,
"Ortiz, Samuel" <samuel.ortiz@...el.com>,
"Hossain, Mona" <mona.hossain@...el.com>,
"dmaengine@...r.kernel.org" <dmaengine@...r.kernel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"linux-pci@...r.kernel.org" <linux-pci@...r.kernel.org>,
"kvm@...r.kernel.org" <kvm@...r.kernel.org>,
"Raj, Ashok" <ashok.raj@...el.com>
Subject: RE: [PATCH v4 06/17] PCI: add SIOV and IMS capability detection
> From: Raj, Ashok <ashok.raj@...el.com>
> Sent: Monday, November 16, 2020 8:23 AM
>
> On Sun, Nov 15, 2020 at 11:11:27PM +0100, Thomas Gleixner wrote:
> > On Sun, Nov 15 2020 at 11:31, Ashok Raj wrote:
> > > On Sun, Nov 15, 2020 at 12:26:22PM +0100, Thomas Gleixner wrote:
> > >> > opt-in by device or kernel? The way we are planning to support this is:
> > >> >
> > >> > Device support for IMS - can be discovered via device-specific means
> > >> > Kernel support for IMS - supported by the IOMMU driver.
> > >>
> > >> And why exactly do we have to enforce IOMMU support? Please stop
> > >> looking at IMS purely from the IDXD perspective. We are talking
> > >> about the general concept here and not about the restricted Intel
> > >> universe.
> > >
> > > I think you have mentioned it in almost every reply :-).. Got that!
> > > Point taken several emails ago!! :-)
> >
> > You sure? I _try_ to not mention it again then. No promise though. :)
>
> Hey.. anything that's entertaining go for it :-)
>
> >
> > > I didn't mean just for idxd, I said for *ANY* device driver that wants to
> > > use IMS.
> >
> > Which is wrong. Again:
> >
> > A) For PF/VF on bare metal there is absolutely no IOMMU dependency
> > because it does not have a PASID requirement. It's just an
> > alternative solution to MSI[X], which allows optimizations like
> > storing the message in driver-managed queue memory or lifting the
> > restriction of 2048 interrupts per device. Nothing else.
>
> You are right.. my eyes were clouded by virtualization.. no dependency for
> native absolutely.
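Just to illustrate (A): a rough sketch of what "message in
driver-managed queue memory" could look like. All structure and
function names below are made up for illustration, not taken from any
real driver:

#include <linux/irq.h>
#include <linux/msi.h>

/* one hypothetical per-interrupt slot living in queue context memory */
struct ims_slot {
	u32	address_lo;	/* message address, low 32 bits */
	u32	address_hi;	/* message address, high 32 bits */
	u32	data;		/* message data */
};

/*
 * irq_chip callback: the core hands us the composed message and we
 * simply store it wherever the device expects it - no MSI-X table,
 * no 2048-vector limit.
 */
static void ims_slot_write_msg(struct irq_data *data, struct msi_msg *msg)
{
	struct ims_slot *slot = irq_data_get_irq_chip_data(data);

	slot->address_lo = msg->address_lo;
	slot->address_hi = msg->address_hi;
	slot->data = msg->data;
}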
>
> >
> > B) For PF/VF in a guest the IOMMU dependency of IMS is a red herring.
> > There is no direct dependency on the IOMMU.
> >
> > The problem is the inability of the VMM to trap the message write to
> > the IMS storage if the storage is in guest driver managed memory.
> > This can be solved with either
> >
> > - a hypercall which translates the guest MSI message
> > or
> > - a vIOMMU which uses a hypercall or whatever to translate the guest
> > MSI message
> >
> > C) Subdevices ala mdev are a different story. They require PASID which
> > enforces IOMMU and the IMS part is not managed by the users anyway.
>
> You are right again :)
>
> The subdevices require PASID & IOMMU natively, but inside the guest
> there is no need for an IOMMU unless you want to build SVM on top.
> Subdevices work without any vIOMMU or hypercall in the guest. Only
> because they look like normal PCI devices could we map interrupts to
> legacy MSI-X.
Guest-managed subdevices on top of a PF/VF require a vIOMMU. Anyway, I
think Thomas was just pointing out that subdevices are the only
category of the above three which may have business tied to the IOMMU. 😊
>
> >
> > So we have a couple of problems to solve:
> >
> > 1) Figure out whether the OS runs on bare metal
> >
> > There is no reliable answer to that, so we either:
> >
> > - Use heuristics and assume that failure is unlikely and in case
> > of failure blame the incompetence of VMM authors and/or
> > sysadmins
> >
> > or
> >
> > - Default to IMS disabled and let the sysadmin enable it via
> > command line option.
> >
> > If the kernel detects that it is running in a VM it yells and
> > disables it unless the OS and the hypervisor agree to provide
> > support for that scenario (see #2).
> >
> > That fails as well if the sysadmin does so when the OS runs on
> > a VMM which is not identifiable, but at least we can rightfully
> > blame the sysadmin in that case.
>
> cmdline isn't nice; best to have this functional out of the box.
>
> >
> > or
> >
> > - Declare that IMS always depends on IOMMU
>
> As you mentioned, IMS has no real dependency on the IOMMU when
> running natively.
>
> We just need to make sure that, if running in a guest, we have
> support for it plumbed.
>
> >
> > I personally don't care, but people working on these kinds of
> > devices already said that they want to avoid it when possible.
> >
> > If you want to go that route, then please talk to those folks
> > and ask them to agree in public.
> >
> > You also need to take into account that this must work on all
> > architectures which support virtualization because IMS is
> > architecture independent.
>
> What you suggest makes perfect sense. We can certainly get buy-in
> from the iommu list and have this coordinated between all existing
> iommu variants.
Does a hybrid scheme sound good here? (A code sketch of this policy
follows probably_on_bare_metal() below.)
- Say a cmdline parameter: ims=[auto|on|off], with 'auto' as default;
- if ims=auto:
  * If the arch doesn't implement probably_on_bare_metal, disallow ims;
  * If probably_on_bare_metal returns false, disallow ims;
    # (future) if the hypercall is supported, allow ims;
  * If probably_on_bare_metal returns true, allow ims, with the caveat
    of possible misdetection when running on an old hypervisor. The
    sysadmin may need to double-confirm by other means;
    # (future) if definitely_on_bare_metal is supported, no caveat;
- if ims=on:
  * If probably_on_bare_metal returns false, yell and disable it until
    the hypercall is supported;
  * In all other cases allow ims. The sysadmin should be blamed for any
    failure, since turning it on implies that extra confirmation has
    been done;
- if ims=off, leave it off.
It's not necessary to claim a strict dependency between ims and the
iommu. Instead, we could leave the iommu as an arch-specific check
where it applies (the helper names below are placeholders):
static bool probably_on_bare_metal(void)
{
	/* all known hypervisors set the CPUID hypervisor bit (x86 example) */
	if (boot_cpu_has(X86_FEATURE_HYPERVISOR))
		return false;

	/* DMI vendor strings can betray an older hypervisor */
	if (dmi_match_hypervisor_vendor())
		return false;

	/* a detected vIOMMU also implies running in a guest */
	if (iommu_existing() && iommu_in_guest())
		return false;

	/* "probably": an old or misbehaving VMM may hide all of the above */
	return true;
}
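And a sketch of how the cmdline policy above could consume it.
ims_hypercall_supported() is a placeholder for the future hypercall
check, nothing that exists today:

enum ims_policy { IMS_AUTO, IMS_ON, IMS_OFF };
static enum ims_policy ims_param = IMS_AUTO;	/* set from ims= cmdline */

static bool ims_allowed(void)
{
	switch (ims_param) {
	case IMS_OFF:
		return false;
	case IMS_ON:
		if (!probably_on_bare_metal() && !ims_hypercall_supported()) {
			pr_warn("ims=on but likely running in a VM, disabling IMS\n");
			return false;
		}
		return true;
	case IMS_AUTO:
	default:
		/* (future) also allow when the hypercall is supported */
		return probably_on_bare_metal();
	}
}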
>
> >
> > 2) Guest support for PF/VF
> >
> > Again we have several scenarios depending on the IMS storage
> > type.
> >
> > - If the storage type is device memory then it's pretty much the
> > same as MSI[X] just a different location.
>
> True, but we still need some special handling to trap those MMIO
> accesses. For MSI-X, VFIO already traps them and everything is
> pre-plumbed; it isn't as seamless for IMS.
Yes. So what about tying guest IMS to the hypercall even when emulation
is possible on some devices? It's difficult for the guest to know that
its IMS is emulated by the hypervisor. Adopting a unified policy for
all IMS-capable devices might be an easier path.
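Purely as illustration, a rough sketch of such a guest-side
translation. The hypercall and every name below are hypothetical:

struct ims_translate_args {
	u64	guest_addr;	/* message address composed by the guest */
	u32	guest_data;	/* message data composed by the guest */
	u64	host_addr;	/* filled in by the hypervisor */
	u32	host_data;	/* filled in by the hypervisor */
};

static int ims_translate_msg(struct msi_msg *msg)
{
	struct ims_translate_args args = {
		.guest_addr = ((u64)msg->address_hi << 32) | msg->address_lo,
		.guest_data = msg->data,
	};
	int ret;

	/* hypothetical guest->VMM call, e.g. via a paravirt interface */
	ret = ims_hypercall_translate(&args);
	if (ret)
		return ret;

	msg->address_lo = lower_32_bits(args.host_addr);
	msg->address_hi = upper_32_bits(args.host_addr);
	msg->data = args.host_data;
	return 0;
}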
>
> >
> > - If the storage is in driver managed memory then this needs
> > #1 plus guest OS and hypervisor support (hypercall/vIOMMU)
>
> Violent agreement here :-)
>
> >
> > 3) Guest support for PF/VF and guest managed subdevice (mdev)
> >
> > Depends on #1 and #2 and is an orthogonal problem if I'm not
> > missing something.
> >
> > To move forward we need to make a decision about #1 and #2 now.
>
> Mostly in agreement. Except that mdev (the currently considered use
> case) has no need for IMS in the guest. (Don't get me wrong, I'm not
> saying that some odd device managing sub-devices wouldn't need IMS in
> addition to the 2048 MSI-X emulation.)
> >
> > This needs to be well thought out as changing it after the fact is
> > going to be a nightmare.
> >
> > /me grudgingly refrains from mentioning the obvious once more.
> >
>
> So this isn't an idxd and Intel only thing :-)...
>
> Cheers,
> Ashok
Thanks
Kevin