Message-ID: <20200930125733.GI816047@nvidia.com>
Date: Wed, 30 Sep 2020 09:57:33 -0300
From: Jason Gunthorpe <jgg@...dia.com>
To: "Derrick, Jonathan" <jonathan.derrick@...el.com>
CC: "tglx@...utronix.de" <tglx@...utronix.de>,
"Williams, Dan J" <dan.j.williams@...el.com>,
"sivanich@....com" <sivanich@....com>,
"wei.liu@...nel.org" <wei.liu@...nel.org>,
"haiyangz@...rosoft.com" <haiyangz@...rosoft.com>,
"Dey, Megha" <megha.dey@...el.com>,
"Lu, Baolu" <baolu.lu@...el.com>,
"Jiang, Dave" <dave.jiang@...el.com>,
"Tian, Kevin" <kevin.tian@...el.com>,
"jgross@...e.com" <jgross@...e.com>,
"kys@...rosoft.com" <kys@...rosoft.com>,
"sstabellini@...nel.org" <sstabellini@...nel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"x86@...nel.org" <x86@...nel.org>,
"rafael@...nel.org" <rafael@...nel.org>,
"xen-devel@...ts.xenproject.org" <xen-devel@...ts.xenproject.org>,
"iommu@...ts.linux-foundation.org" <iommu@...ts.linux-foundation.org>,
"maz@...nel.org" <maz@...nel.org>,
"bhelgaas@...gle.com" <bhelgaas@...gle.com>,
"linux-pci@...r.kernel.org" <linux-pci@...r.kernel.org>,
"konrad.wilk@...cle.com" <konrad.wilk@...cle.com>,
"alex.williamson@...hat.com" <alex.williamson@...hat.com>,
"steve.wahl@....com" <steve.wahl@....com>,
"boris.ostrovsky@...cle.com" <boris.ostrovsky@...cle.com>,
"gregkh@...uxfoundation.org" <gregkh@...uxfoundation.org>,
"rja@....com" <rja@....com>, "joro@...tes.org" <joro@...tes.org>,
"sthemmin@...rosoft.com" <sthemmin@...rosoft.com>,
"Pan, Jacob jun" <jacob.jun.pan@...el.com>,
"lorenzo.pieralisi@....com" <lorenzo.pieralisi@....com>,
"linux-hyperv@...r.kernel.org" <linux-hyperv@...r.kernel.org>,
"baolu.lu@...ux.intel.com" <baolu.lu@...ux.intel.com>
Subject: Re: [patch V2 24/46] PCI: vmd: Mark VMD irqdomain with DOMAIN_BUS_VMD_MSI

On Wed, Sep 30, 2020 at 12:45:30PM +0000, Derrick, Jonathan wrote:
> Hi Jason
>
> On Mon, 2020-08-31 at 11:39 -0300, Jason Gunthorpe wrote:
> > On Wed, Aug 26, 2020 at 01:16:52PM +0200, Thomas Gleixner wrote:
> > > From: Thomas Gleixner <tglx@...utronix.de>
> > >
> > > Devices on the VMD bus use their own MSI irq domain, but it is not
> > > distinguishable from regular PCI/MSI irq domains. Making it
> > > distinguishable is required to exclude VMD devices from getting the
> > > irq domain pointer set by interrupt remapping.
> > >
> > > Override the default bus token.
> > >
> > > Signed-off-by: Thomas Gleixner <tglx@...utronix.de>
> > > Acked-by: Bjorn Helgaas <bhelgaas@...gle.com>
> > > drivers/pci/controller/vmd.c | 6 ++++++
> > > 1 file changed, 6 insertions(+)
> > >
> > > +++ b/drivers/pci/controller/vmd.c
> > > @@ -579,6 +579,12 @@ static int vmd_enable_domain(struct vmd_
> > > return -ENODEV;
> > > }
> > >
> > > + /*
> > > + * Override the irq domain bus token so the domain can be distinguished
> > > + * from a regular PCI/MSI domain.
> > > + */
> > > + irq_domain_update_bus_token(vmd->irq_domain, DOMAIN_BUS_VMD_MSI);
> > > +
> >
> > Having the non-transparent bridge hold an MSI table and
> > multiplex/de-multiplex IRQs looks like another good use case for
> > something close to pci_subdevice_msi_create_irq_domain()?
> >
> > If each device could have its own internal MSI-X table programmed
> > properly, things would work a lot better. Disable capture/remap of the
> > MSI range in the NTB.
> We can disable the capture and remap in newer devices, so we don't even
> need the irq domain.
You'd still need an irq domain; it would just come from
pci_subdevice_msi_create_irq_domain() instead of being built internally
by this driver.
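
Roughly like this in vmd_enable_domain() (a sketch only; the final
signature of pci_subdevice_msi_create_irq_domain() isn't settled, I'm
assuming it takes the pdev plus the existing msi_domain_info):

	/*
	 * Sketch, not against any tree. The API name comes from this
	 * discussion and its signature is an assumption here; vmd->dev
	 * and vmd_msi_domain_info are the existing names in vmd.c.
	 */
	vmd->irq_domain = pci_subdevice_msi_create_irq_domain(vmd->dev,
						&vmd_msi_domain_info);
	if (!vmd->irq_domain)
		return -ENODEV;
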
> I would, however, like to determine whether the MSI data bits could be
> used per device so that the IRQ domain subsystem demultiplexes the virq
> from them and the IRQ list iteration can be eliminated.
Yes, exactly. This new pci_subdevice_msi_create_irq_domain() creates
*proper*, fully functional interrupts; there is no need for any IRQ
handler in this driver.
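
The IRQ list iteration you want to eliminate is the srcu loop in
vmd_irq(); from memory (trimmed) it looks like:

	static irqreturn_t vmd_irq(int irq, void *data)
	{
		struct vmd_irq_list *irqs = data;
		struct vmd_irq *vmdirq;
		int idx;

		/* Walk every subdevice sharing this VMD vector */
		idx = srcu_read_lock(&irqs->srcu);
		list_for_each_entry_rcu(vmdirq, &irqs->irq_list, node)
			generic_handle_irq(vmdirq->virq);
		srcu_read_unlock(&irqs->srcu, idx);

		return IRQ_HANDLED;
	}

With a real vector per subdevice the hardware delivers straight to the
right virq and this demux goes away entirely.
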
> A concern I have about that scheme is virtualization, as I think those
> bits are used to route to the virtual vector.
It should be fine with this patch series; consult with Megha about how
virtualization works with IDXD etc.
Jason