Message-Id: <20200826112333.714566121@linutronix.de>
Date: Wed, 26 Aug 2020 13:16:59 +0200
From: Thomas Gleixner <tglx@...utronix.de>
To: LKML <linux-kernel@...r.kernel.org>
Cc: x86@...nel.org, Joerg Roedel <joro@...tes.org>,
iommu@...ts.linux-foundation.org, linux-hyperv@...r.kernel.org,
Haiyang Zhang <haiyangz@...rosoft.com>,
Jon Derrick <jonathan.derrick@...el.com>,
Lu Baolu <baolu.lu@...ux.intel.com>,
Wei Liu <wei.liu@...nel.org>,
"K. Y. Srinivasan" <kys@...rosoft.com>,
Stephen Hemminger <sthemmin@...rosoft.com>,
Steve Wahl <steve.wahl@....com>,
Dimitri Sivanich <sivanich@....com>,
Russ Anderson <rja@....com>, linux-pci@...r.kernel.org,
Bjorn Helgaas <bhelgaas@...gle.com>,
Lorenzo Pieralisi <lorenzo.pieralisi@....com>,
Konrad Rzeszutek Wilk <konrad.wilk@...cle.com>,
xen-devel@...ts.xenproject.org, Juergen Gross <jgross@...e.com>,
Boris Ostrovsky <boris.ostrovsky@...cle.com>,
Stefano Stabellini <sstabellini@...nel.org>,
Marc Zyngier <maz@...nel.org>,
Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
"Rafael J. Wysocki" <rafael@...nel.org>,
Megha Dey <megha.dey@...el.com>,
Jason Gunthorpe <jgg@...lanox.com>,
Dave Jiang <dave.jiang@...el.com>,
Alex Williamson <alex.williamson@...hat.com>,
Jacob Pan <jacob.jun.pan@...el.com>,
Baolu Lu <baolu.lu@...el.com>,
Kevin Tian <kevin.tian@...el.com>,
Dan Williams <dan.j.williams@...el.com>
Subject: [patch V2 31/46] iommu/vt-d: Store irq domain in struct device
From: Thomas Gleixner <tglx@...utronix.de>
As a first step towards making X86 use the direct MSI irq domain operations,
store the irq domain pointer in the device struct when a device is probed.

This is done from dmar_pci_bus_add_dev() because it has to work even when
DMA remapping is disabled. It only overrides the irq domain of devices which
are handled by a regular PCI/MSI irq domain, which protects PCI devices
behind special buses like VMD that have their own irq domain.

No functional change. It just avoids the redirection through
arch_*_msi_irqs() and allows the PCI/MSI core to invoke the irq domain
alloc/free functions directly instead of having to look up the irq domain
for every single MSI interrupt.
Signed-off-by: Thomas Gleixner <tglx@...utronix.de>
---
V2: Add missing forward declaration
---
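For context, a minimal sketch of how a per-device MSI domain pointer is
consumed on the allocation side. This is illustrative only and not part of
the patch; it assumes just the existing dev_get_msi_domain() helper from
<linux/device.h>, and the function name below is made up for the example:

#include <linux/device.h>
#include <linux/irqdomain.h>

/*
 * Illustrative sketch, not part of this patch: once the probe path has
 * stored the remapping MSI domain with dev_set_msi_domain(), later MSI
 * allocations can read the domain straight from struct device instead of
 * redirecting through arch_*_msi_irqs() to look it up per interrupt.
 */
static struct irq_domain *example_pci_msi_domain(struct device *dev)
{
	return dev_get_msi_domain(dev);
}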
drivers/iommu/intel/dmar.c | 3 +++
drivers/iommu/intel/irq_remapping.c | 16 ++++++++++++++++
include/linux/intel-iommu.h | 7 +++++++
3 files changed, 26 insertions(+)
--- a/drivers/iommu/intel/dmar.c
+++ b/drivers/iommu/intel/dmar.c
@@ -316,6 +316,9 @@ static int dmar_pci_bus_add_dev(struct d
 	if (ret < 0 && dmar_dev_scope_status == 0)
 		dmar_dev_scope_status = ret;
 
+	if (ret >= 0)
+		intel_irq_remap_add_device(info);
+
 	return ret;
 }
--- a/drivers/iommu/intel/irq_remapping.c
+++ b/drivers/iommu/intel/irq_remapping.c
@@ -1086,6 +1086,22 @@ static int reenable_irq_remapping(int ei
 	return -1;
 }
 
+/*
+ * Store the MSI remapping domain pointer in the device if enabled.
+ *
+ * This is called from dmar_pci_bus_add_dev() so it works even when DMA
+ * remapping is disabled. Only update the pointer if the device is not
+ * already handled by a non default PCI/MSI interrupt domain. This protects
+ * e.g. VMD devices.
+ */
+void intel_irq_remap_add_device(struct dmar_pci_notify_info *info)
+{
+	if (!irq_remapping_enabled || pci_dev_has_special_msi_domain(info->dev))
+		return;
+
+	dev_set_msi_domain(&info->dev->dev, map_dev_to_ir(info->dev));
+}
+
 static void prepare_irte(struct irte *irte, int vector, unsigned int dest)
 {
 	memset(irte, 0, sizeof(*irte));
--- a/include/linux/intel-iommu.h
+++ b/include/linux/intel-iommu.h
@@ -425,6 +425,8 @@ struct q_inval {
 	int free_cnt;
 };
 
+struct dmar_pci_notify_info;
+
 #ifdef CONFIG_IRQ_REMAP
 /* 1MB - maximum possible interrupt remapping table size */
 #define INTR_REMAP_PAGE_ORDER	8
@@ -439,6 +441,11 @@ struct ir_table {
 	struct irte *base;
 	unsigned long *bitmap;
 };
+
+void intel_irq_remap_add_device(struct dmar_pci_notify_info *info);
+#else
+static inline void
+intel_irq_remap_add_device(struct dmar_pci_notify_info *info) { }
 #endif
 
 struct iommu_flush {