Message-ID: <875z6rmmla.fsf@nanos.tec.linutronix.de>
Date: Fri, 30 Oct 2020 22:26:57 +0100
From: Thomas Gleixner <tglx@...utronix.de>
To: Dave Jiang <dave.jiang@...el.com>, vkoul@...nel.org,
megha.dey@...el.com, maz@...nel.org, bhelgaas@...gle.com,
alex.williamson@...hat.com, jacob.jun.pan@...el.com,
ashok.raj@...el.com, jgg@...lanox.com, yi.l.liu@...el.com,
baolu.lu@...el.com, kevin.tian@...el.com, sanjay.k.kumar@...el.com,
tony.luck@...el.com, jing.lin@...el.com, dan.j.williams@...el.com,
kwankhede@...dia.com, eric.auger@...hat.com, parav@...lanox.com,
jgg@...lanox.com, rafael@...nel.org, netanelg@...lanox.com,
shahafs@...lanox.com, yan.y.zhao@...ux.intel.com,
pbonzini@...hat.com, samuel.ortiz@...el.com, mona.hossain@...el.com
Cc: Megha Dey <megha.dey@...ux.intel.com>, dmaengine@...r.kernel.org,
linux-kernel@...r.kernel.org, linux-pci@...r.kernel.org,
kvm@...r.kernel.org
Subject: Re: [PATCH v4 13/17] dmaengine: idxd: ims setup for the vdcm

On Fri, Oct 30 2020 at 11:52, Dave Jiang wrote:
> Add setup for IMS enabling for the mediated device.
....
> Register with the irq bypass manager in order to allow the IMS interrupt be
> injected into the guest and bypass the host.
Why is this part of the patch which adds IMS support? These are two
completely different things.
Again, Documentation/process/submitting-patches.rst is very clear about
this:

  Solve only one problem per patch.

You want me to review the IMS-related things. Why are you mixing that
completely unrelated bypass stuff into it?
> +void vidxd_free_ims_entries(struct vdcm_idxd *vidxd)
> +{
> +        struct irq_domain *irq_domain;
> +        struct mdev_device *mdev = vidxd->vdev.mdev;
> +        struct device *dev = mdev_dev(mdev);
> +        int i;
> +
> +        for (i = 0; i < VIDXD_MAX_MSIX_VECS; i++)
> +                vidxd->irq_entries[i].entry = NULL;
See below.
> +        irq_domain = dev_get_msi_domain(dev);
> +        if (irq_domain)
> +                msi_domain_free_irqs(irq_domain, dev);
> +        else
> +                dev_warn(dev, "No IMS irq domain.\n");
How is the code even getting to this point if the domain allocation
failed in the first place?
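If the domain allocation had failed, setup would have failed and this
teardown would never run, so the fallback branch is pointless. A
minimal sketch of what this could be reduced to (assuming teardown is
only invoked after a successful setup):

        irq_domain = dev_get_msi_domain(dev);
        msi_domain_free_irqs(irq_domain, dev);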
> +int vidxd_setup_ims_entries(struct vdcm_idxd *vidxd)
> +{
> +        struct irq_domain *irq_domain;
> +        struct idxd_device *idxd = vidxd->idxd;
> +        struct mdev_device *mdev = vidxd->vdev.mdev;
> +        struct device *dev = mdev_dev(mdev);
> +        int vecs = VIDXD_MAX_MSIX_VECS - 1;
Some sensible comment about the -1 is missing here.
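A sketch of what such a comment could look like, assuming the reason
for the '- 1' is that guest vector 0 is not backed by an IMS slot,
which is what the 'vidx - 1' further down suggests:

        /* Guest vector 0 is not backed by IMS, so allocate one slot less */
        int vecs = VIDXD_MAX_MSIX_VECS - 1;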
> +        struct msi_desc *entry;
> +        struct ims_irq_entry *irq_entry;
> +        int rc, i = 0;
> +
> +        irq_domain = idxd->ims_domain;
> +        dev_set_msi_domain(dev, irq_domain);
> +        rc = msi_domain_alloc_irqs(irq_domain, dev, vecs);
> +        if (rc < 0)
> +                return rc;
> +
> +        for_each_msi_entry(entry, dev) {
> +                irq_entry = &vidxd->irq_entries[i];
> +                irq_entry->vidxd = vidxd;
> +                irq_entry->entry = entry;
What's the business with storing the MSI entry here? Just to do this:
ims_idx = vidxd->irq_entries[vidx - 1].entry->device_msi.hwirq;
and this:
if (vidxd->irq_entries[i].entry->device_msi.hwirq == handle) {
What's wrong with storing the hardware interrupt index right here
instead of handing that pointer around? The usage sites have no reason
to know about the entry itself.
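IOW, something along these lines, so the hardware index is established
once at setup time and the usage sites just consume it (the 'ims_idx'
member name is made up for illustration):

        irq_entry->ims_idx = entry->device_msi.hwirq;

which turns the first usage site into:

        ims_idx = vidxd->irq_entries[vidx - 1].ims_idx;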
> +                irq_entry->id = i;
Again, what is the point of storing the array offset in the array slot?
If it _is_ useful then adding a comment is not too much to ask for.
So the place I found which uses it obviously cannot compute the index,
but this:
vidxd_send_interrupt(irq_entry->vidxd, irq_entry->id + 1);
is again just voodoo programming. Why can't you just provide a data set
which contains data ready for consumption at the usage site?
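E.g. store the guest vector number which the entry maps to right at
setup time (the 'vector' member name is again made up for
illustration):

        irq_entry->vector = i + 1;

so the usage site does not have to do the '+ 1' math at all:

        vidxd_send_interrupt(irq_entry->vidxd, irq_entry->vector);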
> diff --git a/kernel/irq/msi.c b/kernel/irq/msi.c
> index c7e47c26cd90..89cf60a30803 100644
> --- a/kernel/irq/msi.c
> +++ b/kernel/irq/msi.c
> @@ -536,6 +536,7 @@ int msi_domain_alloc_irqs(struct irq_domain *domain, struct device *dev,
>
>         return ops->domain_alloc_irqs(domain, dev, nvec);
> }
> +EXPORT_SYMBOL(msi_domain_alloc_irqs);
Sigh... This wants to be a preparatory patch and the export wants to be
EXPORT_SYMBOL_GPL.
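IOW, something like a preparatory patch which contains nothing more
than the properly restricted export:

        EXPORT_SYMBOL_GPL(msi_domain_alloc_irqs);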
Thanks,
tglx