Message-ID: <20230412161951.GA894@skinsburskii.localdomain>
Date: Wed, 12 Apr 2023 09:19:51 -0700
From: Stanislav Kinsburskii <skinsburskii@...ux.microsoft.com>
To: Thomas Gleixner <tglx@...utronix.de>
Cc: Stanislav Kinsburskii <stanislav.kinsburskii@...il.com>,
"K. Y. Srinivasan" <kys@...rosoft.com>,
Haiyang Zhang <haiyangz@...rosoft.com>,
Wei Liu <wei.liu@...nel.org>, Dexuan Cui <decui@...rosoft.com>,
Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
Dave Hansen <dave.hansen@...ux.intel.com>, x86@...nel.org,
"H. Peter Anvin" <hpa@...or.com>, linux-hyperv@...r.kernel.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH 1/2] x86/hyperv: Expose a helper to map PCI interrupts
On Thu, Apr 13, 2023 at 03:51:09PM +0200, Thomas Gleixner wrote:
> On Thu, Apr 06 2023 at 09:33, Stanislav Kinsburskii wrote:
> > This patch moves
>
> https://www.kernel.org/doc/html/latest/process/submitting-patches.html#submittingpatches
> https://www.kernel.org/doc/html/latest/process/maintainer-tip.html#changelog
>
Thanks. I'll elaborate on the rationale in the next revision.
> > a part of currently internal logic into the new
> > hv_map_msi_interrupt function and makes it globally available helper,
> > which will be used to map PCI interrupts in case of root partition.
>
> > -static int hv_map_msi_interrupt(struct pci_dev *dev, int cpu, int vector,
> > - struct hv_interrupt_entry *entry)
> > +/**
> > + * hv_map_msi_interrupt() - "Map" the MSI IRQ in the hypervisor.
>
> So if you need to put "" on Map then maybe your function is
> misnomed. Either it maps or it does not, right?
>
Thanks, I'll remove the quotation marks in the next update.
> > + * @data: Describes the IRQ
> > + * @out_entry: Hypervisor (MSI) interrupt entry (can be NULL)
> > + *
> > + * Map the IRQ in the hypervisor by issuing a MAP_DEVICE_INTERRUPT hypercall.
> > + */
> > +int hv_map_msi_interrupt(struct irq_data *data,
> > + struct hv_interrupt_entry *out_entry)
> > {
> > - union hv_device_id device_id = hv_build_pci_dev_id(dev);
> > + struct msi_desc *msidesc;
> > + struct pci_dev *dev;
> > + union hv_device_id device_id;
> > + struct hv_interrupt_entry dummy, *entry;
> > + struct irq_cfg *cfg = irqd_cfg(data);
> > + const cpumask_t *affinity;
> > + int cpu, vector;
> > +
> > + msidesc = irq_data_get_msi_desc(data);
> > + dev = msi_desc_to_pci_dev(msidesc);
> > + device_id = hv_build_pci_dev_id(dev);
> > + affinity = irq_data_get_effective_affinity_mask(data);
> > + cpu = cpumask_first_and(affinity, cpu_online_mask);
>
> The effective affinity mask of MSI interrupts consists only of online
> CPUs, to be accurate: it has exactly one online CPU set.
>
> But even if it would have only offline CPUs then the result would be:
>
> cpu = nr_cpu_ids
>
> which is definitely invalid. While a disabled vector targeted to an
> offline CPU is not necessarily invalid.
>
Thank you for diving into this logic.
Although this patch only moves code around and doesn't make any functional
changes, I guess that if a fix for the way the CPU is found is required, it
has to go in a separate patch.
Would you mind elaborating more on the problem(s)?
Do you mean that the result of cpumask_first_and has to be checked for not
being >= nr_cpu_ids?
Or do you mean there is no need to check the affinity against
cpu_online_mask at all, and we can simply take the first bit from the
effective affinity mask?
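
For my own understanding, here is a minimal sketch of the two options I
have in mind (hypothetical, just to make the question concrete; "affinity"
is the mask returned by irq_data_get_effective_affinity_mask(data)):

	/* Option 1: keep the cpu_online_mask filter, but explicitly
	 * reject the "no online CPU in the mask" case instead of
	 * silently using nr_cpu_ids as a CPU number.
	 */
	cpu = cpumask_first_and(affinity, cpu_online_mask);
	if (cpu >= nr_cpu_ids)
		return -EINVAL;

	/* Option 2: rely on the effective affinity mask already
	 * containing exactly one online CPU and drop the filter.
	 */
	cpu = cpumask_first(affinity);
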
Also, could you elaborate more on the disabled vector targeted to an
offline CPU? Is there any use case for such a scenario (in which case we
might want to support it)?
I guess the goal of this code is to make sure that the hypervisor won't be
configured to deliver an MSI to an offline CPU.
Thanks,
Stanislav
> Thanks,
>
> tglx