Message-ID: <87v8u9rwce.ffs@tglx>
Date: Fri, 13 May 2022 22:50:09 +0200
From: Thomas Gleixner <tglx@...utronix.de>
To: Ricardo Neri <ricardo.neri-calderon@...ux.intel.com>
Cc: x86@...nel.org, Tony Luck <tony.luck@...el.com>,
Andi Kleen <ak@...ux.intel.com>,
Stephane Eranian <eranian@...gle.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Joerg Roedel <joro@...tes.org>,
Suravee Suthikulpanit <Suravee.Suthikulpanit@....com>,
David Woodhouse <dwmw2@...radead.org>,
Lu Baolu <baolu.lu@...ux.intel.com>,
Nicholas Piggin <npiggin@...il.com>,
"Ravi V. Shankar" <ravi.v.shankar@...el.com>,
Ricardo Neri <ricardo.neri@...el.com>,
iommu@...ts.linux-foundation.org, linuxppc-dev@...ts.ozlabs.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH v6 05/29] x86/apic/vector: Do not allocate vectors for NMIs
On Fri, May 13 2022 at 11:03, Ricardo Neri wrote:
> On Fri, May 06, 2022 at 11:12:20PM +0200, Thomas Gleixner wrote:
>> Why would an NMI ever end up in this code? There is no vector management
>> required and this find-CPU exercise is pointless.
>
> But even if the NMI has a fixed vector, it is still necessary to determine
> which CPU will get the NMI, i.e. what to write in the Destination ID field
> of the MSI message.
>
> irq_matrix_find_best_cpu() would find the CPU with the lowest number of
> managed vectors so that the NMI is directed to that CPU.
What's the point of sending it to the CPU with the lowest number of
interrupts? It's not as if this NMI fires every 50 microseconds.
We pick one online CPU and are done.
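A minimal sketch of that, with a helper name that is mine and not part of
the patch:

/*
 * Illustrative only: a fixed-vector NMI needs a target CPU for the MSI
 * destination, but no vector matrix accounting, so any online CPU is
 * good enough.
 */
static unsigned int nmi_pick_target_cpu(void)
{
	return cpumask_first(cpu_online_mask);
}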
> In today's code, an NMI would end up here because we rely on the existing
> interrupt management infrastructure... Unless the check is done at the entry
> points as you propose.
Correct. We don't want to call into functions which are not designed for
NMIs.
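Something like the below, i.e. check the NMI allocation flag where the
request enters the vector domain and never let it descend into the regular
vector management. X86_IRQ_ALLOC_AS_NMI, lapic_nmi_controller and
irqd_set_no_balance() are taken from the patch under discussion; the rest
is an illustrative sketch, not the final code:

/*
 * Sketch: fence NMIs off at the allocation entry point instead of
 * teaching assign_vector_locked() and friends about them.
 */
static int lapic_nmi_setup(struct irq_data *irqd, struct irq_alloc_info *info)
{
	if (!(info->flags & X86_IRQ_ALLOC_AS_NMI))
		return -EINVAL;

	/* Fixed NMI vector: dedicated chip, no balancing, no matrix accounting */
	irqd->chip = &lapic_nmi_controller;
	irqd_set_no_balance(irqd);
	return 0;
}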
>> > +
>> > + if (apicd->hw_irq_cfg.delivery_mode == APIC_DELIVERY_MODE_NMI) {
>> > + cpu = irq_matrix_find_best_cpu_managed(vector_matrix, dest);
>> > + apicd->cpu = cpu;
>> > + vector = 0;
>> > + goto no_vector;
>> > + }
>>
>> This code can never be reached for an NMI delivery. If it is, then it's a
>> bug.
>>
>> This is all special-purpose for that particular HPET NMI watchdog use
>> case and we are not exposing this to anything else at all.
>>
>> So why are you sprinkling this NMI nonsense all over the place? Just
>> because? There are well defined entry points to all of this where this
>> can be fenced off.
>
> I put the NMI checks at these points because assign_vector_locked() and
> assign_managed_vector() are reached through multiple paths and these are
> the two places where the allocation of the vector is requested and the
> destination CPU is determined.
>
> I do observe this code being reached for an NMI, but that is because this
> code still does not know about NMIs... Unless the checks for NMI are put
> at the entry points as you pointed out.
>
> The intent was to refactor the code in a generic manner and not to focus
> only on the NMI watchdog. That would have looked hacky IMO.
We don't want to have more of this really. Supporting NMIs on x86 in a
broader way is simply not reasonable because there is only one NMI
vector and we have no sensible way to get to the cause of the NMI
without a massive overhead.
Even if we get multiple NMI vectors some shiny day, they will be
fundamentally different from regular interrupts and certainly not
exposed broadly. They will be 99.99% fixed vectors, for simplicity's sake.
>> + if (info->flags & X86_IRQ_ALLOC_AS_NMI) {
>> + /*
>> + * NMIs have a fixed vector and need their own
>> + * interrupt chip so nothing can end up in the
>> + * regular local APIC management code except the
>> + * MSI message composing callback.
>> + */
>> + irqd->chip = &lapic_nmi_controller;
>> + /*
>> + * Don't allow affinity setting attempts for NMIs.
>> + * This cannot work with the regular affinity
>> + * mechanisms and for the intended HPET NMI
>> + * watchdog use case it's not required.
>
> But we do need the ability to set affinity, right? As stated above, we need
> to know what Destination ID to write in the MSI message or in the interrupt
> remapping table entry.
>
> It cannot be any CPU because only one specific CPU is supposed to handle the
> NMI from the HPET channel.
>
> We cannot hard-code a CPU for that because it may go offline (and ignore NMIs)
> or not be part of the monitored CPUs.
>
> Also, if lapic_nmi_controller.irq_set_affinity() is NULL, then the irq_chips
> INTEL-IR, AMD-IR, and those using msi_domain_set_affinity() need to check for
> NULL. They currently unconditionally call the parent irq_chip's
> irq_set_affinity(). I see that there is an irq_chip_set_affinity_parent()
> function. Perhaps it can be used for this check?
Yes, this obviously lacks an NMI-specific set_affinity callback, but that
can be very trivial and does not need any of the complexity of regular
interrupt affinity assignment: first online CPU in the mask, with a
fallback to any online CPU.
I did not claim that this is complete. This was for illustration.
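A hedged sketch of such a callback; apart from the generic irq
infrastructure helpers, the names are assumptions about the eventual
patch:

static int lapic_nmi_set_affinity(struct irq_data *irqd,
				  const struct cpumask *dest, bool force)
{
	unsigned int cpu;

	/* First online CPU in the requested mask ... */
	cpu = cpumask_first_and(dest, cpu_online_mask);
	/* ... with a fallback to any online CPU. */
	if (cpu >= nr_cpu_ids)
		cpu = cpumask_first(cpu_online_mask);

	/* Record the target CPU so the MSI message is composed for it. */
	apic_chip_data(irqd)->cpu = cpu;
	irq_data_update_effective_affinity(irqd, cpumask_of(cpu));

	return IRQ_SET_MASK_OK;
}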
>> + */
>> + irqd_set_no_balance(irqd);
>
> This code does not set apicd->hw_irq_cfg.delivery_mode as NMI, right?
> I had to add that to make it work.
I assumed you can figure that out on your own :)
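For completeness, the missing piece amounts to something along these lines
when the configuration for the NMI is written; the function name and its
placement are assumptions, not the actual patch:

/*
 * Illustrative: record the delivery mode so the MSI composing callback
 * emits an NMI message for the chosen target CPU.
 */
static void lapic_nmi_update_cfg(struct apic_chip_data *apicd, unsigned int cpu)
{
	apicd->cpu = cpu;
	apicd->hw_irq_cfg.delivery_mode = APIC_DELIVERY_MODE_NMI;
	apicd->hw_irq_cfg.dest_apicid = apic->calc_dest_apicid(cpu);
}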
Thanks,
tglx