Message-ID: <20250226090525.231882-10-Neeraj.Upadhyay@amd.com>
Date: Wed, 26 Feb 2025 14:35:17 +0530
From: Neeraj Upadhyay <Neeraj.Upadhyay@....com>
To: <linux-kernel@...r.kernel.org>
CC: <bp@...en8.de>, <tglx@...utronix.de>, <mingo@...hat.com>,
<dave.hansen@...ux.intel.com>, <Thomas.Lendacky@....com>, <nikunj@....com>,
<Santosh.Shukla@....com>, <Vasant.Hegde@....com>,
<Suravee.Suthikulpanit@....com>, <David.Kaplan@....com>, <x86@...nel.org>,
<hpa@...or.com>, <peterz@...radead.org>, <seanjc@...gle.com>,
<pbonzini@...hat.com>, <kvm@...r.kernel.org>,
<kirill.shutemov@...ux.intel.com>, <huibo.wang@....com>, <naveen.rao@....com>
Subject: [RFC v2 09/17] x86/apic: Add support to send NMI IPI for Secure AVIC
From: Kishon Vijay Abraham I <kvijayab@....com>
Secure AVIC introduces a new field, "NmiReq", in the APIC backing page,
which the guest must set in order to request an NMI IPI. Add support to
set NmiReq appropriately when sending an NMI IPI.

Delivering the NMI also requires the Virtual NMI feature to be enabled
in the VINTR_CTRL field of the VMSA. That is done in a later commit,
after support for injecting NMIs from the hypervisor is added.
Signed-off-by: Kishon Vijay Abraham I <kvijayab@....com>
Signed-off-by: Neeraj Upadhyay <Neeraj.Upadhyay@....com>
---
Changes since v1:

 - Do not set APIC_IRR for NMI IPI.

 arch/x86/kernel/apic/x2apic_savic.c | 24 ++++++++++++++++--------
 1 file changed, 16 insertions(+), 8 deletions(-)
diff --git a/arch/x86/kernel/apic/x2apic_savic.c b/arch/x86/kernel/apic/x2apic_savic.c
index af46e1b57017..0067fc5c4ef3 100644
--- a/arch/x86/kernel/apic/x2apic_savic.c
+++ b/arch/x86/kernel/apic/x2apic_savic.c
@@ -162,28 +162,34 @@ static void x2apic_savic_write(u32 reg, u32 data)
 	}
 }
 
-static void send_ipi(int cpu, int vector)
+static void send_ipi(int cpu, int vector, bool nmi)
 {
 	void *backing_page;
 	int reg_off;
 
 	backing_page = per_cpu(apic_backing_page, cpu);
 	reg_off = APIC_IRR + REG_POS(vector);
-	/*
-	 * Use test_and_set_bit() to ensure that IRR updates are atomic w.r.t. other
-	 * IRR updates such as during VMRUN and during CPU interrupt handling flow.
-	 */
-	test_and_set_bit(VEC_POS(vector), (unsigned long *)((char *)backing_page + reg_off));
+	if (!nmi)
+		/*
+		 * Use test_and_set_bit() to ensure that IRR updates are atomic w.r.t. other
+		 * IRR updates such as during VMRUN and during CPU interrupt handling flow.
+		 */
+		test_and_set_bit(VEC_POS(vector),
+				 (unsigned long *)((char *)backing_page + reg_off));
+	else
+		set_reg(backing_page, SAVIC_NMI_REQ_OFFSET, nmi);
 }
 
 static void send_ipi_dest(u64 icr_data)
 {
 	int vector, cpu;
+	bool nmi;
 
 	vector = icr_data & APIC_VECTOR_MASK;
 	cpu = icr_data >> 32;
+	nmi = ((icr_data & APIC_DM_FIXED_MASK) == APIC_DM_NMI);
 
-	send_ipi(cpu, vector);
+	send_ipi(cpu, vector, nmi);
 }
 
 static void send_ipi_target(u64 icr_data)
@@ -201,11 +207,13 @@ static void send_ipi_allbut(u64 icr_data)
 	const struct cpumask *self_cpu_mask = get_cpu_mask(smp_processor_id());
 	unsigned long flags;
 	int vector, cpu;
+	bool nmi;
 
 	vector = icr_data & APIC_VECTOR_MASK;
+	nmi = ((icr_data & APIC_DM_FIXED_MASK) == APIC_DM_NMI);
 	local_irq_save(flags);
 	for_each_cpu_andnot(cpu, cpu_present_mask, self_cpu_mask)
-		send_ipi(cpu, vector);
+		send_ipi(cpu, vector, nmi);
 	savic_ghcb_msr_write(APIC_ICR, icr_data);
 	local_irq_restore(flags);
 }
--
2.34.1