Message-Id: <20240815071545.925867-2-maobibo@loongson.cn>
Date: Thu, 15 Aug 2024 15:15:44 +0800
From: Bibo Mao <maobibo@...ngson.cn>
To: Tianrui Zhao <zhaotianrui@...ngson.cn>,
Huacai Chen <chenhuacai@...nel.org>
Cc: WANG Xuerui <kernel@...0n.name>,
kvm@...r.kernel.org,
loongarch@...ts.linux.dev,
linux-kernel@...r.kernel.org
Subject: [PATCH v2 1/2] LoongArch: Fix AP booting issue in VM mode
Native IPI is used for AP booting: it is the boot interface between the
OS and the BIOS firmware. Paravirt IPI is only used inside the OS, so
native IPI is still necessary to boot APs.
When booting an AP, the BP writes the kernel entry address into the AP's
HW mailbox and sends an IPI to the AP. The AP executes the idle
instruction and waits for an interrupt or SW event; once woken, it
clears the IPI and jumps to the kernel entry taken from the HW mailbox.
If the AP is woken by a SW event in the window between the BP writing
the HW mailbox and sending the IPI, the AP jumps to the kernel entry
early and the ACTION_BOOT_CPU IPI stays pending during AP booting. The
native IPI interrupt handler therefore needs to be registered so that it
can clear the pending native IPI; otherwise there is endless IRQ
handling during the AP booting stage.
So here the native IPI interrupt is initialized even if paravirt IPI is used.
Fixes: 74c16b2e2b0c ("LoongArch: KVM: Add PV IPI support on guest side")
Signed-off-by: Bibo Mao <maobibo@...ngson.cn>
---
arch/loongarch/kernel/paravirt.c | 19 +++++++++++++++++++
1 file changed, 19 insertions(+)
diff --git a/arch/loongarch/kernel/paravirt.c b/arch/loongarch/kernel/paravirt.c
index 9c9b75b76f62..348920b25460 100644
--- a/arch/loongarch/kernel/paravirt.c
+++ b/arch/loongarch/kernel/paravirt.c
@@ -13,6 +13,9 @@ static int has_steal_clock;
struct static_key paravirt_steal_enabled;
struct static_key paravirt_steal_rq_enabled;
static DEFINE_PER_CPU(struct kvm_steal_time, steal_time) __aligned(64);
+#ifdef CONFIG_SMP
+static struct smp_ops old_ops;
+#endif
static u64 native_steal_clock(int cpu)
{
@@ -55,6 +58,11 @@ static void pv_send_ipi_single(int cpu, unsigned int action)
int min, old;
irq_cpustat_t *info = &per_cpu(irq_stat, cpu);
+ if (unlikely(action == ACTION_BOOT_CPU)) {
+ old_ops.send_ipi_single(cpu, action);
+ return;
+ }
+
old = atomic_fetch_or(BIT(action), &info->message);
if (old)
return;
@@ -71,6 +79,12 @@ static void pv_send_ipi_mask(const struct cpumask *mask, unsigned int action)
__uint128_t bitmap = 0;
irq_cpustat_t *info;
+ if (unlikely(action == ACTION_BOOT_CPU)) {
+ /* Use native IPI to boot AP */
+ old_ops.send_ipi_mask(mask, action);
+ return;
+ }
+
if (cpumask_empty(mask))
return;
@@ -141,6 +155,8 @@ static void pv_init_ipi(void)
{
int r, swi;
+ /* Init native ipi irq since AP booting uses it */
+ old_ops.init_ipi();
swi = get_percpu_irq(INT_SWI0);
if (swi < 0)
panic("SWI0 IRQ mapping failed\n");
@@ -179,6 +195,9 @@ int __init pv_ipi_init(void)
return 0;
#ifdef CONFIG_SMP
+ old_ops.init_ipi = mp_ops.init_ipi;
+ old_ops.send_ipi_single = mp_ops.send_ipi_single;
+ old_ops.send_ipi_mask = mp_ops.send_ipi_mask;
mp_ops.init_ipi = pv_init_ipi;
mp_ops.send_ipi_single = pv_send_ipi_single;
mp_ops.send_ipi_mask = pv_send_ipi_mask;
--
2.39.3