Message-ID: <20240701072305.4129823-1-tangnianyao@huawei.com>
Date: Mon, 1 Jul 2024 07:23:05 +0000
From: Nianyao Tang <tangnianyao@...wei.com>
To: <maz@...nel.org>, <tglx@...utronix.de>,
<linux-arm-kernel@...ts.infradead.org>, <linux-kernel@...r.kernel.org>
CC: <guoyang2@...wei.com>, <wangwudi@...ilicon.com>, <tangnianyao@...wei.com>
Subject: [PATCH] irqchip/gic-v4: Fix vcpus racing for vpe->col_idx in vmapp and vmovp

its_map_vm() may modify vpe->col_idx without holding vpe->vpe_lock.
This can leave a vPE resident on one RD after it has been moved (VMOVP)
to a different RD, or cause a VMOVP whose target RD is the same as the
RD the vPE is currently mapped to in the vPE table.

Consider a GICv4-enabled system with two ITSs, 32 vcpus deployed on the
CPUs of collections 0 and 1, and two PCI devices routing VLPIs, one
through each ITS. A vPE about to become resident on RD1 can hit the
following unexpected cases, because another vcpu running on a different
CPU is executing its_map_vm() (VMAPP) and modifying this vPE's col_idx:

Unexpected Case 1:

RD              0                               1
                                        vcpu_load
                                          lock vpe_lock
                                          vpe->col_idx = 1
        its_map_vm
          lock vmovp_lock
                                          waiting vmovp_lock
          vpe->col_idx = 0
          (cpu0 is first online cpu)
          vmapp vpe on col0
          unlock vmovp_lock
                                          lock vmovp_lock
                                          vmovp vpe to col0
                                          unlock vmovp_lock
                                          vpe resident here, fails to
                                          receive VLPIs!

Unexpected Case 2:

RD              0                               1
        its_map_vm                      vcpu_load
          lock vmovp_lock                 lock vpe_lock
          vpe->col_idx = 0
                                          vpe->col_idx = 1
          vmapp vpe on col1               waiting vmovp_lock
          unlock vmovp_lock
                                          lock vmovp_lock
                                          vmovp vpe to col1
                                          (target RD == source RD!)
                                          unlock vmovp_lock
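
Fix this by taking vpe->vpe_lock in its_map_vm(), and by only picking a
target column there when the VM is not yet mapped on any ITS (checked
via vm->vlpi_count[]), instead of relying on vmovp_lock to protect
vpe->col_idx.

For reference, below is a simplified, abridged sketch of the vcpu_load
side that its_map_vm() races against. The names match the existing code
in drivers/irqchip/irq-gic-v3-its.c, but the body is condensed and is
not verbatim mainline code:

  /*
   * Abridged sketch: the affinity/vcpu_load path samples and updates
   * vpe->col_idx and issues VMOVP under vpe->vpe_lock.
   */
  static int its_vpe_set_affinity(struct irq_data *d,
				  const struct cpumask *mask_val, bool force)
  {
	struct its_vpe *vpe = irq_data_get_irq_chip_data(d);
	int cpu = cpumask_first(mask_val);
	unsigned long flags;

	raw_spin_lock_irqsave(&vpe->vpe_lock, flags);

	if (vpe->col_idx != cpu) {
		vpe->col_idx = cpu;	/* the write the cases above race with */
		its_send_vmovp(vpe);	/* VMOVP targets vpe->col_idx */
	}

	irq_data_update_effective_affinity(d, cpumask_of(cpu));
	raw_spin_unlock_irqrestore(&vpe->vpe_lock, flags);

	return IRQ_SET_MASK_OK_DONE;
  }

Since the VMOVP on this path derives its target from vpe->col_idx, any
other writer of vpe->col_idx must hold vpe->vpe_lock as well, which is
what this patch enforces in its_map_vm().
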
Signed-off-by: Nianyao Tang <tangnianyao@...wei.com>
---
 drivers/irqchip/irq-gic-v3-its.c | 18 +++++++++++++-----
 1 file changed, 13 insertions(+), 5 deletions(-)

diff --git a/drivers/irqchip/irq-gic-v3-its.c b/drivers/irqchip/irq-gic-v3-its.c
index f99c0a86320b..adda9824e0e7 100644
--- a/drivers/irqchip/irq-gic-v3-its.c
+++ b/drivers/irqchip/irq-gic-v3-its.c
@@ -1794,11 +1794,15 @@ static bool gic_requires_eager_mapping(void)
 static void its_map_vm(struct its_node *its, struct its_vm *vm)
 {
 	unsigned long flags;
+	bool vm_mapped_on_any_its = false;
+	int i;
 
 	if (gic_requires_eager_mapping())
 		return;
 
-	raw_spin_lock_irqsave(&vmovp_lock, flags);
+	for (i = 0; i < GICv4_ITS_LIST_MAX; i++)
+		if (vm->vlpi_count[i] > 0)
+			vm_mapped_on_any_its = true;
 
 	/*
 	 * If the VM wasn't mapped yet, iterate over the vpes and get
@@ -1813,15 +1817,19 @@ static void its_map_vm(struct its_node *its, struct its_vm *vm)
 			struct its_vpe *vpe = vm->vpes[i];
 			struct irq_data *d = irq_get_irq_data(vpe->irq);
 
-			/* Map the VPE to the first possible CPU */
-			vpe->col_idx = cpumask_first(cpu_online_mask);
+			raw_spin_lock_irqsave(&vpe->vpe_lock, flags);
+
+			if (!vm_mapped_on_any_its) {
+				/* Map the VPE to the first possible CPU */
+				vpe->col_idx = cpumask_first(cpu_online_mask);
+			}
 			its_send_vmapp(its, vpe, true);
 			its_send_vinvall(its, vpe);
 			irq_data_update_effective_affinity(d, cpumask_of(vpe->col_idx));
+
+			raw_spin_unlock_irqrestore(&vpe->vpe_lock, flags);
 		}
 	}
-
-	raw_spin_unlock_irqrestore(&vmovp_lock, flags);
 }
 
 static void its_unmap_vm(struct its_node *its, struct its_vm *vm)
--
2.30.0