Message-ID: <86h6cl39ff.wl-maz@kernel.org>
Date: Fri, 19 Jul 2024 12:31:16 +0100
From: Marc Zyngier <maz@...nel.org>
To: Zhou Wang <wangzhou1@...ilicon.com>
Cc: <kvmarm@...ts.linux.dev>,
<linux-arm-kernel@...ts.infradead.org>,
<linux-kernel@...r.kernel.org>,
Thomas Gleixner <tglx@...utronix.de>,
Nianyao Tang <tangnianyao@...wei.com>
Subject: Re: [PATCH 3/3] irqchip/gic-v4: Make sure a VPE is locked when VMAPP is issued
On Fri, 19 Jul 2024 10:42:02 +0100,
Zhou Wang <wangzhou1@...ilicon.com> wrote:
>
> On 2024/7/5 17:31, Marc Zyngier wrote:
> > In order to make sure that vpe->col_idx is correctly sampled
> > when a VMAPP command is issued, we must hold the lock for the
> > VPE. This is now possible since the introduction of the per-VM
> > vmapp_lock, which can be taken before vpe_lock in the locking
> > order.
> >
> > Signed-off-by: Marc Zyngier <maz@...nel.org>
> > ---
> > drivers/irqchip/irq-gic-v3-its.c | 8 ++++++--
> > 1 file changed, 6 insertions(+), 2 deletions(-)
> >
> > diff --git a/drivers/irqchip/irq-gic-v3-its.c b/drivers/irqchip/irq-gic-v3-its.c
> > index b52d60097cad5..951ec140bcea2 100644
> > --- a/drivers/irqchip/irq-gic-v3-its.c
> > +++ b/drivers/irqchip/irq-gic-v3-its.c
> > @@ -1810,7 +1810,9 @@ static void its_map_vm(struct its_node *its, struct its_vm *vm)
> > for (i = 0; i < vm->nr_vpes; i++) {
> > struct its_vpe *vpe = vm->vpes[i];
> >
> > - its_send_vmapp(its, vpe, true);
> > + scoped_guard(raw_spinlock, &vpe->vpe_lock)
> > + its_send_vmapp(its, vpe, true);
> > +
> > its_send_vinvall(its, vpe);
> > }
> > }
> > @@ -1827,8 +1829,10 @@ static void its_unmap_vm(struct its_node *its, struct its_vm *vm)
> > if (!--vm->vlpi_count[its->list_nr]) {
> > int i;
> >
> > - for (i = 0; i < vm->nr_vpes; i++)
> > + for (i = 0; i < vm->nr_vpes; i++) {
> > + guard(raw_spinlock)(&vm->vpes[i]->vpe_lock);
> > its_send_vmapp(its, vm->vpes[i], false);
> > + }
> > }
> > }
> >
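(A quick note on the helpers used in the hunks above: guard() and
scoped_guard() come from include/linux/cleanup.h; guard(raw_spinlock)(&l)
holds the lock until the end of the enclosing scope, while
scoped_guard(raw_spinlock, &l) covers only the statement that follows it.
A simplified open-coded sketch of what the first hunk boils down to:)

	raw_spin_lock(&vpe->vpe_lock);		/* scoped_guard() entry */
	its_send_vmapp(its, vpe, true);
	raw_spin_unlock(&vpe->vpe_lock);	/* scoped_guard() exit */

	/* VINVALL stays outside the lock, as in the hunk above */
	its_send_vinvall(its, vpe);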
>
> Hi Marc,
>
> It looks like there is an ABBA deadlock after applying this series:
>
> In its_map_vm: vmapp_lock -> vpe_lock
> In its_vpe_set_affinity: vpe_to_cpuid_lock(vpe_lock) -> its_send_vmovp(vmapp_lock)
>
> Any idea about this?
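(For illustration, the cycle Zhou describes reduces to two contexts taking
the same pair of locks in opposite orders. A minimal standalone userspace
sketch of the pattern -- hypothetical names, not the driver code; build
with `cc abba.c -lpthread` and run it long enough to wedge:)

	#include <pthread.h>
	#include <stddef.h>

	static pthread_mutex_t vmapp_lock = PTHREAD_MUTEX_INITIALIZER;
	static pthread_mutex_t vpe_lock   = PTHREAD_MUTEX_INITIALIZER;

	static void *map_vm(void *arg)		/* mimics its_map_vm(): A -> B */
	{
		for (int i = 0; i < 1000000; i++) {
			pthread_mutex_lock(&vmapp_lock);
			pthread_mutex_lock(&vpe_lock);
			pthread_mutex_unlock(&vpe_lock);
			pthread_mutex_unlock(&vmapp_lock);
		}
		return NULL;
	}

	static void *set_affinity(void *arg)	/* mimics its_vpe_set_affinity(): B -> A */
	{
		for (int i = 0; i < 1000000; i++) {
			pthread_mutex_lock(&vpe_lock);
			pthread_mutex_lock(&vmapp_lock);	/* inverted order: deadlock */
			pthread_mutex_unlock(&vmapp_lock);
			pthread_mutex_unlock(&vpe_lock);
		}
		return NULL;
	}

	int main(void)
	{
		pthread_t t1, t2;

		pthread_create(&t1, NULL, map_vm, NULL);
		pthread_create(&t2, NULL, set_affinity, NULL);
		pthread_join(t1, NULL);
		pthread_join(t2, NULL);
		return 0;
	}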
Hmmm, well spotted. That's an annoying one.
Can you give the below hack a go? I've only lightly tested it, as my
D05 box is on its last legs (it is literally falling apart) and I don't
have any other GICv4.x box to test on.
Thanks,
M.
diff --git a/drivers/irqchip/irq-gic-v3-its.c b/drivers/irqchip/irq-gic-v3-its.c
index 951ec140bcea2..b88c6011c8771 100644
--- a/drivers/irqchip/irq-gic-v3-its.c
+++ b/drivers/irqchip/irq-gic-v3-its.c
@@ -1328,12 +1328,6 @@ static void its_send_vmovp(struct its_vpe *vpe)
return;
}
- /*
- * Protect against concurrent updates of the mapping state on
- * individual VMs.
- */
- guard(raw_spinlock_irqsave)(&vpe->its_vm->vmapp_lock);
-
/*
* Yet another marvel of the architecture. If using the
* its_list "feature", we need to make sure that all ITSs
@@ -3808,7 +3802,7 @@ static int its_vpe_set_affinity(struct irq_data *d,
struct its_vpe *vpe = irq_data_get_irq_chip_data(d);
unsigned int from, cpu = nr_cpu_ids;
struct cpumask *table_mask;
- unsigned long flags;
+ unsigned long flags, vmapp_flags;
/*
* Changing affinity is mega expensive, so let's be as lazy as
@@ -3822,7 +3816,14 @@ static int its_vpe_set_affinity(struct irq_data *d,
* protect us, and that we must ensure nobody samples vpe->col_idx
* during the update, hence the lock below which must also be
* taken on any vLPI handling path that evaluates vpe->col_idx.
+ *
+ * Finally, we must protect ourselves against concurrent
+ * updates of the mapping state on this VM should the ITS list
+ * be in use.
*/
+ if (its_list_map)
+ raw_spin_lock_irqsave(&vpe->its_vm->vmapp_lock, vmapp_flags);
+
from = vpe_to_cpuid_lock(vpe, &flags);
table_mask = gic_data_rdist_cpu(from)->vpe_table_mask;
@@ -3852,6 +3853,9 @@ static int its_vpe_set_affinity(struct irq_data *d,
irq_data_update_effective_affinity(d, cpumask_of(cpu));
vpe_to_cpuid_unlock(vpe, flags);
+ if (its_list_map)
+ raw_spin_unlock_irqrestore(&vpe->its_vm->vmapp_lock, vmapp_flags);
+
return IRQ_SET_MASK_OK_DONE;
}
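(As an aside, not part of the patch: inversions like this are exactly what
lockdep is for; a debug build with the usual options should report the
AB-BA cycle the first time both paths run:)

	CONFIG_PROVE_LOCKING=y
	CONFIG_DEBUG_SPINLOCK=y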
--
Without deviation from the norm, progress is not possible.