Message-Id: <20220412063000.988387489@linuxfoundation.org>
Date: Tue, 12 Apr 2022 08:32:33 +0200
From: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
To: linux-kernel@...r.kernel.org
Cc: Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
stable@...r.kernel.org, Jingyi Wang <wangjingyi11@...wei.com>,
Nianyao Tang <tangnianyao@...wei.com>,
Marc Zyngier <maz@...nel.org>
Subject: [PATCH 5.17 335/343] irqchip/gic-v4: Wait for GICR_VPENDBASER.Dirty to clear before descheduling

From: Marc Zyngier <maz@...nel.org>

commit af27e41612ec7e5b4783f589b753a7c31a37aac8 upstream.

The way KVM drives GICv4.{0,1} is as follows:

- vcpu_load() makes the VPE resident, instructing the RD to start
  scanning for interrupts

- just before entering the guest, we check that the RD has finished
  scanning and that we can start running the vcpu

- on preemption, we deschedule the VPE by making it invalid on
  the RD

However, we are preemptible between the first two steps. If preemption
does hit in that window *and* the RD is still scanning, we nonetheless
write to the GICR_VPENDBASER register while Dirty is set, and bad
things happen (we're in UNPRED land).

This affects both the 4.0 and 4.1 implementations.
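
To make the window concrete, here is a minimal sketch of the pre-patch
ordering. It is not the driver code itself: rd_read64()/rd_write64()
and the bit masks below are hypothetical stand-ins for the
gicr_read_vpendbaser()/gicr_write_vpendbaser() accessors and the
GICR_VPENDBASER_* field definitions. The Valid-clearing write goes out
before anything has checked Dirty, so a scan still in progress makes
that write UNPREDICTABLE:

#include <stdint.h>

/* Hypothetical stand-ins for the real accessors and register fields. */
#define VPENDBASER_VALID	(1ULL << 63)
#define VPENDBASER_DIRTY	(1ULL << 60)

extern uint64_t rd_read64(void);	/* read GICR_VPENDBASER  */
extern void rd_write64(uint64_t val);	/* write GICR_VPENDBASER */

/* Pre-patch shape of the deschedule: write first, poll Dirty afterwards. */
static uint64_t deschedule_vpe_pre_patch(void)
{
	uint64_t val = rd_read64();

	/*
	 * Nothing here waits for Dirty. If we were preempted right after
	 * making the VPE resident, the RD may still be scanning, and this
	 * write hits GICR_VPENDBASER while Dirty == 1.
	 */
	val &= ~VPENDBASER_VALID;
	rd_write64(val);

	/* Only now is Dirty polled (bounded in the real driver). */
	while (rd_read64() & VPENDBASER_DIRTY)
		;

	return val;
}
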
Make sure Dirty is cleared before performing the deschedule,
meaning that its_clear_vpend_valid() becomes a sort of full VPE
residency barrier.
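
Still in the same hypothetical notation as the sketch above, the
ordering the fix enforces looks like this: the Dirty poll is pulled
out into a helper and run both before and after the Valid-clearing
write, so the write can never race with a scan that is still in
flight.

/* Reuses the rd_read64()/rd_write64() stand-ins and masks from above. */

/* Poll until the RD reports Dirty == 0 (or we give up); return the value. */
static uint64_t wait_for_dirty_clear(void)
{
	uint64_t val;
	unsigned int count = 1000000;	/* bounded poll, as in the driver */

	do {
		val = rd_read64();
	} while ((val & VPENDBASER_DIRTY) && --count);

	return val;
}

/* Post-patch shape of the deschedule: poll, write, poll again. */
static uint64_t deschedule_vpe_post_patch(void)
{
	uint64_t val;

	/* Residency barrier: let the RD finish its initial scan first. */
	val = wait_for_dirty_clear();

	val &= ~VPENDBASER_VALID;
	rd_write64(val);

	/* ...then wait for the invalidation itself to settle. */
	return wait_for_dirty_clear();
}

This is the same split the hunks below make: read_vpend_dirty_clear()
is the reusable poll, and its_clear_vpend_valid() calls it on both
sides of the write.
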
Reported-by: Jingyi Wang <wangjingyi11@...wei.com>
Tested-by: Nianyao Tang <tangnianyao@...wei.com>
Signed-off-by: Marc Zyngier <maz@...nel.org>
Fixes: 57e3cebd022f ("KVM: arm64: Delay the polling of the GICR_VPENDBASER.Dirty bit")
Link: https://lore.kernel.org/r/4aae10ba-b39a-5f84-754b-69c2eb0a2c03@huawei.com
Signed-off-by: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
---
drivers/irqchip/irq-gic-v3-its.c | 28 +++++++++++++++++++---------
1 file changed, 19 insertions(+), 9 deletions(-)

--- a/drivers/irqchip/irq-gic-v3-its.c
+++ b/drivers/irqchip/irq-gic-v3-its.c
@@ -3011,18 +3011,12 @@ static int __init allocate_lpi_tables(vo
 	return 0;
 }
 
-static u64 its_clear_vpend_valid(void __iomem *vlpi_base, u64 clr, u64 set)
+static u64 read_vpend_dirty_clear(void __iomem *vlpi_base)
 {
 	u32 count = 1000000;	/* 1s! */
 	bool clean;
 	u64 val;
 
-	val = gicr_read_vpendbaser(vlpi_base + GICR_VPENDBASER);
-	val &= ~GICR_VPENDBASER_Valid;
-	val &= ~clr;
-	val |= set;
-	gicr_write_vpendbaser(val, vlpi_base + GICR_VPENDBASER);
-
 	do {
 		val = gicr_read_vpendbaser(vlpi_base + GICR_VPENDBASER);
 		clean = !(val & GICR_VPENDBASER_Dirty);
@@ -3033,10 +3027,26 @@ static u64 its_clear_vpend_valid(void __
 		}
 	} while (!clean && count);
 
-	if (unlikely(val & GICR_VPENDBASER_Dirty)) {
+	if (unlikely(!clean))
 		pr_err_ratelimited("ITS virtual pending table not cleaning\n");
+
+	return val;
+}
+
+static u64 its_clear_vpend_valid(void __iomem *vlpi_base, u64 clr, u64 set)
+{
+	u64 val;
+
+	/* Make sure we wait until the RD is done with the initial scan */
+	val = read_vpend_dirty_clear(vlpi_base);
+	val &= ~GICR_VPENDBASER_Valid;
+	val &= ~clr;
+	val |= set;
+	gicr_write_vpendbaser(val, vlpi_base + GICR_VPENDBASER);
+
+	val = read_vpend_dirty_clear(vlpi_base);
+	if (unlikely(val & GICR_VPENDBASER_Dirty))
 		val |= GICR_VPENDBASER_PendingLast;
-	}
 
 	return val;
 }