Message-ID: <159351190922.4006.6997590439954706407.tip-bot2@tip-bot2>
Date: Tue, 30 Jun 2020 10:11:49 -0000
From: "tip-bot2 for Marc Zyngier" <tip-bot2@...utronix.de>
To: linux-tip-commits@...r.kernel.org
Cc: stable@...r.kernel.org, Marc Zyngier <maz@...nel.org>,
x86 <x86@...nel.org>, LKML <linux-kernel@...r.kernel.org>
Subject: [tip: irq/urgent] irqchip/gic: Atomically update affinity
The following commit has been merged into the irq/urgent branch of tip:

Commit-ID:     005c34ae4b44f085120d7f371121ec7ded677761
Gitweb:        https://git.kernel.org/tip/005c34ae4b44f085120d7f371121ec7ded677761
Author:        Marc Zyngier <maz@...nel.org>
AuthorDate:    Sun, 21 Jun 2020 14:43:15 +01:00
Committer:     Marc Zyngier <maz@...nel.org>
CommitterDate: Sun, 21 Jun 2020 15:24:46 +01:00

irqchip/gic: Atomically update affinity
The GIC driver uses a RMW sequence to update the affinity, and
relies on the gic_lock_irqsave/gic_unlock_irqrestore sequences
to update it atomically.

But these sequences only expand into anything meaningful if
the BL_SWITCHER option is selected, which almost never happens.

It also turns out that using a RMW and locks is just as silly,
as the GIC distributor supports byte accesses for the GICD_TARGETRn
registers, which when used make the update atomic by definition.

Drop the terminally broken code and replace it by a byte write.

Fixes: 04c8b0f82c7d ("irqchip/gic: Make locking a BL_SWITCHER only feature")
Cc: stable@...r.kernel.org
Signed-off-by: Marc Zyngier <maz@...nel.org>
---
 drivers/irqchip/irq-gic.c | 14 +++-----------
 1 file changed, 3 insertions(+), 11 deletions(-)

diff --git a/drivers/irqchip/irq-gic.c b/drivers/irqchip/irq-gic.c
index 00de05a..c17fabd 100644
--- a/drivers/irqchip/irq-gic.c
+++ b/drivers/irqchip/irq-gic.c
@@ -329,10 +329,8 @@ static int gic_irq_set_vcpu_affinity(struct irq_data *d, void *vcpu)
 static int gic_set_affinity(struct irq_data *d, const struct cpumask *mask_val,
 			    bool force)
 {
-	void __iomem *reg = gic_dist_base(d) + GIC_DIST_TARGET + (gic_irq(d) & ~3);
-	unsigned int cpu, shift = (gic_irq(d) % 4) * 8;
-	u32 val, mask, bit;
-	unsigned long flags;
+	void __iomem *reg = gic_dist_base(d) + GIC_DIST_TARGET + gic_irq(d);
+	unsigned int cpu;
 
 	if (!force)
 		cpu = cpumask_any_and(mask_val, cpu_online_mask);
@@ -342,13 +340,7 @@ static int gic_set_affinity(struct irq_data *d, const struct cpumask *mask_val,
 	if (cpu >= NR_GIC_CPU_IF || cpu >= nr_cpu_ids)
 		return -EINVAL;
 
-	gic_lock_irqsave(flags);
-	mask = 0xff << shift;
-	bit = gic_cpu_map[cpu] << shift;
-	val = readl_relaxed(reg) & ~mask;
-	writel_relaxed(val | bit, reg);
-	gic_unlock_irqrestore(flags);
-
+	writeb_relaxed(gic_cpu_map[cpu], reg);
 	irq_data_update_effective_affinity(d, cpumask_of(cpu));
 
 	return IRQ_SET_MASK_OK_DONE;
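
For readers who want to see why the single byte write is enough, below is a
minimal user-space sketch, not kernel code and not part of the patch: the
array and function names are made up for illustration. It mimics the layout
of the registers the commit message calls GICD_TARGETRn (GICD_ITARGETSRn in
the GICv2 spec): four 8-bit CPU-targets fields packed per 32-bit word. The
old path had to read-modify-write the whole word under a lock so that
concurrent updates to neighbouring fields would not clobber each other; a
byte store only ever touches the one field it owns, so no lock is needed.

/*
 * Illustrative user-space model only -- not the driver's API.
 * fake_targets stands in for the GICD_ITARGETSR register space:
 * one byte per interrupt, four interrupts per 32-bit word.
 */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

static uint8_t fake_targets[32];

/*
 * Old-style update: 32-bit read-modify-write of the containing word.
 * Safe only if serialised against other CPUs updating the same word.
 */
static void set_target_rmw(unsigned int irq, uint8_t cpu_mask)
{
	unsigned int shift = (irq % 4) * 8;
	uint32_t val, mask = 0xffu << shift, bit = (uint32_t)cpu_mask << shift;

	memcpy(&val, &fake_targets[irq & ~3u], sizeof(val));
	val = (val & ~mask) | bit;
	memcpy(&fake_targets[irq & ~3u], &val, sizeof(val));
}

/*
 * New-style update: a single byte store, atomic for that field by
 * construction, so neighbouring fields are never read or rewritten.
 */
static void set_target_byte(unsigned int irq, uint8_t cpu_mask)
{
	fake_targets[irq] = cpu_mask;
}

int main(void)
{
	set_target_rmw(5, 0x01);	/* route IRQ 5 to CPU0 */
	set_target_byte(6, 0x02);	/* route IRQ 6 to CPU1 */
	printf("irq5 -> %#x, irq6 -> %#x\n", fake_targets[5], fake_targets[6]);
	return 0;
}

On real hardware the kernel of course goes through the MMIO accessors shown
in the diff (readl_relaxed()/writel_relaxed() before, writeb_relaxed() after)
rather than plain memory accesses.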