Message-Id: <20241028175935.51250-14-arikalo@gmail.com>
Date: Mon, 28 Oct 2024 18:59:35 +0100
From: Aleksandar Rikalo <arikalo@...il.com>
To: Thomas Bogendoerfer <tsbogend@...ha.franken.de>
Cc: Rob Herring <robh@...nel.org>,
Krzysztof Kozlowski <krzk+dt@...nel.org>,
Conor Dooley <conor+dt@...nel.org>,
Vladimir Kondratiev <vladimir.kondratiev@...ileye.com>,
Gregory CLEMENT <gregory.clement@...tlin.com>,
Theo Lebrun <theo.lebrun@...tlin.com>,
Arnd Bergmann <arnd@...db.de>,
devicetree@...r.kernel.org,
Djordje Todorovic <djordje.todorovic@...cgroup.com>,
Chao-ying Fu <cfu@...ecomp.com>,
Daniel Lezcano <daniel.lezcano@...aro.org>,
Geert Uytterhoeven <geert@...ux-m68k.org>,
Greg Ungerer <gerg@...nel.org>,
Hauke Mehrtens <hauke@...ke-m.de>,
Ilya Lipnitskiy <ilya.lipnitskiy@...il.com>,
Jiaxun Yang <jiaxun.yang@...goat.com>,
linux-kernel@...r.kernel.org,
linux-mips@...r.kernel.org,
Marc Zyngier <maz@...nel.org>,
Paul Burton <paulburton@...nel.org>,
Peter Zijlstra <peterz@...radead.org>,
Serge Semin <fancer.lancer@...il.com>,
Tiezhu Yang <yangtiezhu@...ngson.cn>,
Aleksandar Rikalo <arikalo@...il.com>
Subject: [PATCH v8 13/13] irqchip: mips-gic: Handle clusters without CPU cores
From: Gregory CLEMENT <gregory.clement@...tlin.com>
It is possible to have no CPU cores in a cluster; in such a case the GIC of
that cluster cannot be accessed, and any indirect access to it leads to an
exception. Dynamically skip such indirect accesses.
Signed-off-by: Gregory CLEMENT <gregory.clement@...tlin.com>
Signed-off-by: Aleksandar Rikalo <arikalo@...il.com>
Tested-by: Gregory CLEMENT <gregory.clement@...tlin.com>
---
drivers/irqchip/irq-mips-gic.c | 20 ++++++++++++++++----
1 file changed, 16 insertions(+), 4 deletions(-)
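
For readers following the diff below: a minimal, stand-alone C sketch of the
guard this patch adds in gic_irq_lock_cluster(). Cross-cluster access is only
attempted when the target cluster differs from the local one and actually
contains CPU cores. The struct and helper names in the sketch are hypothetical
stand-ins for cpu_cluster(), mips_cps_numcores() and mips_cm_lock_other();
only the decision logic mirrors the kernel change.

/*
 * Stand-alone model of the control flow in gic_irq_lock_cluster() after
 * this patch.  The types and helpers are simplified stand-ins, not the
 * real kernel interfaces; they only illustrate when the cross-cluster
 * lock is (not) taken.
 */
#include <stdbool.h>
#include <stdio.h>

struct cluster_info {
	int id;
	int numcores;	/* CPU cores physically present in the cluster */
};

/* Stand-in for mips_cm_lock_other(): pretend to set up redirect access. */
static void lock_other_cluster(const struct cluster_info *cl)
{
	printf("locking redirect block for cluster %d\n", cl->id);
}

/*
 * No lock is needed for the local cluster, and no lock (or GIC access)
 * is attempted for a cluster without CPU cores, since the indirect
 * access would raise an exception.
 */
static bool lock_cluster_if_needed(const struct cluster_info *target,
				   const struct cluster_info *local)
{
	if (target->id == local->id)
		return false;
	if (target->numcores == 0)
		return false;
	lock_other_cluster(target);
	return true;
}

int main(void)
{
	const struct cluster_info local  = { .id = 0, .numcores = 4 };
	const struct cluster_info empty  = { .id = 1, .numcores = 0 };
	const struct cluster_info remote = { .id = 2, .numcores = 2 };

	printf("empty cluster locked:  %d\n", lock_cluster_if_needed(&empty, &local));
	printf("remote cluster locked: %d\n", lock_cluster_if_needed(&remote, &local));
	return 0;
}
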
diff --git a/drivers/irqchip/irq-mips-gic.c b/drivers/irqchip/irq-mips-gic.c
index f42f69bbd6fb..bca8053864b2 100644
--- a/drivers/irqchip/irq-mips-gic.c
+++ b/drivers/irqchip/irq-mips-gic.c
@@ -141,7 +141,8 @@ static bool gic_irq_lock_cluster(struct irq_data *d)
 	cl = cpu_cluster(&cpu_data[cpu]);
 	if (cl == cpu_cluster(&current_cpu_data))
 		return false;
-
+	if (mips_cps_numcores(cl) == 0)
+		return false;
 	mips_cm_lock_other(cl, 0, 0, CM_GCR_Cx_OTHER_BLOCK_GLOBAL);
 	return true;
 }
@@ -507,6 +508,9 @@ static void gic_mask_local_irq_all_vpes(struct irq_data *d)
 	struct gic_all_vpes_chip_data *cd;
 	int intr, cpu;
 
+	if (!mips_cps_multicluster_cpus())
+		return;
+
 	intr = GIC_HWIRQ_TO_LOCAL(d->hwirq);
 	cd = irq_data_get_irq_chip_data(d);
 	cd->mask = false;
@@ -520,6 +524,9 @@ static void gic_unmask_local_irq_all_vpes(struct irq_data *d)
 	struct gic_all_vpes_chip_data *cd;
 	int intr, cpu;
 
+	if (!mips_cps_multicluster_cpus())
+		return;
+
 	intr = GIC_HWIRQ_TO_LOCAL(d->hwirq);
 	cd = irq_data_get_irq_chip_data(d);
 	cd->mask = true;
@@ -687,8 +694,10 @@ static int gic_irq_domain_map(struct irq_domain *d, unsigned int virq,
 	if (!gic_local_irq_is_routable(intr))
 		return -EPERM;
 
-	for_each_online_cpu_gic(cpu, &gic_lock)
-		write_gic_vo_map(mips_gic_vx_map_reg(intr), map);
+	if (mips_cps_multicluster_cpus()) {
+		for_each_online_cpu_gic(cpu, &gic_lock)
+			write_gic_vo_map(mips_gic_vx_map_reg(intr), map);
+	}
 
 	return 0;
 }
@@ -982,7 +991,7 @@ static int __init gic_of_init(struct device_node *node,
 				change_gic_trig(i, GIC_TRIG_LEVEL);
 				write_gic_rmask(i);
 			}
-		} else {
+		} else if (mips_cps_numcores(cl) != 0) {
 			mips_cm_lock_other(cl, 0, 0, CM_GCR_Cx_OTHER_BLOCK_GLOBAL);
 			for (i = 0; i < gic_shared_intrs; i++) {
 				change_gic_redir_pol(i, GIC_POL_ACTIVE_HIGH);
@@ -990,6 +999,9 @@ static int __init gic_of_init(struct device_node *node,
 				write_gic_redir_rmask(i);
 			}
 			mips_cm_unlock_other();
+
+		} else {
+			pr_warn("No CPU cores on cluster %d, skipping it\n", cl);
 		}
 	}
 
--
2.25.1