Message-ID: <Z2A502_EpqvLYN8g@J2N7QTR9R3.cambridge.arm.com>
Date: Mon, 16 Dec 2024 14:31:47 +0000
From: Mark Rutland <mark.rutland@....com>
To: Mark Brown <broonie@...nel.org>, Marc Zyngier <maz@...nel.org>
Cc: Catalin Marinas <catalin.marinas@....com>,
Will Deacon <will@...nel.org>, Peter Collingbourne <pcc@...gle.com>,
linux-arm-kernel@...ts.infradead.org, linux-kernel@...r.kernel.org,
stable@...r.kernel.org
Subject: Re: [PATCH] arm64/sme: Move storage of reg_smidr to
__cpuinfo_store_cpu()
On Mon, Dec 16, 2024 at 01:23:55PM +0000, Mark Brown wrote:
> On Mon, Dec 16, 2024 at 12:44:14PM +0000, Mark Rutland wrote:
>
> > ... didn't matter either way, and using '&boot_cpu_data' was intended to
> > make it clear that the features were based on the boot CPU's info, even
> > if you just grepped for that and didn't see the surrounding context.
>
> Right, that was my best guess as to what was supposed to be going on
> but it wasn't super clear. The code could use some more comments.
>
> > I think the real fix here is to move the reading back into
> > __cpuinfo_store_cpu(), but to have an explicit check that SME has been
> > disabled on the commandline, with a comment explaining that this is a
> > bodge for broken FW which traps the SME ID regs.
>
> That should be doable.
>
> There are a few other similar ID registers (eg, we already read GMID_EL1
> and MPAMIDR_EL1) that make me a bit nervous that we might need to
> generalise it a bit, but we can deal with that if it comes up. Even for
> SME the disable was added speculatively; the factors that made this come
> up for SVE are less likely to be an issue with SME.
FWIW, I had a quick go (with only the SME case), and I think the shape
that we want is roughly as below, which I think is easy to generalise to
those other cases.
MarcZ, thoughts?
Mark.
---->8----
diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index 8b4e5a3cd24c8..f16eb99c10723 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -91,6 +91,16 @@ struct arm64_ftr_override {
 	u64 mask;
 };
 
+static inline u64
+arm64_ftr_override_apply(const struct arm64_ftr_override *override,
+			 u64 val)
+{
+	val &= ~override->mask;
+	val |= override->val & override->mask;
+
+	return val;
+}
+
 /*
  * @arm64_ftr_reg - Feature register
  * @strict_mask Bits which should match across all CPUs for sanity.
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 6ce71f444ed84..faad7d3e4cf5f 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -1167,12 +1167,6 @@ void __init init_cpu_features(struct cpuinfo_arm64 *info)
 	    id_aa64pfr1_sme(read_sanitised_ftr_reg(SYS_ID_AA64PFR1_EL1))) {
 		unsigned long cpacr = cpacr_save_enable_kernel_sme();
 
-		/*
-		 * We mask out SMPS since even if the hardware
-		 * supports priorities the kernel does not at present
-		 * and we block access to them.
-		 */
-		info->reg_smidr = read_cpuid(SMIDR_EL1) & ~SMIDR_EL1_SMPS;
 
 		vec_init_vq_map(ARM64_VEC_SME);
 		cpacr_restore(cpacr);
@@ -1550,10 +1544,8 @@ u64 __read_sysreg_by_encoding(u32 sys_id)
 	}
 
 	regp = get_arm64_ftr_reg(sys_id);
-	if (regp) {
-		val &= ~regp->override->mask;
-		val |= (regp->override->val & regp->override->mask);
-	}
+	if (regp)
+		val = arm64_ftr_override_apply(regp->override, val);
 
 	return val;
 }
diff --git a/arch/arm64/kernel/cpuinfo.c b/arch/arm64/kernel/cpuinfo.c
index d79e88fccdfce..1460e3541132f 100644
--- a/arch/arm64/kernel/cpuinfo.c
+++ b/arch/arm64/kernel/cpuinfo.c
@@ -439,6 +439,27 @@ static void __cpuinfo_store_cpu_32bit(struct cpuinfo_32bit *info)
 	info->reg_mvfr2 = read_cpuid(MVFR2_EL1);
 }
 
+static void __cpuinfo_store_cpu_sme(struct cpuinfo_arm64 *info)
+{
+	/*
+	 * This is a bodge for broken FW which traps the SME ID regs:
+	 * apply the commandline override to ID_AA64PFR1_EL1 first, so
+	 * that we never read SMIDR_EL1 when SME has been disabled.
+	 */
+	u64 pfr1 = info->reg_id_aa64pfr1;
+
+	pfr1 = arm64_ftr_override_apply(&id_aa64pfr1_override, pfr1);
+	if (!id_aa64pfr1_sme(pfr1))
+		return;
+
+	/*
+	 * We mask out SMPS since even if the hardware
+	 * supports priorities the kernel does not at present
+	 * and we block access to them.
+	 */
+	info->reg_smidr = read_cpuid(SMIDR_EL1) & ~SMIDR_EL1_SMPS;
+}
+
 static void __cpuinfo_store_cpu(struct cpuinfo_arm64 *info)
 {
 	info->reg_cntfrq = arch_timer_get_cntfrq();
@@ -482,6 +503,8 @@ static void __cpuinfo_store_cpu(struct cpuinfo_arm64 *info)
 
 	if (id_aa64pfr0_mpam(info->reg_id_aa64pfr0))
 		info->reg_mpamidr = read_cpuid(MPAMIDR_EL1);
 
+	__cpuinfo_store_cpu_sme(info);
+
 	cpuinfo_detect_icache_policy(info);
 }