Date: Mon, 5 Feb 2024 17:27:20 +0000
From: Dave Martin <Dave.Martin@....com>
To: linux-arm-kernel@...ts.infradead.org
Cc: Mark Brown <broonie@...nel.org>, Will Deacon <will@...nel.org>,
	Catalin Marinas <catalin.marinas@....com>,
	Oleg Nesterov <oleg@...hat.com>, Al Viro <viro@...iv.linux.org.uk>,
	linux-kernel@...r.kernel.org, Doug Anderson <dianders@...omium.org>
Subject: [RFC PATCH] arm64/sve,sme: Refine scalable regset sizes at boot

Since [1] and [2], the ptrace core has used the static values in
struct user_regset to preallocate memory when reading a regset.
This results in excessive allocations for SVE and related regsets,
which are not a fixed size: the theoretical maximum size of those
regsets was deliberately made huge to allow for future architectural
expansion.
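
For background, the core's behaviour after [1] and [2] is roughly the
following (a paraphrase of kernel/regset.c, not an exact quote):

	/*
	 * The core bounds its temporary buffer by the regset's static
	 * geometry, so an oversized compile-time ->n means an oversized
	 * allocation even when the target's live vector length is tiny.
	 */
	if (size > regset->n * regset->size)
		size = regset->n * regset->size;
	buf = kzalloc(size, GFP_KERNEL);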

In practice, the regsets can be smaller -- usually _much_ smaller.
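
For a rough sense of scale (approximate figures, not measured): with
SVE_VQ_MAX = 512, the compiled-in worst cases work out to something
like

	SVE_PT_SIZE(SVE_VQ_MAX, SVE_PT_REGS_SVE)
		~= 32 x 8 KiB (Z) + 17 x 1 KiB (P + FFR) + header
		~= 273 KiB

	ZA_PT_SIZE(SVE_VQ_MAX)
		~= (512 x SVE_VQ_BYTES)^2
		~= 64 MiB

whereas the 128- and 256-bit vector lengths found on today's hardware
need well under a page for SVE and at most a few tens of KiB for ZA.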

Since the max possible size of these regsets depends on how big the
CPUs' registers actually are, clamp the affected regset sizes once
the kernel has probed all boot-time CPUs.

This doesn't make memory allocation failures impossible on the
affected paths, but at least avoids stupidly large allocations.
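
Concretely (illustrative arithmetic only): on a machine whose largest
probed vector length is 32 bytes (256-bit vectors, i.e. vq_max = 2),
the clamped worst cases become roughly

	SVE/SSVE: SVE_PT_SIZE(2, SVE_PT_REGS_SVE)         ~= 1.1 KiB
	ZA:       (2 x SVE_VQ_BYTES)^2 + header           ~= 1 KiB

instead of the hundreds of KiB (SVE) and tens of MiB (ZA) implied by
SVE_VQ_MAX.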

[1]: commit b4e9c9549f62 ("introduction of regset ->get() wrappers,
switching ELF coredumps to those")

[2]: commit 7717cb9bdd04 ("regset: new method and helpers for it")

Reported-by: Douglas Anderson <dianders@...omium.org>
Link: https://lore.kernel.org/lkml/20240201171159.1.Id9ad163b60d21c9e56c2d686b0cc9083a8ba7924@changeid/
Signed-off-by: Dave Martin <Dave.Martin@....com>
---

Only build-tested for now.

If a short-term fix is needed, Mark Brown's patch [3] looks like the
lower-risk option, but there is still outstanding discussion about
whether that patch actually fixes the issue reported above or merely
makes it less likely to fire; see the Link: above.

This patch duplicates the sizing logic between the compiled-in
regset->n values and those computed after boot.  It might be better to
compile in junk values, since the compiled-in values should never be
used anyway...

[3] https://lore.kernel.org/all/20240203-arm64-sve-ptrace-regset-size-v1-1-2c3ba1386b9e@kernel.org/
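
For reference (not part of this patch): userspace already sizes these
regsets dynamically via the header returned by PTRACE_GETREGSET, so the
clamp only changes the kernel's internal worst-case bound, not the ABI.
A minimal, untested sketch of how a tracer might query it:

	#include <stdio.h>
	#include <sys/types.h>
	#include <sys/ptrace.h>
	#include <sys/uio.h>
	#include <linux/elf.h>		/* NT_ARM_SVE */
	#include <asm/ptrace.h>		/* struct user_sve_header (arm64 only) */

	/* pid must refer to a ptrace-stopped tracee. */
	static int show_sve_size(pid_t pid)
	{
		struct user_sve_header hdr;
		struct iovec iov = {
			.iov_base = &hdr,
			.iov_len = sizeof(hdr),
		};

		/* Reading just the header is enough to learn the sizes. */
		if (ptrace(PTRACE_GETREGSET, pid, NT_ARM_SVE, &iov) != 0)
			return -1;

		printf("SVE regset: %u bytes now, %u bytes max for this task\n",
		       hdr.size, hdr.max_size);
		return 0;
	}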


 arch/arm64/include/asm/ptrace.h | 12 ++++++++++++
 arch/arm64/kernel/fpsimd.c      |  3 +++
 arch/arm64/kernel/ptrace.c      | 22 +++++++++++++++++++++-
 3 files changed, 36 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/include/asm/ptrace.h b/arch/arm64/include/asm/ptrace.h
index 47ec58031f11..609b963a05e0 100644
--- a/arch/arm64/include/asm/ptrace.h
+++ b/arch/arm64/include/asm/ptrace.h
@@ -389,5 +389,17 @@ static inline void procedure_link_pointer_set(struct pt_regs *regs,
 
 extern unsigned long profile_pc(struct pt_regs *regs);
 
+#ifdef CONFIG_ARM64_SVE
+void __init arch_ptrace_sve_init(unsigned int vq_max);
+#else
+static inline void __init arch_ptrace_sve_init(unsigned int vq_max) { }
+#endif
+
+#ifdef CONFIG_ARM64_SME
+void __init arch_ptrace_sme_init(unsigned int vq_max);
+#else
+static inline void __init arch_ptrace_sme_init(unsigned int vq_max) { }
+#endif
+
 #endif /* __ASSEMBLY__ */
 #endif
diff --git a/arch/arm64/kernel/fpsimd.c b/arch/arm64/kernel/fpsimd.c
index a5dc6f764195..5c2f91f84c31 100644
--- a/arch/arm64/kernel/fpsimd.c
+++ b/arch/arm64/kernel/fpsimd.c
@@ -1189,6 +1189,7 @@ void __init sve_setup(void)
 		pr_warn("%s: unvirtualisable vector lengths present\n",
 			info->name);
 
+	arch_ptrace_sve_init(sve_vq_from_vl(info->max_vl));
 	sve_efi_setup();
 }
 
@@ -1309,6 +1310,8 @@ void __init sme_setup(void)
 		info->max_vl);
 	pr_info("SME: default vector length %u bytes per vector\n",
 		get_sme_default_vl());
+
+	arch_ptrace_sme_init(sve_vq_from_vl(info->max_vl));
 }
 
 #endif /* CONFIG_ARM64_SME */
diff --git a/arch/arm64/kernel/ptrace.c b/arch/arm64/kernel/ptrace.c
index dc6cf0e37194..466a0eb93123 100644
--- a/arch/arm64/kernel/ptrace.c
+++ b/arch/arm64/kernel/ptrace.c
@@ -9,6 +9,7 @@
  */
 
 #include <linux/audit.h>
+#include <linux/cache.h>
 #include <linux/compat.h>
 #include <linux/kernel.h>
 #include <linux/sched/signal.h>
@@ -1441,7 +1442,7 @@ enum aarch64_regset {
 #endif
 };
 
-static const struct user_regset aarch64_regsets[] = {
+static struct user_regset aarch64_regsets[] __ro_after_init = {
 	[REGSET_GPR] = {
 		.core_note_type = NT_PRSTATUS,
 		.n = sizeof(struct user_pt_regs) / sizeof(u64),
@@ -1596,6 +1597,25 @@ static const struct user_regset_view user_aarch64_view = {
 	.regsets = aarch64_regsets, .n = ARRAY_SIZE(aarch64_regsets)
 };
 
+#ifdef CONFIG_ARM64_SVE
+void __init arch_ptrace_sve_init(unsigned int vq_max)
+{
+	aarch64_regsets[REGSET_SVE].n = DIV_ROUND_UP(
+		SVE_PT_SIZE(vq_max, SVE_PT_REGS_SVE), SVE_VQ_BYTES);
+}
+#endif /* CONFIG_ARM64_SVE */
+
+#ifdef CONFIG_ARM64_SME
+void __init arch_ptrace_sme_init(unsigned int vq_max)
+{
+	aarch64_regsets[REGSET_SSVE].n = DIV_ROUND_UP(
+		SVE_PT_SIZE(vq_max, SVE_PT_REGS_SVE), SVE_VQ_BYTES);
+
+	aarch64_regsets[REGSET_ZA].n = DIV_ROUND_UP(
+		ZA_PT_SIZE(vq_max), SVE_VQ_BYTES);
+}
+#endif /* CONFIG_ARM64_SME */
+
 #ifdef CONFIG_COMPAT
 enum compat_regset {
 	REGSET_COMPAT_GPR,

base-commit: 54be6c6c5ae8e0d93a6c4641cb7528eb0b6ba478
-- 
2.34.1

