lists.openwall.net — Open Source and information security mailing list archives
Date: Thu, 10 Feb 2022 14:41:44 -0800
From: Kalesh Singh <kaleshsingh@...gle.com>
To: unlisted-recipients:; (no To-header on input)
Cc: will@...nel.org, maz@...nel.org, qperret@...gle.com, tabba@...gle.com,
	surenb@...gle.com, kernel-team@...roid.com,
	Kalesh Singh <kaleshsingh@...gle.com>,
	Catalin Marinas <catalin.marinas@....com>,
	James Morse <james.morse@....com>,
	Alexandru Elisei <alexandru.elisei@....com>,
	Suzuki K Poulose <suzuki.poulose@....com>,
	Ard Biesheuvel <ardb@...nel.org>,
	Mark Rutland <mark.rutland@....com>,
	Pasha Tatashin <pasha.tatashin@...een.com>,
	Joey Gouly <joey.gouly@....com>,
	Peter Collingbourne <pcc@...gle.com>,
	Andrew Walbran <qwandor@...gle.com>,
	Andrew Scull <ascull@...gle.com>,
	linux-arm-kernel@...ts.infradead.org,
	linux-kernel@...r.kernel.org,
	kvmarm@...ts.cs.columbia.edu
Subject: [PATCH 3/7] arm64: asm: Introduce test_sp_overflow macro

From: Quentin Perret <qperret@...gle.com>

The asm entry code in the kernel uses a trick to check if VMAP'd stacks
have overflowed by aligning them at THREAD_SHIFT * 2 granularity and
checking the SP's THREAD_SHIFT bit.

Protected KVM will soon make use of a similar trick to detect stack
overflows, so factor out the asm code in a re-usable macro.

Signed-off-by: Quentin Perret <qperret@...gle.com>
[Kalesh - Resolve minor conflicts]
Signed-off-by: Kalesh Singh <kaleshsingh@...gle.com>
---
 arch/arm64/include/asm/assembler.h | 11 +++++++++++
 arch/arm64/kernel/entry.S          |  9 ++-------
 2 files changed, 13 insertions(+), 7 deletions(-)

diff --git a/arch/arm64/include/asm/assembler.h b/arch/arm64/include/asm/assembler.h
index e8bd0af0141c..ad40eb0eee83 100644
--- a/arch/arm64/include/asm/assembler.h
+++ b/arch/arm64/include/asm/assembler.h
@@ -850,4 +850,15 @@ alternative_endif
 
 #endif /* GNU_PROPERTY_AARCH64_FEATURE_1_DEFAULT */
 
+/*
+ * Test whether the SP has overflowed, without corrupting a GPR.
+ */
+.macro test_sp_overflow shift, label
+	add	sp, sp, x0			// sp' = sp + x0
+	sub	x0, sp, x0			// x0' = sp' - x0 = (sp + x0) - x0 = sp
+	tbnz	x0, #\shift, \label
+	sub	x0, sp, x0			// x0'' = sp' - x0' = (sp + x0) - sp = x0
+	sub	sp, sp, x0			// sp'' = sp' - x0 = (sp + x0) - x0 = sp
+.endm
+
 #endif	/* __ASM_ASSEMBLER_H */
diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
index 772ec2ecf488..2632bc47b348 100644
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -53,16 +53,11 @@ alternative_else_nop_endif
 	sub	sp, sp, #PT_REGS_SIZE
 #ifdef CONFIG_VMAP_STACK
 	/*
-	 * Test whether the SP has overflowed, without corrupting a GPR.
 	 * Task and IRQ stacks are aligned so that SP & (1 << THREAD_SHIFT)
 	 * should always be zero.
 	 */
-	add	sp, sp, x0			// sp' = sp + x0
-	sub	x0, sp, x0			// x0' = sp' - x0 = (sp + x0) - x0 = sp
-	tbnz	x0, #THREAD_SHIFT, 0f
-	sub	x0, sp, x0			// x0'' = sp' - x0' = (sp + x0) - sp = x0
-	sub	sp, sp, x0			// sp'' = sp' - x0 = (sp + x0) - x0 = sp
-	b	el\el\ht\()_\regsize\()_\label
+	test_sp_overflow THREAD_SHIFT, 0f
+	b	el\el\ht\()_\regsize\()_\label

0:
	/*
-- 
2.35.1.265.g69c8d7142f-goog