Message-ID: <20250923174903.76283-13-ada.coupriediaz@arm.com>
Date: Tue, 23 Sep 2025 18:48:59 +0100
From: Ada Couprie Diaz <ada.coupriediaz@....com>
To: linux-arm-kernel@...ts.infradead.org
Cc: Catalin Marinas <catalin.marinas@....com>,
Will Deacon <will@...nel.org>,
Marc Zyngier <maz@...nel.org>,
Oliver Upton <oliver.upton@...ux.dev>,
Ard Biesheuvel <ardb@...nel.org>,
Joey Gouly <joey.gouly@....com>,
Suzuki K Poulose <suzuki.poulose@....com>,
Zenghui Yu <yuzenghui@...wei.com>,
Andrey Ryabinin <ryabinin.a.a@...il.com>,
Alexander Potapenko <glider@...gle.com>,
Andrey Konovalov <andreyknvl@...il.com>,
Dmitry Vyukov <dvyukov@...gle.com>,
Vincenzo Frascino <vincenzo.frascino@....com>,
linux-kernel@...r.kernel.org,
kvmarm@...ts.linux.dev,
kasan-dev@...glegroups.com,
Mark Rutland <mark.rutland@....com>,
Ada Couprie Diaz <ada.coupriediaz@....com>
Subject: [RFC PATCH 12/16] kvm/arm64: make alternative callbacks safe
Alternative callback functions are regular functions, which means that they,
or any function they call, could themselves be patched or instrumented
by alternatives or other parts of the kernel.
Given that applying alternatives does not guarantee a consistent state
while patching, only once it is done, and handles cache maintenance
manually, this could lead to nasty corruption and execution of bogus code.

Make the KVM alternative callbacks safe by marking them `noinstr` and
`__always_inline`'ing their helpers.
This is possible thanks to previous commits making the `aarch64_insn_...`
functions used in the callbacks safe to inline.
In `kvm_compute_final_ctr_el0()`, read `arm64_ftr_reg_ctrel0.sys_val`
directly rather than calling `read_sanitised_ftr_reg()`, which is a
regular, instrumentable function.

`kvm_update_va_mask()` is already marked `__init`, which places it in its
own text section, conflicting with the `noinstr` one.
Instead, use `__noinstr_section(".init.text")`, which adds all the
function attributes implied by `noinstr` without the section conflict.
This can be an issue, as kprobes seems to blocklist functions based on
their text section rather than on function attributes.

Signed-off-by: Ada Couprie Diaz <ada.coupriediaz@....com>
---
This is missing `kvm_patch_vector_branch()`, which could receive the same
treatment, but the `WARN_ON_ONCE()` in its early-exit check would make it
call into instrumentable code.
I do not currently know whether this `WARN` can safely be removed or
whether it has some importance.
---
arch/arm64/kvm/va_layout.c | 12 +++++++-----
1 file changed, 7 insertions(+), 5 deletions(-)
diff --git a/arch/arm64/kvm/va_layout.c b/arch/arm64/kvm/va_layout.c
index 91b22a014610..3ebb7e0074f6 100644
--- a/arch/arm64/kvm/va_layout.c
+++ b/arch/arm64/kvm/va_layout.c
@@ -109,7 +109,7 @@ __init void kvm_apply_hyp_relocations(void)
}
}
-static u32 compute_instruction(int n, u32 rd, u32 rn)
+static __always_inline u32 compute_instruction(int n, u32 rd, u32 rn)
{
u32 insn = AARCH64_BREAK_FAULT;
@@ -151,6 +151,7 @@ static u32 compute_instruction(int n, u32 rd, u32 rn)
return insn;
}
+__noinstr_section(".init.text")
void __init kvm_update_va_mask(struct alt_instr *alt,
__le32 *origptr, __le32 *updptr, int nr_inst)
{
@@ -241,7 +242,8 @@ void kvm_patch_vector_branch(struct alt_instr *alt,
*updptr++ = cpu_to_le32(insn);
}
-static void generate_mov_q(u64 val, __le32 *origptr, __le32 *updptr, int nr_inst)
+static __always_inline void generate_mov_q(u64 val, __le32 *origptr,
+ __le32 *updptr, int nr_inst)
{
u32 insn, oinsn, rd;
@@ -284,15 +286,15 @@ static void generate_mov_q(u64 val, __le32 *origptr, __le32 *updptr, int nr_inst
*updptr++ = cpu_to_le32(insn);
}
-void kvm_get_kimage_voffset(struct alt_instr *alt,
+noinstr void kvm_get_kimage_voffset(struct alt_instr *alt,
__le32 *origptr, __le32 *updptr, int nr_inst)
{
generate_mov_q(kimage_voffset, origptr, updptr, nr_inst);
}
-void kvm_compute_final_ctr_el0(struct alt_instr *alt,
+noinstr void kvm_compute_final_ctr_el0(struct alt_instr *alt,
__le32 *origptr, __le32 *updptr, int nr_inst)
{
- generate_mov_q(read_sanitised_ftr_reg(SYS_CTR_EL0),
+ generate_mov_q(arm64_ftr_reg_ctrel0.sys_val,
origptr, updptr, nr_inst);
}
--
2.43.0