Message-ID: <86jz0pwmc4.wl-maz@kernel.org>
Date: Mon, 20 Oct 2025 17:48:43 +0100
From: Marc Zyngier <maz@...nel.org>
To: Ada Couprie Diaz <ada.coupriediaz@....com>
Cc: linux-arm-kernel@...ts.infradead.org,
Catalin Marinas <catalin.marinas@....com>,
Will Deacon <will@...nel.org>,
Oliver Upton <oliver.upton@...ux.dev>,
Ard Biesheuvel <ardb@...nel.org>,
Joey Gouly <joey.gouly@....com>,
Suzuki K Poulose <suzuki.poulose@....com>,
Zenghui Yu <yuzenghui@...wei.com>,
Andrey Ryabinin <ryabinin.a.a@...il.com>,
Alexander Potapenko <glider@...gle.com>,
Andrey Konovalov <andreyknvl@...il.com>,
Dmitry Vyukov <dvyukov@...gle.com>,
Vincenzo Frascino <vincenzo.frascino@....com>,
linux-kernel@...r.kernel.org,
kvmarm@...ts.linux.dev,
kasan-dev@...glegroups.com,
Mark Rutland <mark.rutland@....com>
Subject: Re: [RFC PATCH 06/16] arm64/insn: always inline aarch64_insn_gen_movewide()
On Tue, 23 Sep 2025 18:48:53 +0100,
Ada Couprie Diaz <ada.coupriediaz@....com> wrote:
>
> As it is always called with an explicit movewide type, we can
> check for its validity at compile time and remove the runtime error print.
>
> The other error conditions cannot be verified at compile time, but should not
> occur in practice and will still lead to a faulting BRK, so remove the prints.
>
> This makes `aarch64_insn_gen_movewide()` safe for inlining
> and usage from patching callbacks, as both
> `aarch64_insn_encode_register()` and `aarch64_insn_encode_immediate()`
> have been made safe in previous commits.
>
> Signed-off-by: Ada Couprie Diaz <ada.coupriediaz@....com>
> ---
> arch/arm64/include/asm/insn.h | 58 ++++++++++++++++++++++++++++++++---
> arch/arm64/lib/insn.c | 56 ---------------------------------
> 2 files changed, 54 insertions(+), 60 deletions(-)
>
> diff --git a/arch/arm64/include/asm/insn.h b/arch/arm64/include/asm/insn.h
> index 5f5f6a125b4e..5a25e311717f 100644
> --- a/arch/arm64/include/asm/insn.h
> +++ b/arch/arm64/include/asm/insn.h
> @@ -624,6 +624,8 @@ static __always_inline bool aarch64_get_imm_shift_mask(
> #define ADR_IMM_LOSHIFT 29
> #define ADR_IMM_HISHIFT 5
>
> +#define AARCH64_INSN_SF_BIT BIT(31)
> +
> enum aarch64_insn_encoding_class aarch64_get_insn_class(u32 insn);
> u64 aarch64_insn_decode_immediate(enum aarch64_insn_imm_type type, u32 insn);
>
> @@ -796,10 +798,58 @@ u32 aarch64_insn_gen_bitfield(enum aarch64_insn_register dst,
> int immr, int imms,
> enum aarch64_insn_variant variant,
> enum aarch64_insn_bitfield_type type);
> -u32 aarch64_insn_gen_movewide(enum aarch64_insn_register dst,
> - int imm, int shift,
> - enum aarch64_insn_variant variant,
> - enum aarch64_insn_movewide_type type);
> +
> +static __always_inline u32 aarch64_insn_gen_movewide(
> + enum aarch64_insn_register dst,
> + int imm, int shift,
> + enum aarch64_insn_variant variant,
> + enum aarch64_insn_movewide_type type)
nit: I personally find this definition style pretty unreadable, and
would rather see the "static __always_inline" stuff put on a line of
its own:
static __always_inline
u32 aarch64_insn_gen_movewide(enum aarch64_insn_register dst,
int imm, int shift,
enum aarch64_insn_variant variant,
enum aarch64_insn_movewide_type type)
But again, that's a personal preference, nothing else.
> +{
> + compiletime_assert(type >= AARCH64_INSN_MOVEWIDE_ZERO &&
> + type <= AARCH64_INSN_MOVEWIDE_INVERSE, "unknown movewide encoding");
> + u32 insn;
> +
> + switch (type) {
> + case AARCH64_INSN_MOVEWIDE_ZERO:
> + insn = aarch64_insn_get_movz_value();
> + break;
> + case AARCH64_INSN_MOVEWIDE_KEEP:
> + insn = aarch64_insn_get_movk_value();
> + break;
> + case AARCH64_INSN_MOVEWIDE_INVERSE:
> + insn = aarch64_insn_get_movn_value();
> + break;
> + default:
> + return AARCH64_BREAK_FAULT;
Similar request to one of the previous patches: since you can check
the validity at compile time, place it in the default: case, and drop
the return statement.
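i.e. something along these lines (untested, just to illustrate):

	default:
		/*
		 * 'type' is always a constant expression, so this arm is
		 * dead code unless an invalid type is passed, in which
		 * case the build fails instead of emitting a runtime
		 * check.
		 */
		compiletime_assert(false, "unknown movewide encoding");
	}

which also lets you drop the compiletime_assert() at the top of the
function.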
> + }
> +
> + if (imm & ~(SZ_64K - 1)) {
> + return AARCH64_BREAK_FAULT;
> + }
> +
> + switch (variant) {
> + case AARCH64_INSN_VARIANT_32BIT:
> + if (shift != 0 && shift != 16) {
> + return AARCH64_BREAK_FAULT;
> + }
> + break;
> + case AARCH64_INSN_VARIANT_64BIT:
> + insn |= AARCH64_INSN_SF_BIT;
> + if (shift != 0 && shift != 16 && shift != 32 && shift != 48) {
> + return AARCH64_BREAK_FAULT;
> + }
> + break;
> + default:
> + return AARCH64_BREAK_FAULT;
You could also check the variant at compile time, like the type above,
instead of returning AARCH64_BREAK_FAULT here.
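e.g. something like (untested):

	compiletime_assert(variant == AARCH64_INSN_VARIANT_32BIT ||
			   variant == AARCH64_INSN_VARIANT_64BIT,
			   "unknown variant of instruction");

or simply a compiletime_assert(false, ...) in this default: arm, same
as for the type switch above.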
Thanks,
M.
--
Without deviation from the norm, progress is not possible.