Message-ID: <ZmnsDqP9T0b1z6ML@J2N7QTR9R3.cambridge.arm.com>
Date: Wed, 12 Jun 2024 19:42:22 +0100
From: Mark Rutland <mark.rutland@....com>
To: Linus Torvalds <torvalds@...ux-foundation.org>
Cc: Peter Anvin <hpa@...or.com>, Ingo Molnar <mingo@...nel.org>,
Borislav Petkov <bp@...en8.de>,
Thomas Gleixner <tglx@...utronix.de>,
Rasmus Villemoes <linux@...musvillemoes.dk>,
Josh Poimboeuf <jpoimboe@...nel.org>,
Catalin Marinas <catalin.marinas@....com>,
Will Deacon <will@...nel.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
the arch/x86 maintainers <x86@...nel.org>,
linux-arm-kernel@...ts.infradead.org,
linux-arch <linux-arch@...r.kernel.org>
Subject: Re: [PATCH 4/7 v2] arm64: add 'runtime constant' support
On Tue, Jun 11, 2024 at 10:20:10AM -0700, Linus Torvalds wrote:
> This implements the runtime constant infrastructure for arm64, allowing
> the dcache d_hash() function to be generated using a constant for the
> hash table address followed by a shift of the hash index by a constant.
>
> Signed-off-by: Linus Torvalds <torvalds@...ux-foundation.org>
> ---
> v2: updates as per Mark Rutland
Sorry, I just realised I got the cache maintenance slightly wrong below.
> +static inline void __runtime_fixup_ptr(void *where, unsigned long val)
> +{
> + __le32 *p = lm_alias(where);
> + __runtime_fixup_16(p, val);
> + __runtime_fixup_16(p+1, val >> 16);
> + __runtime_fixup_16(p+2, val >> 32);
> + __runtime_fixup_16(p+3, val >> 48);
> + caches_clean_inval_pou((unsigned long)p, (unsigned long)(p + 4));
> +}
We need to do the I$ maintenance on the VA that'll be executed (to
handle systems with a VIPT I$), so we'll need to use 'where' rather than
'p', e.g.
caches_clean_inval_pou((unsigned long)where,
(unsigned long)where + 4 * AARCH64_INSN_SIZE);
Note: the D$ and I$ maintenance instructions (DC CVAU and IC IVAU) only
require read permissions, so those can be used on the kernel's
executable alias even though that's mapped without write permissions.
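To make that concrete, an untested sketch of the pointer fixup with that
change applied (everything else as in your patch, i.e. still patching via
the lm_alias() writable alias, only the maintenance range changes):

	static inline void __runtime_fixup_ptr(void *where, unsigned long val)
	{
		/* Patch the instructions via the kernel's writable linear-map alias. */
		__le32 *p = lm_alias(where);

		__runtime_fixup_16(p, val);
		__runtime_fixup_16(p + 1, val >> 16);
		__runtime_fixup_16(p + 2, val >> 32);
		__runtime_fixup_16(p + 3, val >> 48);

		/*
		 * Maintain the VA that will be executed so that a VIPT I$
		 * picks up the new instructions.
		 */
		caches_clean_inval_pou((unsigned long)where,
				       (unsigned long)where + 4 * AARCH64_INSN_SIZE);
	}
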
> +/* Immediate value is 6 bits starting at bit #16 */
> +static inline void __runtime_fixup_shift(void *where, unsigned long val)
> +{
> + __le32 *p = lm_alias(where);
> + u32 insn = le32_to_cpu(*p);
> + insn &= 0xffc0ffff;
> + insn |= (val & 63) << 16;
> + *p = cpu_to_le32(insn);
> + caches_clean_inval_pou((unsigned long)p, (unsigned long)(p + 1));
> +}
Likewise:
caches_clean_inval_pou((unsigned long)where,
(unsigned long)where + AARCH64_INSN_SIZE);
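
i.e. the whole helper would end up looking something like (again untested):

	/* Immediate value is 6 bits starting at bit #16 */
	static inline void __runtime_fixup_shift(void *where, unsigned long val)
	{
		__le32 *p = lm_alias(where);
		u32 insn = le32_to_cpu(*p);

		/* Replace the 6-bit immediate at bits [21:16]. */
		insn &= 0xffc0ffff;
		insn |= (val & 63) << 16;
		*p = cpu_to_le32(insn);

		/* As above, maintain the executable VA, not the alias. */
		caches_clean_inval_pou((unsigned long)where,
				       (unsigned long)where + AARCH64_INSN_SIZE);
	}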
Mark.