Message-ID: <e8cf21cc5d246e73154217639adfafe5@kernel.org>
Date: Tue, 24 Nov 2020 13:24:42 +0000
From: Marc Zyngier <maz@...nel.org>
To: David Brazdil <dbrazdil@...gle.com>
Cc: kvmarm@...ts.cs.columbia.edu, linux-arm-kernel@...ts.infradead.org,
linux-kernel@...r.kernel.org, James Morse <james.morse@....com>,
Julien Thierry <julien.thierry.kdev@...il.com>,
Suzuki K Poulose <suzuki.poulose@....com>,
Catalin Marinas <catalin.marinas@....com>,
Will Deacon <will@...nel.org>,
Mark Rutland <mark.rutland@....com>,
Andrew Scull <ascull@...gle.com>,
Ard Biesheuvel <ardb@...nel.org>, kernel-team@...roid.com
Subject: Re: [RFC PATCH 3/6] kvm: arm64: Fix up RELR relocation in hyp
code/data
On 2020-11-19 16:25, David Brazdil wrote:
> The arm64 kernel also supports packing of relocation data using the
> RELR format. Implement a parser of RELR data and fix up the relocations
> using the same infrastructure as the RELA relocs.
>
> Signed-off-by: David Brazdil <dbrazdil@...gle.com>
> ---
> arch/arm64/kvm/va_layout.c | 41 ++++++++++++++++++++++++++++++++++++++
> 1 file changed, 41 insertions(+)
>
> diff --git a/arch/arm64/kvm/va_layout.c b/arch/arm64/kvm/va_layout.c
> index b80fab974896..7f45a98eacfd 100644
> --- a/arch/arm64/kvm/va_layout.c
> +++ b/arch/arm64/kvm/va_layout.c
> @@ -145,6 +145,43 @@ static void __fixup_hyp_rela(void)
> __fixup_hyp_rel(rel[i].r_offset);
> }
>
> +#ifdef CONFIG_RELR
> +static void __fixup_hyp_relr(void)
> +{
> + u64 *rel, *end;
> +
> + rel = (u64*)(kimage_vaddr + __load_elf_u64(__relr_offset));
> + end = rel + (__load_elf_u64(__relr_size) / sizeof(*rel));
> +
> + while (rel < end) {
> + unsigned n;
> + u64 addr = *(rel++);
> +
> + /* Address must not have the LSB set. */
> + BUG_ON(addr & BIT(0));
> +
> + /* Fix up the first address of the chain. */
> + __fixup_hyp_rel(addr);
> +
> + /*
> + * Loop over bitmaps, i.e. as long as words' LSB is 1.
> + * Each bit (ordered from LSB to MSB) represents one word from
> + * the last full address (exclusive). If the corresponding bit
> + * is 1, there is a relative relocation on that word.
> + */
What is the endianness of this bitmap? Is it guaranteed to be in
CPU-endian format?
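
For illustration only, here is a throwaway userspace sketch of how one
base address plus one bitmap word expands into fixup targets (made-up
values, and it simply assumes the entries are already in CPU byte
order, which is exactly the question above):

	#include <stdint.h>
	#include <stdio.h>

	/*
	 * Expand one RELR base address and one following bitmap word
	 * into the offsets that would receive a relative fixup.
	 */
	static void expand_relr_bitmap(uint64_t base, unsigned int n,
				       uint64_t bitmap)
	{
		unsigned int i;

		/* Bit 0 is only the "this entry is a bitmap" marker. */
		for (i = 1; i < 64; i++)
			if (bitmap & (UINT64_C(1) << i))
				printf("fixup word at 0x%llx\n",
				       (unsigned long long)
				       (base + 8 * (63 * n + i)));
	}

	int main(void)
	{
		/* LSB marker plus bits 1, 2, 5: fixups at base + 8, 16, 40. */
		expand_relr_bitmap(0x1000, 0, UINT64_C(0x27));
		return 0;
	}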
> + for (n = 0; rel < end && (*rel & BIT(0)); n++) {
> + unsigned i;
> + u64 bitmap = *(rel++);
nit: if you change this u64 to an unsigned long...
> +
> + for (i = 1; i < 64; ++i) {
> + if ((bitmap & BIT(i)))
> + __fixup_hyp_rel(addr + 8 * (63 * n + i));
> + }
... this can be written as:

	i = 1;
	for_each_set_bit_from(i, &bitmap, 64)
		__fixup_hyp_rel(addr + 8 * (63 * n + i));
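
For completeness, an (untested) sketch of what the whole helper could
look like with the two suggestions above folded in; it assumes the
__load_elf_u64()/__relr_* plumbing and __fixup_hyp_rel() from this
patch:

	#include <linux/bitops.h>	/* BIT(), for_each_set_bit_from() */
	#include <linux/bug.h>		/* BUG_ON() */

	static void __fixup_hyp_relr(void)
	{
		u64 *rel, *end;

		rel = (u64 *)(kimage_vaddr + __load_elf_u64(__relr_offset));
		end = rel + (__load_elf_u64(__relr_size) / sizeof(*rel));

		while (rel < end) {
			unsigned int n = 0;
			u64 addr = *(rel++);

			/* Address entries never have the LSB set. */
			BUG_ON(addr & BIT(0));

			/* Fix up the first address of the chain. */
			__fixup_hyp_rel(addr);

			/*
			 * Bitmap entries have the LSB set; bit i (1..63)
			 * of the n-th bitmap covers the word at
			 * addr + 8 * (63 * n + i).
			 */
			while (rel < end && (*rel & BIT(0))) {
				unsigned long bitmap = *(rel++);
				unsigned long i = 1;

				for_each_set_bit_from(i, &bitmap, 64)
					__fixup_hyp_rel(addr + 8 * (63 * n + i));
				n++;
			}
		}
	}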
> + }
> + }
> +}
> +#endif
> +
> /*
> * The kernel relocated pointers to kernel VA. Iterate over
> relocations in
> * the hypervisor ELF sections and convert them to hyp VA. This avoids
> the
> @@ -156,6 +193,10 @@ __init void kvm_fixup_hyp_relocations(void)
> return;
>
> __fixup_hyp_rela();
> +
> +#ifdef CONFIG_RELR
> + __fixup_hyp_relr();
> +#endif
> }
>
> static u32 compute_instruction(int n, u32 rd, u32 rn)
Thanks,
M.
--
Jazz is not dead. It just smells funny...