Message-ID: <aW-1mv3OU-GcRyAQ@google.com>
Date: Tue, 20 Jan 2026 09:04:26 -0800
From: Sean Christopherson <seanjc@...gle.com>
To: Ard Biesheuvel <ardb@...nel.org>
Cc: linux-kernel@...r.kernel.org, x86@...nel.org,
Thomas Gleixner <tglx@...utronix.de>, Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
Dave Hansen <dave.hansen@...ux.intel.com>, "H. Peter Anvin" <hpa@...or.com>,
Josh Poimboeuf <jpoimboe@...nel.org>, Peter Zijlstra <peterz@...radead.org>, Kees Cook <kees@...nel.org>,
Uros Bizjak <ubizjak@...il.com>, Brian Gerst <brgerst@...il.com>, linux-hardening@...r.kernel.org
Subject: Re: [RFC/RFT PATCH 10/19] x86/kvm: Use RIP-relative addressing
On Thu, Jan 08, 2026, Ard Biesheuvel wrote:
> Replace absolute references in inline asm with RIP-relative ones, to
> avoid the need for relocation fixups at boot time. This is a
> prerequisite for PIE linking, which only permits 64-bit wide
> loader-visible absolute references.
>
> Signed-off-by: Ard Biesheuvel <ardb@...nel.org>
> ---
> arch/x86/kernel/kvm.c | 5 +++--
> 1 file changed, 3 insertions(+), 2 deletions(-)
>
> diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
> index df78ddee0abb..1a0335f328e1 100644
> --- a/arch/x86/kernel/kvm.c
> +++ b/arch/x86/kernel/kvm.c
> @@ -807,8 +807,9 @@ extern bool __raw_callee_save___kvm_vcpu_is_preempted(long);
> * restoring to/from the stack.
> */
> #define PV_VCPU_PREEMPTED_ASM \
> - "movq __per_cpu_offset(,%rdi,8), %rax\n\t" \
> - "cmpb $0, " __stringify(KVM_STEAL_TIME_preempted) "+steal_time(%rax)\n\t" \
> + "0:leaq 0b(%rip), %rax\n\t" \
Please use something other than '0' for the label; it took me forever (and a
look at the disassembly) to realize "0b" was just a backward reference to a
local label and not some fancy syntax I didn't know.
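E.g. something like this (completely untested, and assuming the macro is only
expanded once, so that a named local label doesn't collide):

  /* ".Lpvp_base" is just a placeholder, any descriptive name works */
  #define PV_VCPU_PREEMPTED_ASM \
	".Lpvp_base: leaq .Lpvp_base(%rip), %rax\n\t" \
	"addq __per_cpu_offset - .Lpvp_base(%rax,%rdi,8), %rax\n\t" \
	"cmpb $0, " __stringify(KVM_STEAL_TIME_preempted) "+steal_time-.Lpvp_base(%rax)\n\t" \
	"setne %al\n\t"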
It might also be worth calling out in the changelog that this function is
called across CPUs, e.g. from kvm_smp_send_call_func_ipi(), and thus can't
use gs:-relative addressing or any other "normal" method for accessing
per-CPU data.
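For readers following along, the asm amounts to the (hypothetical) C sketch
below; because @cpu may be a remote CPU, the offset has to be looked up in
__per_cpu_offset[] instead of being read from the current CPU's gs base:

  /*
   * Illustration only, not the proposed code: per_cpu_ptr() indexes
   * __per_cpu_offset[cpu], whereas the this_cpu_*() accessors go through
   * the gs segment base and would read the *current* CPU's steal_time.
   */
  static bool kvm_vcpu_is_preempted_sketch(long cpu)
  {
	struct kvm_steal_time *st = per_cpu_ptr(&steal_time, cpu);

	return READ_ONCE(st->preempted) != 0;
  }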
> + "addq __per_cpu_offset - 0b(%rax,%rdi,8), %rax\n\t" \
> + "cmpb $0, " __stringify(KVM_STEAL_TIME_preempted) "+steal_time-0b(%rax)\n\t" \
> "setne %al\n\t"
>
> DEFINE_ASM_FUNC(__raw_callee_save___kvm_vcpu_is_preempted,
> --
> 2.47.3
>