Message-ID: <f6d61324-4243-e5ed-9450-6ee8f9b1f44b@redhat.com>
Date: Mon, 21 Dec 2020 19:12:33 +0100
From: Paolo Bonzini <pbonzini@...hat.com>
To: Nathan Chancellor <natechancellor@...il.com>
Cc: Tom Lendacky <thomas.lendacky@....com>,
Sean Christopherson <seanjc@...gle.com>,
Vitaly Kuznetsov <vkuznets@...hat.com>,
Wanpeng Li <wanpengli@...cent.com>,
Jim Mattson <jmattson@...gle.com>,
Joerg Roedel <joro@...tes.org>,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
x86@...nel.org, kvm@...r.kernel.org, linux-kernel@...r.kernel.org,
clang-built-linux@...glegroups.com,
Nick Desaulniers <ndesaulniers@...gle.com>,
Sami Tolvanen <samitolvanen@...gle.com>
Subject: Re: [PATCH] KVM: SVM: Add register operand to vmsave call in
sev_es_vcpu_load
On 19/12/20 07:37, Nathan Chancellor wrote:
> When using LLVM's integrated assembler (LLVM_IAS=1) while building
> x86_64_defconfig + CONFIG_KVM=y + CONFIG_KVM_AMD=y, the following build
> error occurs:
>
> $ make LLVM=1 LLVM_IAS=1 arch/x86/kvm/svm/sev.o
> arch/x86/kvm/svm/sev.c:2004:15: error: too few operands for instruction
> asm volatile(__ex("vmsave") : : "a" (__sme_page_pa(sd->save_area)) : "memory");
> ^
> arch/x86/kvm/svm/sev.c:28:17: note: expanded from macro '__ex'
> #define __ex(x) __kvm_handle_fault_on_reboot(x)
> ^
> ./arch/x86/include/asm/kvm_host.h:1646:10: note: expanded from macro '__kvm_handle_fault_on_reboot'
> "666: \n\t" \
> ^
> <inline asm>:2:2: note: instantiated into assembly here
> vmsave
> ^
> 1 error generated.
>
> This happens because LLVM currently does not support calling vmsave
> without the fixed register operand (%rax for 64-bit and %eax for
> 32-bit). This will be fixed in LLVM 12, but the kernel currently
> supports LLVM 10.0.1 and newer, so this needs to be handled.
>
> Add the proper register using the _ASM_AX macro, which matches the
> vmsave call in vmenter.S.
>
> Fixes: 861377730aa9 ("KVM: SVM: Provide support for SEV-ES vCPU loading")
> Link: https://reviews.llvm.org/D93524
> Link: https://github.com/ClangBuiltLinux/linux/issues/1216
> Signed-off-by: Nathan Chancellor <natechancellor@...il.com>
> ---
> arch/x86/kvm/svm/sev.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
> index e57847ff8bd2..958370758ed0 100644
> --- a/arch/x86/kvm/svm/sev.c
> +++ b/arch/x86/kvm/svm/sev.c
> @@ -2001,7 +2001,7 @@ void sev_es_vcpu_load(struct vcpu_svm *svm, int cpu)
> * of which one step is to perform a VMLOAD. Since hardware does not
> * perform a VMSAVE on VMRUN, the host savearea must be updated.
> */
> - asm volatile(__ex("vmsave") : : "a" (__sme_page_pa(sd->save_area)) : "memory");
> + asm volatile(__ex("vmsave %%"_ASM_AX) : : "a" (__sme_page_pa(sd->save_area)) : "memory");
>
> /*
> * Certain MSRs are restored on VMEXIT, only save ones that aren't
>
Queued, thanks.
Paolo