Message-ID: <1dc56110-5f1b-6140-937c-bf4a28ddbe87@redhat.com>
Date:   Fri, 18 Mar 2022 17:29:20 +0100
From:   Paolo Bonzini <pbonzini@...hat.com>
To:     Maxim Levitsky <mlevitsk@...hat.com>, linux-kernel@...r.kernel.org,
        kvm@...r.kernel.org, Peter Zijlstra <peterz@...radead.org>
Cc:     seanjc@...gle.com
Subject: Re: [PATCH v3 6/6] KVM: x86: allow defining return-0 static calls

On 3/17/22 18:43, Maxim Levitsky wrote:
> diff --git a/arch/x86/include/asm/kvm-x86-ops.h b/arch/x86/include/asm/kvm-x86-ops.h
> index 20f64e07e359..3388072b2e3b 100644
> --- a/arch/x86/include/asm/kvm-x86-ops.h
> +++ b/arch/x86/include/asm/kvm-x86-ops.h
> @@ -88,7 +88,7 @@ KVM_X86_OP(deliver_interrupt)
>   KVM_X86_OP_OPTIONAL(sync_pir_to_irr)
>   KVM_X86_OP_OPTIONAL_RET0(set_tss_addr)
>   KVM_X86_OP_OPTIONAL_RET0(set_identity_map_addr)
> -KVM_X86_OP_OPTIONAL_RET0(get_mt_mask)
> +KVM_X86_OP(get_mt_mask)
>   KVM_X86_OP(load_mmu_pgd)
>   KVM_X86_OP(has_wbinvd_exit)
>   KVM_X86_OP(get_l2_tsc_offset)
> diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
> index a09b4f1a18f6..0c09292b0611 100644
> --- a/arch/x86/kvm/svm/svm.c
> +++ b/arch/x86/kvm/svm/svm.c
> @@ -4057,6 +4057,11 @@ static bool svm_has_emulated_msr(struct kvm *kvm, u32 index)
>          return true;
>   }
>   
> +static u64 svm_get_mt_mask(struct kvm_vcpu *vcpu, gfn_t gfn, bool is_mmio)
> +{
> +       return 0;
> +}
> +
>   static void svm_vcpu_after_set_cpuid(struct kvm_vcpu *vcpu)
>   {
>          struct vcpu_svm *svm = to_svm(vcpu);
> @@ -4718,6 +4723,7 @@ static struct kvm_x86_ops svm_x86_ops __initdata = {
>          .check_apicv_inhibit_reasons = avic_check_apicv_inhibit_reasons,
>          .apicv_post_state_restore = avic_apicv_post_state_restore,
>   
> +       .get_mt_mask = svm_get_mt_mask,
>          .get_exit_info = svm_get_exit_info,
>   
>          .vcpu_after_set_cpuid = svm_vcpu_after_set_cpuid,

Thanks, I'll send it as a complete patch.  Please reply there with your 
Signed-off-by.

Related to this, I don't see anything in arch/x86/kernel/static_call.c 
that limits this code to x86-64:

                 if (func == &__static_call_return0) {
                         emulate = code;
                         code = &xor5rax;
                 }


On 32-bit, it will be patched as "dec ax; xor eax, eax" or something 
like that.  Fortunately it doesn't corrupt any callee-saved register, 
but besides being a bit funky, it's also not a single instruction.

Paolo
