Date:   Mon, 14 Sep 2020 15:13:57 -0700
From:   Sean Christopherson <sean.j.christopherson@...el.com>
To:     Tom Lendacky <thomas.lendacky@....com>
Cc:     kvm@...r.kernel.org, linux-kernel@...r.kernel.org, x86@...nel.org,
        Paolo Bonzini <pbonzini@...hat.com>,
        Jim Mattson <jmattson@...gle.com>,
        Joerg Roedel <joro@...tes.org>,
        Vitaly Kuznetsov <vkuznets@...hat.com>,
        Wanpeng Li <wanpengli@...cent.com>,
        Borislav Petkov <bp@...en8.de>, Ingo Molnar <mingo@...hat.com>,
        Thomas Gleixner <tglx@...utronix.de>,
        Brijesh Singh <brijesh.singh@....com>
Subject: Re: [RFC PATCH 22/35] KVM: SVM: Add support for CR0 write traps for
 an SEV-ES guest

On Mon, Sep 14, 2020 at 03:15:36PM -0500, Tom Lendacky wrote:
> From: Tom Lendacky <thomas.lendacky@....com>
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index b65bd0c986d4..6f5988c305e1 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -799,11 +799,29 @@ bool pdptrs_changed(struct kvm_vcpu *vcpu)
>  }
>  EXPORT_SYMBOL_GPL(pdptrs_changed);
>  
> +static void kvm_post_set_cr0(struct kvm_vcpu *vcpu, unsigned long old_cr0,
> +			     unsigned long cr0)

What about naming these __kvm_set_cr*() instead of kvm_post_set_cr*()?  That
would make it clear that __kvm_set_cr*() is a subordinate of kvm_set_cr*(), and
on the SVM side it would hint that the code is skipping the front end of
kvm_set_cr*().
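
Something along these lines (sketch of the naming only, nothing else changes;
the body is exactly what this patch adds as kvm_post_set_cr0()):

	static void __kvm_set_cr0(struct kvm_vcpu *vcpu, unsigned long old_cr0,
				  unsigned long cr0)
	{
		/* ... body unchanged from the patch's kvm_post_set_cr0() ... */
	}

and then in kvm_set_cr0():

		kvm_x86_ops.set_cr0(vcpu, cr0);

		__kvm_set_cr0(vcpu, old_cr0, cr0);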

> +{
> +	unsigned long update_bits = X86_CR0_PG | X86_CR0_WP;
> +
> +	if ((cr0 ^ old_cr0) & X86_CR0_PG) {
> +		kvm_clear_async_pf_completion_queue(vcpu);
> +		kvm_async_pf_hash_reset(vcpu);
> +	}
> +
> +	if ((cr0 ^ old_cr0) & update_bits)
> +		kvm_mmu_reset_context(vcpu);
> +
> +	if (((cr0 ^ old_cr0) & X86_CR0_CD) &&
> +	    kvm_arch_has_noncoherent_dma(vcpu->kvm) &&
> +	    !kvm_check_has_quirk(vcpu->kvm, KVM_X86_QUIRK_CD_NW_CLEARED))
> +		kvm_zap_gfn_range(vcpu->kvm, 0, ~0ULL);
> +}
> +
>  int kvm_set_cr0(struct kvm_vcpu *vcpu, unsigned long cr0)
>  {
>  	unsigned long old_cr0 = kvm_read_cr0(vcpu);
>  	unsigned long pdptr_bits = X86_CR0_CD | X86_CR0_NW | X86_CR0_PG;
> -	unsigned long update_bits = X86_CR0_PG | X86_CR0_WP;
>  
>  	cr0 |= X86_CR0_ET;
>  
> @@ -842,22 +860,23 @@ int kvm_set_cr0(struct kvm_vcpu *vcpu, unsigned long cr0)
>  
>  	kvm_x86_ops.set_cr0(vcpu, cr0);
>  
> -	if ((cr0 ^ old_cr0) & X86_CR0_PG) {
> -		kvm_clear_async_pf_completion_queue(vcpu);
> -		kvm_async_pf_hash_reset(vcpu);
> -	}
> +	kvm_post_set_cr0(vcpu, old_cr0, cr0);
>  
> -	if ((cr0 ^ old_cr0) & update_bits)
> -		kvm_mmu_reset_context(vcpu);
> +	return 0;
> +}
> +EXPORT_SYMBOL_GPL(kvm_set_cr0);
>  
> -	if (((cr0 ^ old_cr0) & X86_CR0_CD) &&
> -	    kvm_arch_has_noncoherent_dma(vcpu->kvm) &&
> -	    !kvm_check_has_quirk(vcpu->kvm, KVM_X86_QUIRK_CD_NW_CLEARED))
> -		kvm_zap_gfn_range(vcpu->kvm, 0, ~0ULL);
> +int kvm_track_cr0(struct kvm_vcpu *vcpu, unsigned long cr0)

I really dislike the "track" terminology.  For me, using "track" as the verb
in a function name implies that the function activates tracking.  But it's
probably a moot point because, as with EFER, I don't see any reason to put the
front end of the emulation into x86.c.  Both getting old_cr0 and setting
vcpu->arch.cr0 can be done in svm.c.
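
Roughly (sketch only; the handler name on the SVM side is made up here, and the
common post-set helper would need to be callable from svm.c):

	/* svm.c: in the CR0 write trap handler for an SEV-ES guest */
	static int sev_es_cr0_write_trap(struct kvm_vcpu *vcpu, unsigned long cr0)
	{
		unsigned long old_cr0 = kvm_read_cr0(vcpu);

		vcpu->arch.cr0 = cr0;

		kvm_post_set_cr0(vcpu, old_cr0, cr0);	/* or __kvm_set_cr0() */

		return 1;	/* handled, resume the guest */
	}

That would drop the need for kvm_track_cr0() in x86.c entirely.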

> +{
> +	unsigned long old_cr0 = kvm_read_cr0(vcpu);
> +
> +	vcpu->arch.cr0 = cr0;
> +
> +	kvm_post_set_cr0(vcpu, old_cr0, cr0);
>  
>  	return 0;
>  }
> -EXPORT_SYMBOL_GPL(kvm_set_cr0);
> +EXPORT_SYMBOL_GPL(kvm_track_cr0);
>  
>  void kvm_lmsw(struct kvm_vcpu *vcpu, unsigned long msw)
>  {
> -- 
> 2.28.0
> 
