Message-ID: <20230207150535.00004453@gmail.com>
Date:   Tue, 7 Feb 2023 15:05:35 +0200
From:   Zhi Wang <zhi.wang.linux@...il.com>
To:     Mathias Krause <minipli@...ecurity.net>
Cc:     kvm@...r.kernel.org, linux-kernel@...r.kernel.org,
        Sean Christopherson <seanjc@...gle.com>,
        Paolo Bonzini <pbonzini@...hat.com>
Subject: Re: [PATCH v3 4/6] KVM: x86: Make use of kvm_read_cr*_bits() when
 testing bits

On Wed,  1 Feb 2023 20:46:02 +0100
Mathias Krause <minipli@...ecurity.net> wrote:

> Make use of the kvm_read_cr{0,4}_bits() helper functions when we only
> want to know the state of certain bits instead of the whole register.
> 
> This not only makes the intend cleaner, it also avoids a VMREAD in case
> the tested bits aren't guest owned.
                    ^
The patch comment is a little confusing. Not sure if I misunderstood something here:

Checking the code of kvm_read_cr0_bits():

static inline ulong kvm_read_cr0_bits(struct kvm_vcpu *vcpu, ulong mask)
{
        ulong tmask = mask & KVM_POSSIBLE_CR0_GUEST_BITS;
        if ((tmask & vcpu->arch.cr0_guest_owned_bits) &&
            !kvm_register_is_available(vcpu, VCPU_EXREG_CR0))
                static_call(kvm_x86_cache_reg)(vcpu, VCPU_EXREG_CR0);
        return vcpu->arch.cr0 & mask;
}

I suppose the way this avoids a VMREAD is by avoiding the call to
static_call(kvm_x86_cache_reg):

The conditions under which no VMREAD is triggered are:

1) The tested bits are guest-owned bits and the register cache is available.
2) The tested bits are *not* guest-owned bits.

I agree that this makes the intent cleaner, but I am not sure the latter
statement in the patch comment is true. If the tested bits are not guest
owned, it will not reach static_call(kvm_x86_cache_reg) at all.
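
For my own understanding, here is a stand-alone sketch of the check above
(not kernel code; the bit values and the helper name needs_vmread() are made
up for illustration). It only models when the if () in kvm_read_cr0_bits()
would reach static_call(kvm_x86_cache_reg):

#include <stdbool.h>
#include <stdio.h>

/* Hypothetical stand-in for KVM_POSSIBLE_CR0_GUEST_BITS. */
#define POSSIBLE_GUEST_BITS	0x00010008UL

static bool needs_vmread(unsigned long mask, unsigned long guest_owned_bits,
			 bool reg_cached)
{
	unsigned long tmask = mask & POSSIBLE_GUEST_BITS;

	/* Mirrors the if () in kvm_read_cr0_bits(); only this path caches/VMREADs. */
	return (tmask & guest_owned_bits) && !reg_cached;
}

int main(void)
{
	unsigned long guest_owned = 0x00000008UL;	/* hypothetical */

	/* 1) Tested bit is guest owned, but the register cache is available. */
	printf("%d\n", needs_vmread(0x8, guest_owned, true));	/* 0, no VMREAD */

	/* 2) Tested bit is not guest owned. */
	printf("%d\n", needs_vmread(0x1, guest_owned, false));	/* 0, no VMREAD */

	/* Otherwise: guest-owned bit tested and cache not available. */
	printf("%d\n", needs_vmread(0x8, guest_owned, false));	/* 1, VMREAD */

	return 0;
}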
> 
> Signed-off-by: Mathias Krause <minipli@...ecurity.net>
> ---
>  arch/x86/kvm/pmu.c     | 4 ++--
>  arch/x86/kvm/vmx/vmx.c | 4 ++--
>  2 files changed, 4 insertions(+), 4 deletions(-)
> 
> diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
> index d939d3b84e6f..d9922277df67 100644
> --- a/arch/x86/kvm/pmu.c
> +++ b/arch/x86/kvm/pmu.c
> @@ -439,9 +439,9 @@ int kvm_pmu_rdpmc(struct kvm_vcpu *vcpu, unsigned idx, u64 *data)
>  	if (!pmc)
>  		return 1;
>  
> -	if (!(kvm_read_cr4(vcpu) & X86_CR4_PCE) &&
> +	if (!(kvm_read_cr4_bits(vcpu, X86_CR4_PCE)) &&
>  	    (static_call(kvm_x86_get_cpl)(vcpu) != 0) &&
> -	    (kvm_read_cr0(vcpu) & X86_CR0_PE))
> +	    (kvm_read_cr0_bits(vcpu, X86_CR0_PE)))
>  		return 1;
>  
>  	*data = pmc_read_counter(pmc) & mask;
> diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
> index c8198c8a9b55..d3b49e0b6c32 100644
> --- a/arch/x86/kvm/vmx/vmx.c
> +++ b/arch/x86/kvm/vmx/vmx.c
> @@ -5487,7 +5487,7 @@ static int handle_cr(struct kvm_vcpu *vcpu)
>  		break;
>  	case 3: /* lmsw */
>  		val = (exit_qualification >> LMSW_SOURCE_DATA_SHIFT) & 0x0f;
> -		trace_kvm_cr_write(0, (kvm_read_cr0(vcpu) & ~0xful) | val);
> +		trace_kvm_cr_write(0, (kvm_read_cr0_bits(vcpu, ~0xful) | val));
>  		kvm_lmsw(vcpu, val);
>  
>  		return kvm_skip_emulated_instruction(vcpu);
> @@ -7547,7 +7547,7 @@ static u8 vmx_get_mt_mask(struct kvm_vcpu *vcpu, gfn_t gfn, bool is_mmio)
>  	if (!kvm_arch_has_noncoherent_dma(vcpu->kvm))
>  		return (MTRR_TYPE_WRBACK << VMX_EPT_MT_EPTE_SHIFT) | VMX_EPT_IPAT_BIT;
>  
> -	if (kvm_read_cr0(vcpu) & X86_CR0_CD) {
> +	if (kvm_read_cr0_bits(vcpu, X86_CR0_CD)) {
>  		if (kvm_check_has_quirk(vcpu->kvm, KVM_X86_QUIRK_CD_NW_CLEARED))
>  			cache = MTRR_TYPE_WRBACK;
>  		else
