Message-ID: <YYlaw6v6GOgFUQ/Z@google.com>
Date:   Mon, 8 Nov 2021 17:13:39 +0000
From:   Sean Christopherson <seanjc@...gle.com>
To:     Chenyi Qiang <chenyi.qiang@...el.com>
Cc:     Paolo Bonzini <pbonzini@...hat.com>,
        Vitaly Kuznetsov <vkuznets@...hat.com>,
        Wanpeng Li <wanpengli@...cent.com>,
        Jim Mattson <jmattson@...gle.com>,
        Joerg Roedel <joro@...tes.org>,
        Xiaoyao Li <xiaoyao.li@...el.com>, kvm@...r.kernel.org,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH v5 2/7] KVM: VMX: Add proper cache tracking for PKRS

On Wed, Aug 11, 2021, Chenyi Qiang wrote:
> Add PKRS caching into the standard register caching mechanism in order
> to take advantage of the availability checks provided by regs_avail.
> 
> This is because vcpu->arch.pkrs will rarely be accessed by KVM, only in
> the case of host userspace MSR reads and GVA->GPA translation in
> following patches. It is unnecessary to keep it up-to-date at all times.
> 
> Signed-off-by: Chenyi Qiang <chenyi.qiang@...el.com>
> ---
>  arch/x86/include/asm/kvm_host.h | 2 ++
>  arch/x86/kvm/kvm_cache_regs.h   | 7 +++++++
>  arch/x86/kvm/vmx/vmx.c          | 4 ++++
>  arch/x86/kvm/vmx/vmx.h          | 3 ++-
>  4 files changed, 15 insertions(+), 1 deletion(-)
> 
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index 974cbfb1eefe..c2bcb88781b3 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -173,6 +173,7 @@ enum kvm_reg {
>  	VCPU_EXREG_SEGMENTS,
>  	VCPU_EXREG_EXIT_INFO_1,
>  	VCPU_EXREG_EXIT_INFO_2,
> +	VCPU_EXREG_PKRS,
>  };
>  
>  enum {
> @@ -620,6 +621,7 @@ struct kvm_vcpu_arch {
>  	unsigned long cr8;
>  	u32 host_pkru;
>  	u32 pkru;
> +	u32 pkrs;
>  	u32 hflags;
>  	u64 efer;
>  	u64 apic_base;
> diff --git a/arch/x86/kvm/kvm_cache_regs.h b/arch/x86/kvm/kvm_cache_regs.h
> index 90e1ffdc05b7..da014b1be874 100644
> --- a/arch/x86/kvm/kvm_cache_regs.h
> +++ b/arch/x86/kvm/kvm_cache_regs.h
> @@ -171,6 +171,13 @@ static inline u64 kvm_read_edx_eax(struct kvm_vcpu *vcpu)
>  		| ((u64)(kvm_rdx_read(vcpu) & -1u) << 32);
>  }
>  
> +static inline ulong kvm_read_pkrs(struct kvm_vcpu *vcpu)

Return value should be u32 (or u64 if we decide to track PKRS as a 64-bit value).
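
E.g., a rough, untested sketch (assuming the u64 suggestion below):

	static inline u64 kvm_read_pkrs(struct kvm_vcpu *vcpu)
	{
		/* Read PKRS from the VMCS on-demand if it's not already cached. */
		if (!kvm_register_is_available(vcpu, VCPU_EXREG_PKRS))
			static_call(kvm_x86_cache_reg)(vcpu, VCPU_EXREG_PKRS);
		return vcpu->arch.pkrs;
	}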

> +{
> +	if (!kvm_register_is_available(vcpu, VCPU_EXREG_PKRS))
> +		static_call(kvm_x86_cache_reg)(vcpu, VCPU_EXREG_PKRS);
> +	return vcpu->arch.pkrs;
> +}
> +
>  static inline void enter_guest_mode(struct kvm_vcpu *vcpu)
>  {
>  	vcpu->arch.hflags |= HF_GUEST_MASK;
> diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
> index 927a552393b9..bf911029aa35 100644
> --- a/arch/x86/kvm/vmx/vmx.c
> +++ b/arch/x86/kvm/vmx/vmx.c
> @@ -2273,6 +2273,10 @@ static void vmx_cache_reg(struct kvm_vcpu *vcpu, enum kvm_reg reg)
>  		vcpu->arch.cr4 &= ~guest_owned_bits;
>  		vcpu->arch.cr4 |= vmcs_readl(GUEST_CR4) & guest_owned_bits;
>  		break;
> +	case VCPU_EXREG_PKRS:
> +		if (kvm_cpu_cap_has(X86_FEATURE_PKS))

Peeking ahead, the next patch rejects RDMSR(MSR_IA32_PKRS) if X86_FEATURE_PKS isn't
supported in KVM, i.e. this is WARN-worthy as KVM should cache PKRS if and only if PKS is
supported.  Since KVM will WARN if VMREAD fails, just omit this check and let VMREAD
handle any errors.  That won't detect the scenario where PKRS is supported in hardware
but disabled by the kernel/KVM, but that's an acceptable risk since any buggy path is
all but guaranteed to be reachable if PKRS isn't supported at all, i.e. the WARN will
fire and detect any bug in the more common case.
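
E.g. (untested, and assuming PKRS is tracked as a u64 per the comment below):

	case VCPU_EXREG_PKRS:
		/* vmcs_read64() will WARN on its own if the VMREAD fails. */
		vcpu->arch.pkrs = vmcs_read64(GUEST_IA32_PKRS);
		break;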

> +			vcpu->arch.pkrs = vmcs_read64(GUEST_IA32_PKRS);

Hrm.  I agree that it's extremely unlikely that IA32_PKRS will ever allow software
to set bits 63:32, but at the same time there's no real advantage to KVM tracking it as a
u32, e.g. the extra 4 bytes per vCPU is a non-issue, and could be avoided by shuffling
kvm_vcpu_arch to get a more efficient layout.  On the flip side, using a u32 means
code like this _looks_ buggy because it's silently dropping bits 63:32, and if the
architecture ever does get updated, we'll have to modify a bunch of KVM code.

TL;DR: I vote to track PKRS as a u64 even though the kernel tracks it as a u32.
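
E.g. a sketch of the kvm_vcpu_arch change (placing it next to the existing u64 fields is
purely illustrative):

	u64 efer;
	u64 apic_base;
	/*
	 * IA32_PKRS is a 64-bit MSR; bits 63:32 are currently reserved, but
	 * track the full value so nothing is silently dropped.
	 */
	u64 pkrs;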

> +		break;
>  	default:
>  		WARN_ON_ONCE(1);
>  		break;
> diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h
> index db88ed4f2121..18039eb6efb0 100644
> --- a/arch/x86/kvm/vmx/vmx.h
> +++ b/arch/x86/kvm/vmx/vmx.h
> @@ -447,7 +447,8 @@ static inline void vmx_register_cache_reset(struct kvm_vcpu *vcpu)
>  				  | (1 << VCPU_EXREG_CR3)
>  				  | (1 << VCPU_EXREG_CR4)
>  				  | (1 << VCPU_EXREG_EXIT_INFO_1)
> -				  | (1 << VCPU_EXREG_EXIT_INFO_2));
> +				  | (1 << VCPU_EXREG_EXIT_INFO_2)
> +				  | (1 << VCPU_EXREG_PKRS));
>  	vcpu->arch.regs_dirty = 0;
>  }
>  
> -- 
> 2.17.1
> 
