Date: Wed, 10 Jan 2024 15:55:20 +0800
From: Yuan Yao <yuan.yao@...ux.intel.com>
To: Sean Christopherson <seanjc@...gle.com>
Cc: Paolo Bonzini <pbonzini@...hat.com>, kvm@...r.kernel.org,
	linux-kernel@...r.kernel.org, Like Xu <like.xu.linux@...il.com>
Subject: Re: [PATCH 2/4] KVM: x86: Rely solely on preempted_in_kernel flag
 for directed yield

On Tue, Jan 09, 2024 at 04:39:36PM -0800, Sean Christopherson wrote:
> Snapshot preempted_in_kernel using kvm_arch_vcpu_in_kernel() so that the
> flag is "accurate" (or rather, consistent and deterministic within KVM)
> for guests with protected state, and explicitly use preempted_in_kernel
> when checking if a vCPU was preempted in kernel mode instead of bouncing
> through kvm_arch_vcpu_in_kernel().
>
> Drop the gnarly logic in kvm_arch_vcpu_in_kernel() that redirects to
> preempted_in_kernel if the target vCPU is not the "running", i.e. loaded,
> vCPU, as the only reason that code existed was for the directed yield case
> where KVM wants to check the CPL of a vCPU that may or may not be loaded
> on the current pCPU.
>
> Cc: Like Xu <like.xu.linux@...il.com>
> Signed-off-by: Sean Christopherson <seanjc@...gle.com>
> ---
>  arch/x86/kvm/x86.c | 8 ++------
>  1 file changed, 2 insertions(+), 6 deletions(-)
>
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index 415509918c7f..77494f9c8d49 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -5062,8 +5062,7 @@ void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu)
>  	int idx;
>
>  	if (vcpu->preempted) {
> -		if (!vcpu->arch.guest_state_protected)
> -			vcpu->arch.preempted_in_kernel = !static_call(kvm_x86_get_cpl)(vcpu);
> +		vcpu->arch.preempted_in_kernel = kvm_arch_vcpu_in_kernel(vcpu);
>
>  		/*
>  		 * Take the srcu lock as memslots will be accessed to check the gfn
> @@ -13093,7 +13092,7 @@ bool kvm_arch_dy_has_pending_interrupt(struct kvm_vcpu *vcpu)
>
>  bool kvm_arch_vcpu_preempted_in_kernel(struct kvm_vcpu *vcpu)
>  {
> -	return kvm_arch_vcpu_in_kernel(vcpu);
> +	return vcpu->arch.preempted_in_kernel;
>  }
>
>  bool kvm_arch_dy_runnable(struct kvm_vcpu *vcpu)
> @@ -13116,9 +13115,6 @@ bool kvm_arch_vcpu_in_kernel(struct kvm_vcpu *vcpu)
>  	if (vcpu->arch.guest_state_protected)
>  		return true;
>
> -	if (vcpu != kvm_get_running_vcpu())
> -		return vcpu->arch.preempted_in_kernel;
> -

Now this function takes a vcpu parameter, but for VMX it can only
read state from the "current" vCPU loaded on the hardware.
I'm not sure whether a "WARN_ON(vcpu != kvm_get_running_vcpu())"
is needed here to guard it, e.g. kvm_guest_state() still
uses this function (although it did the check itself before).
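
An untested sketch of what that guard might look like, applied to the
post-patch function as quoted above (WARN_ON_ONCE() would work too):

	bool kvm_arch_vcpu_in_kernel(struct kvm_vcpu *vcpu)
	{
		if (vcpu->arch.guest_state_protected)
			return true;

		/*
		 * Reading the CPL is only valid for the vCPU that is
		 * currently loaded on this pCPU; complain if a caller
		 * passes in any other vCPU.
		 */
		WARN_ON(vcpu != kvm_get_running_vcpu());

		return static_call(kvm_x86_get_cpl)(vcpu) == 0;
	}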

>  	return static_call(kvm_x86_get_cpl)(vcpu) == 0;
>  }
>
> --
> 2.43.0.472.g3155946c3a-goog
>
>
