Message-ID: <YUEVQDEvLbdJF+sj@google.com>
Date: Tue, 14 Sep 2021 21:33:52 +0000
From: Sean Christopherson <seanjc@...gle.com>
To: Peter Gonda <pgonda@...gle.com>
Cc: kvm@...r.kernel.org, Marc Orr <marcorr@...gle.com>,
Paolo Bonzini <pbonzini@...hat.com>,
Brijesh Singh <brijesh.singh@....com>, stable@...r.kernel.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH] KVM: SEV: Acquire vcpu mutex when updating VMSA
On Tue, Sep 14, 2021, Peter Gonda wrote:
> Adds mutex guard to the VMSA updating code. Also adds a check to skip a
> vCPU if it has already been LAUNCH_UPDATE_VMSA'd which should allow
> userspace to retry this ioctl until all the vCPUs can be successfully
> LAUNCH_UPDATE_VMSA'd. Because this operation cannot be undone we cannot
> unwind if one vCPU fails.
>
> Fixes: ad73109ae7ec ("KVM: SVM: Provide support to launch and run an SEV-ES guest")
>
> Signed-off-by: Peter Gonda <pgonda@...gle.com>
> Cc: Marc Orr <marcorr@...gle.com>
> Cc: Paolo Bonzini <pbonzini@...hat.com>
> Cc: Sean Christopherson <seanjc@...gle.com>
> Cc: Brijesh Singh <brijesh.singh@....com>
> Cc: kvm@...r.kernel.org
> Cc: stable@...r.kernel.org
> Cc: linux-kernel@...r.kernel.org
> ---
> arch/x86/kvm/svm/sev.c | 24 +++++++++++++++++++-----
> 1 file changed, 19 insertions(+), 5 deletions(-)
>
> diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
> index 75e0b21ad07c..9a2ebd0328ca 100644
> --- a/arch/x86/kvm/svm/sev.c
> +++ b/arch/x86/kvm/svm/sev.c
> @@ -598,22 +598,29 @@ static int sev_es_sync_vmsa(struct vcpu_svm *svm)
> static int sev_launch_update_vmsa(struct kvm *kvm, struct kvm_sev_cmd *argp)
> {
> struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
> - struct sev_data_launch_update_vmsa vmsa;
> + struct sev_data_launch_update_vmsa vmsa = {0};
> struct kvm_vcpu *vcpu;
> int i, ret;
>
> if (!sev_es_guest(kvm))
> return -ENOTTY;
>
> - vmsa.reserved = 0;
> -
Zeroing all of 'vmsa' is an unrelated change and belongs in a separate patch. I
would even go so far as to say it's unnecessary, as every field of the struct is
explicitly written before it's consumed.
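Purely for illustration, a standalone sketch of the existing style versus the
patch's '= {0}' style. The struct here is a hypothetical mirror of
sev_data_launch_update_vmsa (see include/linux/psp-sev.h for the real layout)
and the helper name is made up:

#include <stdint.h>

/* Hypothetical mirror of sev_data_launch_update_vmsa; see
 * include/linux/psp-sev.h for the real layout. */
struct vmsa_cmd {
	uint32_t handle;
	uint32_t reserved;	/* firmware requires this to be zero */
	uint64_t address;
	uint32_t len;
};

static struct vmsa_cmd build_cmd(uint32_t handle, uint64_t pa)
{
	/* Existing style: no '= {0}' needed, 'reserved' gets an explicit
	 * zero and every other field is written before the struct is
	 * consumed.  '= {0}' would zero-initialize every member up front,
	 * which is redundant when all fields are assigned anyway. */
	struct vmsa_cmd cmd;

	cmd.reserved = 0;
	cmd.handle = handle;
	cmd.address = pa;
	cmd.len = 4096;
	return cmd;
}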
> kvm_for_each_vcpu(i, vcpu, kvm) {
> struct vcpu_svm *svm = to_svm(vcpu);
>
> + ret = mutex_lock_killable(&vcpu->mutex);
> + if (ret)
> + goto out_unlock;
Rather than multiple unlock labels, move the guts of the loop to a wrapper.
As discussed off list, this really should be a vCPU-scoped ioctl, but that ship
has sadly sailed :-( We can at least imitate that by making the VM-scoped ioctl
nothing but a wrapper.
> +
> + /* Skip to the next vCPU if this one has already be updated. */
s/be/been
Uber nit, there may not be a next vCPU. It'd be slightly more accurate to
say something like "Do nothing if this vCPU has already been updated".
> + ret = sev_es_sync_vmsa(svm);
> + if (svm->vcpu.arch.guest_state_protected)
> + goto unlock;
This belongs in a separate patch, too. It also introduces a bug (arguably two)
in that it adds a duplicate call to sev_es_sync_vmsa(). The second bug is that
if sev_es_sync_vmsa() fails _and_ the vCPU is already protected, this will cause
that failure to be squashed.
In the end, I think the least gross implementation will look something like this,
implemented over two patches (one for the lock, one for the protected check).
static int __sev_launch_update_vmsa(struct kvm *kvm, struct kvm_vcpu *vcpu,
				    int *error)
{
	struct sev_data_launch_update_vmsa vmsa;
	struct vcpu_svm *svm = to_svm(vcpu);
	int ret;

	/*
	 * Do nothing if this vCPU has already been updated. This is allowed
	 * to let userspace retry LAUNCH_UPDATE_VMSA if the command fails on a
	 * later vCPU.
	 */
	if (svm->vcpu.arch.guest_state_protected)
		return 0;

	/* Perform some pre-encryption checks against the VMSA */
	ret = sev_es_sync_vmsa(svm);
	if (ret)
		return ret;

	/*
	 * The LAUNCH_UPDATE_VMSA command will perform in-place
	 * encryption of the VMSA memory content (i.e it will write
	 * the same memory region with the guest's key), so invalidate
	 * it first.
	 */
	clflush_cache_range(svm->vmsa, PAGE_SIZE);

	vmsa.reserved = 0;
	vmsa.handle = to_kvm_svm(kvm)->sev_info.handle;
	vmsa.address = __sme_pa(svm->vmsa);
	vmsa.len = PAGE_SIZE;

	return sev_issue_cmd(kvm, SEV_CMD_LAUNCH_UPDATE_VMSA, &vmsa, error);
}

static int sev_launch_update_vmsa(struct kvm *kvm, struct kvm_sev_cmd *argp)
{
	struct kvm_vcpu *vcpu;
	int i, ret;

	if (!sev_es_guest(kvm))
		return -ENOTTY;

	kvm_for_each_vcpu(i, vcpu, kvm) {
		ret = mutex_lock_killable(&vcpu->mutex);
		if (ret)
			return ret;

		ret = __sev_launch_update_vmsa(kvm, vcpu, &argp->error);

		mutex_unlock(&vcpu->mutex);
		if (ret)
			return ret;
	}

	return 0;
}
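As an aside, the "do nothing if already updated" behavior is what makes a dumb
userspace retry loop safe. A rough sketch of that retry, assuming vm_fd and
sev_fd are already-open handles to the VM and /dev/sev (the names and error
handling here are mine, not from the patch):

#include <errno.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

static int launch_update_all_vmsas(int vm_fd, int sev_fd)
{
	struct kvm_sev_cmd cmd = {
		.id = KVM_SEV_LAUNCH_UPDATE_VMSA,
		.sev_fd = (unsigned int)sev_fd,
	};
	int r;

	/*
	 * A failure partway through the vCPU loop (e.g. if
	 * mutex_lock_killable() is interrupted) can be handled by simply
	 * reissuing the command; vCPUs whose VMSAs were already encrypted
	 * are skipped.
	 */
	do {
		r = ioctl(vm_fd, KVM_MEMORY_ENCRYPT_OP, &cmd);
	} while (r == -1 && errno == EINTR);

	if (r == -1)
		fprintf(stderr, "LAUNCH_UPDATE_VMSA failed: fw error %u\n",
			cmd.error);
	return r;
}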