Message-ID: <CAA03e5E=qi4+4c0SUv7u4P4ouJOTN9XUmD_Q2h-kBtBhwKLVDA@mail.gmail.com>
Date:   Wed, 15 Sep 2021 15:40:01 -0700
From:   Marc Orr <marcorr@...gle.com>
To:     Peter Gonda <pgonda@...gle.com>
Cc:     kvm list <kvm@...r.kernel.org>,
        Paolo Bonzini <pbonzini@...hat.com>,
        Sean Christopherson <seanjc@...gle.com>,
        Brijesh Singh <brijesh.singh@....com>, stable@...r.kernel.org,
        LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH V2] KVM: SEV: Acquire vcpu mutex when updating VMSA

On Wed, Sep 15, 2021 at 10:18 AM Peter Gonda <pgonda@...gle.com> wrote:
>
> Add a vcpu mutex guard to the VMSA updating code. Refactor the
> per-vCPU work of sev_launch_update_vmsa() out into a new
> __sev_launch_update_vmsa() helper.

Can you expand the changelog, and perhaps add a comment into the
source code as well, to explain what grabbing the mutex protects us
from? I assume that it's a poorly behaved user-space, rather than a
race condition in a well-behaved user-space VMM, but I'm not certain.

Other than that, the patch itself seems fine to me.
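
To illustrate the kind of comment I have in mind (just a sketch, and it
assumes the concern really is user space concurrently issuing vCPU
ioctls such as KVM_RUN while the VMSA is being encrypted in place),
something along these lines in the loop would do:

        kvm_for_each_vcpu(i, vcpu, kvm) {
                /*
                 * Hold the vcpu mutex so a misbehaving user space cannot
                 * run the vCPU or otherwise mutate its state while
                 * LAUNCH_UPDATE_VMSA encrypts and measures the VMSA.
                 */
                ret = mutex_lock_killable(&vcpu->mutex);
                if (ret)
                        return ret;

                ret = __sev_launch_update_vmsa(kvm, vcpu, &argp->error);

                mutex_unlock(&vcpu->mutex);
                if (ret)
                        return ret;
        }

If the race you're guarding against is something else, please spell
that out in the comment instead.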

>
> Fixes: ad73109ae7ec ("KVM: SVM: Provide support to launch and run an SEV-ES guest")
>
> Signed-off-by: Peter Gonda <pgonda@...gle.com>
> Cc: Marc Orr <marcorr@...gle.com>
> Cc: Paolo Bonzini <pbonzini@...hat.com>
> Cc: Sean Christopherson <seanjc@...gle.com>
> Cc: Brijesh Singh <brijesh.singh@....com>
> Cc: kvm@...r.kernel.org
> Cc: stable@...r.kernel.org
> Cc: linux-kernel@...r.kernel.org
> ---
>
> V2
>  * Refactor the per-vCPU work into a separate function.
>  * Remove the check that skipped already-initialized VMSAs.
>  * Remove the zeroing of the vmsa struct.
>
> ---
>  arch/x86/kvm/svm/sev.c | 53 ++++++++++++++++++++++++------------------
>  1 file changed, 30 insertions(+), 23 deletions(-)
>
> diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
> index 75e0b21ad07c..766510fe3abb 100644
> --- a/arch/x86/kvm/svm/sev.c
> +++ b/arch/x86/kvm/svm/sev.c
> @@ -595,43 +595,50 @@ static int sev_es_sync_vmsa(struct vcpu_svm *svm)
>         return 0;
>  }
>
> -static int sev_launch_update_vmsa(struct kvm *kvm, struct kvm_sev_cmd *argp)
> +static int __sev_launch_update_vmsa(struct kvm *kvm, struct kvm_vcpu *vcpu,
> +                                   int *error)
>  {
> -       struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
>         struct sev_data_launch_update_vmsa vmsa;
> +       struct vcpu_svm *svm = to_svm(vcpu);
> +       int ret;
> +
> +       /* Perform some pre-encryption checks against the VMSA */
> +       ret = sev_es_sync_vmsa(svm);
> +       if (ret)
> +               return ret;
> +
> +       /*
> +        * The LAUNCH_UPDATE_VMSA command will perform in-place encryption of
> +        * the VMSA memory content (i.e it will write the same memory region
> +        * with the guest's key), so invalidate it first.
> +        */
> +       clflush_cache_range(svm->vmsa, PAGE_SIZE);
> +
> +       vmsa.reserved = 0;
> +       vmsa.handle = to_kvm_svm(kvm)->sev_info.handle;
> +       vmsa.address = __sme_pa(svm->vmsa);
> +       vmsa.len = PAGE_SIZE;
> +       return sev_issue_cmd(kvm, SEV_CMD_LAUNCH_UPDATE_VMSA, &vmsa, error);
> +}
> +
> +static int sev_launch_update_vmsa(struct kvm *kvm, struct kvm_sev_cmd *argp)
> +{
>         struct kvm_vcpu *vcpu;
>         int i, ret;
>
>         if (!sev_es_guest(kvm))
>                 return -ENOTTY;
>
> -       vmsa.reserved = 0;
> -
> -       kvm_for_each_vcpu(i, vcpu, kvm) {
> -               struct vcpu_svm *svm = to_svm(vcpu);
> -
> -               /* Perform some pre-encryption checks against the VMSA */
> -               ret = sev_es_sync_vmsa(svm);
> +       kvm_for_each_vcpu(i, vcpu, kvm) {
> +               ret = mutex_lock_killable(&vcpu->mutex);
>                 if (ret)
>                         return ret;
>
> -               /*
> -                * The LAUNCH_UPDATE_VMSA command will perform in-place
> -                * encryption of the VMSA memory content (i.e it will write
> -                * the same memory region with the guest's key), so invalidate
> -                * it first.
> -                */
> -               clflush_cache_range(svm->vmsa, PAGE_SIZE);
> +               ret = __sev_launch_update_vmsa(kvm, vcpu, &argp->error);
>
> -               vmsa.handle = sev->handle;
> -               vmsa.address = __sme_pa(svm->vmsa);
> -               vmsa.len = PAGE_SIZE;
> -               ret = sev_issue_cmd(kvm, SEV_CMD_LAUNCH_UPDATE_VMSA, &vmsa,
> -                                   &argp->error);
> +               mutex_unlock(&vcpu->mutex);
>                 if (ret)
>                         return ret;
> -
> -               svm->vcpu.arch.guest_state_protected = true;
>         }
>
>         return 0;
> --
> 2.33.0.464.g1972c5931b-goog
>
