Message-ID: <5f6e6f61-0d34-e640-caea-ff71ac1563d8@amd.com>
Date: Fri, 8 Oct 2021 10:38:09 -0500
From: Tom Lendacky <thomas.lendacky@....com>
To: Peter Gonda <pgonda@...gle.com>, kvm@...r.kernel.org
Cc: Marc Orr <marcorr@...gle.com>, Paolo Bonzini <pbonzini@...hat.com>,
Sean Christopherson <seanjc@...gle.com>,
David Rientjes <rientjes@...gle.com>,
"Dr . David Alan Gilbert" <dgilbert@...hat.com>,
Brijesh Singh <brijesh.singh@....com>,
Vitaly Kuznetsov <vkuznets@...hat.com>,
Wanpeng Li <wanpengli@...cent.com>,
Jim Mattson <jmattson@...gle.com>,
Joerg Roedel <joro@...tes.org>,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
"H. Peter Anvin" <hpa@...or.com>, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 2/4 V9] KVM: SEV: Add support for SEV-ES intra host
migration

On 10/5/21 9:13 AM, Peter Gonda wrote:
> For SEV-ES to work with intra host migration the VMSAs, GHCB metadata,
> and other SEV-ES info needs to be preserved along with the guest's
> memory.
>
> Signed-off-by: Peter Gonda <pgonda@...gle.com>
> Reviewed-by: Marc Orr <marcorr@...gle.com>
> Cc: Marc Orr <marcorr@...gle.com>
> Cc: Paolo Bonzini <pbonzini@...hat.com>
> Cc: Sean Christopherson <seanjc@...gle.com>
> Cc: David Rientjes <rientjes@...gle.com>
> Cc: Dr. David Alan Gilbert <dgilbert@...hat.com>
> Cc: Brijesh Singh <brijesh.singh@....com>
> Cc: Vitaly Kuznetsov <vkuznets@...hat.com>
> Cc: Wanpeng Li <wanpengli@...cent.com>
> Cc: Jim Mattson <jmattson@...gle.com>
> Cc: Joerg Roedel <joro@...tes.org>
> Cc: Thomas Gleixner <tglx@...utronix.de>
> Cc: Ingo Molnar <mingo@...hat.com>
> Cc: Borislav Petkov <bp@...en8.de>
> Cc: "H. Peter Anvin" <hpa@...or.com>
> Cc: kvm@...r.kernel.org
> Cc: linux-kernel@...r.kernel.org
> ---
> arch/x86/kvm/svm/sev.c | 53 +++++++++++++++++++++++++++++++++++++++++-
> 1 file changed, 52 insertions(+), 1 deletion(-)
>
> diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
> index 6fc1935b52ea..321b55654f36 100644
> --- a/arch/x86/kvm/svm/sev.c
> +++ b/arch/x86/kvm/svm/sev.c
> @@ -1576,6 +1576,51 @@ static void sev_migrate_from(struct kvm_sev_info *dst,
> list_replace_init(&src->regions_list, &dst->regions_list);
> }
>
> +static int sev_es_migrate_from(struct kvm *dst, struct kvm *src)
> +{
> + int i;
> + struct kvm_vcpu *dst_vcpu, *src_vcpu;
> + struct vcpu_svm *dst_svm, *src_svm;
> +
> + if (atomic_read(&src->online_vcpus) != atomic_read(&dst->online_vcpus))
> + return -EINVAL;
> +
> + kvm_for_each_vcpu(i, src_vcpu, src) {
> + if (!src_vcpu->arch.guest_state_protected)
> + return -EINVAL;
> + }
> +
> + kvm_for_each_vcpu(i, src_vcpu, src) {
> + src_svm = to_svm(src_vcpu);
> + dst_vcpu = dst->vcpus[i];
> + dst_vcpu = kvm_get_vcpu(dst, i);

One of these two assignments to dst_vcpu is redundant and can be
deleted.

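E.g., keeping just the kvm_get_vcpu() form (only a sketch, either
assignment works here; kvm_get_vcpu() is the usual accessor):

	dst_vcpu = kvm_get_vcpu(dst, i);
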
> + dst_svm = to_svm(dst_vcpu);
> +
> + /*
> + * Transfer VMSA and GHCB state to the destination. Nullify and
> + * clear source fields as appropriate, the state now belongs to
> + * the destination.
> + */
> + dst_vcpu->vcpu_id = src_vcpu->vcpu_id;
> + dst_svm->vmsa = src_svm->vmsa;
> + src_svm->vmsa = NULL;
> + dst_svm->ghcb = src_svm->ghcb;
> + src_svm->ghcb = NULL;
> + dst_svm->vmcb->control.ghcb_gpa = src_svm->vmcb->control.ghcb_gpa;
> + dst_svm->ghcb_sa = src_svm->ghcb_sa;
> + src_svm->ghcb_sa = NULL;
> + dst_svm->ghcb_sa_len = src_svm->ghcb_sa_len;
> + src_svm->ghcb_sa_len = 0;
> + dst_svm->ghcb_sa_sync = src_svm->ghcb_sa_sync;
> + src_svm->ghcb_sa_sync = false;
> + dst_svm->ghcb_sa_free = src_svm->ghcb_sa_free;
> + src_svm->ghcb_sa_free = false;

Would it make sense to have a pre-patch that puts these fields into a
struct? Then you could just copy the struct and zero it afterward. If
anything is ever added for any reason, it could/should be added to the
struct and this code wouldn't have to change, along the lines of the
rough sketch below. It might be more churn than it's worth, just a
thought.

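Purely illustrative, with the struct name and field types guessed from
the current vcpu_svm fields rather than taken from anything in-tree:

	struct vcpu_sev_es_state {
		struct vmcb_save_area *vmsa;
		struct ghcb *ghcb;

		/* SEV-ES scratch area support */
		void *ghcb_sa;
		u32 ghcb_sa_len;
		bool ghcb_sa_sync;
		bool ghcb_sa_free;
	};

With that, the per-vCPU copy above would collapse to roughly:

	dst_svm->sev_es = src_svm->sev_es;
	memset(&src_svm->sev_es, 0, sizeof(src_svm->sev_es));

(The vcpu_id and vmcb->control.ghcb_gpa copies would stay as they are,
since those don't live alongside the other SEV-ES fields.)
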
Thanks,
Tom

> + }
> + to_kvm_svm(src)->sev_info.es_active = false;
> +
> + return 0;
> +}
> +
> int svm_vm_migrate_from(struct kvm *kvm, unsigned int source_fd)
> {
> struct kvm_sev_info *dst_sev = &to_kvm_svm(kvm)->sev_info;
> @@ -1604,7 +1649,7 @@ int svm_vm_migrate_from(struct kvm *kvm, unsigned int source_fd)
> if (ret)
> goto out_fput;
>
> - if (!sev_guest(source_kvm) || sev_es_guest(source_kvm)) {
> + if (!sev_guest(source_kvm)) {
> ret = -EINVAL;
> goto out_source;
> }
> @@ -1615,6 +1660,12 @@ int svm_vm_migrate_from(struct kvm *kvm, unsigned int source_fd)
> if (ret)
> goto out_source_vcpu;
>
> + if (sev_es_guest(source_kvm)) {
> + ret = sev_es_migrate_from(kvm, source_kvm);
> + if (ret)
> + goto out_source_vcpu;
> + }
> +
> sev_migrate_from(dst_sev, &to_kvm_svm(source_kvm)->sev_info);
> kvm_for_each_vcpu (i, vcpu, source_kvm) {
> kvm_vcpu_reset(vcpu, /* init_event= */ false);
>