Message-ID: <7c51d4ae251323ce8c224aa362a4be616b4cfeba.1747368093.git.afranji@google.com>
Date: Fri, 16 May 2025 19:19:30 +0000
From: Ryan Afranji <afranji@...gle.com>
To: afranji@...gle.com, ackerleytng@...gle.com, pbonzini@...hat.com,
seanjc@...gle.com, tglx@...utronix.de, x86@...nel.org, kvm@...r.kernel.org,
linux-kernel@...r.kernel.org, linux-kselftest@...r.kernel.org,
tabba@...gle.com
Cc: mingo@...hat.com, bp@...en8.de, dave.hansen@...ux.intel.com, hpa@...or.com,
shuah@...nel.org, andrew.jones@...ux.dev, ricarkol@...gle.com,
chao.p.peng@...ux.intel.com, jarkko@...nel.org, yu.c.zhang@...ux.intel.com,
vannapurve@...gle.com, erdemaktas@...gle.com, mail@...iej.szmigiero.name,
vbabka@...e.cz, david@...hat.com, qperret@...gle.com, michael.roth@....com,
wei.w.wang@...el.com, liam.merwick@...cle.com, isaku.yamahata@...il.com,
kirill.shutemov@...ux.intel.com, sagis@...gle.com, jthoughton@...gle.com
Subject: [RFC PATCH v2 10/13] KVM: x86: Let moving encryption context be configurable
From: Ackerley Tng <ackerleytng@...gle.com>
SEV-capable VMs may also use the KVM_X86_SW_PROTECTED_VM type, but
they still need architecture-specific handling to move their
encryption context. Hence, make moving the encryption context
configurable and record that configuration in a per-VM flag.
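For reference, the move is still initiated from userspace by enabling
KVM_CAP_VM_MOVE_ENC_CONTEXT_FROM on the destination VM; a minimal
sketch of that call (fd handling and names are illustrative, not part
of this patch):

  #include <string.h>
  #include <sys/ioctl.h>
  #include <linux/kvm.h>

  /* Ask KVM to move the encryption context of src_vm_fd into dst_vm_fd. */
  static int move_enc_context_from(int dst_vm_fd, int src_vm_fd)
  {
  	struct kvm_enable_cap cap;

  	memset(&cap, 0, sizeof(cap));
  	cap.cap = KVM_CAP_VM_MOVE_ENC_CONTEXT_FROM;
  	cap.args[0] = src_vm_fd;

  	return ioctl(dst_vm_fd, KVM_ENABLE_CAP, &cap);
  }

With this patch, that ioctl only reaches the vendor
vm_move_enc_context_from hook when the destination VM's type has set
kvm->arch.use_vm_enc_ctxt_op (e.g. SEV VMs via __sev_guest_init()).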
Co-developed-by: Vishal Annapurve <vannapurve@...gle.com>
Signed-off-by: Vishal Annapurve <vannapurve@...gle.com>
Signed-off-by: Ackerley Tng <ackerleytng@...gle.com>
Signed-off-by: Ryan Afranji <afranji@...gle.com>
---
arch/x86/include/asm/kvm_host.h | 1 +
arch/x86/kvm/svm/sev.c | 2 ++
arch/x86/kvm/x86.c | 9 ++++++++-
3 files changed, 11 insertions(+), 1 deletion(-)
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 179618300270..db37ce814611 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1576,6 +1576,7 @@ struct kvm_arch {
 #define SPLIT_DESC_CACHE_MIN_NR_OBJECTS (SPTE_ENT_PER_PAGE + 1)
 	struct kvm_mmu_memory_cache split_desc_cache;
+	bool use_vm_enc_ctxt_op;
 
 	gfn_t gfn_direct_bits;
 
 	/*
diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index 689521d9e26f..95083556d321 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -442,6 +442,8 @@ static int __sev_guest_init(struct kvm *kvm, struct kvm_sev_cmd *argp,
 	if (ret)
 		goto e_no_asid;
 
+	kvm->arch.use_vm_enc_ctxt_op = true;
+
 	init_args.probe = false;
 	ret = sev_platform_init(&init_args);
 	if (ret)
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 637540309456..3a7e05c47aa8 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -6624,7 +6624,14 @@ static int kvm_vm_move_enc_context_from(struct kvm *kvm, unsigned int source_fd)
 	if (r)
 		goto out_mark_migration_done;
 
-	r = kvm_x86_call(vm_move_enc_context_from)(kvm, source_kvm);
+	/*
+	 * Different types of VMs will allow userspace to define if moving
+	 * encryption context should be required.
+	 */
+	if (kvm->arch.use_vm_enc_ctxt_op &&
+	    kvm_x86_ops.vm_move_enc_context_from) {
+		r = kvm_x86_call(vm_move_enc_context_from)(kvm, source_kvm);
+	}
 
 	kvm_unlock_two_vms(kvm, source_kvm);
 out_mark_migration_done:
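
Note: a backend that implements vm_move_enc_context_from opts in from
its own VM init path, mirroring the sev.c hunk above; a hypothetical
sketch (the function name below is made up for illustration):

  /* In some other vendor-specific VM init path (illustrative only). */
  static int example_vendor_vm_init(struct kvm *kvm)
  {
  	/* Opt this VM type in to the vm_move_enc_context_from hook. */
  	kvm->arch.use_vm_enc_ctxt_op = true;
  	return 0;
  }

VM types that never set the flag simply skip the vendor callback in
kvm_vm_move_enc_context_from().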
--
2.49.0.1101.gccaa498523-goog