Message-ID: <20250326193619.3714986-7-yosry.ahmed@linux.dev>
Date: Wed, 26 Mar 2025 19:36:01 +0000
From: Yosry Ahmed <yosry.ahmed@...ux.dev>
To: Sean Christopherson <seanjc@...gle.com>
Cc: Paolo Bonzini <pbonzini@...hat.com>,
	Jim Mattson <jmattson@...gle.com>,
	Maxim Levitsky <mlevitsk@...hat.com>,
	Vitaly Kuznetsov <vkuznets@...hat.com>,
	Rik van Riel <riel@...riel.com>,
	Tom Lendacky <thomas.lendacky@....com>,
	x86@...nel.org,
	kvm@...r.kernel.org,
	linux-kernel@...r.kernel.org,
	Yosry Ahmed <yosry.ahmed@...ux.dev>
Subject: [RFC PATCH 06/24] KVM: SEV: Track ASID->vCPU instead of ASID->VMCB

SEV currently tracks the ASID to VMCB mapping for each physical CPU.
This is required to flush the ASID when a different VMCB using the same
ASID is run on the same CPU. In practice, there is a single VMCB per
vCPU using SEV. Furthermore, TLB flushes on nested transitions between
VMCB01 and VMCB02 are handled separately (see
nested_svm_transition_tlb_flush()).
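
For reference, the per-CPU check that implements this today boils down
to the following (excerpt from pre_sev_run(); the same lines are
visible as the removed lines in the hunk below):

	if (sd->sev_vmcbs[asid] == svm->vmcb &&
	    svm->vcpu.arch.last_vmentry_cpu == cpu)
		return 0;	/* same VMCB, same CPU: no flush needed */

	sd->sev_vmcbs[asid] = svm->vmcb;	/* otherwise, record and flush */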

In preparation for generalizing the tracking and making it more
expensive, start tracking the ASID to vCPU mapping instead. This will
allow the tracking to be moved to a cheaper code path, one that only
runs when vCPUs are switched.
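
To illustrate the new scheme in isolation, below is a small,
self-contained model of the flush decision in plain C. The type and
function names are hypothetical stand-ins, not the kernel's; the logic
mirrors the post-patch pre_sev_run() check:

	#include <stdbool.h>

	/* Hypothetical stand-ins for the kernel structures. */
	struct vcpu { int last_vmentry_cpu; };

	struct cpu_data {
		struct vcpu **sev_vcpus;	/* index = SEV ASID */
	};

	/*
	 * A TLB flush is needed before VMRUN unless this vCPU was both
	 * the last user of the ASID on this CPU and last entered the
	 * guest on this same CPU.
	 */
	static bool needs_flush(struct cpu_data *sd, struct vcpu *v,
				unsigned int asid, int cpu)
	{
		if (sd->sev_vcpus[asid] == v && v->last_vmentry_cpu == cpu)
			return false;

		sd->sev_vcpus[asid] = v;	/* record the new owner */
		return true;
	}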

Signed-off-by: Yosry Ahmed <yosry.ahmed@...ux.dev>
---
 arch/x86/kvm/svm/sev.c | 12 ++++++------
 arch/x86/kvm/svm/svm.c |  2 +-
 arch/x86/kvm/svm/svm.h |  4 ++--
 3 files changed, 9 insertions(+), 9 deletions(-)

diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index d613f81addf1c..ddb4d5b211ed7 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -240,7 +240,7 @@ static void sev_asid_free(struct kvm_sev_info *sev)
 
 	for_each_possible_cpu(cpu) {
 		sd = per_cpu_ptr(&svm_data, cpu);
-		sd->sev_vmcbs[sev->asid] = NULL;
+		sd->sev_vcpus[sev->asid] = NULL;
 	}
 
 	mutex_unlock(&sev_bitmap_lock);
@@ -3081,8 +3081,8 @@ int sev_cpu_init(struct svm_cpu_data *sd)
 	if (!sev_enabled)
 		return 0;
 
-	sd->sev_vmcbs = kcalloc(nr_asids, sizeof(void *), GFP_KERNEL);
-	if (!sd->sev_vmcbs)
+	sd->sev_vcpus = kcalloc(nr_asids, sizeof(void *), GFP_KERNEL);
+	if (!sd->sev_vcpus)
 		return -ENOMEM;
 
 	return 0;
@@ -3471,14 +3471,14 @@ int pre_sev_run(struct vcpu_svm *svm, int cpu)
 	/*
 	 * Flush guest TLB:
 	 *
-	 * 1) when different VMCB for the same ASID is to be run on the same host CPU.
+	 * 1) when a different vCPU using the same ASID is to be run on the same host CPU.
 	 * 2) or this VMCB was executed on different host CPU in previous VMRUNs.
 	 */
-	if (sd->sev_vmcbs[asid] == svm->vmcb &&
+	if (sd->sev_vcpus[asid] == &svm->vcpu &&
 	    svm->vcpu.arch.last_vmentry_cpu == cpu)
 		return 0;
 
-	sd->sev_vmcbs[asid] = svm->vmcb;
+	sd->sev_vcpus[asid] = &svm->vcpu;
 	vmcb_set_flush_asid(svm->vmcb);
 	vmcb_mark_dirty(svm->vmcb, VMCB_ASID);
 	return 0;
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 18bfc3d3f9ba1..1156ca97fd798 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -694,7 +694,7 @@ static void svm_cpu_uninit(int cpu)
 	if (!sd->save_area)
 		return;
 
-	kfree(sd->sev_vmcbs);
+	kfree(sd->sev_vcpus);
 	__free_page(__sme_pa_to_page(sd->save_area_pa));
 	sd->save_area_pa = 0;
 	sd->save_area = NULL;
diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index 843a29a6d150e..4ea6c61c3b048 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -340,8 +340,8 @@ struct svm_cpu_data {
 
 	struct vmcb *current_vmcb;
 
-	/* index = sev_asid, value = vmcb pointer */
-	struct vmcb **sev_vmcbs;
+	/* index = sev_asid, value = vcpu pointer */
+	struct kvm_vcpu **sev_vcpus;
 };
 
 DECLARE_PER_CPU(struct svm_cpu_data, svm_data);
-- 
2.49.0.395.g12beb8f557-goog

