Message-ID: <2f1de1b7b6512280fae4ac05e77ced80a585971b.1712785629.git.isaku.yamahata@intel.com>
Date: Wed, 10 Apr 2024 15:07:33 -0700
From: isaku.yamahata@...el.com
To: kvm@...r.kernel.org
Cc: isaku.yamahata@...el.com,
	isaku.yamahata@...il.com,
	linux-kernel@...r.kernel.org,
	Sean Christopherson <seanjc@...gle.com>,
	Paolo Bonzini <pbonzini@...hat.com>,
	Michael Roth <michael.roth@....com>,
	David Matlack <dmatlack@...gle.com>,
	Federico Parola <federico.parola@...ito.it>,
	Kai Huang <kai.huang@...el.com>
Subject: [PATCH v2 07/10] KVM: x86: Always populate L1 GPA for KVM_MAP_MEMORY

From: Isaku Yamahata <isaku.yamahata@...el.com>

Forcibly switch the vCPU out of guest mode and SMM mode before calling the
KVM page fault handler for KVM_MAP_MEMORY.

KVM_MAP_MEMORY populates guest memory at a given guest physical address
(GPA).  If the vCPU is in guest mode, the address is interpreted as an L2
GPA; if the vCPU is in SMM mode, it populates the SMM address space.  That
would make the API difficult to use.  Switch the vCPU MMU mode around
populating the guest memory so that the address is always interpreted as an
L1 GPA.
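
For illustration only (not part of this patch): a minimal sketch of how
userspace might drive the ioctl so that the populated range is always
interpreted as an L1 GPA.  The struct kvm_memory_mapping layout beyond
base_address/size and the KVM_MAP_MEMORY definition are assumed from earlier
patches in this series, and the error handling is simplified.

#include <err.h>
#include <errno.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>	/* assumes this series' KVM_MAP_MEMORY uapi */

/* Pre-populate [gpa, gpa + size) of the vCPU's L1 GPA space. */
static void prepopulate(int vcpu_fd, __u64 gpa, __u64 size)
{
	struct kvm_memory_mapping mapping = {
		.base_address = gpa,	/* treated as L1 GPA after this patch */
		.size = size,
	};

	/* On success the kernel advances base_address and shrinks size. */
	while (mapping.size) {
		if (ioctl(vcpu_fd, KVM_MAP_MEMORY, &mapping) &&
		    errno != EAGAIN && errno != EINTR)
			err(1, "KVM_MAP_MEMORY");
	}
}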

There are several options to populate the L1 GPA regardless of the vCPU mode.
- Switch vCPU MMU only: this patch (a condensed sketch follows this list).
  Pros: Concise implementation.
  Cons: Heavily dependent on the KVM MMU implementation.
- Use kvm_x86_nested_ops.get/set_state() to switch to/from guest mode.
  Use __get/set_sregs2() to switch to/from SMM mode.
  Pros: Straightforward.
  Cons: This may cause unintended side effects.
- Refactor the KVM page fault handler so that it doesn't take a vCPU; pass
  around the necessary parameters and struct kvm instead.
  Pros: The end result clearly has no side effects.
  Cons: This requires a large refactoring.
- Return an error in guest mode or SMM mode: the behavior without this patch.
  Pros: No additional patch.
  Cons: Difficult to use.
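
A condensed sketch of the option taken by this patch (the hunk below
open-codes the same save/swap/restore and only performs it when the vCPU is
actually in SMM or guest mode).  The struct kvm_vcpu_arch fields, HF_SMM_MASK
and kvm_mmu_reset_context() are the real ones; the helpers themselves are
hypothetical and kernel headers are omitted.

struct saved_mmu_mode {
	struct kvm_mmu *mmu, *walk_mmu;
	bool is_smm;
};

/* Save the current mode and force translation through the L1 (root) MMU. */
static void force_l1_mmu(struct kvm_vcpu *vcpu, struct saved_mmu_mode *s)
{
	s->is_smm = !!(vcpu->arch.hflags & HF_SMM_MASK);
	s->mmu = vcpu->arch.mmu;
	s->walk_mmu = vcpu->arch.walk_mmu;

	vcpu->arch.hflags &= ~HF_SMM_MASK;
	vcpu->arch.mmu = &vcpu->arch.root_mmu;
	vcpu->arch.walk_mmu = &vcpu->arch.root_mmu;
	kvm_mmu_reset_context(vcpu);
}

/* Undo force_l1_mmu() once the mapping is done. */
static void restore_mmu_mode(struct kvm_vcpu *vcpu, struct saved_mmu_mode *s)
{
	if (s->is_smm)
		vcpu->arch.hflags |= HF_SMM_MASK;
	vcpu->arch.mmu = s->mmu;
	vcpu->arch.walk_mmu = s->walk_mmu;
	kvm_mmu_reset_context(vcpu);
}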

Signed-off-by: Isaku Yamahata <isaku.yamahata@...el.com>
---
v2:
- Newly added.
---
 arch/x86/kvm/x86.c | 26 +++++++++++++++++++++++++-
 1 file changed, 25 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 2c765de3531e..8ba9c1720ac9 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -5871,8 +5871,10 @@ static int kvm_vcpu_ioctl_enable_cap(struct kvm_vcpu *vcpu,
 int kvm_arch_vcpu_map_memory(struct kvm_vcpu *vcpu,
 			     struct kvm_memory_mapping *mapping)
 {
+	struct kvm_mmu *mmu = NULL, *walk_mmu = NULL;
 	u64 end, error_code = 0;
 	u8 level = PG_LEVEL_4K;
+	bool is_smm;
 	int r;
 
 	/*
@@ -5882,18 +5884,40 @@ int kvm_arch_vcpu_map_memory(struct kvm_vcpu *vcpu,
 	if (!tdp_enabled)
 		return -EOPNOTSUPP;
 
+	/* Force the use of the L1 GPA regardless of the vCPU MMU mode. */
+	is_smm = !!(vcpu->arch.hflags & HF_SMM_MASK);
+	if (is_smm ||
+	    vcpu->arch.mmu != &vcpu->arch.root_mmu ||
+	    vcpu->arch.walk_mmu != &vcpu->arch.root_mmu) {
+		vcpu->arch.hflags &= ~HF_SMM_MASK;
+		mmu = vcpu->arch.mmu;
+		walk_mmu = vcpu->arch.walk_mmu;
+		vcpu->arch.mmu = &vcpu->arch.root_mmu;
+		vcpu->arch.walk_mmu = &vcpu->arch.root_mmu;
+		kvm_mmu_reset_context(vcpu);
+	}
+
 	/* reload is optimized for repeated call. */
 	kvm_mmu_reload(vcpu);
 
 	r = kvm_tdp_map_page(vcpu, mapping->base_address, error_code, &level);
 	if (r)
-		return r;
+		goto out;
 
 	/* mapping->base_address is not necessarily aligned to level-hugepage. */
 	end = (mapping->base_address & KVM_HPAGE_MASK(level)) +
 		KVM_HPAGE_SIZE(level);
 	mapping->size -= end - mapping->base_address;
 	mapping->base_address = end;
+
+out:
+	/* Restore MMU state. */
+	if (is_smm || mmu) {
+		vcpu->arch.hflags |= is_smm ? HF_SMM_MASK : 0;
+		vcpu->arch.mmu = mmu;
+		vcpu->arch.walk_mmu = walk_mmu;
+		kvm_mmu_reset_context(vcpu);
+	}
 	return r;
 }
 
-- 
2.43.2