Message-Id: <1557758315-12667-5-git-send-email-alexandre.chartre@oracle.com>
Date: Mon, 13 May 2019 16:38:12 +0200
From: Alexandre Chartre <alexandre.chartre@...cle.com>
To: pbonzini@...hat.com, rkrcmar@...hat.com, tglx@...utronix.de,
mingo@...hat.com, bp@...en8.de, hpa@...or.com,
dave.hansen@...ux.intel.com, luto@...nel.org, peterz@...radead.org,
kvm@...r.kernel.org, x86@...nel.org, linux-mm@...ck.org,
linux-kernel@...r.kernel.org
Cc: konrad.wilk@...cle.com, jan.setjeeilers@...cle.com,
liran.alon@...cle.com, jwadams@...gle.com,
alexandre.chartre@...cle.com
Subject: [RFC KVM 04/27] KVM: x86: Switch to KVM address space on entry to guest
From: Liran Alon <liran.alon@...cle.com>
Switch to the KVM address space on entry to the guest and switch
back immediately at exit (before re-enabling host interrupts).
For now, no switch actually takes place: we simply remain on the
kernel address space. In addition, because we switch back as soon
as we exit the guest, the KVM #VMExit handlers still run with the
full host address space.
However, this introduces the entry points and the places where
switching will occur. Subsequent commits will change the switch to
happen only when necessary.
Signed-off-by: Liran Alon <liran.alon@...cle.com>
Signed-off-by: Alexandre Chartre <alexandre.chartre@...cle.com>
---
arch/x86/kvm/isolation.c | 20 ++++++++++++++++++++
arch/x86/kvm/isolation.h | 2 ++
arch/x86/kvm/x86.c | 8 ++++++++
3 files changed, 30 insertions(+), 0 deletions(-)
diff --git a/arch/x86/kvm/isolation.c b/arch/x86/kvm/isolation.c
index 74bc0cd..35aa659 100644
--- a/arch/x86/kvm/isolation.c
+++ b/arch/x86/kvm/isolation.c
@@ -119,3 +119,23 @@ void kvm_isolation_uninit(void)
 	kvm_isolation_uninit_mm();
 	pr_info("KVM: x86: End of isolated address space\n");
 }
+
+void kvm_isolation_enter(void)
+{
+	if (address_space_isolation) {
+		/*
+		 * Switches to kvm_mm should happen from a vCPU thread,
+		 * which is not a kernel thread and so always has an mm.
+		 */
+		BUG_ON(current->active_mm == NULL);
+		/* TODO: switch to kvm_mm */
+	}
+}
+
+void kvm_isolation_exit(void)
+{
+	if (address_space_isolation) {
+		/* TODO: kick the sibling hyperthread before switching to the host mm */
+		/* TODO: switch back to the original mm */
+	}
+}
diff --git a/arch/x86/kvm/isolation.h b/arch/x86/kvm/isolation.h
index cf8c7d4..595f62c 100644
--- a/arch/x86/kvm/isolation.h
+++ b/arch/x86/kvm/isolation.h
@@ -4,5 +4,7 @@
 extern int kvm_isolation_init(void);
 extern void kvm_isolation_uninit(void);
+extern void kvm_isolation_enter(void);
+extern void kvm_isolation_exit(void);
 
 #endif
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 4b7cec2..85700e0 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -7896,6 +7896,8 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
 		goto cancel_injection;
 	}
 
+	kvm_isolation_enter();
+
 	if (req_immediate_exit) {
 		kvm_make_request(KVM_REQ_EVENT, vcpu);
 		kvm_x86_ops->request_immediate_exit(vcpu);
@@ -7946,6 +7948,12 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
 	vcpu->arch.last_guest_tsc = kvm_read_l1_tsc(vcpu, rdtsc());
 
+	/*
+	 * TODO: move this to where we architecturally need to access
+	 * host (or other VM) sensitive data
+	 */
+	kvm_isolation_exit();
+
 	vcpu->mode = OUTSIDE_GUEST_MODE;
 	smp_wmb();
--
1.7.1