Message-ID: <11F6D766-EC47-4283-8797-68A1405511B0@intel.com>
Date: Mon, 13 May 2019 19:31:10 +0000
From: "Nakajima, Jun" <jun.nakajima@...el.com>
To: Alexandre Chartre <alexandre.chartre@...cle.com>,
"pbonzini@...hat.com" <pbonzini@...hat.com>,
"rkrcmar@...hat.com" <rkrcmar@...hat.com>,
"tglx@...utronix.de" <tglx@...utronix.de>,
"mingo@...hat.com" <mingo@...hat.com>,
"bp@...en8.de" <bp@...en8.de>, "hpa@...or.com" <hpa@...or.com>,
"dave.hansen@...ux.intel.com" <dave.hansen@...ux.intel.com>,
"luto@...nel.org" <luto@...nel.org>,
"peterz@...radead.org" <peterz@...radead.org>,
"kvm@...r.kernel.org" <kvm@...r.kernel.org>,
"x86@...nel.org" <x86@...nel.org>,
"linux-mm@...ck.org" <linux-mm@...ck.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
CC: "konrad.wilk@...cle.com" <konrad.wilk@...cle.com>,
"jan.setjeeilers@...cle.com" <jan.setjeeilers@...cle.com>,
"liran.alon@...cle.com" <liran.alon@...cle.com>,
"jwadams@...gle.com" <jwadams@...gle.com>
Subject: Re: [RFC KVM 00/27] KVM Address Space Isolation
On 5/13/19, 7:43 AM, "kvm-owner@...r.kernel.org on behalf of Alexandre Chartre" wrote:
Proposal
========
To handle both these points, this series introduces the mechanism of KVM
address space isolation. Note that this mechanism complements (a)+(b) and
does not contradict them. Even when this mechanism is applied, (a)+(b) should
still be applied to the full virtual address space as defence-in-depth.
The idea is that most of the KVM #VMExit handler code will run in a special
KVM isolated address space which maps only the code KVM requires and per-VM
information. Only when KVM needs to architecturally access other (sensitive)
data will it switch from the KVM isolated address space to the full standard
host address space. At that point, KVM will also need to kick all sibling
hyperthreads to get them out of the guest (note that kicking all sibling
hyperthreads is not implemented in this series).
Basically, we will have the following flow:
- qemu issues the KVM_RUN ioctl
- KVM handles the ioctl and calls vcpu_run():
  . KVM switches from the kernel address space to the KVM address space
  . KVM transfers control to the VM (VMLAUNCH/VMRESUME)
  . the VM returns to KVM
  . KVM handles the VM-Exit:
    . if handling needs the full kernel then switch to the kernel address space
    . else continue with the KVM address space
  . KVM loops in vcpu_run() or returns
- the KVM_RUN ioctl returns
So, the KVM_RUN core function will mainly execute in the KVM address
space. Handling a VM-Exit can require access to kernel data and, in that
case, we will switch back to the kernel address space.
Once all sibling hyperthreads are in the host (either using the full kernel address space or a user address space), what happens if one of them tries to do VM entry? That VCPU will switch to the KVM address space prior to VM entry, but do the others continue to run? Do you think (a) + (b) would be sufficient for that case?
---
Jun
Intel Open Source Technology Center