Message-ID: <CAJhGHyChprt9LvLXXDeu1KwS4_V5mqhUTwJyDvqca-S_PSy6zg@mail.gmail.com>
Date: Fri, 1 Mar 2024 22:00:16 +0800
From: Lai Jiangshan <jiangshanlai@...il.com>
To: Sean Christopherson <seanjc@...gle.com>
Cc: Paolo Bonzini <pbonzini@...hat.com>, linux-kernel@...r.kernel.org,
Lai Jiangshan <jiangshan.ljs@...group.com>, Linus Torvalds <torvalds@...ux-foundation.org>,
Peter Zijlstra <peterz@...radead.org>, Thomas Gleixner <tglx@...utronix.de>,
Borislav Petkov <bp@...en8.de>, Ingo Molnar <mingo@...hat.com>, kvm@...r.kernel.org, x86@...nel.org,
Kees Cook <keescook@...omium.org>, Juergen Gross <jgross@...e.com>,
Hou Wenlong <houwenlong.hwl@...group.com>
Subject: Re: [RFC PATCH 00/73] KVM: x86/PVM: Introduce a new hypervisor
Hello, Sean
On Wed, Feb 28, 2024 at 1:27 AM Sean Christopherson <seanjc@...gle.com> wrote:
>
> On Mon, Feb 26, 2024, Paolo Bonzini wrote:
> > On Mon, Feb 26, 2024 at 3:34 PM Lai Jiangshan <jiangshanlai@...il.com> wrote:
> > > - Full control: In XENPV/Lguest, the host Linux (dom0) entry code is
> > > subordinate to the hypervisor/switcher, and the host Linux kernel
> > > loses control over the entry code. This can cause inconvenience if
> > > there is a need to update something when there is a bug in the
> > > switcher or hardware. Integral entry gives the control back to the
> > > host kernel.
> > >
> > > - Zero overhead incurred: The integrated entry code doesn't cause any
> > > overhead in host Linux entry path, thanks to the discreet design with
> > > PVM code in the switcher, where the PVM path is bypassed on host events.
> > > While in XENPV/Lguest, host events must be handled by the
> > > hypervisor/switcher before being processed.
> >
> > Lguest... Now that's a name I haven't heard in a long time. :) To be
> > honest, it's a bit weird to see yet another PV hypervisor. I think
> > what really killed Xen PV was the impossibility to protect from
> > various speculation side channel attacks, and I would like to
> > understand how PVM fares here.
> >
> > You obviously did a great job in implementing this within the KVM
> > framework; the changes in arch/x86/ are impressively small. On the
> > other hand this means it's also not really my call to decide whether
> > this is suitable for merging upstream. The bulk of the changes are
> > really in arch/x86/kernel/ and arch/x86/entry/, and those are well
> > outside my maintenance area.
>
> The bulk of changes in _this_ patchset are outside of arch/x86/kvm, but there are
> more changes on the horizon:
>
> : To mitigate the performance problem, we designed several optimizations
> : for the shadow MMU (not included in the patchset) and also planning to
> : build a shadow EPT in L0 for L2 PVM guests.
>
> : - Parallel Page fault for SPT and Paravirtualized MMU Optimization.
>
> And even absent _new_ shadow paging functionality, merging PVM would effectively
> shatter any hopes of ever removing KVM's existing, complex shadow paging code.
>
> Specifically, unsync 4KiB PTE support in KVM provides almost no benefit for nested
> TDP. So if we can ever drop support for legacy shadow paging, which is a big if,
> but not completely impossible, then we could greatly simplify KVM's shadow MMU.
>
One of the important goals of open-sourcing PVM is to allow shadow paging
to be optimized, especially through paravirtualization methods, and
potentially even to eliminate the need for it entirely.
1) Technology: Shadow paging is a page-table-compaction technique in the
category of "one-dimensional paging", which also includes XenPV's direct
paging. When the page tables are stable, one-dimensional paging can
outperform TDP because it saves on TLB resources (a rough walk-cost
comparison is sketched after this list). For performance reasons, another
one-dimensional paging technology should be introduced before shadow
paging is removed.
2) Naming: We used the term "shadow paging" in our paper and the cover
letter because it is widely recognized and makes it easier for people to
understand how PVM implements its page tables. It also demonstrates that
PVM can implement a paging mechanism with very little code on top of KVM.
However, this does not mean we are tied to shadow paging; any
one-dimensional paging technology can work here as well.
3) Paravirt: As you mentioned, the best way to eliminate shadow paging
is a paravirtualization (PV) approach. PVM is inherently well suited to
this since it is already a paravirt solution and has the corresponding
framework. However, a PV pagetable interface leads to a complex patchset,
which we prefer not to include in the initial PVM submission.
4) Pave the path: One of the purposes of open-sourcing PVM is to bring
in a new scenario that can justify introducing PV pagetable interfaces
and optimizing shadow paging. Moreover, investing development effort in
shadow paging is the only way to ultimately remove it.
5) Optimizations: We have experimented with numerous optimizations in at
least two categories: parallel pagetable and enlightened pagetable. The
parallel-pagetable work overhauls the locking mechanism within shadow
paging. The enlightened-pagetable work introduces PVOPS in the guest for
modifying the page tables. One set of PVOPS, used for 4KiB PTEs, queues
pointers to the modified GPTEs in a hypervisor-guest shared ring buffer
(a rough sketch of such an interface follows after this list). Although
the overall mechanism, including TLB handling, is not simple, the
hypervisor portion is simpler than the unsync-sp method and bypasses many
unsync-sp related code paths. The other set of PVOPS targets larger page
table entries and issues hypercalls directly. If both sets of PVOPS are
used, write-protection of shadow pages is no longer needed and shadow
paging can effectively be considered removed.
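As a rough illustration of point 1) above: on a TLB miss, one-dimensional
paging walks a single page table, while with TDP every guest page-table
access must itself be translated through the nested table. (This is just
the standard arithmetic for 4-level tables, only one component of the
difference, and not a measurement from PVM.)

/* Worst-case memory accesses to resolve a TLB miss; illustrative only. */
#include <stdio.h>

/* One-dimensional paging (shadow/direct): a single guest-virtual ->
 * host-physical table of 'levels' levels is walked.
 */
static unsigned int one_dimensional_walk(unsigned int levels)
{
	return levels;
}

/* Two-dimensional paging (TDP): each of the 'g' guest levels is a
 * guest-physical access needing its own 'h'-level nested walk, and the
 * final guest-physical address needs one more: g * (h + 1) + h accesses.
 */
static unsigned int two_dimensional_walk(unsigned int g, unsigned int h)
{
	return g * (h + 1) + h;
}

int main(void)
{
	printf("1-D walk, 4 levels:   %u accesses\n", one_dimensional_walk(4));    /* 4 */
	printf("2-D walk, 4x4 levels: %u accesses\n", two_dimensional_walk(4, 4)); /* 24 */
	return 0;
}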
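And to make the enlightened-pagetable PVOP in point 5) a bit more
concrete, here is a minimal userspace-style sketch of the 4KiB-PTE path:
the guest records the GPA of each modified GPTE in a ring shared with the
hypervisor and falls back to an explicit hypercall when the ring is full.
All names here (pv_pte_ring, pv_queue_set_pte, hypercall_flush_pte_ring)
are made up for illustration; this is not the actual interface in our
patches.

#include <stdint.h>
#include <stdio.h>

#define PV_PTE_RING_SIZE 64

struct pv_pte_ring {
	uint64_t gpte_gpa[PV_PTE_RING_SIZE]; /* GPAs of modified GPTEs */
	uint32_t head;                       /* producer index (guest) */
	uint32_t tail;                       /* consumer index (hypervisor) */
};

/* Stand-in for a real hypercall: the hypervisor would drain the ring and
 * resynchronize the affected shadow PTEs before continuing.
 */
static void hypercall_flush_pte_ring(struct pv_pte_ring *ring)
{
	printf("hypercall: sync %u queued GPTEs\n", ring->head - ring->tail);
	ring->tail = ring->head;
}

/* Guest-side PVOP: record that the GPTE at @gpte_gpa was changed to @val.
 * The guest still writes the GPTE itself; the ring only tells the
 * hypervisor which shadow entries are stale.
 */
static void pv_queue_set_pte(struct pv_pte_ring *ring, uint64_t gpte_gpa,
			     uint64_t val)
{
	(void)val; /* the actual guest page-table write happens elsewhere */

	if (ring->head - ring->tail == PV_PTE_RING_SIZE)
		hypercall_flush_pte_ring(ring); /* ring full: sync now */

	ring->gpte_gpa[ring->head % PV_PTE_RING_SIZE] = gpte_gpa;
	ring->head++;
}

int main(void)
{
	struct pv_pte_ring ring = { 0 };

	/* Queue a few hypothetical PTE updates, then flush as a TLB
	 * flush or context switch would.
	 */
	for (uint64_t i = 0; i < 3; i++)
		pv_queue_set_pte(&ring, 0x1000 * i, 0);
	hypercall_flush_pte_ring(&ring);
	return 0;
}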
> Which is a good segue into my main question: was there any one thing that was
> _the_ motivating factor for taking on the cost+complexity of shadow paging? And
> as alluded to be Paolo, taking on the downsides of reduced isolation?
>
> It doesn't seem like avoiding L0 changes was the driving decision, since IIUC
> you have plans to make changes there as well.
>
> : To mitigate the performance problem, we designed several optimizations
> : for the shadow MMU (not included in the patchset) and also planning to
> : build a shadow EPT in L0 for L2 PVM guests.
>
Getting every cloud provider to adopt a technology is more challenging
than developing the technology itself. It is easy to compile a list of
L0 technologies that have been merged into upstream KVM for quite some
time, yet are still not used or supported by all major cloud providers.
One purpose of PVM is to enable the use of KVM inside VMs on various
clouds, so that businesses can easily run workloads such as secure
containers. Therefore, it cannot depend on whether cloud providers make
such changes to L0.
The reason we are also experimenting with modifications to L0 is that we
have many physical machines of our own. Developing L0 assistance for L2
paging could give us, and others who run their own physical machines, an
additional option.
> Performance I can kinda sorta understand, but my gut feeling is that the problems
> with nested virtualization are solvable by adding nested paravirtualization between
> L0<=>L1, with likely lower overall cost+complexity than paravirtualizing L1<=>L2.
>
> The bulk of the pain with nested hardware virtualization lies in having to emulate
> VMX/SVM, and shadow L1's TDP page tables. Hyper-V's eVMCS takes some of the sting
> off nVMX in particular, but eVMCS is still hobbled by its desire to be almost
> drop-in compatible with VMX.
>
> If we're willing to define a fully PV interface between L0 and L1 hypervisors, I
> suspect we provide performance far, far better than nVMX/nSVM. E.g. if L0 provides
> a hypercall to map an L2=>L1 GPA, then L0 doesn't need to shadow L1 TDP, and L1
> doesn't even need to maintain hardware-defined page tables, it can use whatever
> software-defined data structure best fits it needs.
>
> And if we limit support to 64-bit L2 kernels and drop support for unnecessary cruft,
> the L1<=>L2 entry/exit paths could be drastically simplified and streamlined. And
> it should be very doable to concoct an ABI between L0 and L2 that allows L0 to
> directly emulate "hot" instructions from L2, e.g. CPUID, common MSRs, etc I/O
> would likely be solvable too, e.g. maybe with a mediated device type solution that
> allows L0 to handle the data path for L2?
>
> The one thing that I don't see line of sight to supporting is taking L0 out of the
> TCB, i.e. running L2 VMs inside TDX/SNP guests. But for me at least, that alone
> isn't sufficient justification for adding a PV flavor of KVM.
I didn't mean to suggest that running PVM inside TDX is an important use
case; I only used it to emphasize PVM's universal accessibility in all
environments, including environments where nested virtualization is
otherwise notoriously impossible, such as TDX, as Paolo said in an LWN
comment:
https://lwn.net/Articles/865807/
: TDX cannot be used in a nested VM, and you cannot use nested
: virtualization inside a TDX virtual machine.
(And actually, support for PVM inside TDX/SNP is not complete yet.)
Thanks
Lai