Date: Wed, 6 Mar 2024 19:05:53 +0800
From: Like Xu <like.xu.linux@...il.com>
To: Lai Jiangshan <jiangshanlai@...il.com>
Cc: Lai Jiangshan <jiangshan.ljs@...group.com>,
 Sean Christopherson <seanjc@...gle.com>, Borislav Petkov <bp@...en8.de>,
 kvm@...r.kernel.org, Paolo Bonzini <pbonzini@...hat.com>, x86@...nel.org,
 Hou Wenlong <houwenlong.hwl@...group.com>,
 "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: [RFC PATCH 00/73] KVM: x86/PVM: Introduce a new hypervisor

Hi Jiangshan,

On 26/2/2024 10:35 pm, Lai Jiangshan wrote:
> Performance drawback
> ====================
> The most significant drawback of PVM is shadow paging. Shadow paging
> results in very bad performance when guest applications frequently
> modify page tables, including workloads with excessive process forking.
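
To make "frequently modify page tables" concrete, the kind of workload
I have in mind is a simple fork/exit loop like the sketch below (my own
toy example, not something from this series), where every iteration
rebuilds page tables and takes COW faults that shadow paging has to
intercept:

/*
 * Toy fork-heavy microbenchmark (assumption: this approximates the
 * "excessive process forking" case). Each fork() copies page tables
 * and the child's write forces a COW fault, both of which end up as
 * extra work for the hypervisor under shadow paging.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/wait.h>
#include <time.h>
#include <unistd.h>

#define ITERS 10000

int main(void)
{
	struct timespec t0, t1;
	size_t len = 16 << 20;		/* 16 MiB of anon memory to COW */
	char *buf = malloc(len);

	memset(buf, 1, len);
	clock_gettime(CLOCK_MONOTONIC, &t0);
	for (int i = 0; i < ITERS; i++) {
		pid_t pid = fork();
		if (pid == 0) {
			buf[0] = 2;	/* force at least one COW fault */
			_exit(0);
		}
		waitpid(pid, NULL, 0);
	}
	clock_gettime(CLOCK_MONOTONIC, &t1);

	double ns = (t1.tv_sec - t0.tv_sec) * 1e9 +
		    (t1.tv_nsec - t0.tv_nsec);
	printf("fork+exit: %.0f ns/iter\n", ns / ITERS);
	return 0;
}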

Some numbers are needed here to show how bad the performance of this
RFC virt-pvm version is without the SPT optimizations. Compared to an
L2 VM based on nested EPT-on-EPT, the following benchmarks show a
significant performance loss for a PVM-based L2 VM (per
pvm-get-started-with-kata.md); a rough sketch of the getpid-latency
measurement follows the list:

- byte/UnixBench-shell1: -67%
- pts/sysbench-1.1.0 [Test: RAM / Memory]: -55%
- Mmap Latency [lmbench]: -92%
- Context switching [lmbench]: -83%
- syscall_get_pid_latency: -77%
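
For the last item, I assume syscall_get_pid_latency is essentially a
raw getpid() round-trip loop; a minimal sketch of that kind of
measurement is below (my own reconstruction, not the actual benchmark):

/*
 * Rough sketch of a getpid syscall-latency loop (assumption: this is
 * roughly what syscall_get_pid_latency does). A trivial syscall is a
 * reasonable proxy for guest kernel entry/exit overhead.
 */
#include <stdio.h>
#include <sys/syscall.h>
#include <time.h>
#include <unistd.h>

#define ITERS 1000000

int main(void)
{
	struct timespec t0, t1;

	clock_gettime(CLOCK_MONOTONIC, &t0);
	for (long i = 0; i < ITERS; i++)
		syscall(SYS_getpid);	/* raw syscall, no libc caching */
	clock_gettime(CLOCK_MONOTONIC, &t1);

	double ns = (t1.tv_sec - t0.tv_sec) * 1e9 +
		    (t1.tv_nsec - t0.tv_nsec);
	printf("getpid: %.1f ns/call\n", ns / ITERS);
	return 0;
}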

I'm not sure whether these performance conclusions are reproducible on
your VMs, but they reflect a concern of potential users: there is not a
strong enough incentive to offload the burden of maintaining kvm-pvm.ko
onto the upstream community until a publicly available SPT optimization,
based on your or any state-of-the-art MMU PV-ops implementation, is
brought to the table.

There are other kernel technologies used by PVM that have use cases
outside of PVM (e.g. unikernels and kernel-level sandboxes), and it
seems to me that all of them could be absorbed upstream individually
and sequentially. However, getting the KVM community to take kvm-pvm.ko
seriously may depend more on how much room there is for performance
optimization based on your "Parallel Page fault for SPT and
Paravirtualized MMU Optimization" implementation, and on how much
optimization headroom developers can still squeeze out of the legacy
EPT-on-EPT solution.

> 
> However, many long-running cloud services, such as Java, modify
> pagetables less frequently and can perform very well with shadowpaging.
> In some cases, they can even outperform EPT since they can avoid EPT TLB
> entries. Furthermore, PVM can utilize host PCIDs for guest processes,
> providing a finer-grained approach compared to VPID/ASID.
> 
> To mitigate the performance problem, we designed several optimizations
> for the shadow MMU (not included in the patchset) and are also planning
> to build a shadow EPT in L0 for L2 PVM guests.
> 
> See the paper for more optimizations and the performance details.
> 
> Future plans
> ============
> Some optimizations are not covered in this series yet.
> 
> - Parallel Page fault for SPT and Paravirtualized MMU Optimization.
