Message-ID: <BLU436-SMTP14307CB34C2C944CD61C9ED805B0@phx.gbl>
Date: Wed, 16 Sep 2015 11:51:54 +0800
From: Wanpeng Li <wanpeng.li@...mail.com>
To: Paolo Bonzini <pbonzini@...hat.com>
CC: Jan Kiszka <jan.kiszka@...mens.com>, Bandan Das <bsd@...hat.com>,
Wincy Van <fanwenyi0529@...il.com>, kvm@...r.kernel.org,
linux-kernel@...r.kernel.org, Wanpeng Li <wanpeng.li@...mail.com>
Subject: [PATCH v3 0/2] KVM: nested VPID emulation
v2 -> v3:
* enhance allocate/free_vpid as suggested by Jan
* add more comments to patch 2/2
v1 -> v2:
* enhance allocate/free_vpid to handle the shadow vpid (sketched below)
* drop empty space
* allocate the shadow vpid during initialization
* for each nested vmentry, if vpid12 has changed, reuse the shadow vpid
  and issue an invvpid
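For reference, a minimal sketch of the reworked helpers (an illustration
of the approach, not the literal diff; vmx_vpid_bitmap, vmx_vpid_lock,
VMX_NR_VPIDS and enable_vpid follow existing vmx.c naming):

/*
 * Sketch: allocate_vpid() returns a vpid from the global bitmap instead
 * of writing vmx->vpid directly, so the same helpers can manage both
 * vpid01 and the shadow vpid02.  vpid 0 is reserved for "VPID disabled".
 */
static int allocate_vpid(void)
{
	int vpid;

	if (!enable_vpid)
		return 0;

	spin_lock(&vmx_vpid_lock);
	vpid = find_first_zero_bit(vmx_vpid_bitmap, VMX_NR_VPIDS);
	if (vpid < VMX_NR_VPIDS)
		__set_bit(vpid, vmx_vpid_bitmap);
	else
		vpid = 0;	/* out of vpids: fall back to the shared vpid 0 */
	spin_unlock(&vmx_vpid_lock);

	return vpid;
}

static void free_vpid(int vpid)
{
	if (!enable_vpid || vpid == 0)
		return;

	spin_lock(&vmx_vpid_lock);
	__clear_bit(vpid, vmx_vpid_bitmap);
	spin_unlock(&vmx_vpid_lock);
}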
VPID is used to tag an address space so that TLB entries can survive a
VMX transition without a flush. Currently L0 uses the same VPID to run
L1 and all its guests, so KVM must flush the TLB on every switch
between L1 and L2.

This series advertises VPID to the L1 hypervisor; the address spaces of
L1 and L2 are then tagged separately, and no TLB flush is needed when
switching between them, roughly as sketched below.
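Concretely, the vmentry-time handling could look roughly like the sketch
below (vpid02, last_vpid and a vpid_sync_context() that takes a raw vpid
are illustrative assumptions, not necessarily the exact names in 2/2):

	/*
	 * Run L2 under the shadow vpid02 and only invalidate the
	 * vpid02-tagged mappings when L1 moved to a different vpid12,
	 * instead of flushing on every L1<->L2 switch.
	 */
	if (nested_cpu_has_vpid(vmcs12) && vmx->nested.vpid02) {
		vmcs_write16(VIRTUAL_PROCESSOR_ID, vmx->nested.vpid02);
		if (vmcs12->virtual_processor_id != vmx->nested.last_vpid) {
			vmx->nested.last_vpid = vmcs12->virtual_processor_id;
			/* single-context INVVPID on the shadow vpid */
			vpid_sync_context(vmx->nested.vpid02);
		}
	} else {
		/* no nested VPID: keep the old flush-on-switch behavior */
		vmcs_write16(VIRTUAL_PROCESSOR_ID, vmx->vpid);
		vmx_flush_tlb(vcpu);
	}

In the common case where L1 keeps vpid12 stable, switching between L1
and L2 no longer flushes the TLB at all, which is where the
context-switch improvement below comes from.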
Performance:
lmbench run in L2 with a 3.5 kernel:
Context switching - times in microseconds - smaller is better
-------------------------------------------------------------------------
Host                 OS   2p/0K 2p/16K 2p/64K 8p/16K 8p/64K 16p/16K 16p/64K
                          ctxsw  ctxsw  ctxsw  ctxsw  ctxsw   ctxsw   ctxsw
--------- ------------- ------ ------ ------ ------ ------ ------- -------
kernel    Linux 3.5.0-1 1.2200 1.3700 1.4500 4.7800 2.3300 5.60000 2.88000  (nested VPID)
kernel    Linux 3.5.0-1 1.2600 1.4300 1.5600   12.7   12.9 3.49000 7.46000  (vanilla)
Wanpeng Li (2):
KVM: nVMX: enhance allocate/free_vpid to handle shadow vpid
KVM: nVMX: nested VPID emulation
arch/x86/kvm/vmx.c | 61 +++++++++++++++++++++++++++++++++++++-----------------
1 file changed, 42 insertions(+), 19 deletions(-)
--
1.9.1