Date:	Wed, 16 Sep 2015 10:36:04 +0800
From:	Wanpeng Li <wanpeng.li@...mail.com>
To:	Jan Kiszka <jan.kiszka@...mens.com>,
	Paolo Bonzini <pbonzini@...hat.com>
CC:	kvm@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] KVM: nVMX: nested VPID emulation

On 9/16/15 1:32 AM, Jan Kiszka wrote:
> On 2015-09-15 12:14, Wanpeng Li wrote:
>> On 9/14/15 10:54 PM, Jan Kiszka wrote:
>>> Last but not least: the guest can now easily exhaust the host's pool of
>>> vpid by simply spawning plenty of VCPUs for L2, no? Is this acceptable
>>> or should there be some limit?
>> In v2 I reuse the value of vpid02 while vpid12 changes, with a single
>> invvpid, so the scenario you pointed out can be avoided.
> I cannot yet follow why there is no chance for L1 to consume all vpids
> that the host manages in that single, global bitmap by simply spawning a
> lot of nested VCPUs for some L2. What forces L1 to call nested
> vmclear - apparently the only way, besides destroying nested VCPUs, to
> release such vpids again?

In v2, there is no direct mapping between vpid02 and vpid12. The vpid02 is 
per-vCPU, allocated by L0, and is reused; when the value of vpid12 changes, 
a single invvpid is issued during nested vmentry. The vpid12 is allocated 
by L1 for L2, so it does not consume entries in the global bitmap (which is 
used only for vpid01 and vpid02 allocation), even if L1 spawns a lot of 
nested vCPUs.

Regards,
Wanpeng Li

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/