Message-ID: <55F8FC13.8070608@siemens.com>
Date:	Wed, 16 Sep 2015 07:20:19 +0200
From:	Jan Kiszka <jan.kiszka@...mens.com>
To:	Wanpeng Li <wanpeng.li@...mail.com>,
	Paolo Bonzini <pbonzini@...hat.com>
Cc:	kvm@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] KVM: nVMX: nested VPID emulation

On 2015-09-16 04:36, Wanpeng Li wrote:
> On 9/16/15 1:32 AM, Jan Kiszka wrote:
>> On 2015-09-15 12:14, Wanpeng Li wrote:
>>> On 9/14/15 10:54 PM, Jan Kiszka wrote:
>>>> Last but not least: the guest can now easily exhaust the host's pool of
>>>> vpids simply by spawning plenty of VCPUs for L2, no? Is this acceptable
>>>> or should there be some limit?
>>> In v2 I reuse the value of vpid02 and issue one invvpid when vpid12
>>> changes, so the scenario you pointed out can be avoided.
>> I still cannot see how L1 is prevented from consuming all vpids that
>> the host manages in that single, global bitmap simply by spawning a
>> lot of nested VCPUs for some L2. What forces L1 to call nested
>> vmclear - apparently the only way, besides destroying nested VCPUs, to
>> release such vpids again?
> 
> In v2 there is no direct mapping between vpid02 and vpid12: vpid02 is
> per-vCPU on L0 and is reused, while a change of vpid12 triggers one
> invvpid during nested vmentry. vpid12 is allocated by L1 for L2, so it
> does not influence the global bitmap (used for vpid01 and vpid02
> allocation) even if L1 spawns a lot of nested vCPUs.
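
(A minimal sketch of how the v2 scheme described above might look in
prepare_vmcs02(); the field and helper names nested.vpid02,
nested.last_vpid12 and __vmx_flush_tlb() are assumptions for
illustration, not necessarily what the patch uses.)

/* Illustrative sketch only -- not the actual v2 patch. */
static void sketch_prepare_vmcs02_vpid(struct vcpu_vmx *vmx,
				       struct vmcs12 *vmcs12)
{
	if (nested_cpu_has_vpid(vmcs12) && vmx->nested.vpid02) {
		/* L2 always runs on the single per-vCPU vpid02 in hardware. */
		vmcs_write16(VIRTUAL_PROCESSOR_ID, vmx->nested.vpid02);
		if (vmcs12->virtual_processor_id != vmx->nested.last_vpid12) {
			/*
			 * L1 moved this vCPU to a different vpid12, so the
			 * translations cached under vpid02 are stale: flush
			 * them with one invvpid on vpid02.
			 */
			vmx->nested.last_vpid12 = vmcs12->virtual_processor_id;
			__vmx_flush_tlb(&vmx->vcpu, vmx->nested.vpid02);
		}
	} else {
		/* No nested VPID in use: fall back to L0's vpid and flush. */
		vmcs_write16(VIRTUAL_PROCESSOR_ID, vmx->vpid);
		vmx_flush_tlb(&vmx->vcpu);
	}
}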

Ah, I see: you limit allocation to one additional host-side vpid per
VCPU for nesting. That looks better. That also means all vpids for L2
will be folded onto that single vpid in hardware, right? So the major
benefit comes, in fact, from having separate vpids when switching
between L1 and L2.
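
(A complementary sketch, again with assumed names and placement, of the
"one additional host-side vpid per VCPU" part: vpid02 is taken from the
host's global bitmap once per vCPU and only returned when nesting is
torn down; allocate_vpid()/free_vpid() stand for L0's bitmap allocator,
here assumed to hand out a bare vpid number.)

/* Illustrative sketch only -- placement and names are assumptions. */
static void sketch_enter_vmx_operation(struct vcpu_vmx *vmx)
{
	/*
	 * One extra host-side vpid per vCPU: all of L1's vpid12 values for
	 * this vCPU are folded onto this single hardware vpid, so the global
	 * bitmap cannot be exhausted by L1 spawning many L2 VCPUs.
	 */
	vmx->nested.vpid02 = allocate_vpid();
}

static void sketch_free_nested(struct vcpu_vmx *vmx)
{
	/* Return vpid02 to the global bitmap when nesting is torn down. */
	free_vpid(vmx->nested.vpid02);
	vmx->nested.vpid02 = 0;
}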

Jan

-- 
Siemens AG, Corporate Technology, CT RTC ITP SES-DE
Corporate Competence Center Embedded Linux
