Message-ID: <8b344a16-b28a-4f75-9c1a-a4edf2aa4a11@intel.com>
Date: Thu, 23 May 2024 10:27:53 +1200
From: "Huang, Kai" <kai.huang@...el.com>
To: Sean Christopherson <seanjc@...gle.com>, Paolo Bonzini
	<pbonzini@...hat.com>
CC: <kvm@...r.kernel.org>, <linux-kernel@...r.kernel.org>, Chao Gao
	<chao.gao@...el.com>
Subject: Re: [PATCH v2 3/6] KVM: Add a module param to allow enabling
 virtualization when KVM is loaded



On 22/05/2024 2:28 pm, Sean Christopherson wrote:
> Add an off-by-default module param, enable_virt_at_load, to let userspace
> force virtualization to be enabled in hardware when KVM is initialized,
> i.e. just before /dev/kvm is exposed to userspace.  Enabling virtualization
> during KVM initialization allows userspace to avoid the additional latency
> when creating/destroying the first/last VM.  Now that KVM uses the cpuhp
> framework to do per-CPU enabling, the latency could be non-trivial as the
> cpuhp bringup/teardown is serialized across CPUs, e.g. the latency could
> be problematic for use cases that need to spin up VMs quickly.

How about we defer this until there's a real complaint that this isn't 
acceptable?  To me it doesn't sound like the "latency of creating the 
first VM" matters a lot in real CSP deployments.

The concern with adding a new module param is that once we add it, we 
need to maintain it for backward compatibility even if it is no longer 
needed in the future.  Especially since this param is in kvm.ko, it 
applies to all ARCHs.

E.g., I think _IF_ the core cpuhp code were enhanced to invoke those 
callbacks in parallel in cpuhp_setup_state(), then this latency could be 
mitigated to an unnoticeable level.

Or we just still do:

	cpus_read_lock();
	on_each_cpu(hardware_enable_nolock, ...);
	cpuhp_setup_state_nocalls_cpuslocked(...);
	cpus_read_unlock();

I think the main benefit of this series is to put all virtualization 
enabling related things into one single function.  Whether it uses 
cpuhp_setup_state() or on_each_cpu() shouldn't be the main point.

