Message-ID: <ZRStOxiGwvDwGlNq@google.com>
Date:   Wed, 27 Sep 2023 15:31:23 -0700
From:   Sean Christopherson <seanjc@...gle.com>
To:     Kyle Meyer <kyle.meyer@....com>
Cc:     pbonzini@...hat.com, tglx@...utronix.de, mingo@...hat.com,
        bp@...en8.de, dave.hansen@...el.com, x86@...nel.org, hpa@...or.com,
        kvm@...r.kernel.org, linux-kernel@...r.kernel.org,
        vkuznets@...hat.com, dmatlack@...gle.com, russ.anderson@....com,
        dimitri.sivanich@....com, steve.wahl@....com
Subject: Re: [PATCH v3] KVM: x86: Add CONFIG_KVM_MAX_NR_VCPUS

On Thu, Aug 24, 2023, Kyle Meyer wrote:
> Add a Kconfig entry to set the maximum number of vCPUs per KVM guest and
> set the default value to 4096 when MAXSMP is enabled.

I'd like to capture why the max is set to 4096, both the justification and why
we don't want to go further at this point.

If you've no objection, I'll massage the changelog to this when applying:

  Add a Kconfig entry to set the maximum number of vCPUs per KVM guest and
  set the default value to 4096 when MAXSMP is enabled, as there are use
  cases that want to create more than the currently allowed 1024 vCPUs and
  are more than happy to eat the memory overhead.

  The Hyper-V TLFS doesn't allow more than 64 sparse banks, i.e. allows a
  maximum of 4096 virtual CPUs. Cap KVM's maximum number of virtual CPUs
  to 4096 to avoid exceeding Hyper-V's limit as KVM support for Hyper-V is
  unconditional, and alternatives like dynamically disabling Hyper-V
  enlightenments that rely on sparse banks would require non-trivial code
  changes.
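
For anyone reading along: each sparse bank in the TLFS is a 64-bit mask,
i.e. covers 64 vCPUs, so 64 banks yield the 4096 ceiling. A minimal sketch
of the arithmetic, using the HV_* constant names from hyperv-tlfs.h (the
static_assert is purely illustrative, not part of the patch):

#include <linux/build_bug.h>	/* static_assert() */

/* A sparse vCPU set is at most 64 banks; each bank is a 64-bit mask. */
#define HV_MAX_SPARSE_VCPU_BANKS	(64)
#define HV_VCPUS_PER_SPARSE_BANK	(64)

/* Hence the hard ceiling enforced by the Kconfig range: 64 * 64 == 4096. */
static_assert(HV_MAX_SPARSE_VCPU_BANKS * HV_VCPUS_PER_SPARSE_BANK == 4096);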

> Suggested-by: Sean Christopherson <seanjc@...gle.com>
> Signed-off-by: Kyle Meyer <kyle.meyer@....com>
> ---
> v2 -> v3: Default KVM_MAX_VCPUS to 1024 when CONFIG_KVM_MAX_NR_VCPUS is not
> defined. This prevents build failures in arch/x86/events/intel/core.c and
> drivers/vfio/vfio_main.c when KVM is disabled.
> 
>  arch/x86/include/asm/kvm_host.h |  4 ++++
>  arch/x86/kvm/Kconfig            | 11 +++++++++++
>  2 files changed, 15 insertions(+)
> 
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index 3bc146dfd38d..cd27e0a00765 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -39,7 +39,11 @@
>  
>  #define __KVM_HAVE_ARCH_VCPU_DEBUGFS
>  

And another thing I'll add if you don't object is a comment to explain that this
is purely to play nice with CONFIG_KVM=n.  And FWIW, I hope to make this go away
entirely: https://lore.kernel.org/all/20230916003118.2540661-27-seanjc@google.com

/*
 * CONFIG_KVM_MAX_NR_VCPUS is defined iff CONFIG_KVM!=n; provide a dummy max if
 * KVM is disabled (arbitrarily use the default from CONFIG_KVM_MAX_NR_VCPUS).
 */
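
To make the CONFIG_KVM=n case concrete: files built unconditionally, e.g.
the perf and VFIO code named in the v3 changelog, include kvm_host.h and
size objects off KVM_MAX_VCPUS. A hypothetical sketch of such a user (the
bitmap name is made up, not the actual code in either file):

#include <linux/bitops.h>	/* BITS_TO_LONGS() */
#include <asm/kvm_host.h>	/* KVM_MAX_VCPUS, defined even for CONFIG_KVM=n */

/* Without the #else branch below, this would fail to compile whenever
 * CONFIG_KVM=n leaves CONFIG_KVM_MAX_NR_VCPUS, and thus KVM_MAX_VCPUS,
 * undefined. */
static unsigned long example_vcpu_mask[BITS_TO_LONGS(KVM_MAX_VCPUS)];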

> +#ifdef CONFIG_KVM_MAX_NR_VCPUS
> +#define KVM_MAX_VCPUS CONFIG_KVM_MAX_NR_VCPUS
> +#else
>  #define KVM_MAX_VCPUS 1024
> +#endif
>  
>  /*
>   * In x86, the VCPU ID corresponds to the APIC ID, and APIC IDs
> diff --git a/arch/x86/kvm/Kconfig b/arch/x86/kvm/Kconfig
> index 89ca7f4c1464..e730e8255e22 100644
> --- a/arch/x86/kvm/Kconfig
> +++ b/arch/x86/kvm/Kconfig
> @@ -141,4 +141,15 @@ config KVM_XEN
>  config KVM_EXTERNAL_WRITE_TRACKING
>  	bool
>  
> +config KVM_MAX_NR_VCPUS
> +	int "Maximum number of vCPUs per KVM guest"
> +	depends on KVM
> +	range 1024 4096
> +	default 4096 if MAXSMP
> +	default 1024
> +	help
> +	  Set the maximum number of vCPUs per KVM guest. Larger values will increase
> +	  the memory footprint of each KVM guest, regardless of how many vCPUs are
> +	  configured.

Last nit, I think the last line should read like so:

       the memory footprint of each KVM guest, regardless of how many vCPUs are
       created for a given VM.
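
On the memory-footprint note in the help text: anything statically sized by
KVM_MAX_VCPUS grows with the Kconfig value whether or not the vCPUs exist.
Rough arithmetic, with a per-vCPU bitmap as the example (the identifier is
illustrative):

#include <linux/bitmap.h>

/* One bit per possible vCPU: 128 bytes at the 1024 default, 512 bytes at
 * the 4096 MAXSMP default, paid regardless of how many vCPUs are created. */
DECLARE_BITMAP(example_vcpu_bitmap, KVM_MAX_VCPUS);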

No need for a v4 unless you object to any of the above; I'm happy to fixup when
applying.
