Message-ID: <20240323011335.GC2357401@ls.amr.corp.intel.com>
Date: Fri, 22 Mar 2024 18:13:35 -0700
From: Isaku Yamahata <isaku.yamahata@...el.com>
To: "Huang, Kai" <kai.huang@...el.com>
Cc: isaku.yamahata@...el.com, kvm@...r.kernel.org,
linux-kernel@...r.kernel.org, isaku.yamahata@...il.com,
Paolo Bonzini <pbonzini@...hat.com>, erdemaktas@...gle.com,
Sean Christopherson <seanjc@...gle.com>,
Sagi Shahar <sagis@...gle.com>, chen.bo@...el.com,
hang.yuan@...el.com, tina.zhang@...el.com,
isaku.yamahata@...ux.intel.com
Subject: Re: [PATCH v19 037/130] KVM: TDX: Make KVM_CAP_MAX_VCPUS backend specific
On Fri, Mar 22, 2024 at 12:36:40PM +1300,
"Huang, Kai" <kai.huang@...el.com> wrote:
> So how about:
Thanks for the suggestion. I'll update the commit message with it, plus the
minor fixes noted inline below.
> "
> TDX has its own mechanism to control the maximum number of VCPUs that the
> TDX guest can use. When creating a TDX guest, the maximum number of vcpus
> needs to be passed to the TDX module as part of the measurement of the
> guest.
>
> Because the value is part of the measurement, thus part of attestation, it
s/it better/it's better/
> better to allow the userspace to be able to configure it. E.g. the users
s/allow the userspace to be able to configure it. E.g./allow the userspace to configure it, e.g./
> may want to precisely control the maximum number of vcpus their precious VMs
> can use.
>
> The actual control itself must be done via the TDH.MNG.INIT SEAMCALL itself,
> where the number of maximum cpus is an input to the TDX module, but KVM
> needs to support the "per-VM number of maximum vcpus" and reflect that in
s/per-VM number of maximum vcpus/per-VM maximum number of vcpus/
> the KVM_CAP_MAX_VCPUS.
>
> Currently, the KVM x86 always reports KVM_MAX_VCPUS for all VMs but doesn't
> allow to enable KVM_CAP_MAX_VCPUS to configure the number of maximum vcpus
s/the number of maximum vcpus/the maximum number of vcpus/
> on VM-basis.
>
> Add "per-VM maximum vcpus" to KVM x86/TDX to accommodate TDX's needs.
>
> The userspace-configured value then can be verified when KVM is actually
s/verified/used/
> creating the TDX guest.
> "
--
Isaku Yamahata <isaku.yamahata@...el.com>