Date: Tue, 18 Jun 2024 07:49:43 -0700
From: Sean Christopherson <seanjc@...gle.com>
To: Kai Huang <kai.huang@...el.com>
Cc: Tina Zhang <tina.zhang@...el.com>, Hang Yuan <hang.yuan@...el.com>, 
	"pbonzini@...hat.com" <pbonzini@...hat.com>, Bo2 Chen <chen.bo@...el.com>, 
	"sagis@...gle.com" <sagis@...gle.com>, 
	"isaku.yamahata@...ux.intel.com" <isaku.yamahata@...ux.intel.com>, Erdem Aktas <erdemaktas@...gle.com>, 
	"isaku.yamahata@...il.com" <isaku.yamahata@...il.com>, "kvm@...r.kernel.org" <kvm@...r.kernel.org>, 
	Isaku Yamahata <isaku.yamahata@...el.com>, 
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH v19 037/130] KVM: TDX: Make KVM_CAP_MAX_VCPUS backend specific

On Tue, Jun 18, 2024, Kai Huang wrote:
> On 15/06/2024 12:04 pm, Sean Christopherson wrote:
> > On Fri, Jun 14, 2024, Kai Huang wrote:
> > > > - The "max_vcpus_per_td" can be different depending on module versions. In
> > > > practice it reflects the maximum number of logical CPUs that any platform
> > > > the module supports can possibly have.
> > 
> > It's a reasonable restriction, e.g. KVM_CAP_NR_VCPUS is already capped at number
> > of online CPUs, although userspace is obviously allowed to create oversubscribed
> > VMs.
> > 
> > I think the sane thing to do is document that TDX VMs are restricted to the number
> > of logical CPUs in the system, have KVM_CAP_MAX_VCPUS enumerate exactly that, and
> > then sanity check that max_vcpus_per_td is greater than or equal to what KVM
> > reports for KVM_CAP_MAX_VCPUS.
> >
> > Stating that the maximum number of vCPUs depends on the whims of the TDX module
> > provide a predictable ABI for KVM, i.e. I don't want to simply forward TDX's
> > max_vcpus_per_td to userspace.
> 
> This sounds good to me.  It should also be OK for client platforms, if TDX
> is ever supported on client.
> 
> IIUC we can consult @nr_cpu_ids or num_possible_cpus() to get the "number
> of logical CPUs in the system", and refuse to use the TDX module if
> 'max_vcpus_per_td' turns out to be smaller.

I assume TDX is incompatible with actual physical CPU hotplug?  If so, we can and
should use num_present_cpus().  If loading the TDX module completely disables
onlining CPUs, then we can use num_online_cpus().
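
E.g. something like this (completely untested sketch; "tdx_sysinfo" and the
"max_vcpus_per_td" field are placeholders for whatever the series actually
provides):

/*
 * Sketch only, names approximate: refuse to use the TDX module if it can't
 * support at least as many vCPUs as KVM will enumerate via KVM_CAP_MAX_VCPUS
 * for TDX VMs.
 */
static int __init tdx_check_max_vcpus(void)
{
	/*
	 * Assumes TDX is incompatible with physical CPU hotplug, i.e. that
	 * the number of present CPUs is a stable upper bound on vCPUs.
	 */
	unsigned int nr_cpus = num_present_cpus();

	if (tdx_sysinfo.max_vcpus_per_td < nr_cpus) {
		pr_err("TDX module supports only %u vCPUs per TD, need %u\n",
		       tdx_sysinfo.max_vcpus_per_td, nr_cpus);
		return -EINVAL;
	}

	return 0;
}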

> I think the relevant question is whether we should still report "number of
> logical CPUs in the system" via KVM_CAP_MAX_VCPUS.  Because if we do,
> userspace will still need to check the KVM_CAP_MAX_VCPUS VM extension on a
> per-VM basis.

Yes.

> And if it does, then from userspace's perspective it actually doesn't
> matter whether the per-VM KVM_CAP_MAX_VCPUS is limited underneath by TDX
> or by the number of system CPUs (also see below).

It matters because I don't want KVM's ABI to be tied to the whims of the TDX module.
Today, there are no limitations on the max number of vCPUs.  Tomorrow, it's limited
by the number of pCPUs.  Three days from now, I don't want to find out that the
TDX module is limiting the number of vCPUs based on some other new criteria.

> Userspace cannot tell the difference anyway.  It just needs to switch to
> querying KVM_CAP_MAX_VCPUS on a per-VM basis.
> 
> Or, we could limit this to TDX guests ONLY:
> 
> KVM_CAP_MAX_VCPUS would stay global, but for TDX specifically userspace
> would use some other way to query the number of LPs the system supports
> (I assume there is an existing ABI for this?).
> 
> But that doesn't look like a nice option?

What's wrong with querying KVM_CAP_MAX_VCPUS on the VM file descriptor?
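
E.g. (untested sketch; error handling omitted, and a TDX VM would also need
to be created with the TDX-specific VM type, which is elided here):

#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

int main(void)
{
	int kvm_fd = open("/dev/kvm", O_RDWR);
	int vm_fd = ioctl(kvm_fd, KVM_CREATE_VM, 0);

	/* Global limit, from the system-wide /dev/kvm fd. */
	int global_max = ioctl(kvm_fd, KVM_CHECK_EXTENSION, KVM_CAP_MAX_VCPUS);

	/*
	 * Per-VM limit, from the VM fd.  For a TDX VM this is where a
	 * TDX-specific cap would show up; userspace doesn't need to know
	 * or care why the limit is what it is.
	 */
	int vm_max = ioctl(vm_fd, KVM_CHECK_EXTENSION, KVM_CAP_MAX_VCPUS);

	printf("max vCPUs: global %d, this VM %d\n", global_max, vm_max);
	return 0;
}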
