Message-ID: <20230227212605.GF4175971@ls.amr.corp.intel.com>
Date: Mon, 27 Feb 2023 13:26:05 -0800
From: Isaku Yamahata <isaku.yamahata@...il.com>
To: "Huang, Kai" <kai.huang@...el.com>
Cc: "kvm@...r.kernel.org" <kvm@...r.kernel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"Yamahata, Isaku" <isaku.yamahata@...el.com>,
"pbonzini@...hat.com" <pbonzini@...hat.com>,
"Shahar, Sagi" <sagis@...gle.com>,
"Aktas, Erdem" <erdemaktas@...gle.com>,
"isaku.yamahata@...il.com" <isaku.yamahata@...il.com>,
"dmatlack@...gle.com" <dmatlack@...gle.com>,
"Christopherson,, Sean" <seanjc@...gle.com>
Subject: Re: [PATCH v11 017/113] KVM: Support KVM_CAP_MAX_VCPUS for
KVM_ENABLE_CAP
On Mon, Jan 16, 2023 at 04:44:21AM +0000,
"Huang, Kai" <kai.huang@...el.com> wrote:
> On Thu, 2023-01-12 at 08:31 -0800, isaku.yamahata@...el.com wrote:
> > From: Isaku Yamahata <isaku.yamahata@...el.com>
> >
> > TDX attestation includes the maximum number of vcpus that the guest can
> > accommodate.
> >
>
> I don't understand why "attestation" is the reason here. Let's say TDX is used
> w/o attestation; I don't think this patch could be discarded then, could it?
>
> IMHO the true reason is that TDX has its own control of the maximum number of
> vcpus, i.e. it asks you to specify the value when creating the TD. Therefore, the
> constant KVM_MAX_VCPUS doesn't work for a TDX guest anymore.
Without TDX attestation, this patch could be discarded: the TD would simply be
created with max_vcpus=KVM_MAX_VCPUS by default.
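
(Aside, not part of the patch: a minimal userspace sketch of how a VMM could
lower the limit with this capability before creating any vcpus. This assumes a
kernel that carries this series; error handling is trimmed, and the "4" below is
just an example value.)

#include <fcntl.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

static int set_max_vcpus(int vm_fd, unsigned int n)
{
	struct kvm_enable_cap cap = {
		.cap = KVM_CAP_MAX_VCPUS,
		.args = { n },	/* must be non-zero and within the reported maximum */
	};

	/* Must be issued before any KVM_CREATE_VCPU, otherwise it fails with -EBUSY. */
	return ioctl(vm_fd, KVM_ENABLE_CAP, &cap);
}

int main(void)
{
	int kvm_fd = open("/dev/kvm", O_RDWR);
	int vm_fd = ioctl(kvm_fd, KVM_CREATE_VM, 0);

	return set_max_vcpus(vm_fd, 4) ? 1 : 0;
}
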
>
>
> > For that, the maximum number of vcpus needs to be specified
> > instead of using the constant KVM_MAX_VCPUS. Make KVM_ENABLE_CAP support
> > KVM_CAP_MAX_VCPUS.
> >
> > Suggested-by: Sagi Shahar <sagis@...gle.com>
> > Signed-off-by: Isaku Yamahata <isaku.yamahata@...el.com>
> > ---
> > virt/kvm/kvm_main.c | 20 ++++++++++++++++++++
> > 1 file changed, 20 insertions(+)
> >
> > diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> > index a235b628b32f..1cfa7da92ad0 100644
> > --- a/virt/kvm/kvm_main.c
> > +++ b/virt/kvm/kvm_main.c
> > @@ -4945,7 +4945,27 @@ static int kvm_vm_ioctl_enable_cap_generic(struct kvm *kvm,
> >  		}
> >
> >  		mutex_unlock(&kvm->slots_lock);
> > +		return r;
> > +	}
> > +	case KVM_CAP_MAX_VCPUS: {
> > +		int r;
> >
> > +		if (cap->flags || cap->args[0] == 0)
> > +			return -EINVAL;
> > +		if (cap->args[0] > kvm_vm_ioctl_check_extension(kvm, KVM_CAP_MAX_VCPUS))
> > +			return -E2BIG;
> > +
> > +		mutex_lock(&kvm->lock);
> > +		/* Only decreasing is allowed. */
>
> Why?
I'll make it x86-specific and drop this check.
> > +		if (cap->args[0] > kvm->max_vcpus)
> > +			r = -E2BIG;
> > +		else if (kvm->created_vcpus)
> > +			r = -EBUSY;
> > +		else {
> > +			kvm->max_vcpus = cap->args[0];
> > +			r = 0;
> > +		}
> > +		mutex_unlock(&kvm->lock);
> >  		return r;
> >  	}
> >  	default:
>
> Also, IIUC this change is made to the generic kvm_main.c, which means other
> archs are affected too. Is this OK for the other archs? Why can't this change be
> TDX-specific (or at least x86- or VMX-specific)?
OK, I made it x86-specific.
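
For the record, a rough sketch of how an x86-only variant could look, e.g. as a
new case in kvm_vm_ioctl_enable_cap() in arch/x86/kvm/x86.c (illustrative only,
not the actual follow-up patch; as noted above, the "only decreasing" check is
dropped):

	case KVM_CAP_MAX_VCPUS:
		r = -EINVAL;
		if (cap->flags || !cap->args[0] || cap->args[0] > KVM_MAX_VCPUS)
			break;

		mutex_lock(&kvm->lock);
		if (kvm->created_vcpus) {
			/* The limit can only be changed before any vcpu is created. */
			r = -EBUSY;
		} else {
			kvm->max_vcpus = cap->args[0];
			r = 0;
		}
		mutex_unlock(&kvm->lock);
		break;
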
--
Isaku Yamahata <isaku.yamahata@...il.com>