Message-ID: <DM6PR12MB35008628D97A59AA302E772FCA8E9@DM6PR12MB3500.namprd12.prod.outlook.com>
Date: Wed, 20 Jul 2022 19:04:56 +0000
From: Kechen Lu <kechenl@...dia.com>
To: Sean Christopherson <seanjc@...gle.com>
CC: "kvm@...r.kernel.org" <kvm@...r.kernel.org>,
"pbonzini@...hat.com" <pbonzini@...hat.com>,
"chao.gao@...el.com" <chao.gao@...el.com>,
"vkuznets@...hat.com" <vkuznets@...hat.com>,
Somdutta Roy <somduttar@...dia.com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: RE: [RFC PATCH v4 5/7] KVM: x86: add vCPU scoped toggling for
disabled exits
> -----Original Message-----
> From: Sean Christopherson <seanjc@...gle.com>
> Sent: Wednesday, July 20, 2022 11:42 AM
> To: Kechen Lu <kechenl@...dia.com>
> Cc: kvm@...r.kernel.org; pbonzini@...hat.com; chao.gao@...el.com;
> vkuznets@...hat.com; Somdutta Roy <somduttar@...dia.com>; linux-
> kernel@...r.kernel.org
> Subject: Re: [RFC PATCH v4 5/7] KVM: x86: add vCPU scoped toggling for
> disabled exits
>
> On Tue, Jun 21, 2022, Kechen Lu wrote:
> > @@ -5980,6 +5987,8 @@ int kvm_vm_ioctl_irq_line(struct kvm *kvm, struct kvm_irq_level *irq_event,
> >  int kvm_vm_ioctl_enable_cap(struct kvm *kvm,
> >                              struct kvm_enable_cap *cap)
> >  {
> > +        struct kvm_vcpu *vcpu;
> > +        unsigned long i;
> >          int r;
> >
> >          if (cap->flags)
> > @@ -6036,14 +6045,17 @@ int kvm_vm_ioctl_enable_cap(struct kvm *kvm,
> >                          break;
> >
> >                  mutex_lock(&kvm->lock);
> > -                if (kvm->created_vcpus)
> > -                        goto disable_exits_unlock;
> > +                if (kvm->created_vcpus) {
>
> I retract my comment about using a request, I got ahead of myself.
>
> Don't update vCPUs, the whole point of adding the !kvm->created_vcpus
> check was to avoid having to update vCPUs when the per-VM behavior
> changed.
>
> In other words, keep the restriction and drop the request.
>
I see. If we keep the restriction here and don't update vCPUs when kvm->created_vcpus is true, the per-VM and per-vCPU semantics end up being different, right? Just to make sure I understand correctly:
for the per-VM cap, enabling the disabled exits is only allowed before any vCPU is created, whereas the per-vCPU cap lets us toggle the disabled exits at runtime.
If that is the intent, it makes sense to me as well.
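For what it's worth, my understanding in userspace terms would be roughly the sketch below. It assumes the per-vCPU KVM_CAP_X86_DISABLE_EXITS enabling added by this series (the exact args[0] encoding follows the series) and omits error handling:

#include <fcntl.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

int main(void)
{
        int kvm = open("/dev/kvm", O_RDWR);
        int vm = ioctl(kvm, KVM_CREATE_VM, 0);
        int vcpu;
        struct kvm_enable_cap cap;

        memset(&cap, 0, sizeof(cap));
        cap.cap = KVM_CAP_X86_DISABLE_EXITS;
        cap.args[0] = KVM_X86_DISABLE_EXITS_HLT;

        /* Per-VM: only honored before any vCPU has been created. */
        ioctl(vm, KVM_ENABLE_CAP, &cap);

        vcpu = ioctl(vm, KVM_CREATE_VCPU, 0);

        /* Per-vCPU (this series): can be toggled after vCPU creation. */
        ioctl(vcpu, KVM_ENABLE_CAP, &cap);

        return 0;
}

I.e. with the restriction kept, the VM-scoped KVM_ENABLE_CAP only succeeds while kvm->created_vcpus is zero, and the vCPU-scoped one is picked up via KVM_REQ_DISABLE_EXITS on the next entry, as I read the series.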
BR,
Kechen
> > +                        kvm_for_each_vcpu(i, vcpu, kvm) {
> > +                                kvm_ioctl_disable_exits(vcpu->arch, cap->args[0]);
> > +                                kvm_make_request(KVM_REQ_DISABLE_EXITS, vcpu);
> > +                        }
> > +                }
> > +                mutex_unlock(&kvm->lock);
> >
> >                  kvm_ioctl_disable_exits(kvm->arch, cap->args[0]);
> >
> >                  r = 0;
> > -disable_exits_unlock:
> > -                mutex_unlock(&kvm->lock);
> >                  break;
> >          case KVM_CAP_MSR_PLATFORM_INFO:
> >                  kvm->arch.guest_can_read_msr_platform_info = cap->args[0];
> > @@ -10175,6 +10187,9 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
> >
> >                  if (kvm_check_request(KVM_REQ_UPDATE_CPU_DIRTY_LOGGING, vcpu))
> >                          static_call(kvm_x86_update_cpu_dirty_logging)(vcpu);
> > +
> > +                if (kvm_check_request(KVM_REQ_DISABLE_EXITS, vcpu))
> > +                        static_call(kvm_x86_update_disabled_exits)(vcpu);
> >          }
> >
> >          if (kvm_check_request(KVM_REQ_EVENT, vcpu) || req_int_win ||
> > --
> > 2.32.0
> >