Message-ID: <xaihcdtaqimcu5knl2cqvj36dfwqq6xizxr7eg6cwyqsvwj2zr@tgjhldlrarp2>
Date: Wed, 5 Feb 2025 16:30:29 +0530
From: Naveen N Rao <naveen@...nel.org>
To: Sean Christopherson <seanjc@...gle.com>
Cc: Maxim Levitsky <mlevitsk@...hat.com>, kvm@...r.kernel.org,
linux-kernel@...r.kernel.org, Paolo Bonzini <pbonzini@...hat.com>,
Suravee Suthikulpanit <suravee.suthikulpanit@....com>, Vasant Hegde <vasant.hegde@....com>,
Vitaly Kuznetsov <vkuznets@...hat.com>
Subject: Re: [PATCH 1/3] KVM: x86: hyper-v: Convert synic_auto_eoi_used to an
atomic
On Tue, Feb 04, 2025 at 11:33:01AM -0800, Sean Christopherson wrote:
> On Tue, Feb 04, 2025, Naveen N Rao wrote:
> > Hi Maxim,
> >
> > On Mon, Feb 03, 2025 at 08:30:13PM -0500, Maxim Levitsky wrote:
> > > On Mon, 2025-02-03 at 22:33 +0530, Naveen N Rao (AMD) wrote:
> > > > diff --git a/arch/x86/kvm/hyperv.c b/arch/x86/kvm/hyperv.c
> > > > index 6a6dd5a84f22..7a4554ea1d16 100644
> > > > --- a/arch/x86/kvm/hyperv.c
> > > > +++ b/arch/x86/kvm/hyperv.c
> > > > @@ -131,25 +131,18 @@ static void synic_update_vector(struct kvm_vcpu_hv_synic *synic,
> > > > if (auto_eoi_old == auto_eoi_new)
> > > > return;
> > > >
> > > > - if (!enable_apicv)
> > > > - return;
> > > > -
> > > > - down_write(&vcpu->kvm->arch.apicv_update_lock);
> > > > -
> > > > if (auto_eoi_new)
> > > > - hv->synic_auto_eoi_used++;
> > > > + atomic_inc(&hv->synic_auto_eoi_used);
> > > > else
> > > > - hv->synic_auto_eoi_used--;
> > > > + atomic_dec(&hv->synic_auto_eoi_used);
> > > >
> > > > /*
> > > > * Inhibit APICv if any vCPU is using SynIC's AutoEOI, which relies on
> > > > * the hypervisor to manually inject IRQs.
> > > > */
> > > > - __kvm_set_or_clear_apicv_inhibit(vcpu->kvm,
> > > > - APICV_INHIBIT_REASON_HYPERV,
> > > > - !!hv->synic_auto_eoi_used);
> > > > -
> > > > - up_write(&vcpu->kvm->arch.apicv_update_lock);
> > > > + kvm_set_or_clear_apicv_inhibit(vcpu->kvm,
> > > > + APICV_INHIBIT_REASON_HYPERV,
> > > > + !!atomic_read(&hv->synic_auto_eoi_used));
> > >
> > > Hi,
> > >
> > > This introduces a race, because there is a window between the moment
> > > we read hv->synic_auto_eoi_used and the moment we set/clear the
> > > inhibit.
> > >
> > > After we read hv->synic_auto_eoi_used, but before we call
> > > kvm_set_or_clear_apicv_inhibit, another core might also run
> > > synic_update_vector, change hv->synic_auto_eoi_used, and finish
> > > setting the inhibit in kvm_set_or_clear_apicv_inhibit; only then
> > > will we call kvm_set_or_clear_apicv_inhibit with the stale value of
> > > hv->synic_auto_eoi_used and clear it.
> >
> > Ah, indeed. Thanks for the explanation.
> >
> > I wonder if we can switch to using kvm_hv->hv_lock in place of
> > apicv_update_lock. That lock is already used to guard updates to
> > partition-wide MSRs in kvm_hv_set_msr_common(). So, that might be ok
> > too?
>
> Why? All that would do is add complexity (taking two locks, or ensuring
> there is no race when juggling locks), because if the guest is actually
> toggling AutoEOI at a meaningful rate on multiple vCPUs, then there is
> going to be lock contention regardless of which lock is taken.
Yes, indeed.

The rationale for switching to a different lock was the original goal of
this patch: restricting use of apicv_update_lock to only toggling the
APICv state. But that is only relevant if we still want to attempt that.

I do see why hv_lock won't work in this scenario though, so we either
need to retain use of apicv_update_lock, or introduce a new mutex to
protect updates to synic_auto_eoi_used.
Thanks,
Naveen