Message-ID: <YKWI1GPdNc4shaCt@google.com>
Date: Wed, 19 May 2021 21:53:24 +0000
From: Sean Christopherson <seanjc@...gle.com>
To: Jon Kohler <jon@...anix.com>
Cc: Paolo Bonzini <pbonzini@...hat.com>,
Bijan Mottahedeh <bijan.mottahedeh@...anix.com>,
Raphael Norwitz <raphael.norwitz@...anix.com>,
Junaid Shahid <junaids@...gle.com>,
Vitaly Kuznetsov <vkuznets@...hat.com>,
Wanpeng Li <wanpengli@...cent.com>,
Jim Mattson <jmattson@...gle.com>,
Joerg Roedel <joro@...tes.org>,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
"x86@...nel.org" <x86@...nel.org>,
"H. Peter Anvin" <hpa@...or.com>,
"kvm@...r.kernel.org" <kvm@...r.kernel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] kvm: x86: move srcu lock out of kvm_vcpu_check_block
On Wed, May 05, 2021, Jon Kohler wrote:
>
> > On May 1, 2021, at 9:05 AM, Paolo Bonzini <pbonzini@...hat.com> wrote:
> >
> > On 30/04/21 22:45, Sean Christopherson wrote:
> >> On Wed, Apr 28, 2021, Jon Kohler wrote:
> >>> To improve performance, this moves kvm->srcu lock logic from
> >>> kvm_vcpu_check_block to kvm_vcpu_running and wraps directly around
> >>> check_events. Also adds a hint for callers to tell
> >>> kvm_vcpu_running whether or not to acquire srcu, which is useful in
> >>> situations where the lock may already be held. With this in place, we
> >>> see roughly 5% improvement in an internal benchmark [3] and no more
> >>> impact from this lock on non-nested workloads.
> >> ...
> >>> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> >>> index efc7a82ab140..354f690cc982 100644
> >>> --- a/arch/x86/kvm/x86.c
> >>> +++ b/arch/x86/kvm/x86.c
> >>> @@ -9273,10 +9273,24 @@ static inline int vcpu_block(struct kvm *kvm, struct kvm_vcpu *vcpu)
> >>> return 1;
> >>> }
> >>>
> >>> -static inline bool kvm_vcpu_running(struct kvm_vcpu *vcpu)
> >>> +static inline bool kvm_vcpu_running(struct kvm_vcpu *vcpu, bool acquire_srcu)
> >>> {
> >>> - if (is_guest_mode(vcpu))
> >>> - kvm_x86_ops.nested_ops->check_events(vcpu);
> >>> + if (is_guest_mode(vcpu)) {
> >>> + if (acquire_srcu) {
> >>> + /*
> >>> + * We need to lock because check_events could call
> >>> + * nested_vmx_vmexit() which might need to resolve a
> >>> + * valid memslot. We will have this lock only when
> >>> + * called from vcpu_run but not when called from
> >>> + * kvm_vcpu_check_block > kvm_arch_vcpu_runnable.
> >>> + */
> >>> + int idx = srcu_read_lock(&vcpu->kvm->srcu);
> >>> + kvm_x86_ops.nested_ops->check_events(vcpu);
> >>> + srcu_read_unlock(&vcpu->kvm->srcu, idx);
> >>> + } else {
> >>> + kvm_x86_ops.nested_ops->check_events(vcpu);
> >>> + }
> >>> + }
> >> Obviously not your fault, but I absolutely detest calling check_events() from
> >> kvm_vcpu_running. I would much prefer to make baby steps toward cleaning up the
> >> existing mess instead of piling more weirdness on top.
> >>
> >> Ideally, APICv support would be fixed to not require a deep probe into nested
> >> events just to see if a vCPU can run. But, that's probably more than we want to
> >> bite off at this time.
> >>
> >> What if we add another nested_ops API to check if the vCPU has an event, but not
> >> actually process the event? I think that would allow eliminating the SRCU lock,
> >> and would get rid of the most egregious behavior of triggering a nested VM-Exit
> >> in a seemingly innocuous helper.
> >>
> >> If this works, we could even explore moving the call to nested_ops->has_events()
> >> out of kvm_vcpu_running() and into kvm_vcpu_has_events(); I can't tell if the
> >> side effects in vcpu_block() would get messed up with that change :-/
> >> Incomplete patch...
> >
> > I think it doesn't even have to be *nested* events. Most events are the
> > same inside or outside guest mode, as they already special case guest mode
> > inside the kvm_x86_ops callbacks (e.g. kvm_arch_interrupt_allowed is
> > already called by kvm_vcpu_has_events).
> >
> > I think we only need to extend kvm_x86_ops.nested_ops->hv_timer_pending to
> > cover MTF, plus double check that INIT and SIPI are handled correctly, and
> > then the call to check_nested_events can go away.
>
> Thanks, Paolo, Sean. I appreciate the prompt response; sorry for the slow
> reply, I was out with a hand injury for a few days but am caught up now.
>
> Just to confirm - in the spirit of baby steps, I’m happy to take up the idea
> Sean mentioned; that makes sense to me. Perhaps we can do that small patch and
> leave a TODO to do a tuneup for hv_timer_pending and the other double checks
> Paolo mentioned.
Paolo was pointing out that kvm_vcpu_has_events() already checks hv_timer_pending,
and that we could add the few missing nested event cases to kvm_vcpu_has_events()
instead of wholesale checking everything that's in check_nested_events().
I believe that would work, as I suspect the underlying bug that was alluded to
by commit 0ad3bed6c5ec ("kvm: nVMX: move nested events check to kvm_vcpu_running")
has since been fixed. But, I'm not sure it makes much difference in practice
since we'll likely end up with nested_ops->has_events() either way.
Staring at this a bit more, I'm pretty sure hv_timer_pending() can be made obsolete and
dropped. Unless Paolo objects, I still like my original proposal.
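Something like the below is what I have in mind.  Completely untested, and the
VMX side is just a sketch; the exact set of pending-event checks (MTF,
preemption timer, INIT/SIPI, etc.) would need a hard look:

/*
 * New query-only nested op: report whether a nested event is pending
 * without processing it, i.e. without triggering a nested VM-Exit, so
 * that no memslot lookups (and thus no SRCU lock) are needed.
 */
struct kvm_x86_nested_ops {
        /* existing ops (check_events, get_state, etc.) unchanged */
        bool (*has_events)(struct kvm_vcpu *vcpu);
};

static bool vmx_has_nested_events(struct kvm_vcpu *vcpu)
{
        return to_vmx(vcpu)->nested.mtf_pending ||
               nested_vmx_preemption_timer_pending(vcpu);
}

struct kvm_x86_nested_ops vmx_nested_ops = {
        .check_events = vmx_check_nested_events,
        .has_events = vmx_has_nested_events,
        /* remaining ops unchanged */
};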
I think the safest approach from a bisection standpoint would be to do this in
3-4 stages (a rough sketch of the end state follows the list):
1. Refactor check_nested_events() to split out a has_events() helper.
2. Move the has_events() call from kvm_vcpu_running() into kvm_vcpu_has_events().
3. Drop the explicit hv_timer_pending() check in inject_pending_event(). It should
be dead code, since hv_timer_pending() is just a pointer to
nested_vmx_preemption_timer_pending(), which is already handled by
vmx_check_nested_events(), called earlier in the flow.
4. Drop the explicit hv_timer_pending() check in kvm_vcpu_has_events() for the same
reasons as (3). At that point, the hv_timer_pending() hook can be dropped entirely.
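With the above in place, the SRCU juggling goes away entirely because
kvm_vcpu_running() no longer touches nested state.  Rough sketch of the end
state (again untested, just to show where things would land):

static inline bool kvm_vcpu_running(struct kvm_vcpu *vcpu)
{
        return (vcpu->arch.mp_state == KVM_MP_STATE_RUNNABLE &&
                !vcpu->arch.apf.halted);
}

static inline bool kvm_vcpu_has_events(struct kvm_vcpu *vcpu)
{
        /* existing checks (pending exceptions, interrupts, etc.) */
        ...

        if (is_guest_mode(vcpu) &&
            kvm_x86_ops.nested_ops->has_events &&
            kvm_x86_ops.nested_ops->has_events(vcpu))
                return true;

        return false;
}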
> Or would you rather skip that approach and go right to making
> check_nested_events go away first? Guessing that might be a larger effort
> with more nuances though?