Message-ID: <17d647aa55db14435a88b48f11ab62bb59831935.camel@redhat.com>
Date: Wed, 17 Aug 2022 17:10:19 +0300
From: Maxim Levitsky <mlevitsk@...hat.com>
To: Sean Christopherson <seanjc@...gle.com>,
Paolo Bonzini <pbonzini@...hat.com>
Cc: linux-kernel@...r.kernel.org, kvm@...r.kernel.org,
vkuznets@...hat.com
Subject: Re: [PATCH v2 2/9] KVM: x86: remove return value of kvm_vcpu_block
On Tue, 2022-08-16 at 23:34 +0000, Sean Christopherson wrote:
> On Thu, Aug 11, 2022, Paolo Bonzini wrote:
> > The return value of kvm_vcpu_block will be repurposed soon to return
>
> kvm_vcpu_block()
>
> > the state of KVM_REQ_UNHALT. In preparation for that, get rid of the
> > current return value. It is only used by kvm_vcpu_halt to decide whether
>
> kvm_vcpu_halt()
>
> > the call resulted in a wait, but the same effect can be obtained with
> > a single round of polling.
> >
> > No functional change intended, apart from practically indistinguishable
> > changes to the polling behavior.
>
> This "breaks" update_halt_poll_stats(). At the very least, it breaks the comment
> that effectively says "waited" is set if and only if schedule() is called.
>
> /*
> * Note, halt-polling is considered successful so long as the vCPU was
> * never actually scheduled out, i.e. even if the wake event arrived
> * after of the halt-polling loop itself, but before the full wait.
> */
> if (do_halt_poll)
> update_halt_poll_stats(vcpu, start, poll_end, !waited);
>
> > @@ -3493,35 +3489,32 @@ void kvm_vcpu_halt(struct kvm_vcpu *vcpu)
> > {
> > bool halt_poll_allowed = !kvm_arch_no_poll(vcpu);
> > bool do_halt_poll = halt_poll_allowed && vcpu->halt_poll_ns;
> > - ktime_t start, cur, poll_end;
> > + ktime_t start, cur, poll_end, stop;
> > bool waited = false;
> > u64 halt_ns;
> >
> > start = cur = poll_end = ktime_get();
> > - if (do_halt_poll) {
> > - ktime_t stop = ktime_add_ns(start, vcpu->halt_poll_ns);
> > + stop = do_halt_poll ? start : ktime_add_ns(start, vcpu->halt_poll_ns);
>
> This is inverted, the loop should terminate after the first iteration (stop==start)
> if halt-polling is _not_ allowed, i.e. do_halt_poll is false.
>
> Overall, I don't like this patch. I don't necessarily hate it, but I definitely
> don't like it.
Roughly the same opinion here.
>
> Isn't freeing up the return from kvm_vcpu_check_block() unnecessary? Can't we
> just do:
>
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index 9f11b505cbee..ccb9f8bdeb18 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -10633,7 +10633,7 @@ static inline int vcpu_block(struct kvm_vcpu *vcpu)
> if (hv_timer)
> kvm_lapic_switch_to_hv_timer(vcpu);
>
> - if (!kvm_check_request(KVM_REQ_UNHALT, vcpu))
> + if (!kvm_arch_vcpu_runnable(vcpu))
kvm_vcpu_block() returns when kvm_vcpu_check_block() returns a negative number,
which can also happen due to a pending APIC timer or a pending signal; however,
KVM_REQ_UNHALT is set only when kvm_arch_vcpu_runnable() == true, so the above
does make sense.
Best regards,
Maxim Levitsky
> return 1;
> }
>
>
> which IMO is more intuitive and doesn't require reworking halt-polling (again).
>
> I don't see why KVM cares if a vCPU becomes runnable after detecting that something
> else caused kvm_vcpu_check_block() to bail. E.g. a pending signal doesn't invalidate
> a pending vCPU event, and either way KVM is bailing from the run loop.
>