Message-ID: <Yvwpb6ofD1S+Rqk1@google.com>
Date: Tue, 16 Aug 2022 23:34:07 +0000
From: Sean Christopherson <seanjc@...gle.com>
To: Paolo Bonzini <pbonzini@...hat.com>
Cc: linux-kernel@...r.kernel.org, kvm@...r.kernel.org,
mlevitsk@...hat.com, vkuznets@...hat.com
Subject: Re: [PATCH v2 2/9] KVM: x86: remove return value of kvm_vcpu_block
On Thu, Aug 11, 2022, Paolo Bonzini wrote:
> The return value of kvm_vcpu_block will be repurposed soon to return
kvm_vcpu_block()
> the state of KVM_REQ_UNHALT. In preparation for that, get rid of the
> current return value. It is only used by kvm_vcpu_halt to decide whether
kvm_vcpu_halt()
> the call resulted in a wait, but the same effect can be obtained with
> a single round of polling.
>
> No functional change intended, apart from practically indistinguishable
> changes to the polling behavior.
This "breaks" update_halt_poll_stats(). At the very least, it breaks the comment
that effectively says "waited" is set if and only if schedule() is called.

	/*
	 * Note, halt-polling is considered successful so long as the vCPU was
	 * never actually scheduled out, i.e. even if the wake event arrived
	 * after the end of the halt-polling loop itself, but before the full wait.
	 */
	if (do_halt_poll)
		update_halt_poll_stats(vcpu, start, poll_end, !waited);
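
For context, at this point in the series "waited" is flipped in exactly one
place in kvm_vcpu_block(), immediately before schedule(), which is what makes
the !waited heuristic work. Roughly (paraphrased, details elided):

	bool kvm_vcpu_block(struct kvm_vcpu *vcpu)
	{
		bool waited = false;

		for (;;) {
			set_current_state(TASK_INTERRUPTIBLE);

			/* Bail without sleeping if a wake event is already pending. */
			if (kvm_vcpu_check_block(vcpu) < 0)
				break;

			/* The one and only place "waited" becomes true. */
			waited = true;
			schedule();
		}
		__set_current_state(TASK_RUNNING);

		return waited;
	}

Collapsing "did KVM actually schedule out" into "did one more round of
polling succeed" changes what update_halt_poll_stats() observes.
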
> @@ -3493,35 +3489,32 @@ void kvm_vcpu_halt(struct kvm_vcpu *vcpu)
>  {
>  	bool halt_poll_allowed = !kvm_arch_no_poll(vcpu);
>  	bool do_halt_poll = halt_poll_allowed && vcpu->halt_poll_ns;
> -	ktime_t start, cur, poll_end;
> +	ktime_t start, cur, poll_end, stop;
>  	bool waited = false;
>  	u64 halt_ns;
>
>  	start = cur = poll_end = ktime_get();
> -	if (do_halt_poll) {
> -		ktime_t stop = ktime_add_ns(start, vcpu->halt_poll_ns);
> +	stop = do_halt_poll ? start : ktime_add_ns(start, vcpu->halt_poll_ns);
This is inverted: the loop should terminate after the first iteration
(stop == start) if halt-polling is _not_ allowed, i.e. if do_halt_poll is false.
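
Presumably the intent was the opposite, i.e. something like (untested):

	stop = do_halt_poll ? ktime_add_ns(start, vcpu->halt_poll_ns) : start;

so that when polling is disabled the do-while body runs exactly once and then
falls through to the full wait.
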
Overall, I don't like this patch. I don't necessarily hate it, but I definitely
don't like it.
Isn't freeing up the return from kvm_vcpu_check_block() unnecessary? Can't we
just do:
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 9f11b505cbee..ccb9f8bdeb18 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -10633,7 +10633,7 @@ static inline int vcpu_block(struct kvm_vcpu *vcpu)
 		if (hv_timer)
 			kvm_lapic_switch_to_hv_timer(vcpu);
 
-		if (!kvm_check_request(KVM_REQ_UNHALT, vcpu))
+		if (!kvm_arch_vcpu_runnable(vcpu))
 			return 1;
 	}
which IMO is more intuitive and doesn't require reworking halt-polling (again).
I don't see why KVM cares if a vCPU becomes runnable after detecting that something
else caused kvm_vcpu_check_block() to bail. E.g. a pending signal doesn't invalidate
a pending vCPU event, and either way KVM is bailing from the run loop.
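
To illustrate, kvm_vcpu_check_block() at this point looks roughly like the
below (paraphrased, simplified):

	static int kvm_vcpu_check_block(struct kvm_vcpu *vcpu)
	{
		if (kvm_arch_vcpu_runnable(vcpu)) {
			/* The only path that raises KVM_REQ_UNHALT. */
			kvm_make_request(KVM_REQ_UNHALT, vcpu);
			return -EINTR;
		}
		if (kvm_cpu_has_pending_timer(vcpu))
			return -EINTR;
		if (signal_pending(current))
			return -EINTR;
		if (kvm_check_request(KVM_REQ_UNBLOCK, vcpu))
			return -EINTR;

		return 0;
	}

KVM_REQ_UNHALT is raised if and only if the vCPU is runnable, so checking
kvm_arch_vcpu_runnable() after blocking recovers the same information without
needing to consume a request.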