Message-ID: <YVIAdVxc+q2UWB+J@google.com>
Date: Mon, 27 Sep 2021 17:33:41 +0000
From: Sean Christopherson <seanjc@...gle.com>
To: Paolo Bonzini <pbonzini@...hat.com>
Cc: Christian Borntraeger <borntraeger@...ibm.com>,
David Matlack <dmatlack@...gle.com>,
Jon Cargille <jcargill@...gle.com>,
Jim Mattson <jmattson@...gle.com>,
James Morse <james.morse@....com>,
Alexandru Elisei <alexandru.elisei@....com>,
Suzuki K Poulose <suzuki.poulose@....com>,
David Hildenbrand <david@...hat.com>,
Cornelia Huck <cohuck@...hat.com>,
Claudio Imbrenda <imbrenda@...ux.ibm.com>,
Vitaly Kuznetsov <vkuznets@...hat.com>,
Wanpeng Li <wanpengli@...cent.com>,
Joerg Roedel <joro@...tes.org>,
linux-arm-kernel <linux-arm-kernel@...ts.infradead.org>,
KVM ARM <kvmarm@...ts.cs.columbia.edu>,
"open list:MIPS" <linux-mips@...r.kernel.org>,
kvm <kvm@...r.kernel.org>, kvm-ppc <kvm-ppc@...r.kernel.org>,
"Kernel Mailing List, Linux" <linux-kernel@...r.kernel.org>,
Jing Zhang <jingzhangos@...gle.com>,
Marc Zyngier <maz@...nel.org>,
Huacai Chen <chenhuacai@...nel.org>,
Aleksandar Markovic <aleksandar.qemu.devel@...il.com>,
Paul Mackerras <paulus@...abs.org>,
Janosch Frank <frankja@...ux.ibm.com>
Subject: Re: disabling halt polling broken? (was Re: [PATCH 00/14] KVM:
Halt-polling fixes, cleanups and a new stat)
On Mon, Sep 27, 2021, Paolo Bonzini wrote:
> On Mon, Sep 27, 2021 at 5:17 PM Christian Borntraeger
> <borntraeger@...ibm.com> wrote:
> > > So I think there are two possibilities that make sense:
> > >
> > > * track what is using KVM_CAP_HALT_POLL, and make writes to halt_poll_ns follow that
> >
> > What about using halt_poll_ns for those VMs that did not use KVM_CAP_HALT_POLL, and the private number for those that did?
>
> Yes, that's what I meant. David pointed out that doesn't allow you to
> disable halt polling altogether, but for that you can always ask each
> VM's userspace one by one, or just not use KVM_CAP_HALT_POLL. (Also, I
> don't know about Google's use case, but mine was actually more about
> using KVM_CAP_HALT_POLL to *disable* halt polling on some VMs!).
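
For the "track what is using KVM_CAP_HALT_POLL" idea, a completely untested
sketch could look like the below.  Note, "override_halt_poll_ns" is a made-up
field name for illustration, not existing code.

	/* In kvm_vm_ioctl_enable_cap_generic(): */
	case KVM_CAP_HALT_POLL: {
		if (cap->flags || cap->args[0] != (unsigned int)cap->args[0])
			return -EINVAL;

		kvm->max_halt_poll_ns = cap->args[0];
		/* Remember that userspace took ownership of this VM's max. */
		kvm->override_halt_poll_ns = true;
		return 0;
	}

	/* ...and in kvm_vcpu_halt(), pick the effective max: */
	if (vcpu->kvm->override_halt_poll_ns)
		max_halt_poll_ns = vcpu->kvm->max_halt_poll_ns;
	else
		max_halt_poll_ns = READ_ONCE(halt_poll_ns);

That way, writes to the halt_poll_ns module param would keep taking effect for
VMs that never touched the cap, without stomping on a userspace-provided value.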
I kinda like the idea of special-casing halt_poll_ns=0, e.g. for testing or
in-the-field mitigation if halt-polling is broken. It'd be trivial to support, e.g.

@@ -3304,19 +3304,23 @@ void kvm_vcpu_halt(struct kvm_vcpu *vcpu)
 	update_halt_poll_stats(vcpu, start, poll_end, !waited);
 
 	if (halt_poll_allowed) {
+		max_halt_poll_ns = vcpu->kvm->max_halt_poll_ns;
+		if (!max_halt_poll_ns || !halt_poll_ns)		<------ squish the max if halt_poll_ns==0
+			max_halt_poll_ns = halt_poll_ns;
+
 		if (!vcpu_valid_wakeup(vcpu)) {
 			shrink_halt_poll_ns(vcpu);
-		} else if (vcpu->kvm->max_halt_poll_ns) {
+		} else if (max_halt_poll_ns) {
 			if (halt_ns <= vcpu->halt_poll_ns)
 				;
 			/* we had a long block, shrink polling */
 			else if (vcpu->halt_poll_ns &&
-				 halt_ns > vcpu->kvm->max_halt_poll_ns)
+				 halt_ns > max_halt_poll_ns)
 				shrink_halt_poll_ns(vcpu);
 			/* we had a short halt and our poll time is too small */
-			else if (vcpu->halt_poll_ns < vcpu->kvm->max_halt_poll_ns &&
-				 halt_ns < vcpu->kvm->max_halt_poll_ns)
-				grow_halt_poll_ns(vcpu);
+			else if (vcpu->halt_poll_ns < max_halt_poll_ns &&
+				 halt_ns < max_halt_poll_ns)
+				grow_halt_poll_ns(vcpu, max_halt_poll_ns);
 		} else {
 			vcpu->halt_poll_ns = 0;
 		}
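
The grow_halt_poll_ns(vcpu, max_halt_poll_ns) call above implies growing the
helper to take the effective max as a parameter instead of reading
vcpu->kvm->max_halt_poll_ns itself.  Roughly (untested sketch, not part of the
diff above):

	static void grow_halt_poll_ns(struct kvm_vcpu *vcpu,
				      unsigned int max_halt_poll_ns)
	{
		unsigned int old, val, grow, grow_start;

		old = val = vcpu->halt_poll_ns;
		grow_start = READ_ONCE(halt_poll_ns_grow_start);
		grow = READ_ONCE(halt_poll_ns_grow);
		if (!grow)
			goto out;

		val *= grow;
		if (val < grow_start)
			val = grow_start;

		/* Clamp to the effective max computed by the caller. */
		if (val > max_halt_poll_ns)
			val = max_halt_poll_ns;

		vcpu->halt_poll_ns = val;
	out:
		trace_kvm_halt_poll_ns_grow(vcpu->vcpu_id, val, old);
	}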