Message-ID: <BLU437-SMTP94580321B62DB8505D0B3780690@phx.gbl>
Date: Wed, 2 Sep 2015 14:01:31 +0800
From: Wanpeng Li <wanpeng.li@...mail.com>
To: David Matlack <dmatlack@...gle.com>
CC: Paolo Bonzini <pbonzini@...hat.com>,
kvm list <kvm@...r.kernel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
Peter Kieser <peter@...ser.ca>
Subject: Re: [PATCH v4 0/3] KVM: Dynamic Halt-Polling
On 9/2/15 9:49 AM, David Matlack wrote:
> On Tue, Sep 1, 2015 at 5:29 PM, Wanpeng Li <wanpeng.li@...mail.com> wrote:
>> On 9/2/15 7:24 AM, David Matlack wrote:
>>> On Tue, Sep 1, 2015 at 3:58 PM, Wanpeng Li <wanpeng.li@...mail.com> wrote:
> <snip>
>>>> Why this can happen?
>>> Ah, probably because I'm missing 9c8fd1ba220 (KVM: x86: optimize delivery
>>> of TSC deadline timer interrupt). I don't think the edge case exists in
>>> the latest kernel.
>>
>> Yeah, I hope we both (including Peter Kieser) can test against the latest
>> kvm tree to avoid confusion. The reason for introducing the adaptive
>> halt-polling toggle was to handle the "edge case" you mentioned above. So
>> I think we can put more effort into improving v4 instead. I will improve
>> v4 to handle short halts today. ;-)
> That's fine. It's just easier to convey my ideas with a patch. FYI the
> other reason for the toggle patch was to add the timer for kvm_vcpu_block,
> which I think is the only way to get dynamic halt-polling right. Feel free
> to work on top of v4!
I introduced your idea of shrinking/growing the poll time in v5 by detecting
long/short halts, and the performance looks good. Many thanks for your help,
David! ;-)
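
The grow/shrink heuristic discussed above can be sketched roughly as
follows. This is a minimal user-space sketch, not the actual patch: the
constant names are modeled on KVM's module parameters, but the exact values,
identifiers, and grow/shrink factors here are illustrative assumptions.

```c
#include <assert.h>

/* Illustrative tunables, loosely modeled on KVM's halt_poll_ns knobs. */
#define HALT_POLL_NS_MAX    500000u  /* cap the poll window at 500 us   */
#define HALT_POLL_NS_BASE    10000u  /* first step up from zero         */
#define HALT_POLL_NS_GROW        2u  /* multiply on a short halt        */
#define HALT_POLL_NS_SHRINK      2u  /* divide on a long halt (0 = reset) */

/* A halt was "short" (work arrived within the poll window): poll longer
 * next time, up to the cap, so we keep avoiding the sleep/wake cost. */
unsigned int grow_halt_poll_ns(unsigned int val)
{
    if (val == 0)
        return HALT_POLL_NS_BASE;
    val *= HALT_POLL_NS_GROW;
    return val > HALT_POLL_NS_MAX ? HALT_POLL_NS_MAX : val;
}

/* A halt was "long" (the vCPU was genuinely idle): polling only burned
 * host CPU, so back off; with a shrink factor of 0, reset straight to 0. */
unsigned int shrink_halt_poll_ns(unsigned int val)
{
    return HALT_POLL_NS_SHRINK ? val / HALT_POLL_NS_SHRINK : 0;
}
```

Backing off on long halts is what keeps the idle-vCPU overhead close to the
halt_poll_ns=0 column in the tables below, while growing on short halts
preserves the latency win for ticking guests.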
Regards,
Wanpeng Li
>
> <snip>
>>>> Did you test your patch against a windows guest?
>>> I have not. I tested against a 250 Hz Linux guest to check how it performs
>>> against a ticking guest. Presumably, Windows should be the same, but at a
>>> higher tick rate. Do you have a test for Windows?
>>
>> I just tested the idle vCPU usage.
>>
>>
>> V4 for Windows 10:
>>
>> +-----------------+----------------+-----------------------+
>> |                 |                |                       |
>> | w/o halt-poll   | w/ halt-poll   | dynamic(v4) halt-poll |
>> +-----------------+----------------+-----------------------+
>> |                 |                |                       |
>> | ~2.1%           | ~3.0%          | ~2.4%                 |
>> +-----------------+----------------+-----------------------+
> I'm not seeing the same results with v4. With a 250HZ ticking guest
> I see 15% c0 with halt_poll_ns=2000000 and 1.27% with halt_poll_ns=0.
> Are you running one vcpu per pcpu?
>
> (The reason for the overhead: the new tracepoint shows each vcpu is
> alternating between 0 and 500 us.)
>
>> V4 for linux guest:
>>
>> +-----------------+----------------+-------------------+
>> | | | |
>> | w/o halt-poll | w/ halt-poll | dynamic halt-poll |
>> +-----------------+----------------+-------------------+
>> | | | |
>> | ~0.9% | ~1.8% | ~1.2% |
>> +-----------------+----------------+-------------------+
>>
>>
>> Regards,
>> Wanpeng Li
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/