Message-ID: <259c95bc-3641-965b-4054-a233a6ee785c@gmail.com>
Date: Wed, 13 Sep 2017 19:56:23 +0800
From: Yang Zhang <yang.zhang.wz@...il.com>
To: "Michael S. Tsirkin" <mst@...hat.com>
Cc: linux-kernel@...r.kernel.org, kvm@...r.kernel.org,
wanpeng.li@...mail.com, pbonzini@...hat.com, tglx@...utronix.de,
rkrcmar@...hat.com, dmatlack@...gle.com, agraf@...e.de,
peterz@...radead.org, linux-doc@...r.kernel.org,
Quan Xu <quan.xu0@...il.com>
Subject: Re: [RFC PATCH v2 0/7] x86/idle: add halt poll support
On 2017/8/29 22:56, Michael S. Tsirkin wrote:
> On Tue, Aug 29, 2017 at 11:46:34AM +0000, Yang Zhang wrote:
>> Some latency-intensive workloads see an obvious performance
>> drop when running inside a VM.
>
> But are we trading a lot of CPU for a bit of lower latency?
>
>> The main reason is that the overhead
>> is amplified when running inside a VM. The biggest cost I have
>> seen is in the idle path.
>>
>> This patch introduces a new mechanism to poll for a while before
>> entering the idle state. If a reschedule is needed during the poll,
>> we don't need to go through the heavy-overhead path.
>
> Isn't it the job of an idle driver to find the best way to
> halt the CPU?
>
> It looks like just by adding a cstate we can make it
> halt at higher latencies only. And at lower latencies,
> if it's doing a good job we can hopefully use mwait to
> stop the CPU.
>
> In fact I have been experimenting with exactly that.
> Some initial results are encouraging but I could use help
> with testing and especially tuning. If you can help
> pls let me know!
Quan, can you help test it and share the results? Thanks.
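
For readers unfamiliar with the idea being discussed, the poll-before-halt
mechanism from the patch description can be sketched roughly as below. This
is an illustrative user-space simulation, not the actual patch or kernel
API; all names (need_resched_flag, halt_poll_then_idle, the iteration-count
parameter) are hypothetical stand-ins.

```c
#include <stdbool.h>

/* Hypothetical sketch: spin for a bounded number of iterations
 * checking for pending work before falling back to the expensive
 * halt path. In the real kernel the check would be need_resched()
 * and the fallback would be HLT/mwait via the idle driver. */

static bool need_resched_flag;   /* stands in for pending work */

static bool need_resched(void)
{
    return need_resched_flag;
}

/* Returns true if polling caught pending work (halt avoided),
 * false if the poll window expired and we must really halt. */
static bool halt_poll_then_idle(unsigned long poll_iterations)
{
    for (unsigned long i = 0; i < poll_iterations; i++) {
        if (need_resched())
            return true;  /* work arrived: skip the heavy halt path */
        /* cpu_relax() would go here on real hardware */
    }
    /* nothing arrived during the poll window: halt for real */
    return false;         /* caller would enter the idle state here */
}
```

The trade-off Michael raises is visible in the sketch: every iteration of
the loop burns CPU, so the poll window length decides how much CPU is
traded for lower wakeup latency.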
--
Yang
Alibaba Cloud Computing