Message-ID: <CANRm+Cycx3ewegOXR7c70kpdsaJA-=M5QztDt4J2L=VqpeCsfQ@mail.gmail.com>
Date: Tue, 14 Nov 2017 16:22:35 +0800
From: Wanpeng Li <kernellwp@...il.com>
To: Quan Xu <quan.xu0@...il.com>
Cc: Juergen Gross <jgross@...e.com>, Quan Xu <quan.xu03@...il.com>,
kvm <kvm@...r.kernel.org>, linux-doc@...r.kernel.org,
"open list:FILESYSTEMS (VFS and infrastructure)"
<linux-fsdevel@...r.kernel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
virtualization@...ts.linux-foundation.org,
"the arch/x86 maintainers" <x86@...nel.org>,
xen-devel <xen-devel@...ts.xenproject.org>,
Yang Zhang <yang.zhang.wz@...il.com>,
Alok Kataria <akataria@...are.com>,
Rusty Russell <rusty@...tcorp.com.au>,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>,
"H. Peter Anvin" <hpa@...or.com>
Subject: Re: [PATCH RFC v3 1/6] x86/paravirt: Add pv_idle_ops to paravirt ops
2017-11-14 16:15 GMT+08:00 Quan Xu <quan.xu0@...il.com>:
>
>
> On 2017/11/14 15:12, Wanpeng Li wrote:
>>
>> 2017-11-14 15:02 GMT+08:00 Quan Xu <quan.xu0@...il.com>:
>>>
>>>
>>> On 2017/11/13 18:53, Juergen Gross wrote:
>>>>
>>>> On 13/11/17 11:06, Quan Xu wrote:
>>>>>
>>>>> From: Quan Xu <quan.xu0@...il.com>
>>>>>
>>>>> So far, pv_idle_ops.poll is the only op in pv_idle_ops. .poll is
>>>>> called in the idle path and polls for a while before we enter the
>>>>> real idle state.
>>>>>
>>>>> In virtualization, the idle path includes several heavy operations,
>>>>> such as timer access (LAPIC timer or TSC deadline timer), which
>>>>> hurt performance, especially for latency-intensive workloads like
>>>>> message-passing tasks. The cost comes mainly from the vmexit, which
>>>>> is a hardware context switch between the virtual machine and the
>>>>> hypervisor. Our solution is to poll for a while and not enter the
>>>>> real idle path if we receive a schedule event during polling.
>>>>>
>>>>> Polling may waste CPU, so we adopt a smart polling mechanism to
>>>>> reduce useless polling.
>>>>>
>>>>> Signed-off-by: Yang Zhang <yang.zhang.wz@...il.com>
>>>>> Signed-off-by: Quan Xu <quan.xu0@...il.com>
>>>>> Cc: Juergen Gross <jgross@...e.com>
>>>>> Cc: Alok Kataria <akataria@...are.com>
>>>>> Cc: Rusty Russell <rusty@...tcorp.com.au>
>>>>> Cc: Thomas Gleixner <tglx@...utronix.de>
>>>>> Cc: Ingo Molnar <mingo@...hat.com>
>>>>> Cc: "H. Peter Anvin" <hpa@...or.com>
>>>>> Cc: x86@...nel.org
>>>>> Cc: virtualization@...ts.linux-foundation.org
>>>>> Cc: linux-kernel@...r.kernel.org
>>>>> Cc: xen-devel@...ts.xenproject.org
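
For context, a minimal sketch of the hook the patch adds; pv_idle_ops
and the .poll op come from the patch description above, while the
helper name and the direct call below are illustrative stand-ins for
the usual paravirt plumbing:

	/* Sketch only: a single op, called in the idle path to poll
	 * briefly before the real idle state is entered. */
	struct pv_idle_ops {
		void (*poll)(void);
	};

	extern struct pv_idle_ops pv_idle_ops;

	/* Hypothetical call-site helper used from the idle loop. */
	static inline void paravirt_idle_poll(void)
	{
		pv_idle_ops.poll();
	}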
>>>>
>>>> Hmm, is the idle entry path really so critical to performance that a new
>>>> pvops function is necessary?
>>>
>>> Juergen, here is the data we get when running the netperf benchmark:
>>> 1. w/o patch and disable kvm dynamic poll (halt_poll_ns=0):
>>> 29031.6 bit/s -- 76.1 %CPU
>>>
>>> 2. w/ patch and disable kvm dynamic poll (halt_poll_ns=0):
>>> 35787.7 bit/s -- 129.4 %CPU
>>>
>>> 3. w/ kvm dynamic poll:
>>> 35735.6 bit/s -- 200.0 %CPU
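
(For reference: halt_poll_ns is the host-side KVM module parameter, so
"disable kvm dynamic poll" above amounts to something like
"echo 0 > /sys/module/kvm/parameters/halt_poll_ns" on the host.)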
>>
>> Actually, we can reduce the CPU utilization by sleeping for a period
>> of time, as is already done in the poll logic of the I/O subsystem;
>> then we can improve the algorithm in KVM instead of introducing
>> another, duplicate one in the KVM guest.
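
To make that concrete, a rough sketch of the kind of host-side
adaptation being suggested, modeled loosely on KVM's grow/shrink
halt-polling heuristic; the constants, names, and exact conditions are
simplified illustrations, not KVM's actual code:

	#define POLL_NS_MAX	500000ULL	/* illustrative cap */

	struct vcpu_poll_state {
		unsigned long long poll_ns;	/* current poll window */
	};

	/* Called after a halt: grow the window when the wakeup arrived
	 * just after we gave up polling, shrink it when the vcpu slept
	 * so long that polling was pure waste. */
	static void adapt_poll_window(struct vcpu_poll_state *v,
				      unsigned long long block_ns,
				      int poll_hit)
	{
		if (poll_hit)
			return;		/* current window already works */
		if (block_ns < POLL_NS_MAX)
			v->poll_ns = v->poll_ns ? v->poll_ns * 2 : 10000;
		else
			v->poll_ns /= 2;	/* long sleeps: poll less */
	}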
>
> We really appreciate upstream's KVM dynamic poll mechanism, which is
> really helpful in a lot of scenarios.
>
> However, as the description said, in virtualization the idle path
> includes several heavy operations, such as timer access (LAPIC timer
> or TSC deadline timer), which hurt performance, especially for
> latency-intensive workloads like message-passing tasks. The cost
> comes mainly from the vmexit, which is a hardware context switch
> between the virtual machine and the hypervisor.
>
> As for upstream's KVM dynamic poll mechanism: even if you could
> provide a better algorithm, how could you bypass the timer access
> (LAPIC timer or TSC deadline timer) or the hardware context switch
> between the virtual machine and the hypervisor? I know this is a
> tradeoff.
>
> Furthermore, here is the data we get when running the contextswitch
> benchmark to measure latency (lower is better):
>
> 1. w/o patch and disable kvm dynamic poll (halt_poll_ns=0):
> 3402.9 ns/ctxsw -- 199.8 %CPU
>
> 2. w/ patch and disable kvm dynamic poll (halt_poll_ns=0):
> 1163.5 ns/ctxsw -- 205.5 %CPU
>
> 3. w/ kvm dynamic poll:
> 2280.6 ns/ctxsw -- 199.5 %CPU
>
> So, these two solutions are quite similar, but not duplicates.
>
> That is also why we add a generic idle poll before entering the real
> idle path: when a reschedule event is pending, we can bypass the real
> idle path.
>
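
Quan's point, in sketch form (illustrative, not the patch itself): the
guest spins on its own need-resched flag for a bounded window before
executing the architectural idle instruction, so a wakeup that lands
in the window is handled without the vmexit/vmentry that halting would
cost. ktime_get_ns(), need_resched(), and cpu_relax() are the stock
kernel helpers; the function name and window handling are assumptions:

	/* Poll briefly before real idle; return as soon as a resched
	 * is pending so the caller can skip the vmexit-heavy idle
	 * path.  poll_ns would be tuned by the adaptive mechanism. */
	static void guest_pre_idle_poll(unsigned long long poll_ns)
	{
		unsigned long long start = ktime_get_ns();

		while (!need_resched()) {
			if (ktime_get_ns() - start > poll_ns)
				break;	/* window expired: real idle */
			cpu_relax();
		}
	}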
There is similar logic in the idle governor/driver, so how does this
patchset influence the decisions of the idle governor/driver when
running on bare metal? (Power management is not exposed to the guest,
so we will not enter the idle driver in the guest.)
Regards,
Wanpeng Li