Message-ID: <00d884a7-d463-74b4-82cf-9deb0aa70971@redhat.com>
Date: Wed, 8 Jan 2020 18:14:53 +0100
From: Paolo Bonzini <pbonzini@...hat.com>
To: Peter Zijlstra <peterz@...radead.org>,
Wanpeng Li <kernellwp@...il.com>
Cc: linux-kernel@...r.kernel.org, kvm@...r.kernel.org,
Thomas Gleixner <tglx@...utronix.de>,
Marcelo Tosatti <mtosatti@...hat.com>,
Konrad Rzeszutek Wilk <konrad.wilk@...cle.com>,
KarimAllah <karahmed@...zon.de>,
Vincent Guittot <vincent.guittot@...aro.org>,
Ingo Molnar <mingo@...nel.org>,
Ankur Arora <ankur.a.arora@...cle.com>
Subject: Re: [PATCH RFC] sched/fair: Penalty the cfs task which executes
mwait/hlt
On 08/01/20 16:50, Peter Zijlstra wrote:
> On Wed, Jan 08, 2020 at 09:50:01AM +0800, Wanpeng Li wrote:
>> From: Wanpeng Li <wanpengli@...cent.com>
>>
>> To deliver all of a server's resources to cloud instances, no
>> housekeeping cpus are reserved. libvirtd, the qemu main loop, kthreads,
>> and other agents/tools that cannot be offloaded to other hardware (such
>> as a smart NIC) will contend with vCPUs even when MWAIT/HLT
>> instructions are executed in the guest.
^^ this is the problem statement:

He has vCPU threads that are pinned 1:1 to physical CPUs. He needs
various housekeeping threads to preempt those vCPU threads, but he'd
rather preempt vCPU threads that are doing HLT/MWAIT than those that
are keeping the CPU busy.
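(For concreteness, the 1:1 pinning is typically done from the
management stack, along these lines; this is purely illustrative and
not part of the patch:

#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>

/* Pin one vCPU thread to one physical CPU, as a libvirt-style
 * 1:1 vcpupin configuration effectively does. */
static int pin_vcpu_thread(pthread_t vcpu_thread, int pcpu)
{
	cpu_set_t set;

	CPU_ZERO(&set);
	CPU_SET(pcpu, &set);
	return pthread_setaffinity_np(vcpu_thread, sizeof(set), &set);
}
)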
>> There is no trap to yield the pCPU after we expose mwait/hlt to the
>> guest [1][2]; the top command on the host still observes 100% cpu
>> utilization, since the qemu process keeps running even though a guest
>> with power management capability executes mwait. Meanwhile, powertop
>> on the host shows that the physical cpu has already entered a deeper
>> C-state.
>>
>> For virtualization, there is a HLT activity state in the VMCS which
>> indicates that the logical processor is inactive because it executed
>> the HLT instruction. SDM 24.4.2 mentions that execution of the MWAIT
>> instruction may also put a logical processor into an inactive state;
>> however, this VMCS field never reflects that state.
>
> So far I think I can follow; however, it does not explain who consumes
> this VMCS state if it is set, or how that helps. Also, this:
I think what Wanpeng was saying is: "KVM could gather this information
using the activity state field in the VMCS. However, when the guest
does MWAIT, the processor can go into an inactive state without
updating the VMCS." Hence the idea of looking at the APERF/MPERF ratio.
>> This patch avoids a fine-grained intercept that reschedules the vCPU
>> whenever MWAIT/HLT is executed, because that would hurt message-passing
>> workloads which switch between idle and running frequently in the
>> guest. Instead, let's penalize a vCPU that has been idle for a long
>> time, via tick-based sampling and preemption.
>
> is just complete gibberish, and I have no idea what problem you're
> trying to solve, or how.
This is just explaining why MWAIT and HLT are not being trapped in his
setup (a vmexit on HLT or MWAIT is awfully expensive).
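Roughly, the tick-based shape I understand the patch to have is the
following. Everything here is a hypothetical sketch, not the actual
patch, and note that it relies on the MPERF-vs-TSC comparison Peter
objects to next:

#include <linux/math64.h>
#include <linux/sched.h>

/*
 * Hypothetical tick hook: if the core spent most of the last tick
 * outside C0 while this vCPU task was nominally running, the guest
 * was in MWAIT/HLT, so flag the task as a preferred preemption
 * victim. The idle_polling field and its consumer in the wakeup
 * preemption path are made up for illustration.
 */
static void account_vcpu_idle(struct task_struct *curr,
			      u64 d_mperf, u64 d_tsc)
{
	/* Fraction (in permille) of the tick actually spent in C0. */
	u64 c0 = d_tsc ? div64_u64(d_mperf * 1000, d_tsc) : 1000;

	curr->idle_polling = (c0 < 100);	/* <10% busy */
}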
> Also, I don't think the TSC/MPERF ratio is architected; we can't
> assume it holds for everything that has APERFMPERF.
Right, you have to look at APERF/MPERF, not TSC/MPERF: only the
APERF/MPERF ratio is architecturally defined, while MPERF's relation
to the TSC is model-specific. My scheduler-fu is zero, so I can't
really help with a nicer solution.
Paolo