lists.openwall.net   lists  /  announce  owl-users  owl-dev  john-users  john-dev  passwdqc-users  yescrypt  popa3d-users  /  oss-security  kernel-hardening  musl  sabotage  tlsify  passwords  /  crypt-dev  xvendor  /  Bugtraq  Full-Disclosure  linux-kernel  linux-netdev  linux-ext4  linux-hardening  linux-cve-announce  PHC 
Open Source and information security mailing list archives
 
Hash Suite: Windows password security audit tool. GUI, reports in PDF.
[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-ID: <20200113104314.GU2844@hirez.programming.kicks-ass.net>
Date:   Mon, 13 Jan 2020 11:43:14 +0100
From:   Peter Zijlstra <peterz@...radead.org>
To:     Wanpeng Li <kernellwp@...il.com>
Cc:     Paolo Bonzini <pbonzini@...hat.com>,
        LKML <linux-kernel@...r.kernel.org>, kvm <kvm@...r.kernel.org>,
        Thomas Gleixner <tglx@...utronix.de>,
        Marcelo Tosatti <mtosatti@...hat.com>,
        Konrad Rzeszutek Wilk <konrad.wilk@...cle.com>,
        KarimAllah <karahmed@...zon.de>,
        Vincent Guittot <vincent.guittot@...aro.org>,
        Ingo Molnar <mingo@...nel.org>,
        Ankur Arora <ankur.a.arora@...cle.com>,
        christopher.s.hall@...el.com, hubert.chrzaniuk@...el.com,
        len.brown@...el.com, thomas.lendacky@....com, rjw@...ysocki.net
Subject: Re: [PATCH RFC] sched/fair: Penalty the cfs task which executes
 mwait/hlt


Preserved most (+- edits) for the people added to Cc.

On Thu, Jan 09, 2020 at 07:53:51PM +0800, Wanpeng Li wrote:
> On Thu, 9 Jan 2020 at 01:15, Paolo Bonzini <pbonzini@...hat.com> wrote:
> > On 08/01/20 16:50, Peter Zijlstra wrote:
> > > On Wed, Jan 08, 2020 at 09:50:01AM +0800, Wanpeng Li wrote:
> > >> From: Wanpeng Li <wanpengli@...cent.com>
> > >>
> > >> To deliver all of the resources of a server to instances in cloud, there are no
> > >> housekeeping cpus reserved. libvirtd, qemu main loop, kthreads, and other agent/tools
> > >> etc which can't be offloaded to other hardware like smart nic, these stuff will
> > >> contend with vCPUs even if MWAIT/HLT instructions executed in the guest.
> >
> > ^^ this is the problem statement:
> >
> > He has VCPU threads which are being pinned 1:1 to physical CPUs.  He
> > needs to have various housekeeping threads preempting those vCPU
> > threads, but he'd rather preempt vCPU threads that are doing HLT/MWAIT
> > than those that are keeping the CPU busy.
> >
> > >> The is no trap and yield the pCPU after we expose mwait/hlt to the guest [1][2],
> > >> the top command on host still observe 100% cpu utilization since qemu process is
> > >> running even though guest who has the power management capability executes mwait.
> > >> Actually we can observe the physical cpu has already enter deeper cstate by
> > >> powertop on host.
> > >>
> > >> For virtualization, there is a HLT activity state in CPU VMCS field which indicates
> > >> the logical processor is inactive because it executed the HLT instruction, but
> > >> SDM 24.4.2 mentioned that execution of the MWAIT instruction may put a logical
> > >> processor into an inactive state, however, this VMCS field never reflects this
> > >> state.
> > >
> > > So far I think I can follow, however it does not explain who consumes
> > > this VMCS state if it is set and how that helps. Also, this:
> >
> > I think what Wanpeng was saying is: "KVM could gather this information
> > using the activity state field in the VMCS.  However, when the guest
> > does MWAIT the processor can go into an inactive state without updating
> > the VMCS."  Hence looking at the APERFMPERF ratio.
> >
> > >> This patch avoids fine granularity intercept and reschedule vCPU if MWAIT/HLT
> > >> instructions executed, because it can worse the message-passing workloads which
> > >> will switch between idle and running frequently in the guest. Lets penalty the
> > >> vCPU which is long idle through tick-based sampling and preemption.
> > >
> > > is just complete gibberish. And I have no idea what problem you're
> > > trying to solve how.
> >
> > This is just explaining why MWAIT and HLT is not being trapped in his
> > setup.  (Because vmexit on HLT or MWAIT is awfully expensive).
> >
> > > Also, I don't think the TSC/MPERF ratio is architected, we can't assume
> > > this is true for everything that has APERFMPERF.
> >
> > Right, you have to look at APERF/MPERF, not TSC/MPERF.

> Peterz, do you have nicer solution for this?

So as you might've seen, we're going to go read the APERF/MPERF thingies
in the tick anyway:

  https://lkml.kernel.org/r/20191002122926.385-1-ggherdovich@suse.cz

(your proposed patch even copied some naming off of that, so I'm
assuming you've actually seen that)

So the very first thing we need to get sorted is that MPERF/TSC ratio
thing. TurboStat does it, but has 'funny' hacks on like:

  b2b34dfe4d9a ("tools/power turbostat: KNL workaround for %Busy and Avg_MHz")

and I imagine that there's going to be more exceptions there. You're
basically going to have to get both Intel and AMD to commit to this.

IFF we can get concensus on MPERF/TSC, then yes, that is a reasonable
way to detect a VCPU being idle I suppose. I've added a bunch of people
who seem to know about this.

Anyone, what will it take to get MPERF/TSC 'working' ?

Powered by blists - more mailing lists

Powered by Openwall GNU/*/Linux Powered by OpenVZ