lists.openwall.net   lists  /  announce  owl-users  owl-dev  john-users  john-dev  passwdqc-users  yescrypt  popa3d-users  /  oss-security  kernel-hardening  musl  sabotage  tlsify  passwords  /  crypt-dev  xvendor  /  Bugtraq  Full-Disclosure  linux-kernel  linux-netdev  linux-ext4  linux-hardening  linux-cve-announce  PHC 
Open Source and information security mailing list archives
 
Hash Suite: Windows password security audit tool. GUI, reports in PDF.
[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-ID: <Z0-3bc1reu1slCtL@google.com>
Date: Tue, 3 Dec 2024 17:59:09 -0800
From: Sean Christopherson <seanjc@...gle.com>
To: Jim Mattson <jmattson@...gle.com>
Cc: Mingwei Zhang <mizhang@...gle.com>, Paolo Bonzini <pbonzini@...hat.com>, 
	Huang Rui <ray.huang@....com>, "Gautham R. Shenoy" <gautham.shenoy@....com>, 
	Mario Limonciello <mario.limonciello@....com>, "Rafael J. Wysocki" <rafael@...nel.org>, 
	Viresh Kumar <viresh.kumar@...aro.org>, 
	Srinivas Pandruvada <srinivas.pandruvada@...ux.intel.com>, Len Brown <lenb@...nel.org>, 
	"H. Peter Anvin" <hpa@...or.com>, Perry Yuan <perry.yuan@....com>, kvm@...r.kernel.org, 
	linux-kernel@...r.kernel.org, linux-pm@...r.kernel.org
Subject: Re: [RFC PATCH 00/22] KVM: x86: Virtualize IA32_APERF and IA32_MPERF MSRs

On Tue, Dec 03, 2024, Jim Mattson wrote:
> On Tue, Dec 3, 2024 at 3:19 PM Sean Christopherson <seanjc@...gle.com> wrote:
> >
> > On Thu, Nov 21, 2024, Mingwei Zhang wrote:
> > > Linux guests read IA32_APERF and IA32_MPERF on every scheduler tick
> > > (250 Hz by default) to measure their effective CPU frequency. To avoid
> > > the overhead of intercepting these frequent MSR reads, allow the guest
> > > to read them directly by loading guest values into the hardware MSRs.
> > >
> > > These MSRs are continuously running counters whose values must be
> > > carefully tracked during all vCPU state transitions:
> > > - Guest IA32_APERF advances only during guest execution
> >
> > That's not what this series does though.  Guest APERF advances while the vCPU is
> > loaded by KVM_RUN, which is *very* different than letting APERF run freely only
> > while the vCPU is actively executing in the guest.
> >
> > E.g. a vCPU that is memory oversubscribed via zswap will account a significant
> > amount of CPU time in APERF when faulting in swapped memory, whereas traditional
> > file-backed swap will not due to the task being scheduled out while waiting on I/O.
> 
> Are you saying that APERF should stop completely outside of VMX
> non-root operation / guest mode?
> While that is possible, the overhead would be significantly
> higher...probably high enough to make it impractical.

No, I'm simply pointing out that the cover letter is misleading/inaccurate.

> > In general, the "why" of this series is missing.  What are the use cases you are
> > targeting?  What are the exact semantics you want to define?  *Why* did are you
> > proposed those exact semantics?
> 
> I get the impression that the questions above are largely rhetorical, and

Nope, not rhetorical, I genuinely want to know.  I can't tell if ya'll thought
about the side effects of things like swap and emulated I/O, and if you did, what
made you come to the conclusion that the "best" boundary is on sched_out() and
return to userspace.

> that you would not be happy with the answers anyway, but if you really are
> inviting a version 2, I will gladly expound upon the why.

No need for a new version at this time, just give me the details.

> > E.g. emulated I/O that is handled in KVM will be accounted to APERF, but I/O that
> > requires userspace exits will not.  It's not necessarily wrong for heavy userspace
> > I/O to cause observed frequency to drop, but it's not obviously correct either.
> >
> > The use cases matter a lot for APERF/MPERF, because trying to reason about what's
> > desirable for an oversubscribed setup requires a lot more work than defining
> > semantics for setups where all vCPUs are hard pinned 1:1 and memory is more or
> > less just partitioned.  Not to mention the complexity for trying to support all
> > potential use cases is likely quite a bit higher.
> >
> > And if the use case is specifically for slice-of-hardware, hard pinned/partitioned
> > VMs, does it matter if the host's view of APERF/MPERF is not accurately captured
> > at all times?  Outside of maybe a few CPUs running bookkeeping tasks, the only
> > workloads running on CPUs should be vCPUs.  It's not clear to me that observing
> > the guest utilization is outright wrong in that case.
> 
> My understanding is that Google Cloud customers have been asking for this
> feature for all manner of VM families for years, and most of those VM
> families are not slice-of-hardware, since we just launched our first such
> offering a few months ago.

But do you actually want to expose APERF/MPERF to those VMs?  With my upstream
hat on, what someone's customers are asking for isn't relevant.  What's relevant
is what that someone wants to deliver/enable.

> > One idea for supporting APERF/MPERF in KVM would be to add a kernel param to
> > disable/hide APERF/MPERF from the host, and then let KVM virtualize/passthrough
> > APERF/MPERF if and only if the feature is supported in hardware, but hidden from
> > the kernel.  I.e. let the system admin gift APERF/MPERF to KVM.
> 
> Part of our goal has been to enable guest APERF/MPERF without impacting the
> use of host APERF/MPERF, since one of the first things our support teams look
> at in response to a performance complaint is the effective frequencies of the
> CPUs as reported on the host.

But is looking at the host's view even useful if (a) the only thing running on
those CPUs is a single vCPU, and (b) host userspace only sees the effective
frequencies when _host_ code is running?  Getting the effective frequency for
when the userspace VMM is processing emulated I/O probably isn't going to be all
that helpful.

And gifting APERF/MPERF to VMs doesn't have to mean the host can't read the MSRs,
e.g. via turbostat.  It just means the kernel won't use APERF/MPERF for scheduling
decisions or any other behaviors that rely on an accurate host view.

> I can explain all of this in excruciating detail, but I'm not really
> motivated by your initial response, which honestly seems a bit hostile.

Probably because this series made me a bit grumpy :-)  As presented, this feels
way, way too much like KVM's existing PMU "virtualization".  Mostly works if you
stare at it just so, but devoid of details on why X was done instead of Y, and
seemingly ignores multiple edge cases.

I'm not saying you and Mingwei haven't thought about edge cases and design
tradeoffs, but nothing in the cover letter, changelogs, comments (none), or
testcases (also none) communicates those thoughts to others.

> At least you looked at the code, which is a far warmer reception than I
> usually get.

Powered by blists - more mailing lists

Powered by Openwall GNU/*/Linux Powered by OpenVZ