Message-ID: <86o7o1v1u7.wl-maz@kernel.org>
Date: Thu, 06 Apr 2023 09:42:24 +0100
From: Marc Zyngier <maz@...nel.org>
To: Saravana Kannan <saravanak@...gle.com>
Cc: David Dai <davidai@...gle.com>,
Oliver Upton <oliver.upton@...ux.dev>,
"Rafael J. Wysocki" <rafael@...nel.org>,
Viresh Kumar <viresh.kumar@...aro.org>,
Rob Herring <robh+dt@...nel.org>,
Krzysztof Kozlowski <krzysztof.kozlowski+dt@...aro.org>,
Paolo Bonzini <pbonzini@...hat.com>,
Jonathan Corbet <corbet@....net>,
James Morse <james.morse@....com>,
Suzuki K Poulose <suzuki.poulose@....com>,
Zenghui Yu <yuzenghui@...wei.com>,
Catalin Marinas <catalin.marinas@....com>,
Will Deacon <will@...nel.org>,
Mark Rutland <mark.rutland@....com>,
Lorenzo Pieralisi <lpieralisi@...nel.org>,
Sudeep Holla <sudeep.holla@....com>,
Ingo Molnar <mingo@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
Juri Lelli <juri.lelli@...hat.com>,
Vincent Guittot <vincent.guittot@...aro.org>,
Dietmar Eggemann <dietmar.eggemann@....com>,
Steven Rostedt <rostedt@...dmis.org>,
Ben Segall <bsegall@...gle.com>, Mel Gorman <mgorman@...e.de>,
Daniel Bristot de Oliveira <bristot@...hat.com>,
Valentin Schneider <vschneid@...hat.com>,
kernel-team@...roid.com, linux-pm@...r.kernel.org,
devicetree@...r.kernel.org, linux-kernel@...r.kernel.org,
kvm@...r.kernel.org, linux-doc@...r.kernel.org,
linux-arm-kernel@...ts.infradead.org, kvmarm@...ts.linux.dev
Subject: Re: [RFC PATCH 0/6] Improve VM DVFS and task placement behavior
On Wed, 05 Apr 2023 22:00:59 +0100,
Saravana Kannan <saravanak@...gle.com> wrote:
>
> On Tue, Apr 4, 2023 at 1:49 PM Marc Zyngier <maz@...nel.org> wrote:
> >
> > On Tue, 04 Apr 2023 20:43:40 +0100,
> > Oliver Upton <oliver.upton@...ux.dev> wrote:
> > >
> > > Folks,
> > >
> > > On Thu, Mar 30, 2023 at 03:43:35PM -0700, David Dai wrote:
> > >
> > > <snip>
> > >
> > > > PCMark
> > > > Higher is better
> > > > +-------------------+----------+------------+--------+-------+--------+
> > > > | Test Case (score) | Baseline | Hypercall | %delta | MMIO | %delta |
> > > > +-------------------+----------+------------+--------+-------+--------+
> > > > | Weighted Total | 6136 | 7274 | +19% | 6867 | +12% |
> > > > +-------------------+----------+------------+--------+-------+--------+
> > > > | Web Browsing | 5558 | 6273 | +13% | 6035 | +9% |
> > > > +-------------------+----------+------------+--------+-------+--------+
> > > > | Video Editing | 4921 | 5221 | +6% | 5167 | +5% |
> > > > +-------------------+----------+------------+--------+-------+--------+
> > > > | Writing | 6864 | 8825 | +29% | 8529 | +24% |
> > > > +-------------------+----------+------------+--------+-------+--------+
> > > > | Photo Editing | 7983 | 11593 | +45% | 10812 | +35% |
> > > > +-------------------+----------+------------+--------+-------+--------+
> > > > | Data Manipulation | 5814 | 6081 | +5% | 5327 | -8% |
> > > > +-------------------+----------+------------+--------+-------+--------+
> > > >
> > > > PCMark Performance/mAh
> > > > Higher is better
> > > > +-----------+----------+-----------+--------+------+--------+
> > > > | | Baseline | Hypercall | %delta | MMIO | %delta |
> > > > +-----------+----------+-----------+--------+------+--------+
> > > > | Score/mAh | 79 | 88 | +11% | 83 | +7% |
> > > > +-----------+----------+-----------+--------+------+--------+
> > > >
> > > > Roblox
> > > > Higher is better
> > > > +-----+----------+------------+--------+-------+--------+
> > > > | | Baseline | Hypercall | %delta | MMIO | %delta |
> > > > +-----+----------+------------+--------+-------+--------+
> > > > | FPS | 18.25 | 28.66 | +57% | 24.06 | +32% |
> > > > +-----+----------+------------+--------+-------+--------+
> > > >
> > > > Roblox Frames/mAh
> > > > Higher is better
> > > > +------------+----------+------------+--------+--------+--------+
> > > > | | Baseline | Hypercall | %delta | MMIO | %delta |
> > > > +------------+----------+------------+--------+--------+--------+
> > > > | Frames/mAh | 91.25 | 114.64 | +26% | 103.11 | +13% |
> > > > +------------+----------+------------+--------+--------+--------+
> > >
> > > </snip>
> > >
> > > > Next steps:
> > > > ===========
> > > > We are continuing to look into communication mechanisms other than
> > > > hypercalls that are just as/more efficient and avoid switching into the VMM
> > > > userspace. Any inputs in this regard are greatly appreciated.
>
> Hi Oliver and Marc,
>
> Replying to both of you in this one email.
>
> > >
> > > We're highly unlikely to entertain such an interface in KVM.
> > >
> > > The entire feature is dependent on pinning vCPUs to physical cores, for which
> > > userspace is in the driver's seat. That is a well established and documented
> > > policy which can be seen in the way we handle heterogeneous systems and
> > > vPMU.
> > >
> > > Additionally, this bloats the KVM PV ABI with highly VMM-dependent interfaces
> > > that I would not expect to benefit the typical user of KVM.
> > >
> > > Based on the data above, it would appear that the userspace implementation is
> > > in the same neighborhood as a KVM-based implementation, which only further
> > > weakens the case for moving this into the kernel.
>
> Oliver,
>
> Sorry if the tables/data aren't presented in an intuitive way, but
> MMIO vs hypercall is definitely not in the same neighborhood. The
> hypercall method often gives close to 2x the improvement that the MMIO
> method gives. For example:
>
> - Roblox FPS: MMIO improves it by 32% vs hypercall improves it by 57%.
> - Frames/mAh: MMIO improves it by 13% vs hypercall improves it by 26%.
> - PCMark Data Manipulation: MMIO makes it worse by 8% vs hypercall
> improves it by 5%.
>
> Hypercall does better for the other cases too, just not by as much. For
> example:
> - PCMark Photo Editing: going from MMIO to hypercall gives a 10% improvement.
>
> These are all pretty non-trivial gains, at least in the mobile world. Heck,
> whole teams would spend months chasing a 2% improvement in battery life :)
>
> > >
> > > I certainly can appreciate the motivation for the series, but this feature
> > > should be in userspace as some form of a virtual device.
> >
> > +1 on all of the above.
>
> Marc and Oliver,
>
> We are not tied to hypercalls. We want to do the right thing here, but
> MMIO going all the way to userspace definitely doesn't cut it as is.
> This is where we need some guidance. See more below.
I don't buy this assertion at all. An MMIO in userspace is already
much better than nothing. One of my many objections to the whole series
is that it is built as a massively invasive thing that has too many
fingers in too many pies, with unsustainable assumptions such as a 1:1
mapping between physical CPUs and vCPUs.
I'd rather you build something simple first (pure userspace using
MMIOs), work out where the bottlenecks are, and work with us to add
what is needed to get to something sensible, and only that. I'm not
willing to sacrifice maintainability for maximum performance (the
whole thing reminds me of the in-kernel http server...).
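To be concrete, "pure userspace" here means nothing more exotic than
the usual KVM_EXIT_MMIO loop in the VMM. A minimal sketch, where the
DVFS window address, register layout and policy hook are all invented
for illustration:

#include <linux/kvm.h>
#include <stdint.h>
#include <string.h>
#include <sys/ioctl.h>

#define DVFS_BASE       0x0a000000UL    /* hypothetical MMIO window */
#define DVFS_SIZE       0x1000UL

/* Hypothetical policy hook: map the guest's request onto host
 * cpufreq/uclamp/whatever. The policy lives here, in the VMM. */
static void handle_dvfs_write(uint64_t offset, uint64_t val)
{
}

static void run_vcpu(int vcpu_fd, struct kvm_run *run)
{
        for (;;) {
                ioctl(vcpu_fd, KVM_RUN, 0);

                if (run->exit_reason != KVM_EXIT_MMIO)
                        continue;       /* other exits handled elsewhere */

                uint64_t addr = run->mmio.phys_addr;

                if (addr < DVFS_BASE || addr >= DVFS_BASE + DVFS_SIZE)
                        continue;

                if (run->mmio.is_write) {
                        uint64_t val = 0;

                        memcpy(&val, run->mmio.data, run->mmio.len);
                        handle_dvfs_write(addr - DVFS_BASE, val);
                }
                /* for reads, fill run->mmio.data before re-entering */
        }
}

Measure that, and then we can talk about what (if anything) needs to
move closer to the kernel.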
>
> > The one thing I'd like to understand is why the comment seems to imply
> > that there is a significant difference in overhead between a hypercall
> > and an MMIO. In my experience, both are pretty similar in cost for a
> > given handling location (both in userspace or both in the kernel).
>
> I think the main difference really is that in our hypercall vs MMIO
> comparison the hypercall is handled in the kernel whereas the MMIO goes
> all the way to userspace. I agree with you that the difference probably
> won't be significant if both of them go to the same "depth" in the
> privilege levels.
>
> > MMIO
> > handling is a tiny bit more expensive due to a guaranteed TLB miss
> > followed by a walk of the in-kernel device ranges, but that's all. It
> > should hardly register.
> >
> > And if you really want some super-low latency, low overhead
> > signalling, maybe an exception is the wrong tool for the job. Shared
> > memory communication could be more appropriate.
>
> Yeah, that's one of our next steps. Ideally, we want to use shared
> memory for the host-to-guest information flow: a 32-bit value
> representing the current frequency, which the host updates whenever
> the host CPU frequency changes and the guest reads whenever it needs
> it.
Why should the guest care? Why can't the guest ask for an arbitrary
capacity, and get what it gets? You give no information as to *why*
you are doing what you are doing...
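Mechanically, at least, what you describe is trivial. A minimal
sketch of that 32-bit field, with all names invented and discovery of
the shared page left out:

#include <stdatomic.h>
#include <stdint.h>

/* One agreed-upon page of guest memory; names are illustrative. */
struct dvfs_shmem {
        _Atomic uint32_t cur_freq_khz;  /* written by the host side */
};

/* Host/VMM side: guest RAM is mapped in the VMM, so this is a plain
 * store into that mapping on each host frequency change. */
static void host_update_freq(struct dvfs_shmem *sh, uint32_t khz)
{
        atomic_store_explicit(&sh->cur_freq_khz, khz,
                              memory_order_release);
}

/* Guest side: read whenever cpufreq/scheduler code wants it. No
 * exit, no hypercall. */
static uint32_t guest_read_freq(struct dvfs_shmem *sh)
{
        return atomic_load_explicit(&sh->cur_freq_khz,
                                    memory_order_acquire);
}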
>
> For the guest-to-host information flow, we'll need a kick from guest
> to host, because we need to take action on the host side when threads
> migrate between vCPUs and cause a significant change in vCPU util.
> Again, it can be just shared memory plus some kick. This is what we
> are currently trying to figure out how to do.
That kick would have to go to userspace. There is no way I'm willing
to introduce scheduling primitives inside KVM (the ones we have are
ridiculously bad anyway), and I very much want to avoid extra PV gunk.
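Note that the kick itself doesn't require anything new either: KVM
already lets a VMM turn a guest MMIO write into an eventfd signal via
the KVM_IOEVENTFD ioctl, so the write completes in the kernel and the
VMM is poked asynchronously instead of eating a synchronous userspace
exit. A sketch, with a hypothetical doorbell address:

#include <linux/kvm.h>
#include <stdint.h>
#include <sys/eventfd.h>
#include <sys/ioctl.h>

#define DVFS_DOORBELL   0x0a001000UL    /* hypothetical */

static int register_dvfs_kick(int vm_fd)
{
        int efd = eventfd(0, EFD_NONBLOCK);
        struct kvm_ioeventfd ioev = {
                .addr = DVFS_DOORBELL,
                .len  = 4,
                .fd   = efd,
                /* no DATAMATCH flag: any 4-byte write fires it */
        };

        if (efd < 0 || ioctl(vm_fd, KVM_IOEVENTFD, &ioev) < 0)
                return -1;

        /* poll efd from the VMM event loop; the vCPU util payload
         * itself lives in the shared memory discussed above */
        return efd;
}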
> If there are APIs to do this, can you point us to those please? We'd
> also want the shared memory to be accessible by the VMM (so, shared
> between guest kernel, host kernel and VMM).
By default, *ALL* the memory is shared. Isn't that wonderful?
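Guest "RAM" is nothing but host userspace memory that the VMM hands
to KVM, so any page of it is visible to the guest kernel, the host
kernel and the VMM at the same time. The standard registration, as a
sketch:

#include <linux/kvm.h>
#include <stddef.h>
#include <stdint.h>
#include <sys/ioctl.h>
#include <sys/mman.h>

static void *map_guest_ram(int vm_fd, uint64_t gpa, size_t size)
{
        void *mem = mmap(NULL, size, PROT_READ | PROT_WRITE,
                         MAP_SHARED | MAP_ANONYMOUS, -1, 0);
        struct kvm_userspace_memory_region region = {
                .slot            = 0,
                .guest_phys_addr = gpa,
                .memory_size     = size,
                .userspace_addr  = (uint64_t)(uintptr_t)mem,
        };

        if (mem == MAP_FAILED ||
            ioctl(vm_fd, KVM_SET_USER_MEMORY_REGION, &region) < 0)
                return NULL;

        /* the VMM can now place e.g. the dvfs_shmem page anywhere in
         * this mapping and tell the guest where it is (DT, etc.) */
        return mem;
}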
>
> Are the above next steps sane? Or is that a no-go? The main thing we
> want to cut out is the need to switch to userspace for every single
> interaction because, as is, it leaves a lot on the table.
Well, for a start, you could disclose how often you hit this DVFS
"device", and which state changes are critical and must take effect
immediately vs those that can simply be posted without requiring
immediate effect.
This sort of information would be much more interesting than a bunch
of benchmarks I know nothing about.
Thanks,
M.
--
Without deviation from the norm, progress is not possible.