Message-ID: <CALMp9eTNhhHKoGrsiQT24UCUzY+TGKOkGPMDh7MZ5+YzSHjkhg@mail.gmail.com>
Date: Wed, 16 Feb 2022 09:53:53 -0800
From: Jim Mattson <jmattson@...gle.com>
To: Like Xu <like.xu.linux@...il.com>
Cc: Sean Christopherson <seanjc@...gle.com>,
David Dunn <daviddunn@...gle.com>,
Paolo Bonzini <pbonzini@...hat.com>,
Vitaly Kuznetsov <vkuznets@...hat.com>,
Wanpeng Li <wanpengli@...cent.com>,
Joerg Roedel <joro@...tes.org>, kvm@...r.kernel.org,
linux-kernel@...r.kernel.org,
Stephane Eranian <eranian@...gle.com>,
Peter Zijlstra <peterz@...radead.org>
Subject: Re: KVM: x86: Reconsider the current approach of vPMU
On Tue, Feb 15, 2022 at 7:33 PM Like Xu <like.xu.linux@...il.com> wrote:
> AFAIK, most cloud providers don't want to lose this flexibility, as it
> leaves hundreds of "profile KVM guests" use cases with nowhere to land.
I can only speak for Google Cloud. We'd like to be able to profile
host code with system-wide pinned counters on a periodic basis, but
without breaking PMU virtualization in the guest (and that includes
not only the guest counters that run in guest mode, but also the
guest counters that have to run in both guest mode and host mode,
such as unhalted reference cycles).
The one thing that is clear to me from these discussions is that the
perf subsystem behavior needs to be more configurable than it is
today.
One possibility would be to separate priority from usage. Instead of
having implicit priorities based on whether the event is system-wide
pinned, system-wide multiplexed, thread pinned, or thread multiplexed,
we could offer numeric priorities independent of the four usage modes.
That should offer enough flexibility to satisfy everyone.
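As a strawman (purely hypothetical; no such field exists in today's
uAPI), the attribute could grow something like:

#include <linux/perf_event.h>
#include <linux/types.h>

/* Hypothetical extension, not in the current perf_event_attr: an
 * explicit numeric priority replaces the implicit ordering
 * (cpu-pinned > task-pinned > cpu-flexible > task-flexible).
 * A higher priority wins the PMU; equal priorities multiplex.
 */
struct hypothetical_perf_event_attr {
	struct perf_event_attr	base;
	__u16			sched_priority;
};

With something along those lines, a periodic host profiler could, for
example, request a priority below whatever KVM assigns to the guest's
counters, so it gets the PMU whenever the guest isn't using it without
silently evicting guest events.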