Message-ID: <CAL715W+RgX2JfeRsenNoU4TuTWwLS5H=P+vrZK_GQVQmMkyraw@mail.gmail.com>
Date: Wed, 4 Oct 2023 12:51:38 -0700
From: Mingwei Zhang <mizhang@...gle.com>
To: Peter Zijlstra <peterz@...radead.org>
Cc: Sean Christopherson <seanjc@...gle.com>,
Ingo Molnar <mingo@...nel.org>,
Dapeng Mi <dapeng1.mi@...ux.intel.com>,
Paolo Bonzini <pbonzini@...hat.com>,
Arnaldo Carvalho de Melo <acme@...nel.org>,
Kan Liang <kan.liang@...ux.intel.com>,
Like Xu <likexu@...cent.com>,
Mark Rutland <mark.rutland@....com>,
Alexander Shishkin <alexander.shishkin@...ux.intel.com>,
Jiri Olsa <jolsa@...nel.org>,
Namhyung Kim <namhyung@...nel.org>,
Ian Rogers <irogers@...gle.com>,
Adrian Hunter <adrian.hunter@...el.com>, kvm@...r.kernel.org,
linux-perf-users@...r.kernel.org, linux-kernel@...r.kernel.org,
Zhenyu Wang <zhenyuw@...ux.intel.com>,
Zhang Xiong <xiong.y.zhang@...el.com>,
Lv Zhiyuan <zhiyuan.lv@...el.com>,
Yang Weijiang <weijiang.yang@...el.com>,
Dapeng Mi <dapeng1.mi@...el.com>,
Jim Mattson <jmattson@...gle.com>,
David Dunn <daviddunn@...gle.com>,
Thomas Gleixner <tglx@...utronix.de>
Subject: Re: [Patch v4 07/13] perf/x86: Add constraint for guest perf metrics event
On Wed, Oct 4, 2023 at 4:22 AM Peter Zijlstra <peterz@...radead.org> wrote:
>
> On Tue, Oct 03, 2023 at 08:23:26AM -0700, Sean Christopherson wrote:
> > On Tue, Oct 03, 2023, Peter Zijlstra wrote:
> > > On Mon, Oct 02, 2023 at 05:56:28PM -0700, Sean Christopherson wrote:
>
> > > > Well drat, that there would have saved a wee bit of frustration. Better late
> > > > than never though, that's for sure.
> > > >
> > > > Just to double confirm: keeping guest PMU state loaded until the vCPU is scheduled
> > > > out or KVM exits to userspace, would mean that host perf events won't be active
> > > > for potentially large swaths of non-KVM code. Any function calls or event/exception
> > > > handlers that occur within the context of ioctl(KVM_RUN) would run with host
> > > > perf events disabled.
> > >
> > > Hurmph, that sounds sub-optimal, earlier you said <1500 cycles, this all
> > > sounds like a ton more.
> > >
> > > /me frobs around the kvm code some...
> > >
> > > Are we talking about exit_fastpath loop in vcpu_enter_guest() ? That
> > > seems to run with IRQs disabled, so at most you can trigger a #PF or
> > > something, which will then trip an exception fixup because you can't run
> > > #PF with IRQs disabled etc..
> > >
> > > That seems fine. That is, a theoretical kvm_x86_handle_enter_irqoff()
> > > coupled with the existing kvm_x86_handle_exit_irqoff() seems like
> > > a reasonable solution from where I'm sitting. That also more or less
> > > matches the FPU state save/restore AFAICT.
> > >
> > > Or are you talking about the whole of vcpu_run() ? That seems like a
> > > massive amount of code, and doesn't look like anything I'd call a
> > > fast-path. Also, much of that loop has preemption enabled...
> >
> > The whole of vcpu_run(). And yes, much of it runs with preemption enabled. KVM
> > uses preempt notifiers to context switch state if the vCPU task is scheduled
> > out/in, we'd use those hooks to swap PMU state.
> >
> > Jumping back to the exception analogy, not all exits are equal. For "simple" exits
> > that KVM can handle internally, the roundtrip is <1500. The exit_fastpath loop is
> > roughly half that.
> >
> > But for exits that are more complex, e.g. if the guest hits the equivalent of a
> > page fault, the cost of handling the page fault can vary significantly. It might
> > be <1500, but it might also be 10x that if handling the page fault requires faulting
> > in a new page in the host.
> >
> > We don't want to get too aggressive with moving stuff into the exit_fastpath loop,
> > because doing too much work with IRQs disabled can cause latency problems for the
> > host. This isn't much of a concern for slice-of-hardware setups, but would be
> > quite problematic for other use cases.
> >
> > And except for obviously slow paths (from the guest's perspective), extra latency
> > on any exit can be problematic. E.g. even if we got to the point where KVM handles
> > 99% of exits in the fastpath (may or may not be feasible), a not-fastpath exit at an
> > inopportune time could throw off the guest's profiling results, introduce unacceptable
> > jitter, etc.
>
> I'm confused... the PMU must not be running after vm-exit. It must not
> be able to profile the host. So what jitter are you talking about?
>
> Even if we persist the MSR contents, the PMU itself must be disabled on
> vm-exit and enabled on vm-enter. If not by hardware then by software
> poking at the global ctrl msr.
>
> I also don't buy the latency argument, we already do full and complete
> PMU rewrites with IRQs disabled in the context switch path. And as
> mentioned elsewhere, the whole AMX thing has an 8k copy stuck in the FPU
> save/restore.
>
> I would much prefer we keep the PMU swizzle inside the IRQ disabled
> region of vcpu_enter_guest(). That's already a ton better than you have
> today.
Peter, I think the jitter Sean was talking about is a potential
issue in the pass-through implementation. If KVM follows the perf
subsystem requirement, then after VMEXIT any perf_event with
exclude_guest=1 (and higher priority?) should start counting. Because
the guest VM exclusively owns the PMU with all of its counters at that
point, a gigantic MSR save/restore is needed, i.e. a whole bunch of
wrmsrs. That would be a performance disaster, since VMEXITs can
happen at a very high frequency.
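
To make the cost concrete, here is a rough sketch of what that per-exit
save would involve (the struct layout and the save_guest_pmu() helper are
made up for illustration; the real MSR list depends on the PMU version and
the number of counters):

#include <linux/types.h>
#include <asm/msr.h>

/*
 * Illustrative only: state that would have to be saved on every VMEXIT
 * (and restored on every VMENTER) if exclude_guest=1 host events are to
 * run between exits in the pass-through model.
 */
struct guest_pmu_state {
	u64 global_ctrl;
	u64 fixed_ctr_ctrl;
	u64 eventsel[8];	/* per-GP-counter config */
	u64 counter[8];		/* per-GP-counter value  */
	u64 fixed_counter[4];
};

static void save_guest_pmu(struct guest_pmu_state *s, int nr_gp, int nr_fixed)
{
	int i;

	rdmsrl(MSR_CORE_PERF_GLOBAL_CTRL, s->global_ctrl);
	wrmsrl(MSR_CORE_PERF_GLOBAL_CTRL, 0);	/* stop all counters first */
	rdmsrl(MSR_CORE_PERF_FIXED_CTR_CTRL, s->fixed_ctr_ctrl);

	for (i = 0; i < nr_gp; i++) {
		rdmsrl(MSR_ARCH_PERFMON_EVENTSEL0 + i, s->eventsel[i]);
		rdmsrl(MSR_ARCH_PERFMON_PERFCTR0 + i, s->counter[i]);
	}
	for (i = 0; i < nr_fixed; i++)
		rdmsrl(MSR_CORE_PERF_FIXED_CTR0 + i, s->fixed_counter[i]);

	/*
	 * ...plus the mirror-image wrmsrl() sequence to load the host
	 * state, i.e. dozens of MSR accesses on *every* exit.
	 */
}
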
In comparison, if we are talking about the existing non-pass-through
implementation, then the PMU context switch immediately becomes
simple: only a global ctrl tweak is needed at the VM boundary (to stop
exclude_host events and start exclude_guest events in one shot), since
the guest VM and the host perf subsystem share the hardware PMU counters.
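
In other words, the boundary switch collapses to something like the
minimal sketch below (IIRC real KVM/VMX switches this single MSR via the
VM-entry/VM-exit MSR load lists rather than an explicit wrmsrl, but the
amount of state involved is the same):

#include <linux/types.h>
#include <asm/msr.h>

/*
 * One MSR write flips the enable bits: guest counters off and host
 * exclude_guest=1 counters on at VMEXIT, or the reverse at VMENTER.
 */
static void switch_pmu_at_vm_boundary(u64 host_global_ctrl,
				      u64 guest_global_ctrl,
				      bool entering_guest)
{
	wrmsrl(MSR_CORE_PERF_GLOBAL_CTRL,
	       entering_guest ? guest_global_ctrl : host_global_ctrl);
}
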
Peter, the latency argument for the pass-through implementation is
something that we hope you could buy. It should be relatively easy
to prove; I can provide some data if you need it.
To cope with that, KVM might need to defer that PMU MSR save/restore
to a later point in the pass-through implementation. But that would
conflict with supporting perf_events with exclude_guest=1.
So, I guess that's why Sean mentioned this: "If y'all are willing to
let KVM redefined exclude_guest to be KVM's outer run loop, then I'm
all for exploring that option."
Note that the situation is similar to AMX: when the guest VMEXITs to the
host, the FPU should be switched to the host FPU as well, but because
the AMX state is too big and thus too slow to swap, KVM defers that to a
very late point.
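
A deferred scheme modeled on the FPU handling (kvm_load_guest_fpu() /
kvm_put_guest_fpu() around the outer run loop rather than per-exit) might
look roughly like the sketch below; kvm_pmu_load_guest() and
kvm_pmu_put_guest() are hypothetical names, and the loop is just a
simplified stand-in for the existing vcpu_run() structure:

#include <linux/kvm_host.h>

static void kvm_pmu_load_guest(struct kvm_vcpu *vcpu);	/* hypothetical */
static void kvm_pmu_put_guest(struct kvm_vcpu *vcpu);	/* hypothetical */
static int vcpu_enter_guest(struct kvm_vcpu *vcpu);	/* inner exit loop */

/*
 * Hypothetical: do the expensive PMU MSR swap once per ioctl(KVM_RUN)
 * section, the way AMX/FPU state is only swapped at the outer boundary
 * (and via the preempt notifiers), not on every VMEXIT.
 */
static int vcpu_run_with_passthrough_pmu(struct kvm_vcpu *vcpu)
{
	int r;

	kvm_pmu_load_guest(vcpu);	/* big MSR restore, once */

	for (;;) {
		r = vcpu_enter_guest(vcpu);
		if (r <= 0)
			break;
		/*
		 * Host exclude_guest=1 events cannot count anywhere in
		 * here, because the guest still owns the counters. That
		 * is exactly the conflict with the current exclude_guest
		 * semantics mentioned above.
		 */
	}

	kvm_pmu_put_guest(vcpu);	/* big MSR save, once */
	return r;
}
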
Hope this explains a little bit and sorry if this might be an
injection of noise.
Thanks.
-Mingwei
>