Message-ID: <20181228191006.GI25620@tassilo.jf.intel.com>
Date: Fri, 28 Dec 2018 11:10:06 -0800
From: Andi Kleen <ak@...ux.intel.com>
To: Wei Wang <wei.w.wang@...el.com>
Cc: linux-kernel@...r.kernel.org, kvm@...r.kernel.org,
pbonzini@...hat.com, peterz@...radead.org, kan.liang@...el.com,
mingo@...hat.com, rkrcmar@...hat.com, like.xu@...el.com,
jannh@...gle.com, arei.gonglei@...wei.com
Subject: Re: [PATCH v4 10/10] KVM/x86/lbr: lazy save the guest lbr stack
On Fri, Dec 28, 2018 at 11:47:06AM +0800, Wei Wang wrote:
> On 12/28/2018 04:51 AM, Andi Kleen wrote:
> > Thanks. This looks a lot better than the earlier versions.
> >
> > Some more comments.
> >
> > On Wed, Dec 26, 2018 at 05:25:38PM +0800, Wei Wang wrote:
> > > When the vCPU is scheduled in:
> > > - if the lbr feature was used in the last vCPU time slice, set the lbr
> > > stack to be interceptible, so that the host can capture whether the
> > > lbr feature will be used in this time slice;
> > > - if the lbr feature wasn't used in the last vCPU time slice, disable
> > > the vCPU support of the guest lbr switching.
> > time slice is the time from exit to exit?
>
> It's the vCPU thread time slice (e.g. 100ms).
I don't think the time slices are that long, but ok.
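For reference, the policy described in the quoted commit message boils down
to something like the sketch below on the sched-in path. This is only an
illustration; the lbr_used field and both helpers are hypothetical names,
not identifiers from the patch:

static void guest_lbr_sched_in(struct kvm_vcpu *vcpu)
{
        if (vcpu->arch.lbr_used) {
                /*
                 * LBR was used in the last time slice: intercept the
                 * LBR MSRs again, so that the next guest access tells
                 * us whether the feature is still in use.
                 */
                set_lbr_msrs_intercepted(vcpu, true);
                vcpu->arch.lbr_used = false;
        } else {
                /*
                 * LBR was idle for a whole time slice: stop
                 * saving/restoring the guest LBR stack on vCPU switches.
                 */
                disable_guest_lbr_switching(vcpu);
        }
}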
>
> >
> > This might be rather short in some cases if the workload does a lot of exits
> > (which I would expect PMU workloads to do). It would be better to use some
> > explicit time check, or at least N exits.
>
> Did you mean further increasing the lazy period to multiple host thread
> scheduling time slices?
> What would be a good value for "N"?
I'm not sure -- I think the goal would be to find a value that optimizes
performance (or rather minimizes overhead). But perhaps, if it is as you say
the scheduler time slice, it might be good enough as it is.
I guess it could be tuned later based on more experience.
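If an explicit exit count were used instead, the check could be as simple
as the sketch below (the threshold value and all field/helper names are
made up for illustration):

#define LBR_IDLE_EXIT_THRESHOLD 10000   /* "N", to be tuned */

/* Hypothetical hook run on every VM exit. */
static void guest_lbr_on_vmexit(struct kvm_vcpu *vcpu)
{
        if (vcpu->arch.lbr_used) {
                /* Still in use: restart the idle counting. */
                vcpu->arch.lbr_idle_exits = 0;
                return;
        }
        if (++vcpu->arch.lbr_idle_exits == LBR_IDLE_EXIT_THRESHOLD)
                /* Unused for N consecutive exits: stop switching the stack. */
                disable_guest_lbr_switching(vcpu);
}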
> > or partially cleared. This would be user visible.
> >
> > In theory one could try to detect whether the guest is inside a PMI and
> > only save/restore then, but that would likely be complicated. I would
> > save/restore in all cases.
>
> Yes, it is easier to save in all cases. But just curious: in the
> non-callstack mode, it's just point sampling of functions (speculative
> to some degree).
> Would rarely losing a few records be important in that case?
In principle no for statistical samples, but I know some tools complain
about bogus samples (e.g. autofdo will). Also, with perf report
--branch-history it will definitely be visible. I think it's easier to
always save now than to handle the user complaints about this later.
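For instance, inside the guest the dropped records would surface with
something like:

        perf record -b -e cycles -- ./workload
        perf report --branch-history

And "save in all cases" just means unconditionally switching the LBR stack
on every vCPU sched-out, roughly as sketched below. The MSR constants are
the architectural ones from msr-index.h; the vcpu->arch.lbr_* fields are
illustrative, not the patch's:

/*
 * Save the guest LBR stack on vCPU sched-out; the sched-in side
 * mirrors this with wrmsrl().
 */
static void guest_lbr_save(struct kvm_vcpu *vcpu)
{
        int i;

        rdmsrl(MSR_LBR_TOS, vcpu->arch.lbr_tos);
        for (i = 0; i < vcpu->arch.lbr_nr; i++) {
                rdmsrl(MSR_LBR_NHM_FROM + i, vcpu->arch.lbr_from[i]);
                rdmsrl(MSR_LBR_NHM_TO + i, vcpu->arch.lbr_to[i]);
        }
}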
-Andi