Message-ID: <286AC319A985734F985F78AFA26841F73978F75E@shsmsx102.ccr.corp.intel.com>
Date: Fri, 7 Sep 2018 15:20:21 +0000
From: "Wang, Wei W" <wei.w.wang@...el.com>
To: Andi Kleen <ak@...ux.intel.com>
CC: "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"kvm@...r.kernel.org" <kvm@...r.kernel.org>,
"pbonzini@...hat.com" <pbonzini@...hat.com>,
"Liang, Kan" <kan.liang@...el.com>,
"peterz@...radead.org" <peterz@...radead.org>,
"mingo@...hat.com" <mingo@...hat.com>,
"rkrcmar@...hat.com" <rkrcmar@...hat.com>,
"Xu, Like" <like.xu@...el.com>
Subject: RE: [PATCH v2 6/8] perf/x86/intel/lbr: guest requesting KVM for lbr
stack save/restore
On Friday, September 7, 2018 10:11 PM, Andi Kleen wrote:
> > This could achieve the above #1, but how would it solve #2 above? That
> > is, after the guest uses the lbr feature for a while, the lbr stack
> > has been passed through, then the guest doesn't use lbr any more, but
> > the vCPU will still save/restore on switching?
>
> If nothing accesses the MSR LBRs after a context switch in the guest nothing
> gets saved/restored due to:
>
> > > Also when the LBRs haven't been set to direct access the state
> > > doesn't need to be saved.
How would you implement the saving/restoring of the LBR stack on the host?
Here we create a perf event on the host (please see guest_lbr_event_create in patch 7), which essentially satisfies all the conditions (e.g. it increases cpuc->lbr_users) required to have the LBR stack saved/restored on vCPU switches.
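For reference, a minimal sketch of what I mean by that host-side event (guest_lbr_event_create and cpuc->lbr_users are the names from patch 7; the _sketch helpers, the global variable and the exact attr flags below are simplifying assumptions of mine, not the patch verbatim):

#include <linux/perf_event.h>
#include <linux/err.h>

/*
 * Sketch only: a host perf event whose sole purpose is to bump
 * cpuc->lbr_users, so that the LBR stack gets saved/restored across
 * the vCPU thread's context switches.
 */
static struct perf_event *guest_lbr_event;	/* per-vCPU in the real code */

static int guest_lbr_event_create_sketch(void)
{
	struct perf_event_attr attr = {
		.type			= PERF_TYPE_RAW,
		.size			= sizeof(attr),
		.pinned			= true,	/* keep it scheduled in */
		.exclude_host		= true,	/* count nothing on the host side */
		.sample_type		= PERF_SAMPLE_BRANCH_STACK,
		.branch_sample_type	= PERF_SAMPLE_BRANCH_CALL_STACK |
					  PERF_SAMPLE_BRANCH_USER |
					  PERF_SAMPLE_BRANCH_KERNEL,
	};
	struct perf_event *event;

	/* bind to the vCPU thread so save/restore follows its scheduling */
	event = perf_event_create_kernel_counter(&attr, -1, current,
						 NULL, NULL);
	if (IS_ERR(event))
		return PTR_ERR(event);

	guest_lbr_event = event;
	return 0;
}

The point is only that some such event has to exist for the context-switch save/restore to happen at all.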
If we want to stop the host-side LBR stack save/restore for the vCPU, we accordingly need to call guest_lbr_event_release (in patch 7) to destroy that perf event (the host does not automatically stop saving the LBR stack for the vCPU while that perf event is still there).
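The release side is symmetric; again just a sketch, using the same hypothetical names as above:

static void guest_lbr_event_release_sketch(void)
{
	if (guest_lbr_event) {
		/*
		 * Drops cpuc->lbr_users back, so the LBR stack is no
		 * longer saved/restored for this vCPU thread.
		 */
		perf_event_release_kernel(guest_lbr_event);
		guest_lbr_event = NULL;
	}
}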
When would you call that release function? (We all agree that the LBR stack doesn't need to be saved when the guest is not using it, but we have to destroy that perf event to actually achieve "doesn't need to be saved".)
Best,
Wei