Message-ID: <7f63de4f-a690-e29b-f3d4-2397a3837ddc@intel.com>
Date: Thu, 12 Jan 2023 09:57:25 +0800
From: "Yang, Weijiang" <weijiang.yang@...el.com>
To: "Christopherson,, Sean" <seanjc@...gle.com>
CC: "like.xu.linux@...il.com" <like.xu.linux@...il.com>,
"kan.liang@...ux.intel.com" <kan.liang@...ux.intel.com>,
"Wang, Wei W" <wei.w.wang@...el.com>,
"pbonzini@...hat.com" <pbonzini@...hat.com>,
"jmattson@...gle.com" <jmattson@...gle.com>,
"kvm@...r.kernel.org" <kvm@...r.kernel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH v2 00/15] Introduce Architectural LBR for vPMU
Hi Sean,

Sorry to bother you, but do you have time to review this series? The feature
has been pending for a long time, and I would like to move it forward.

Thanks!
On 11/25/2022 12:05 PM, Yang, Weijiang wrote:
> Intel's CPU model-specific LBR (legacy LBR) has evolved into Architectural
> LBR (Arch LBR [0]), which replaces legacy LBR on new platforms. The
> native support patches were merged into the 5.9 kernel tree, and this
> patch series enables Arch LBR in the vPMU so that guests can benefit
> from the feature.
>
> The main advantages of Arch LBR are [1]:
> - Faster context switching due to XSAVES support and faster reset of
>   LBR MSRs via the new DEPTH MSR.
> - Faster LBR reads for non-PEBS events due to XSAVES support, which
>   lowers the overhead of the NMI handler.
> - The Linux kernel can support the LBR features without knowing the
>   model number of the current CPU.
>
> From the end user's point of view, the usage of Arch LBR is the same as
> that of the legacy LBR support already merged in the mainline.
>
> Note, in this series there is one restriction for guest Arch LBR: the
> guest can only set its LBR record depth to the same value as the host's.
> This is due to the special behavior of MSR_ARCH_LBR_DEPTH:
> 1) A write to the MSR resets all Arch LBR recording MSRs to 0.
> 2) XRSTORS resets all record MSRs to 0 if the saved depth does not match
>    MSR_ARCH_LBR_DEPTH.
> Enforcing this restriction keeps the KVM Arch LBR vPMU flow simple
> and straightforward.
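>
> To illustrate, below is a minimal, hypothetical sketch (not code from
> this series) of how a wrmsr handler could enforce the restriction; the
> helper host_arch_lbr_depth() is a placeholder name, not an existing
> kernel/KVM symbol, and the surrounding kernel context (MSR definitions,
> struct kvm_vcpu) is assumed:
>
>     /*
>      * Hypothetical sketch: only allow the guest to write the host's
>      * Arch LBR depth, since any other value would cause XRSTORS (or
>      * the MSR write itself) to clear the LBR record MSRs.
>      */
>     static bool arch_lbr_depth_is_valid(u64 depth)
>     {
>             return depth == host_arch_lbr_depth();  /* placeholder */
>     }
>
>     static int handle_arch_lbr_depth_write(struct kvm_vcpu *vcpu, u64 data)
>     {
>             if (!arch_lbr_depth_is_valid(data))
>                     return 1;       /* caller injects #GP */
>
>             /* The hardware write also zeroes LBR_{FROM,TO,INFO}_x. */
>             wrmsrl(MSR_ARCH_LBR_DEPTH, data);
>             return 0;
>     }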
>
> Paolo refactored the old series, and the resulting patches became the
> base of this new series; therefore he is the author of some of the patches.
>
> [0] https://www.intel.com/content/www/us/en/developer/articles/technical/intel-sdm.html
> [1] https://lore.kernel.org/lkml/1593780569-62993-1-git-send-email-kan.liang@linux.intel.com/
>
> v1:
> https://lore.kernel.org/all/20220831223438.413090-1-weijiang.yang@intel.com/
>
> Changes in v2:
> 1. Removed Paolo's SOBs from some patches. [Sean]
> 2. Modified some patches due to KVM changes, e.g., the SMM/vPMU refactor.
> 3. Rebased onto the queue branch of https://git.kernel.org/pub/scm/virt/kvm/kvm.git.
>
>
> Like Xu (3):
> perf/x86/lbr: Simplify the exposure check for the LBR_INFO registers
> KVM: vmx/pmu: Emulate MSR_ARCH_LBR_DEPTH for guest Arch LBR
> KVM: x86: Add XSAVE Support for Architectural LBR
>
> Paolo Bonzini (4):
> KVM: PMU: disable LBR handling if architectural LBR is available
> KVM: vmx/pmu: Emulate MSR_ARCH_LBR_CTL for guest Arch LBR
> KVM: VMX: Support passthrough of architectural LBRs
> KVM: x86: Refine the matching and clearing logic for supported_xss
>
> Sean Christopherson (1):
> KVM: x86: Report XSS as an MSR to be saved if there are supported
> features
>
> Yang Weijiang (7):
> KVM: x86: Refresh CPUID on writes to MSR_IA32_XSS
> KVM: x86: Add Arch LBR MSRs to msrs_to_save_all list
> KVM: x86/vmx: Check Arch LBR config when return perf capabilities
> KVM: x86/vmx: Disable Arch LBREn bit in #DB and warm reset
> KVM: x86/vmx: Save/Restore guest Arch LBR Ctrl msr at SMM entry/exit
> KVM: x86: Add Arch LBR data MSR access interface
> KVM: x86/cpuid: Advertise Arch LBR feature in CPUID
>
> arch/x86/events/intel/lbr.c | 6 +-
> arch/x86/include/asm/kvm_host.h | 3 +
> arch/x86/include/asm/msr-index.h | 1 +
> arch/x86/include/asm/vmx.h | 4 +
> arch/x86/kvm/cpuid.c | 52 +++++++++-
> arch/x86/kvm/smm.c | 1 +
> arch/x86/kvm/smm.h | 3 +-
> arch/x86/kvm/vmx/capabilities.h | 5 +
> arch/x86/kvm/vmx/nested.c | 8 ++
> arch/x86/kvm/vmx/pmu_intel.c | 161 +++++++++++++++++++++++++++----
> arch/x86/kvm/vmx/vmx.c | 74 +++++++++++++-
> arch/x86/kvm/vmx/vmx.h | 6 +-
> arch/x86/kvm/x86.c | 27 +++++-
> 13 files changed, 316 insertions(+), 35 deletions(-)
>
>
> base-commit: da5f28e10aa7df1a925dbc10656cc89d9c061358