Message-ID: <YYVODdVEc/deNP8p@google.com>
Date: Fri, 5 Nov 2021 15:30:21 +0000
From: Sean Christopherson <seanjc@...gle.com>
To: Like Xu <like.xu.linux@...il.com>
Cc: Paolo Bonzini <pbonzini@...hat.com>,
Vitaly Kuznetsov <vkuznets@...hat.com>,
Wanpeng Li <wanpengli@...cent.com>,
Jim Mattson <jmattson@...gle.com>,
Joerg Roedel <joro@...tes.org>,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
kvm@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 1/3] KVM: x86: Copy kvm_pmu_ops by value to eliminate
layer of indirection
On Wed, Nov 03, 2021, Like Xu wrote:
> Replace the kvm_pmu_ops pointer in common x86 with an instance of the
> struct to save one pointer dereference when invoking functions. Copy the
> struct by value to set the ops during kvm_init().
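> 
> For example (see the is_valid_msr() hunk below), a call goes from
> 
> 	kvm_x86_ops.pmu_ops->is_valid_msr(vcpu, msr);
> 
> to
> 
> 	kvm_pmu_ops.is_valid_msr(vcpu, msr);
> 
> i.e. one fewer pointer load per invocation.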
>
> Use kvm_x86_ops.hardware_enable to track whether or not the ops have
> been initialized, i.e. whether a vendor KVM module has been loaded.
>
> Signed-off-by: Like Xu <likexu@...cent.com>
> ---
> arch/x86/kvm/pmu.c | 41 +++++++++++++++++++++------------------
> arch/x86/kvm/pmu.h | 4 +++-
> arch/x86/kvm/vmx/nested.c | 2 +-
> arch/x86/kvm/x86.c | 3 +++
> 4 files changed, 29 insertions(+), 21 deletions(-)
>
> diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
> index 0772bad9165c..0db1887137d9 100644
> --- a/arch/x86/kvm/pmu.c
> +++ b/arch/x86/kvm/pmu.c
> @@ -47,6 +47,9 @@
> * * AMD: [0 .. AMD64_NUM_COUNTERS-1] <=> gp counters
> */
>
> +struct kvm_pmu_ops kvm_pmu_ops __read_mostly;
> +EXPORT_SYMBOL_GPL(kvm_pmu_ops);
> +
...
> diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
> index b4ee5e9f9e20..1e793e44b5ff 100644
> --- a/arch/x86/kvm/vmx/nested.c
> +++ b/arch/x86/kvm/vmx/nested.c
> @@ -4796,7 +4796,7 @@ void nested_vmx_pmu_entry_exit_ctls_update(struct kvm_vcpu *vcpu)
> return;
>
> vmx = to_vmx(vcpu);
> - if (kvm_x86_ops.pmu_ops->is_valid_msr(vcpu, MSR_CORE_PERF_GLOBAL_CTRL)) {
> + if (kvm_pmu_ops.is_valid_msr(vcpu, MSR_CORE_PERF_GLOBAL_CTRL)) {
I would much prefer we export kvm_pmu_is_valid_msr() and go through that for nVMX
rather than export all of kvm_pmu_ops for this one case.
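E.g. (sketch only, untested; kvm_pmu_is_valid_msr() already exists in pmu.c, it
just isn't exported to the vendor modules):

	/* arch/x86/kvm/pmu.c: export the existing helper */
	EXPORT_SYMBOL_GPL(kvm_pmu_is_valid_msr);

	/* arch/x86/kvm/vmx/nested.c: call the helper instead of the ops */
	if (kvm_pmu_is_valid_msr(vcpu, MSR_CORE_PERF_GLOBAL_CTRL)) {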
> vmx->nested.msrs.entry_ctls_high |=
> VM_ENTRY_LOAD_IA32_PERF_GLOBAL_CTRL;
> vmx->nested.msrs.exit_ctls_high |=
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index ac83d873d65b..72d286595012 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -11317,6 +11317,9 @@ int kvm_arch_hardware_setup(void *opaque)
> memcpy(&kvm_x86_ops, ops->runtime_ops, sizeof(kvm_x86_ops));
> kvm_ops_static_call_update();
>
> + if (kvm_x86_ops.hardware_enable)
Huh?  Did you intend this to be:
if (kvm_x86_ops.pmu_ops)
Either way, I don't see the point; VMX and SVM unconditionally provide the ops.
I would also say land this memcpy() above kvm_ops_static_call_update(); then the
enabling patch can do the static call updates in kvm_ops_static_call_update()
instead of adding another helper, e.g. as sketched below.
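I.e. (sketch of the suggested ordering in kvm_arch_hardware_setup()):

	memcpy(&kvm_x86_ops, ops->runtime_ops, sizeof(kvm_x86_ops));
	memcpy(&kvm_pmu_ops, kvm_x86_ops.pmu_ops, sizeof(kvm_pmu_ops));
	/* the enabling patch can update the pmu static calls in here too */
	kvm_ops_static_call_update();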
> + memcpy(&kvm_pmu_ops, kvm_x86_ops.pmu_ops, sizeof(kvm_pmu_ops));
As part of this change, the pmu_ops should be moved to kvm_x86_init_ops and tagged
as __initdata. That'll save those precious few bytes, and more importantly make
the original ops unreachable, i.e. make it harder to sneak in post-init modification
bugs.
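E.g. (sketch, untested; follows the existing vmx_init_ops __initdata pattern):

	/* arch/x86/include/asm/kvm_host.h */
	struct kvm_x86_init_ops {
		...
		struct kvm_x86_ops *runtime_ops;
		struct kvm_pmu_ops *pmu_ops;
	};

	/* arch/x86/kvm/vmx/pmu_intel.c: freed after init */
	struct kvm_pmu_ops intel_pmu_ops __initdata = {
		...
	};

	/* arch/x86/kvm/vmx/vmx.c */
	static struct kvm_x86_init_ops vmx_init_ops __initdata = {
		...
		.pmu_ops = &intel_pmu_ops,
	};

	/* arch/x86/kvm/x86.c, kvm_arch_hardware_setup() */
	memcpy(&kvm_pmu_ops, ops->pmu_ops, sizeof(kvm_pmu_ops));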
> +
> if (!kvm_cpu_cap_has(X86_FEATURE_XSAVES))
> supported_xss = 0;
>
> --
> 2.33.0
>