Message-Id: <20200923180409.32255-9-sean.j.christopherson@intel.com>
Date: Wed, 23 Sep 2020 11:04:02 -0700
From: Sean Christopherson <sean.j.christopherson@...el.com>
To: Paolo Bonzini <pbonzini@...hat.com>
Cc: Sean Christopherson <sean.j.christopherson@...el.com>,
Vitaly Kuznetsov <vkuznets@...hat.com>,
Wanpeng Li <wanpengli@...cent.com>,
Jim Mattson <jmattson@...gle.com>,
Joerg Roedel <joro@...tes.org>, kvm@...r.kernel.org,
linux-kernel@...r.kernel.org
Subject: [PATCH v2 08/15] KVM: VMX: Rename "__find_msr_index" to "__vmx_find_uret_msr"
Rename "__find_msr_index" to scope it to VMX, associate it with
guest_uret_msrs, and to avoid conflating "MSR's ECX index" with "MSR's
array index". Similarly, don't use "slot" in the name so as to avoid
colliding the common x86's half of "user_return_msrs" (the slot in
kvm_user_return_msrs is not the same slot in guest_uret_msrs).
No functional change intended.
Signed-off-by: Sean Christopherson <sean.j.christopherson@...el.com>
---
arch/x86/kvm/vmx/vmx.c | 16 ++++++++--------
1 file changed, 8 insertions(+), 8 deletions(-)
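
Note for context (editorial aside, not part of the patch): the changelog
distinguishes an MSR's "ECX index", i.e. the MSR number programmed into
ECX for RDMSR/WRMSR, from that MSR's position in the guest_uret_msrs
array, which is what the lookup helper returns.  The following is a
minimal, self-contained userspace sketch of that distinction; the struct,
field, and function names here are illustrative and simplified, not the
actual KVM definitions.

/* Illustrative only: an MSR's ECX index vs. its array index. */
#include <stdint.h>
#include <stdio.h>

#define MSR_STAR 0xc0000081u	/* "ECX index" used by RDMSR/WRMSR */
#define MSR_EFER 0xc0000080u

struct uret_msr {
	uint32_t msr;		/* the MSR's ECX index */
	uint64_t data;
};

static struct uret_msr guest_uret_msrs[] = {
	{ .msr = MSR_STAR },
	{ .msr = MSR_EFER },
};

/* Return the *array* index of @msr in guest_uret_msrs, or -1 if absent. */
static int find_uret_msr(uint32_t msr)
{
	int i;

	for (i = 0; i < (int)(sizeof(guest_uret_msrs) / sizeof(guest_uret_msrs[0])); i++)
		if (guest_uret_msrs[i].msr == msr)
			return i;
	return -1;
}

int main(void)
{
	/* MSR_EFER's ECX index is 0xc0000080, but its array index here is 1. */
	printf("MSR_EFER: ECX index 0x%x, array index %d\n",
	       MSR_EFER, find_uret_msr(MSR_EFER));
	return 0;
}

The rename makes it harder to mistake the helper's return value for an
MSR number, which is exactly the conflation the changelog calls out.
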
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 4da4fc65d459..ca41ee8fac5d 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -623,7 +623,7 @@ static inline bool report_flexpriority(void)
return flexpriority_enabled;
}
-static inline int __find_msr_index(struct vcpu_vmx *vmx, u32 msr)
+static inline int __vmx_find_uret_msr(struct vcpu_vmx *vmx, u32 msr)
{
int i;
@@ -637,7 +637,7 @@ struct vmx_uret_msr *find_msr_entry(struct vcpu_vmx *vmx, u32 msr)
{
int i;
- i = __find_msr_index(vmx, msr);
+ i = __vmx_find_uret_msr(vmx, msr);
if (i >= 0)
return &vmx->guest_uret_msrs[i];
return NULL;
@@ -1708,24 +1708,24 @@ static void setup_msrs(struct vcpu_vmx *vmx)
* when EFER.SCE is set.
*/
if (is_long_mode(&vmx->vcpu) && (vmx->vcpu.arch.efer & EFER_SCE)) {
- index = __find_msr_index(vmx, MSR_STAR);
+ index = __vmx_find_uret_msr(vmx, MSR_STAR);
if (index >= 0)
move_msr_up(vmx, index, nr_active_uret_msrs++);
- index = __find_msr_index(vmx, MSR_LSTAR);
+ index = __vmx_find_uret_msr(vmx, MSR_LSTAR);
if (index >= 0)
move_msr_up(vmx, index, nr_active_uret_msrs++);
- index = __find_msr_index(vmx, MSR_SYSCALL_MASK);
+ index = __vmx_find_uret_msr(vmx, MSR_SYSCALL_MASK);
if (index >= 0)
move_msr_up(vmx, index, nr_active_uret_msrs++);
}
#endif
- index = __find_msr_index(vmx, MSR_EFER);
+ index = __vmx_find_uret_msr(vmx, MSR_EFER);
if (index >= 0 && update_transition_efer(vmx, index))
move_msr_up(vmx, index, nr_active_uret_msrs++);
- index = __find_msr_index(vmx, MSR_TSC_AUX);
+ index = __vmx_find_uret_msr(vmx, MSR_TSC_AUX);
if (index >= 0 && guest_cpuid_has(&vmx->vcpu, X86_FEATURE_RDTSCP))
move_msr_up(vmx, index, nr_active_uret_msrs++);
- index = __find_msr_index(vmx, MSR_IA32_TSX_CTRL);
+ index = __vmx_find_uret_msr(vmx, MSR_IA32_TSX_CTRL);
if (index >= 0)
move_msr_up(vmx, index, nr_active_uret_msrs++);
--
2.28.0