lists.openwall.net | Open Source and information security mailing list archives
Date: Mon, 06 Nov 2017 07:14:27 -0600
From: "Gustavo A. R. Silva" <garsilva@...eddedor.com>
To: Paolo Bonzini <pbonzini@...hat.com>
Cc: Radim Krčmář <rkrcmar@...hat.com>, Thomas Gleixner <tglx@...utronix.de>,
	Ingo Molnar <mingo@...hat.com>, "H. Peter Anvin" <hpa@...or.com>,
	x86@...nel.org, kvm@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] KVM: VMX: replace move_msr_up with swap macro

Hi Paolo,

Quoting Paolo Bonzini <pbonzini@...hat.com>:

> ----- Original Message -----
>> From: "Gustavo A. R. Silva" <garsilva@...eddedor.com>
>> To: "Paolo Bonzini" <pbonzini@...hat.com>, "Radim Krčmář" <rkrcmar@...hat.com>,
>> "Thomas Gleixner" <tglx@...utronix.de>, "Ingo Molnar" <mingo@...hat.com>,
>> "H. Peter Anvin" <hpa@...or.com>, x86@...nel.org
>> Cc: kvm@...r.kernel.org, linux-kernel@...r.kernel.org,
>> "Gustavo A. R. Silva" <garsilva@...eddedor.com>
>> Sent: Friday, November 3, 2017 11:58:19 PM
>> Subject: [PATCH] KVM: VMX: replace move_msr_up with swap macro
>>
>> Function move_msr_up is used to _manually_ swap MSR entries in MSR array.
>> This function can be removed and replaced using the swap macro instead.
>>
>> This code was detected with the help of Coccinelle.
>
> I think move_msr_up should instead change into a function like
>
> void mark_msr_for_save(struct vcpu_vmx *vmx, int index)
> {
> 	swap(vmx->guest_msrs[index], vmx->guest_msrs[vmx->save_nmsrs]);
> 	vmx->save_nmsrs++;
> }
>
> Using swap is useful, but it is also hiding what's going on exactly
> (in addition, using ++ inside a macro argument might be calling for
> trouble).
>

Thanks for your comments. I'll work on v2 based on your feedback.

--
Gustavo A. R. Silva

> Paolo
>
>>
>> Signed-off-by: Gustavo A. R. Silva <garsilva@...eddedor.com>
>> ---
>> The new lines are over 80 characters, but I think in this case that is
>> preferable over splitting them.
>>
>>  arch/x86/kvm/vmx.c | 24 ++++++------------------
>>  1 file changed, 6 insertions(+), 18 deletions(-)
>>
>> diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
>> index e6c8ffa..210e491 100644
>> --- a/arch/x86/kvm/vmx.c
>> +++ b/arch/x86/kvm/vmx.c
>> @@ -2544,18 +2544,6 @@ static bool vmx_invpcid_supported(void)
>>  	return cpu_has_vmx_invpcid() && enable_ept;
>>  }
>>
>> -/*
>> - * Swap MSR entry in host/guest MSR entry array.
>> - */
>> -static void move_msr_up(struct vcpu_vmx *vmx, int from, int to)
>> -{
>> -	struct shared_msr_entry tmp;
>> -
>> -	tmp = vmx->guest_msrs[to];
>> -	vmx->guest_msrs[to] = vmx->guest_msrs[from];
>> -	vmx->guest_msrs[from] = tmp;
>> -}
>> -
>>  static void vmx_set_msr_bitmap(struct kvm_vcpu *vcpu)
>>  {
>>  	unsigned long *msr_bitmap;
>> @@ -2600,28 +2588,28 @@ static void setup_msrs(struct vcpu_vmx *vmx)
>>  	if (is_long_mode(&vmx->vcpu)) {
>>  		index = __find_msr_index(vmx, MSR_SYSCALL_MASK);
>>  		if (index >= 0)
>> -			move_msr_up(vmx, index, save_nmsrs++);
>> +			swap(vmx->guest_msrs[index], vmx->guest_msrs[save_nmsrs++]);
>>  		index = __find_msr_index(vmx, MSR_LSTAR);
>>  		if (index >= 0)
>> -			move_msr_up(vmx, index, save_nmsrs++);
>> +			swap(vmx->guest_msrs[index], vmx->guest_msrs[save_nmsrs++]);
>>  		index = __find_msr_index(vmx, MSR_CSTAR);
>>  		if (index >= 0)
>> -			move_msr_up(vmx, index, save_nmsrs++);
>> +			swap(vmx->guest_msrs[index], vmx->guest_msrs[save_nmsrs++]);
>>  		index = __find_msr_index(vmx, MSR_TSC_AUX);
>>  		if (index >= 0 && guest_cpuid_has(&vmx->vcpu, X86_FEATURE_RDTSCP))
>> -			move_msr_up(vmx, index, save_nmsrs++);
>> +			swap(vmx->guest_msrs[index], vmx->guest_msrs[save_nmsrs++]);
>>  		/*
>>  		 * MSR_STAR is only needed on long mode guests, and only
>>  		 * if efer.sce is enabled.
>>  		 */
>>  		index = __find_msr_index(vmx, MSR_STAR);
>>  		if ((index >= 0) && (vmx->vcpu.arch.efer & EFER_SCE))
>> -			move_msr_up(vmx, index, save_nmsrs++);
>> +			swap(vmx->guest_msrs[index], vmx->guest_msrs[save_nmsrs++]);
>>  	}
>> #endif
>>  	index = __find_msr_index(vmx, MSR_EFER);
>>  	if (index >= 0 && update_transition_efer(vmx, index))
>> -		move_msr_up(vmx, index, save_nmsrs++);
>> +		swap(vmx->guest_msrs[index], vmx->guest_msrs[save_nmsrs++]);
>>
>>  	vmx->save_nmsrs = save_nmsrs;
>
>
>>
>> --
>> 2.7.4
>>
>>