Message-ID: <5ed26dee-e439-6c2f-cd10-e73fefbd3a02@redhat.com>
Date: Tue, 5 Nov 2019 11:21:03 +0100
From: Paolo Bonzini <pbonzini@...hat.com>
To: Andrea Arcangeli <aarcange@...hat.com>, kvm@...r.kernel.org,
linux-kernel@...r.kernel.org
Cc: Vitaly Kuznetsov <vkuznets@...hat.com>,
Sean Christopherson <sean.j.christopherson@...el.com>
Subject: Re: [PATCH 12/13] KVM: retpolines: x86: eliminate retpoline from
svm.c exit handlers
On 05/11/19 00:00, Andrea Arcangeli wrote:
> It's enough to check the exit code and issue a direct call to avoid
> the retpoline for all the common vmexit reasons.
>
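> The hunk below does exactly this for SVM's hottest exit codes; the
> generic shape of the pattern is sketched here (a toy illustration,
> not the kernel code; HOT_CODE_* and handle_*() are made-up names):
>
>	/*
>	 * Under CONFIG_RETPOLINE the compiler turns the indirect call
>	 * through the handler table into a __x86_indirect_thunk_* call,
>	 * which is far slower than a predicted direct call.  Testing
>	 * the hot values first gives those paths a plain direct call
>	 * and leaves the thunk only for the cold remainder.
>	 */
>	enum { HOT_CODE_A, HOT_CODE_B, NR_CODES };
>
>	typedef int (*handler_t)(void *ctx);
>
>	static int handle_a(void *ctx) { return 0; }
>	static int handle_b(void *ctx) { return 0; }
>
>	static int dispatch(handler_t *table, unsigned int code, void *ctx)
>	{
>		if (code == HOT_CODE_A)
>			return handle_a(ctx);	/* direct call, no retpoline */
>		if (code == HOT_CODE_B)
>			return handle_b(ctx);	/* direct call, no retpoline */
>		return table[code](ctx);	/* indirect call -> thunk */
>	}
>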
> After this commit is applied, these are the most common retpolines
> executed under a high-resolution timer workload in the guest on an
> SVM host:
>
> [..]
> @[
> trace_retpoline+1
> __trace_retpoline+30
> __x86_indirect_thunk_rax+33
> ktime_get_update_offsets_now+70
> hrtimer_interrupt+131
> smp_apic_timer_interrupt+106
> apic_timer_interrupt+15
> start_sw_timer+359
> restart_apic_timer+85
> kvm_set_msr_common+1497
> msr_interception+142
> vcpu_enter_guest+684
> kvm_arch_vcpu_ioctl_run+261
> kvm_vcpu_ioctl+559
> do_vfs_ioctl+164
> ksys_ioctl+96
> __x64_sys_ioctl+22
> do_syscall_64+89
> entry_SYSCALL_64_after_hwframe+68
> ]: 1940
> @[
> trace_retpoline+1
> __trace_retpoline+30
> __x86_indirect_thunk_r12+33
> force_qs_rnp+217
> rcu_gp_kthread+1270
> kthread+268
> ret_from_fork+34
> ]: 4644
> @[]: 25095
> @[
> trace_retpoline+1
> __trace_retpoline+30
> __x86_indirect_thunk_rax+33
> lapic_next_event+28
> clockevents_program_event+148
> hrtimer_start_range_ns+528
> start_sw_timer+356
> restart_apic_timer+85
> kvm_set_msr_common+1497
> msr_interception+142
> vcpu_enter_guest+684
> kvm_arch_vcpu_ioctl_run+261
> kvm_vcpu_ioctl+559
> do_vfs_ioctl+164
> ksys_ioctl+96
> __x64_sys_ioctl+22
> do_syscall_64+89
> entry_SYSCALL_64_after_hwframe+68
> ]: 41474
> @[
> trace_retpoline+1
> __trace_retpoline+30
> __x86_indirect_thunk_rax+33
> clockevents_program_event+148
> hrtimer_start_range_ns+528
> start_sw_timer+356
> restart_apic_timer+85
> kvm_set_msr_common+1497
> msr_interception+142
> vcpu_enter_guest+684
> kvm_arch_vcpu_ioctl_run+261
> kvm_vcpu_ioctl+559
> do_vfs_ioctl+164
> ksys_ioctl+96
> __x64_sys_ioctl+22
> do_syscall_64+89
> entry_SYSCALL_64_after_hwframe+68
> ]: 41474
> @[
> trace_retpoline+1
> __trace_retpoline+30
> __x86_indirect_thunk_rax+33
> ktime_get+58
> clockevents_program_event+84
> hrtimer_start_range_ns+528
> start_sw_timer+356
> restart_apic_timer+85
> kvm_set_msr_common+1497
> msr_interception+142
> vcpu_enter_guest+684
> kvm_arch_vcpu_ioctl_run+261
> kvm_vcpu_ioctl+559
> do_vfs_ioctl+164
> ksys_ioctl+96
> __x64_sys_ioctl+22
> do_syscall_64+89
> entry_SYSCALL_64_after_hwframe+68
> ]: 41887
> @[
> trace_retpoline+1
> __trace_retpoline+30
> __x86_indirect_thunk_rax+33
> lapic_next_event+28
> clockevents_program_event+148
> hrtimer_try_to_cancel+168
> hrtimer_cancel+21
> kvm_set_lapic_tscdeadline_msr+43
> kvm_set_msr_common+1497
> msr_interception+142
> vcpu_enter_guest+684
> kvm_arch_vcpu_ioctl_run+261
> kvm_vcpu_ioctl+559
> do_vfs_ioctl+164
> ksys_ioctl+96
> __x64_sys_ioctl+22
> do_syscall_64+89
> entry_SYSCALL_64_after_hwframe+68
> ]: 42723
> @[
> trace_retpoline+1
> __trace_retpoline+30
> __x86_indirect_thunk_rax+33
> clockevents_program_event+148
> hrtimer_try_to_cancel+168
> hrtimer_cancel+21
> kvm_set_lapic_tscdeadline_msr+43
> kvm_set_msr_common+1497
> msr_interception+142
> vcpu_enter_guest+684
> kvm_arch_vcpu_ioctl_run+261
> kvm_vcpu_ioctl+559
> do_vfs_ioctl+164
> ksys_ioctl+96
> __x64_sys_ioctl+22
> do_syscall_64+89
> entry_SYSCALL_64_after_hwframe+68
> ]: 42766
> @[
> trace_retpoline+1
> __trace_retpoline+30
> __x86_indirect_thunk_rax+33
> ktime_get+58
> clockevents_program_event+84
> hrtimer_try_to_cancel+168
> hrtimer_cancel+21
> kvm_set_lapic_tscdeadline_msr+43
> kvm_set_msr_common+1497
> msr_interception+142
> vcpu_enter_guest+684
> kvm_arch_vcpu_ioctl_run+261
> kvm_vcpu_ioctl+559
> do_vfs_ioctl+164
> ksys_ioctl+96
> __x64_sys_ioctl+22
> do_syscall_64+89
> entry_SYSCALL_64_after_hwframe+68
> ]: 42848
> @[
> trace_retpoline+1
> __trace_retpoline+30
> __x86_indirect_thunk_rax+33
> ktime_get+58
> start_sw_timer+279
> restart_apic_timer+85
> kvm_set_msr_common+1497
> msr_interception+142
> vcpu_enter_guest+684
> kvm_arch_vcpu_ioctl_run+261
> kvm_vcpu_ioctl+559
> do_vfs_ioctl+164
> ksys_ioctl+96
> __x64_sys_ioctl+22
> do_syscall_64+89
> entry_SYSCALL_64_after_hwframe+68
> ]: 499845
>
> @total: 1780243
>
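> The per-stack counts above are in bpftrace map format; assuming a
> kernel carrying the trace_retpoline instrumentation visible in the
> stacks (it is not a mainline symbol), a one-liner along these lines
> would collect them:
>
>	# Count kernel stacks leading into the retpoline tracing hook.
>	bpftrace -e 'kprobe:trace_retpoline { @[kstack] = count(); }'
>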
> SVM has no TSC-based programmable preemption timer, so it invokes
> ktime_get() frequently.
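>
> The hottest stacks all funnel through start_sw_timer(): each guest
> write to the TSC deadline MSR re-arms an hrtimer in software.  A
> much-simplified sketch of that path (the real code lives in
> arch/x86/kvm/lapic.c; sw_timer_sketch() is a made-up name):
>
>	#include <linux/hrtimer.h>
>	#include <linux/ktime.h>
>
>	static void sw_timer_sketch(struct hrtimer *timer, u64 delta_ns)
>	{
>		/*
>		 * ktime_get() reads the clocksource through a function
>		 * pointer: that is the retpoline at ktime_get+58 above.
>		 */
>		ktime_t expire = ktime_add_ns(ktime_get(), delta_ns);
>
>		/*
>		 * hrtimer_start() may reprogram the local APIC timer via
>		 * clockevents_program_event() -> lapic_next_event(),
>		 * another indirect call visible in the stacks.
>		 */
>		hrtimer_start(timer, expire, HRTIMER_MODE_ABS);
>	}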
>
> Signed-off-by: Andrea Arcangeli <aarcange@...hat.com>
> ---
> arch/x86/kvm/svm.c | 12 ++++++++++++
> 1 file changed, 12 insertions(+)
>
> diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
> index 0021e11fd1fb..3942bca46740 100644
> --- a/arch/x86/kvm/svm.c
> +++ b/arch/x86/kvm/svm.c
> @@ -4995,6 +4995,18 @@ int kvm_x86_handle_exit(struct kvm_vcpu *vcpu)
> return 0;
> }
>
> +#ifdef CONFIG_RETPOLINE
> + if (exit_code == SVM_EXIT_MSR)
> + return msr_interception(svm);
> + else if (exit_code == SVM_EXIT_VINTR)
> + return interrupt_window_interception(svm);
> + else if (exit_code == SVM_EXIT_INTR)
> + return intr_interception(svm);
> + else if (exit_code == SVM_EXIT_HLT)
> + return halt_interception(svm);
> + else if (exit_code == SVM_EXIT_NPF)
> + return npf_interception(svm);
> +#endif
> return svm_exit_handlers[exit_code](svm);
> }
>
>
Queued, thanks (BTW, I still disagree about HLT exits but okay).
Paolo