Message-ID: <8fe554e5-e76e-9a0a-548d-bdac3b6b2b60@oracle.com>
Date: Tue, 6 Feb 2024 00:10:37 -0800
From: Dongli Zhang <dongli.zhang@...cle.com>
To: Prasad Pandit <ppandit@...hat.com>, Prasad Pandit <pjp@...oraproject.org>
Cc: kvm@...r.kernel.org, linux-kernel@...r.kernel.org,
        Sean Christopherson <seanjc@...gle.com>
Subject: Re: [PATCH] KVM: x86: make KVM_REQ_NMI request iff NMI pending for
 vcpu

Hi Prasad,

On 1/2/24 23:53, Prasad Pandit wrote:
> From: Prasad Pandit <pjp@...oraproject.org>
> 
> The kvm_vcpu_ioctl_x86_set_vcpu_events() routine makes a 'KVM_REQ_NMI'
> request for a vcpu even when its 'events->nmi.pending' is zero.
> Ex:
>     qemu_thread_start
>      kvm_vcpu_thread_fn
>       qemu_wait_io_event
>        qemu_wait_io_event_common
>         process_queued_cpu_work
>          do_kvm_cpu_synchronize_post_init/_reset
>           kvm_arch_put_registers
>            kvm_put_vcpu_events (cpu, level=[2|3])
> 
> This causes vCPU threads in QEMU to constantly acquire and release the
> global mutex lock, delaying guest boot due to lock contention.
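
To make sure I follow: on the QEMU side, kvm_put_vcpu_events() copies
env->nmi_pending (typically zero after init/reset) into
events.nmi.pending and, at level >= KVM_PUT_RESET_STATE, also sets
KVM_VCPUEVENT_VALID_NMI_PENDING, so every post-init/post-reset put ends
up raising KVM_REQ_NMI. Roughly like this (my simplified paraphrase of
target/i386/kvm/kvm.c, not verbatim):

static int kvm_put_vcpu_events(X86CPU *cpu, int level)
{
    CPUX86State *env = &cpu->env;
    struct kvm_vcpu_events events = {};

    events.nmi.injected = env->nmi_injected;
    events.nmi.pending = env->nmi_pending;   /* typically 0 at boot */
    events.nmi.masked = !!(env->hflags2 & HF2_NMI_MASK);

    if (level >= KVM_PUT_RESET_STATE) {
        /* Marks nmi.pending as valid even when it is 0. */
        events.flags |= KVM_VCPUEVENT_VALID_NMI_PENDING;
    }

    return kvm_vcpu_ioctl(CPU(cpu), KVM_SET_VCPU_EVENTS, &events);
}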

Would you mind sharing where and how the lock contention shows up on the
QEMU side? That is, how is the QEMU mutex lock impacted by KVM's
KVM_REQ_NMI?

Or did you mean line 3031 on the QEMU side?

2858 int kvm_cpu_exec(CPUState *cpu)
2859 {
2860     struct kvm_run *run = cpu->kvm_run;
2861     int ret, run_ret;
.. ...
3023         default:
3024             DPRINTF("kvm_arch_handle_exit\n");
3025             ret = kvm_arch_handle_exit(cpu, run);
3026             break;
3027         }
3028     } while (ret == 0);
3029
3030     cpu_exec_end(cpu);
3031     qemu_mutex_lock_iothread();
3032
3033     if (ret < 0) {
3034         cpu_dump_state(cpu, stderr, CPU_DUMP_CODE);
3035         vm_stop(RUN_STATE_INTERNAL_ERROR);
3036     }
3037
3038     qatomic_set(&cpu->exit_request, 0);
3039     return ret;
3040 }
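
For context, my understanding is that the KVM side consumes the request
roughly as follows (a simplified paraphrase of process_nmi() in
arch/x86/kvm/x86.c after commit bdedff263132, not verbatim):

static void process_nmi(struct kvm_vcpu *vcpu)
{
        unsigned int limit = 2;

        /* Fold NMIs queued by userspace into nmi_pending. */
        vcpu->arch.nmi_pending += atomic_xchg(&vcpu->arch.nmi_queued, 0);
        vcpu->arch.nmi_pending = min(vcpu->arch.nmi_pending, limit);

        if (vcpu->arch.nmi_pending)
                kvm_make_request(KVM_REQ_EVENT, vcpu);
}

With events->nmi.pending == 0, nmi_queued is 0 and processing the
request is a no-op, so the unconditional KVM_REQ_NMI looks purely
spurious in that case.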

Thank you very much!

Dongli Zhang

> Add a check so that the KVM_REQ_NMI request is made only if the vcpu
> has an NMI pending.
> 
> Fixes: bdedff263132 ("KVM: x86: Route pending NMIs from userspace through process_nmi()")
> Signed-off-by: Prasad Pandit <pjp@...oraproject.org>
> ---
>  arch/x86/kvm/x86.c | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
> 
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index 1a3aaa7dafae..468870450b8b 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -5405,7 +5405,8 @@ static int kvm_vcpu_ioctl_x86_set_vcpu_events(struct kvm_vcpu *vcpu,
>  	if (events->flags & KVM_VCPUEVENT_VALID_NMI_PENDING) {
>  		vcpu->arch.nmi_pending = 0;
>  		atomic_set(&vcpu->arch.nmi_queued, events->nmi.pending);
> -		kvm_make_request(KVM_REQ_NMI, vcpu);
> +		if (events->nmi.pending)
> +			kvm_make_request(KVM_REQ_NMI, vcpu);
>  	}
>  	static_call(kvm_x86_set_nmi_mask)(vcpu, events->nmi.masked);
> 
> --
> 2.43.0
> 
> 
