Date:   Sun,  5 Nov 2017 16:55:21 -0800
From:   Wanpeng Li <kernellwp@...il.com>
To:     linux-kernel@...r.kernel.org, kvm@...r.kernel.org
Cc:     Paolo Bonzini <pbonzini@...hat.com>,
        Radim Krčmář <rkrcmar@...hat.com>,
        Wanpeng Li <wanpeng.li@...mail.com>
Subject: [PATCH] KVM: X86: Fix softlockup when getting the current kvmclock timestamp

From: Wanpeng Li <wanpeng.li@...mail.com>

 watchdog: BUG: soft lockup - CPU#6 stuck for 22s! [qemu-system-x86:10185]
 CPU: 6 PID: 10185 Comm: qemu-system-x86 Tainted: G           OE   4.14.0-rc4+ #4
 RIP: 0010:kvm_get_time_scale+0x4e/0xa0 [kvm]
 Call Trace:
  ? get_kvmclock_ns+0xa3/0x140 [kvm]
  get_time_ref_counter+0x5a/0x80 [kvm]
  kvm_hv_process_stimers+0x120/0x5f0 [kvm]
  ? kvm_hv_process_stimers+0x120/0x5f0 [kvm]
  ? preempt_schedule+0x27/0x30
  ? ___preempt_schedule+0x16/0x18
  kvm_arch_vcpu_ioctl_run+0x4b4/0x1690 [kvm]
  ? kvm_arch_vcpu_load+0x47/0x230 [kvm]
  kvm_vcpu_ioctl+0x33a/0x620 [kvm]
  ? kvm_vcpu_ioctl+0x33a/0x620 [kvm]
  ? kvm_vm_ioctl_check_extension_generic+0x3b/0x40 [kvm]
  ? kvm_dev_ioctl+0x279/0x6c0 [kvm]
  do_vfs_ioctl+0xa1/0x5d0
  ? __fget+0x73/0xa0
  SyS_ioctl+0x79/0x90
  entry_SYSCALL_64_fastpath+0x1e/0xa9

This can be reproduced by running kvm-unit-tests/hyperv_stimer.flat together
with a cpu-hotplug stress test. kvm_get_time_scale() takes too long, which
results in a soft lockup.

Fix it by disabling preemption only around the rdtsc() and __this_cpu_read()
section, which reduces the time spent with preemption disabled. The per-CPU
variable is only read, and the master clock is active, so this is as safe as
the equivalent usage in kvm_guest_time_update().

Cc: Paolo Bonzini <pbonzini@...hat.com>
Cc: Radim Krčmář <rkrcmar@...hat.com>
Signed-off-by: Wanpeng Li <wanpeng.li@...mail.com>
---
 arch/x86/kvm/x86.c | 13 +++++++++----
 1 file changed, 9 insertions(+), 4 deletions(-)

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 34c85aa..2542f9b 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -1781,6 +1781,7 @@ u64 get_kvmclock_ns(struct kvm *kvm)
 	struct kvm_arch *ka = &kvm->arch;
 	struct pvclock_vcpu_time_info hv_clock;
 	u64 ret;
+	unsigned long this_tsc_khz, host_tsc;
 
 	spin_lock(&ka->pvclock_gtod_sync_lock);
 	if (!ka->use_master_clock) {
@@ -1795,13 +1796,17 @@ u64 get_kvmclock_ns(struct kvm *kvm)
 	/* both __this_cpu_read() and rdtsc() should be on the same cpu */
 	get_cpu();
 
-	kvm_get_time_scale(NSEC_PER_SEC, __this_cpu_read(cpu_tsc_khz) * 1000LL,
-			   &hv_clock.tsc_shift,
-			   &hv_clock.tsc_to_system_mul);
-	ret = __pvclock_read_cycles(&hv_clock, rdtsc());
+	this_tsc_khz = __this_cpu_read(cpu_tsc_khz);
+	host_tsc = rdtsc();
 
 	put_cpu();
 
+	/* With all the info we got, fill in the values */
+	kvm_get_time_scale(NSEC_PER_SEC, this_tsc_khz * 1000LL,
+			   &hv_clock.tsc_shift,
+			   &hv_clock.tsc_to_system_mul);
+	ret = __pvclock_read_cycles(&hv_clock, host_tsc);
+
 	return ret;
 }
 
-- 
2.7.4
