Date:   Mon, 11 Jun 2018 15:38:49 +0800
From:   Wanpeng Li <kernellwp@...il.com>
To:     linux-kernel@...r.kernel.org, kvm@...r.kernel.org
Cc:     Paolo Bonzini <pbonzini@...hat.com>,
        Radim Krčmář <rkrcmar@...hat.com>,
        Peter Zijlstra <peterz@...radead.org>,
        Ingo Molnar <mingo@...nel.org>
Subject: [PATCH 1/2] KVM: Fix lock holder candidate yield

From: Wanpeng Li <wanpengli@...cent.com>

After detecting a pause loop executed by a lock waiter in the guest,
the pCPU is yielded to a lock holder candidate. However, the lock
holder candidate's task may have its own affinity constraint, and the
current yield logic yields to the candidate unconditionally, without
checking that constraint, and sets its task as the next buddy of CFS;
this breaks the scheduler. This patch fixes it by skipping the
candidate vCPU if the current pCPU does not satisfy the candidate's
affinity constraint.
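For illustration only (not part of the diff below): a minimal
userspace sketch of the same "is the current CPU in the target's
affinity mask?" test, using pthread_getaffinity_np()/sched_getcpu()
in place of the kernel's cpus_allowed/raw_smp_processor_id(); the
allow_yield_to() helper name is hypothetical.

	/* Userspace sketch only; not the kernel implementation. */
	#define _GNU_SOURCE
	#include <pthread.h>
	#include <sched.h>
	#include <stdbool.h>
	#include <stdio.h>

	/* Would it make sense to yield the current CPU to 'target'? */
	static bool allow_yield_to(pthread_t target)
	{
		cpu_set_t allowed;

		if (pthread_getaffinity_np(target, sizeof(allowed), &allowed))
			return false;

		/* Only yield if the target may actually run on this CPU. */
		return CPU_ISSET(sched_getcpu(), &allowed);
	}

	int main(void)
	{
		printf("yield to self allowed: %d\n",
		       allow_yield_to(pthread_self()));
		return 0;
	}

The patch performs the equivalent in-kernel check with
cpumask_test_cpu(raw_smp_processor_id(), &task->cpus_allowed) before
calling yield_to(), as shown in the diff below.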

Cc: Paolo Bonzini <pbonzini@...hat.com>
Cc: Radim Krčmář <rkrcmar@...hat.com>
Cc: Peter Zijlstra <peterz@...radead.org>
Cc: Ingo Molnar <mingo@...nel.org>
Signed-off-by: Wanpeng Li <wanpengli@...cent.com>
---
 virt/kvm/kvm_main.c | 29 +++++++++++++++++++++++++++--
 1 file changed, 27 insertions(+), 2 deletions(-)

diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index aa7da1d8e..ccf8907 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -2239,17 +2239,40 @@ void kvm_vcpu_kick(struct kvm_vcpu *vcpu)
 EXPORT_SYMBOL_GPL(kvm_vcpu_kick);
 #endif /* !CONFIG_S390 */
 
-int kvm_vcpu_yield_to(struct kvm_vcpu *target)
+struct task_struct *vcpu_to_task(struct kvm_vcpu *target)
 {
 	struct pid *pid;
 	struct task_struct *task = NULL;
-	int ret = 0;
 
 	rcu_read_lock();
 	pid = rcu_dereference(target->pid);
 	if (pid)
 		task = get_pid_task(pid, PIDTYPE_PID);
 	rcu_read_unlock();
+	return task;
+}
+
+bool kvm_vcpu_allow_yield(struct kvm_vcpu *target)
+{
+	struct task_struct *task = NULL;
+	bool ret = false;
+
+	task = vcpu_to_task(target);
+	if (!task)
+		return ret;
+	if (cpumask_test_cpu(raw_smp_processor_id(), &task->cpus_allowed))
+		ret = true;
+	put_task_struct(task);
+
+	return ret;
+}
+
+int kvm_vcpu_yield_to(struct kvm_vcpu *target)
+{
+	struct task_struct *task = NULL;
+	int ret = 0;
+
+	task = vcpu_to_task(target);
 	if (!task)
 		return ret;
 	ret = yield_to(task, 1);
@@ -2333,6 +2356,8 @@ void kvm_vcpu_on_spin(struct kvm_vcpu *me, bool yield_to_kernel_mode)
 				continue;
 			if (!kvm_vcpu_eligible_for_directed_yield(vcpu))
 				continue;
+			if (!kvm_vcpu_allow_yield(vcpu))
+				continue;
 
 			yielded = kvm_vcpu_yield_to(vcpu);
 			if (yielded > 0) {
-- 
2.7.4
