Message-ID: <1424170021.5749.22.camel@tkhai>
Date:	Tue, 17 Feb 2015 13:47:01 +0300
From:	Kirill Tkhai <ktkhai@...allels.com>
To:	<linux-kernel@...r.kernel.org>
CC:	Peter Zijlstra <peterz@...radead.org>,
	Ingo Molnar <mingo@...hat.com>,
	Josh Poimboeuf <jpoimboe@...hat.com>
Subject: [PATCH 2/2] sched: Add smp_rmb() in task rq locking cycles


We migrate a task using the TASK_ON_RQ_MIGRATING state of on_rq:

	raw_spin_lock(&old_rq->lock);
	deactivate_task(old_rq, p, 0);
	p->on_rq = TASK_ON_RQ_MIGRATING;
	set_task_cpu(p, new_cpu);
	raw_spin_unlock(&old_rq->lock);

I.e.:

	write TASK_ON_RQ_MIGRATING
	smp_wmb() (in __set_task_cpu)
	write new_cpu
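
The pairing write barrier comes from __set_task_cpu(). For reference,
a trimmed sketch of that helper (group-scheduling bookkeeping omitted;
see kernel/sched/sched.h for the real thing):

	static inline void __set_task_cpu(struct task_struct *p, unsigned int cpu)
	{
		set_task_rq(p, cpu);
	#ifdef CONFIG_SMP
		/*
		 * Publish the caller's earlier writes (p->on_rq =
		 * TASK_ON_RQ_MIGRATING) before the new ->cpu value
		 * becomes visible to other CPUs.
		 */
		smp_wmb();
		task_thread_info(p)->cpu = cpu;
	#endif
	}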

But {,__}task_rq_lock() do not use smp_rmb(), so they may observe
the new cpu and TASK_ON_RQ_MIGRATING in the opposite order. In that
case {,__}task_rq_lock() locks new_rq before the task is actually
queued on it.

Fix that by ordering the reads with smp_rmb().
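
Spelled out as the two sides of the message-passing pattern
(simplified; pi_lock handling and the retry loop omitted):

	CPU 0 (migration)
	-----------------
	p->on_rq = TASK_ON_RQ_MIGRATING;
	smp_wmb();				/* in __set_task_cpu() */
	task_thread_info(p)->cpu = new_cpu;

	CPU 1 ({,__}task_rq_lock())
	---------------------------
	rq = task_rq(p);			/* reads ->cpu */
	raw_spin_lock(&rq->lock);
	smp_rmb();				/* pairs with the smp_wmb() above */
	if (!task_on_rq_migrating(p))		/* reads ->on_rq */
		return rq;

Without the smp_rmb(), CPU 1 may load ->on_rq before ->cpu, see
!TASK_ON_RQ_MIGRATING together with the new cpu, and lock new_rq
while the task is still in flight.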

Fixes: cca26e8009d1 ("sched: Teach scheduler to understand TASK_ON_RQ_MIGRATING state")
Signed-off-by: Kirill Tkhai <ktkhai@...allels.com>
---
 kernel/sched/core.c  |    8 ++++++--
 kernel/sched/sched.h |    8 ++++++--
 2 files changed, 12 insertions(+), 4 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index fc12a1d..a42fb88 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -319,8 +319,12 @@ static struct rq *task_rq_lock(struct task_struct *p, unsigned long *flags)
 		raw_spin_lock_irqsave(&p->pi_lock, *flags);
 		rq = task_rq(p);
 		raw_spin_lock(&rq->lock);
-		if (likely(rq == task_rq(p) && !task_on_rq_migrating(p)))
-			return rq;
+		if (likely(rq == task_rq(p))) {
+			/* Pairs with smp_wmb() in __set_task_cpu() */
+			smp_rmb();
+			if (likely(!task_on_rq_migrating(p)))
+				return rq;
+		}
 		raw_spin_unlock(&rq->lock);
 		raw_spin_unlock_irqrestore(&p->pi_lock, *flags);
 
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index f65f57c..4d7b03c 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -1031,8 +1031,12 @@ static inline struct rq *__task_rq_lock(struct task_struct *p)
 	for (;;) {
 		rq = task_rq(p);
 		raw_spin_lock(&rq->lock);
-		if (likely(rq == task_rq(p) && !task_on_rq_migrating(p)))
-			return rq;
+		if (likely(rq == task_rq(p))) {
+			/* Pairs with smp_wmb() in __set_task_cpu() */
+			smp_rmb();
+			if (likely(!task_on_rq_migrating(p)))
+				return rq;
+		}
 		raw_spin_unlock(&rq->lock);
 
 		while (unlikely(task_on_rq_migrating(p)))


