Date:	Sun, 21 Nov 2010 10:14:01 -0700
From:	Mike Galbraith <efault@....de>
To:	"Bjoern B. Brandenburg" <bbb.lst@...il.com>
Cc:	Peter Zijlstra <peterz@...radead.org>, Ingo Molnar <mingo@...e.hu>,
	Andrea Bastoni <bastoni@...g.uniroma2.it>,
	"James H. Anderson" <anderson@...unc.edu>,
	linux-kernel@...r.kernel.org
Subject: Re: Scheduler bug related to rq->skip_clock_update?

On Sat, 2010-11-20 at 23:22 -0500, Bjoern B. Brandenburg wrote:

> I was under the impression that, as an invariant, tasks should not have
> TIF_NEED_RESCHED set after they've blocked. In this case, the idle load
> balancer should not mark the task that's on its way out with
> set_tsk_need_resched().

Nice find.
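
For reference, the path that sets the flag here is roughly balance_tasks() ->
pull_task() -> check_preempt_curr() -> resched_task(this_rq->curr), and since
the balancing runs on behalf of a CPU that is idle or going idle, this_rq->curr
is exactly the task that's on its way out.  A sketch of pull_task() from this
era's sched_fair.c, paraphrased from memory (details may differ slightly):

static void pull_task(struct rq *src_rq, struct task_struct *p,
		      struct rq *this_rq, int this_cpu)
{
	deactivate_task(src_rq, p, 0);
	set_task_cpu(p, this_cpu);
	activate_task(this_rq, p, 0);
	/* may end in resched_task(this_rq->curr) */
	check_preempt_curr(this_rq, p, 0);
}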

> In any case, check_preempt_curr() seems to assume that a resuming task cannot
> have TIF_NEED_RESCHED already set. Setting skip_clock_update on a remote CPU
> that hasn't even been notified via IPI seems wrong.

Yes. Does the below fix it up for you?
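
For completeness, the tail of check_preempt_curr() that makes that assumption
looks roughly like the below (paraphrased from this era's kernel/sched.c, so
details may differ).  If rq->curr already has TIF_NEED_RESCHED set for some
unrelated reason, the final test fires and skip_clock_update is set even
though no schedule() is imminent on that CPU:

static void check_preempt_curr(struct rq *rq, struct task_struct *p, int flags)
{
	const struct sched_class *class;

	if (p->sched_class == rq->curr->sched_class) {
		rq->curr->sched_class->check_preempt_curr(rq, p, flags);
	} else {
		for_each_class(class) {
			if (class == rq->curr->sched_class)
				break;
			if (class == p->sched_class) {
				resched_task(rq->curr);
				break;
			}
		}
	}

	/*
	 * A queue event has occurred, and we're going to schedule.  In
	 * this case, we can save a useless back to back clock update.
	 */
	if (test_tsk_need_resched(rq->curr))
		rq->skip_clock_update = 1;
}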

Sched: clear_tsk_need_resched() after pull_task() during NEWIDLE balancing

pull_task() may call set_tsk_need_resched() on a deactivated task,
leaving it vulnerable to an inappropriate preemption after wakeup.

This also confuses the skip_clock_update logic, which assumes that
schedule() will be called in very short order after the flag is set.
Make that logic more robust by clearing the flag in update_rq_clock()
itself, so at most one clock update can be skipped.

Signed-off-by: Mike Galbraith <efault@....de>
Cc: Ingo Molnar <mingo@...e.hu>
Cc: Peter Zijlstra <a.p.zijlstra@...llo.nl>
Cc: Bjoern B. Brandenburg <bbb.lst@...il.com>
Reported-by: Bjoern B. Brandenburg <bbb.lst@...il.com>

---
 kernel/sched.c      |    3 ++-
 kernel/sched_fair.c |   10 ++++++++--
 2 files changed, 10 insertions(+), 3 deletions(-)

Index: linux-2.6/kernel/sched.c
===================================================================
--- linux-2.6.orig/kernel/sched.c
+++ linux-2.6/kernel/sched.c
@@ -657,6 +657,8 @@ inline void update_rq_clock(struct rq *r
 
 		sched_irq_time_avg_update(rq, irq_time);
 	}
+
+	rq->skip_clock_update = 0;
 }
 
 /*
@@ -3714,7 +3716,6 @@ static void put_prev_task(struct rq *rq,
 {
 	if (prev->se.on_rq)
 		update_rq_clock(rq);
-	rq->skip_clock_update = 0;
 	prev->sched_class->put_prev_task(rq, prev);
 }
 
Index: linux-2.6/kernel/sched_fair.c
===================================================================
--- linux-2.6.orig/kernel/sched_fair.c
+++ linux-2.6/kernel/sched_fair.c
@@ -2019,15 +2019,21 @@ balance_tasks(struct rq *this_rq, int th
 		pulled++;
 		rem_load_move -= p->se.load.weight;
 
-#ifdef CONFIG_PREEMPT
 		/*
+		 * pull_task() may have set_tsk_need_resched().  Clear it
+		 * lest a sleeper awaken and be inappropriately preempted
+		 * shortly thereafter.
+		 *
 		 * NEWIDLE balancing is a source of latency, so preemptible
 		 * kernels will stop after the first task is pulled to minimize
 		 * the critical section.
 		 */
-		if (idle == CPU_NEWLY_IDLE)
+		if (idle == CPU_NEWLY_IDLE) {
+			clear_tsk_need_resched(this_rq->curr);
+#ifdef CONFIG_PREEMPT
 			break;
 #endif
+		}
 
 		/*
 		 * We only want to steal up to the prescribed amount of

