Date:	Sun, 15 Apr 2007 18:13:48 +0200
From:	Ingo Molnar <mingo@...e.hu>
To:	S.Çağlar Onur <caglar@...dus.org.tr>
Cc:	linux-kernel@...r.kernel.org,
	Michael Lothian <mike@...eburn.co.uk>,
	Christophe Thommeret <hftom@...e.fr>,
	Christoph Pfister <christophpfister@...il.com>,
	Jurgen Kofler <kaffeine@....net>
Subject: Re: Kaffeine problem with CFS


* S.Çağlar Onur <caglar@...dus.org.tr> wrote:

> > hm, could you try to strace it and/or attach gdb to it and figure 
> > out what's wrong? (perhaps involving the Kaffeine developers too?) 
> > As long as it's not a kernel level crash i cannot see how the 
> > scheduler could directly cause this - other than by accident 
> > creating a scheduling pattern that triggers a user-space bug more 
> > often than with other schedulers.
> 
> ...
> futex(0x89ac218, FUTEX_WAIT, 2, NULL)   = 0
> futex(0x89ac218, FUTEX_WAIT, 2, NULL)   = 0
> futex(0x89ac218, FUTEX_WAIT, 2, NULL)   = 0
> futex(0x89ac218, FUTEX_WAIT, 2, NULL)   = 0
> futex(0x89ac218, FUTEX_WAIT, 2, NULL)   = 0
> futex(0x89ac218, FUTEX_WAIT, 2, NULL)   = -1 EINTR (Interrupted system call)
> --- SIGINT (Interrupt) @ 0 (0) ---
> +++ killed by SIGINT +++
> 
> is where freeze occurs. Full log can be found at [1]

> [1] http://cekirdek.pardus.org.tr/~caglar/strace.kaffeine

thanks. This does have the appearance of a userspace race condition of 
some sort. Can you trigger this hang with the patch below applied to 
the vanilla tree as well? (with no CFS patch applied)

If yes, this would suggest that Kaffeine has some sort of 
child-runs-first problem (which CFS changes to parent-runs-first; 
Kaffeine starts a couple of threads, and the futex calls are a sign of 
thread<->thread communication).
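
For illustration, here is a hypothetical sketch (not Kaffeine code; the
'ready' flag and the fork()ed helper are made up) of what such a
child-runs-first dependency can look like: the parent creates a child
and then checks shared state that only the child sets, with no real
synchronization, so the outcome depends entirely on which task the
scheduler runs first:

/*
 * Hypothetical illustration only - not taken from Kaffeine.
 */
#include <stdio.h>
#include <unistd.h>
#include <sys/mman.h>
#include <sys/types.h>
#include <sys/wait.h>

int main(void)
{
	/* flag in a shared anonymous mapping, visible to the child */
	volatile int *ready = mmap(NULL, sizeof(*ready),
				   PROT_READ | PROT_WRITE,
				   MAP_SHARED | MAP_ANONYMOUS, -1, 0);
	if (ready == MAP_FAILED)
		return 1;
	*ready = 0;

	pid_t pid = fork();
	if (pid == 0) {
		*ready = 1;	/* child announces itself ... */
		_exit(0);	/* ... and exits immediately */
	}

	/*
	 * Buggy: a single check instead of real synchronization
	 * (waitpid(), a pipe read, a semaphore).  Which branch runs
	 * depends purely on whether the scheduler gave the CPU to
	 * the child or to the parent first after fork().
	 */
	if (*ready)
		printf("child ran first - the assumption held\n");
	else
		printf("parent ran first - the assumption broke\n");

	waitpid(pid, NULL, 0);
	return 0;
}

The vanilla scheduler deliberately runs a fork()ed (non-CLONE_VM) child
first - that is exactly the code the patch below removes - so an
application relying on this ordering only misbehaves once the ordering
changes, e.g. under CFS.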

[ I have also Cc'ed the Kaffeine folks - maybe your strace gives them
  an idea about what the problem could be :) ]

	Ingo

---
 kernel/sched.c |   70 ++-------------------------------------------------------
 1 file changed, 3 insertions(+), 67 deletions(-)

Index: linux/kernel/sched.c
===================================================================
--- linux.orig/kernel/sched.c
+++ linux/kernel/sched.c
@@ -1653,77 +1653,13 @@ void fastcall sched_fork(struct task_str
  */
 void fastcall wake_up_new_task(struct task_struct *p, unsigned long clone_flags)
 {
-	struct rq *rq, *this_rq;
 	unsigned long flags;
-	int this_cpu, cpu;
+	struct rq *rq;
 
 	rq = task_rq_lock(p, &flags);
 	BUG_ON(p->state != TASK_RUNNING);
-	this_cpu = smp_processor_id();
-	cpu = task_cpu(p);
-
-	/*
-	 * We decrease the sleep average of forking parents
-	 * and children as well, to keep max-interactive tasks
-	 * from forking tasks that are max-interactive. The parent
-	 * (current) is done further down, under its lock.
-	 */
-	p->sleep_avg = JIFFIES_TO_NS(CURRENT_BONUS(p) *
-		CHILD_PENALTY / 100 * MAX_SLEEP_AVG / MAX_BONUS);
-
-	p->prio = effective_prio(p);
-
-	if (likely(cpu == this_cpu)) {
-		if (!(clone_flags & CLONE_VM)) {
-			/*
-			 * The VM isn't cloned, so we're in a good position to
-			 * do child-runs-first in anticipation of an exec. This
-			 * usually avoids a lot of COW overhead.
-			 */
-			if (unlikely(!current->array))
-				__activate_task(p, rq);
-			else {
-				p->prio = current->prio;
-				p->normal_prio = current->normal_prio;
-				list_add_tail(&p->run_list, &current->run_list);
-				p->array = current->array;
-				p->array->nr_active++;
-				inc_nr_running(p, rq);
-			}
-			set_need_resched();
-		} else
-			/* Run child last */
-			__activate_task(p, rq);
-		/*
-		 * We skip the following code due to cpu == this_cpu
-	 	 *
-		 *   task_rq_unlock(rq, &flags);
-		 *   this_rq = task_rq_lock(current, &flags);
-		 */
-		this_rq = rq;
-	} else {
-		this_rq = cpu_rq(this_cpu);
-
-		/*
-		 * Not the local CPU - must adjust timestamp. This should
-		 * get optimised away in the !CONFIG_SMP case.
-		 */
-		p->timestamp = (p->timestamp - this_rq->most_recent_timestamp)
-					+ rq->most_recent_timestamp;
-		__activate_task(p, rq);
-		if (TASK_PREEMPTS_CURR(p, rq))
-			resched_task(rq->curr);
-
-		/*
-		 * Parent and child are on different CPUs, now get the
-		 * parent runqueue to update the parent's ->sleep_avg:
-		 */
-		task_rq_unlock(rq, &flags);
-		this_rq = task_rq_lock(current, &flags);
-	}
-	current->sleep_avg = JIFFIES_TO_NS(CURRENT_BONUS(current) *
-		PARENT_PENALTY / 100 * MAX_SLEEP_AVG / MAX_BONUS);
-	task_rq_unlock(this_rq, &flags);
+	__activate_task(p, rq);
+	task_rq_unlock(rq, &flags);
 }
 
 /*