Message-Id: <1164624611.5892.27.camel@Homer.simpson.net>
Date:	Mon, 27 Nov 2006 11:50:11 +0100
From:	Mike Galbraith <efault@....de>
To:	Don Mullis <dwm@...r.net>
Cc:	Andrew Morton <akpm@...l.org>, linux-kernel@...r.kernel.org,
	mingo@...e.hu
Subject: [patch] Re: 2.6.19-rc6-mm1 --
	sched-improve-migration-accuracy.patch slows boot

On Sun, 2006-11-26 at 17:38 -0800, Don Mullis wrote: 
> > This must be a bisection false positive.  The patch in question is
> > essentially a no-op for a UP kernel.

Duh!  Except for the bug, which doesn't care either way.

> Testing alternately with 
> 	1) all -mm1 patches applied, and 
> 	2) all except sched-improve-migration-accuracy*.patch applied,
> confirms the misbehavior.

While fixing a sched_time accounting buglet, I stupidly broke sleep_avg
accounting, and quite thoroughly for CPU hogs.  Since I updated a task's
timestamp at tick time, but sleep_avg adjustment only takes place at
schedule time, every tick a task took without scheduling meant a tick of
run time went uncharged in the sleep_avg accounting.  The patch below
should fix it; can you confirm?
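
To illustrate, here is a minimal sketch of the accounting with
simplified, hypothetical names (not the actual kernel code): the buggy
tick handler advances the same timestamp that sleep_avg is measured
against, while the fixed version gives sched_time its own reference
point so the tick never disturbs sleep_avg accounting.

struct task {
	unsigned long long timestamp;	/* sleep_avg reference point */
	unsigned long long last_ran;	/* sched_time reference point */
	unsigned long long sleep_avg;	/* interactivity credit */
	unsigned long long sched_time;	/* accumulated run time */
};

/* Buggy tick: advancing timestamp here resets the sleep_avg
 * reference, so the run time since the previous tick is never
 * charged below and CPU hogs keep an inflated sleep_avg. */
static void tick_buggy(struct task *p, unsigned long long now)
{
	p->sched_time += now - p->timestamp;
	p->timestamp = now;
}

/* Fixed tick: sched_time gets its own reference; timestamp is
 * left alone until schedule time. */
static void tick_fixed(struct task *p, unsigned long long now)
{
	p->sched_time += now - p->last_ran;
	p->last_ran = now;
}

/* Schedule time: charge the full run time against sleep_avg. */
static void charge_sleep_avg(struct task *p, unsigned long long now)
{
	unsigned long long run_time = now - p->timestamp;

	p->sleep_avg -= run_time;
	if ((long long)p->sleep_avg <= 0)
		p->sleep_avg = 0;
	p->timestamp = p->last_ran = now;
}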

Fix sleep_avg breakage induced by sched-improve-migration-accuracy.patch.
Use p->last_ran instead of p->timestamp to fix the sched_time buglet.

Signed-off-by: Mike Galbraith <efault@....de>

--- linux-2.6.19-rc6-mm1/kernel/sched.c.org	2006-11-27 10:24:07.000000000 +0100
+++ linux-2.6.19-rc6-mm1/kernel/sched.c	2006-11-27 10:28:59.000000000 +0100
@@ -3024,8 +3024,8 @@ EXPORT_PER_CPU_SYMBOL(kstat);
 static inline void
 update_cpu_clock(struct task_struct *p, struct rq *rq, unsigned long long now)
 {
-	p->sched_time += now - p->timestamp;
-	p->timestamp = rq->most_recent_timestamp = now;
+	p->sched_time += now - p->last_ran;
+	p->last_ran = rq->most_recent_timestamp = now;
 }
 
 /*
@@ -3038,7 +3038,7 @@ unsigned long long current_sched_time(co
 	unsigned long flags;
 
 	local_irq_save(flags);
-	ns = p->sched_time + sched_clock() - p->timestamp;
+	ns = p->sched_time + sched_clock() - p->last_ran;
 	local_irq_restore(flags);
 
 	return ns;
@@ -3553,10 +3553,11 @@ switch_tasks:
 	prev->sleep_avg -= run_time;
 	if ((long)prev->sleep_avg <= 0)
 		prev->sleep_avg = 0;
+	prev->timestamp = prev->last_ran = now;
 
 	sched_info_switch(prev, next);
 	if (likely(prev != next)) {
-		next->timestamp = prev->last_ran = now;
+		next->timestamp = now;
 		rq->nr_switches++;
 		rq->curr = next;
 		++*switch_count;


