Date:	Wed, 23 Nov 2011 15:48:16 +0100
From:	Peter Zijlstra <peterz@...radead.org>
To:	Mike Galbraith <mgalbraith@...e.de>
Cc:	Suresh Siddha <suresh.b.siddha@...el.com>,
	linux-kernel <linux-kernel@...r.kernel.org>,
	Ingo Molnar <mingo@...e.hu>, Paul Turner <pjt@...gle.com>
Subject: Re: [patch 3/7] sched: set skip_clock_update in yield_task_fair()

On Tue, 2011-11-22 at 15:21 +0100, Mike Galbraith wrote:
> This is another case where we are on our way to schedule(),
> so can save a useless clock update and resulting microscopic
> vruntime update.
> 

Every time I see one of these skip_clock_update thingies I get the idea
we should do something smarter, because it's very fragile and getting it
wrong results in a subtle trainwreck..

Would something like the below help in validating this stuff?


---
Index: linux-2.6/kernel/sched/core.c
===================================================================
--- linux-2.6.orig/kernel/sched/core.c
+++ linux-2.6/kernel/sched/core.c
@@ -113,12 +113,27 @@ void update_rq_clock(struct rq *rq)
 {
 	s64 delta;
 
-	if (rq->skip_clock_update > 0)
+	if (rq->skip_clock_update > 0) {
+		/*
+		 * We skipped an update even though we had a full context
+		 * switch in between, badness.
+		 */
+		if (rq->clock_update_stamp != rq->sched_switch)
+			trace_printk("lost an update!!\n");
 		return;
+	}
+
+	/*
+	 * We're updating the clock even though we didn't switch yet.
+	 */
+	if (rq->clock_update_stamp == rq->sched_switch)
+		trace_printk("too many updates!!\n");
 
 	delta = sched_clock_cpu(cpu_of(rq)) - rq->clock;
 	rq->clock += delta;
 	update_rq_clock_task(rq, delta);
+
+	rq->clock_update_stamp = rq->sched_switch;
 }
 
 /*
@@ -1922,6 +1937,9 @@ static void finish_task_switch(struct rq
 #ifdef __ARCH_WANT_INTERRUPTS_ON_CTXSW
 	local_irq_enable();
 #endif /* __ARCH_WANT_INTERRUPTS_ON_CTXSW */
+#ifdef CONFIG_SCHED_DEBUG
+	schedstat_inc(rq, sched_switch);
+#endif
 	finish_lock_switch(rq, prev);
 
 	fire_sched_in_preempt_notifiers(current);
Index: linux-2.6/kernel/sched/sched.h
===================================================================
--- linux-2.6.orig/kernel/sched/sched.h
+++ linux-2.6/kernel/sched/sched.h
@@ -374,6 +374,8 @@ struct rq {
 	unsigned char nohz_balance_kick;
 #endif
 	int skip_clock_update;
+	unsigned int clock_update_stamp;
+
 
 	/* capture load from *all* tasks on this cpu: */
 	struct load_weight load;

