Date:	Tue, 5 Dec 2006 07:21:48 -0800
From:	"Chen, Kenneth W" <kenneth.w.chen@...el.com>
To:	"Siddha, Suresh B" <suresh.b.siddha@...el.com>,
	"'Andrew Morton'" <akpm@...l.org>,
	"'Christoph Lameter'" <clameter@....com>,
	"Ingo Molnar" <mingo@...e.hu>
Cc:	"linux-kernel" <linux-kernel@...r.kernel.org>
Subject: [-mm patch] sched remove lb_stopbalance counter

Regarding sched-decrease-number-of-load-balances.patch currently in
the -mm tree: I would like to revert the change that adds the
lb_stopbalance counter.  That count can be calculated as
lb_balanced - lb_nobusyg - lb_nobusyq, so there is no need to create a
gazillion counters when we can derive the value.  I'm also against
changing the schedstat format unless it is absolutely necessary, since
every userland tool that parses /proc/schedstat then needs to be
updated, and it's a real pain trying to keep multiple versions of the
format working.
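
For what it's worth, here is a rough userland sketch (not part of the
patch; the helper name and the sample numbers are made up) of how a
tool reading the version-12 format could still recover the removed
counter from the fields that remain exported:

#include <stdio.h>

/*
 * Hypothetical helper, for illustration only: per the reasoning above,
 * the *balance == 0 early exit is the only out_balanced path that bumps
 * neither lb_nobusyg nor lb_nobusyq, so the old lb_stopbalance value
 * can be derived from the remaining counters.
 */
static unsigned long derive_lb_stopbalance(unsigned long lb_balanced,
					   unsigned long lb_nobusyg,
					   unsigned long lb_nobusyq)
{
	return lb_balanced - lb_nobusyg - lb_nobusyq;
}

int main(void)
{
	/* placeholder counter values; a real tool would read them from
	 * the per-domain fields of /proc/schedstat (parsing omitted) */
	unsigned long lb_balanced = 100, lb_nobusyg = 30, lb_nobusyq = 20;
	FILE *fp = fopen("/proc/schedstat", "r");
	int version = 0;

	/* the first line carries the format version this patch bumps
	 * back to 12, so tools can detect which layout they are reading */
	if (fp && fscanf(fp, "version %d", &version) == 1)
		printf("schedstat format version %d\n", version);
	if (fp)
		fclose(fp);

	printf("derived lb_stopbalance = %lu\n",
	       derive_lb_stopbalance(lb_balanced, lb_nobusyg, lb_nobusyq));
	return 0;
}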


Signed-off-by: Ken Chen <kenneth.w.chen@...el.com>


--- ./include/linux/sched.h.orig	2006-12-05 07:56:11.000000000 -0800
+++ ./include/linux/sched.h	2006-12-05 07:57:55.000000000 -0800
@@ -684,7 +684,6 @@ struct sched_domain {
 	unsigned long lb_hot_gained[MAX_IDLE_TYPES];
 	unsigned long lb_nobusyg[MAX_IDLE_TYPES];
 	unsigned long lb_nobusyq[MAX_IDLE_TYPES];
-	unsigned long lb_stopbalance[MAX_IDLE_TYPES];
 
 	/* Active load balancing */
 	unsigned long alb_cnt;
--- ./kernel/sched.c.orig	2006-12-05 07:56:31.000000000 -0800
+++ ./kernel/sched.c	2006-12-05 08:02:11.000000000 -0800
@@ -428,7 +428,7 @@ static inline void task_rq_unlock(struct
  * bump this up when changing the output format or the meaning of an existing
  * format, so that tools can adapt (or abort)
  */
-#define SCHEDSTAT_VERSION 13
+#define SCHEDSTAT_VERSION 12
 
 static int show_schedstat(struct seq_file *seq, void *v)
 {
@@ -466,7 +466,7 @@ static int show_schedstat(struct seq_fil
 			seq_printf(seq, "domain%d %s", dcnt++, mask_str);
 			for (itype = SCHED_IDLE; itype < MAX_IDLE_TYPES;
 					itype++) {
-				seq_printf(seq, " %lu %lu %lu %lu %lu %lu %lu %lu %lu",
+				seq_printf(seq, " %lu %lu %lu %lu %lu %lu %lu %lu",
 				    sd->lb_cnt[itype],
 				    sd->lb_balanced[itype],
 				    sd->lb_failed[itype],
@@ -474,8 +474,7 @@ static int show_schedstat(struct seq_fil
 				    sd->lb_gained[itype],
 				    sd->lb_hot_gained[itype],
 				    sd->lb_nobusyq[itype],
-				    sd->lb_nobusyg[itype],
-				    sd->lb_stopbalance[itype]);
+				    sd->lb_nobusyg[itype]);
 			}
 			seq_printf(seq, " %lu %lu %lu %lu %lu %lu %lu %lu %lu %lu %lu %lu\n",
 			    sd->alb_cnt, sd->alb_failed, sd->alb_pushed,
@@ -2579,10 +2578,8 @@ redo:
 	group = find_busiest_group(sd, this_cpu, &imbalance, idle, &sd_idle,
 				   &cpus, balance);
 
-	if (*balance == 0) {
-		schedstat_inc(sd, lb_stopbalance[idle]);
+	if (*balance == 0)
 		goto out_balanced;
-	}
 
 	if (!group) {
 		schedstat_inc(sd, lb_nobusyg[idle]);



