Date:	Tue, 24 Jun 2014 09:15:51 -0700
From:	Tim Chen <tim.c.chen@...ux.intel.com>
To:	Peter Zijlstra <peterz@...radead.org>
Cc:	Ingo Molnar <mingo@...e.hu>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Davidlohr Bueso <davidlohr@...com>,
	Alex Shi <alex.shi@...aro.org>,
	Andi Kleen <andi@...stfloor.org>,
	Michel Lespinasse <walken@...gle.com>,
	Rik van Riel <riel@...hat.com>,
	Peter Hurley <peter@...leysoftware.com>,
	Thomas Gleixner <tglx@...utronix.de>,
	"Paul E.McKenney" <paulmck@...ux.vnet.ibm.com>,
	Jason Low <jason.low2@...com>, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v4] sched: Fast idling of CPU when system is partially
 loaded

On Mon, 2014-06-23 at 12:16 -0700, Tim Chen wrote:
> Thanks to Jason, Andi and Peter for the review.  I've updated
> the code with the simplified logic Peter suggested.
> 
> When a system is lightly loaded (i.e. no more than 1 job per cpu),
> attempting to pull a job to a cpu before putting it to idle is
> unnecessary and can be skipped.  This patch adds an indicator so the
> scheduler can tell when no CPU in the system has more than 1 active
> job, and skip these needless job pulls.
> 
> On a 4 socket machine running a request/response kind of workload from
> clients, we saw about 0.13 msec of delay whenever we went through a
> full load balance to try to pull jobs from all the other cpus.  Since
> only 0.1 msec was spent processing the request and generating the
> response, the 0.13 msec of load balance overhead exceeded the actual
> work being done.  This overhead can be skipped much of the time on
> lightly loaded systems.
> 
> We tested the patch with a netperf request/response workload that keeps
> the server busy on half the cpus of a 4 socket system, and found it
> eliminated 75% of the load balance attempts before idling a cpu.
> 
> The overhead of setting/clearing the indicator is low, as we already
> gather the necessary info when we call add_nr_running and
> update_sd_lb_stats.  We switch back to full load balancing immediately
> if any cpu gets more than one job on its run queue in add_nr_running,
> and we clear the indicator to avoid load balancing once we find that
> no cpu has more than one job while scanning the run queues in
> update_sg_lb_stats.  That is, we are aggressive about turning load
> balancing on and opportunistic about skipping it.
> 
> Signed-off-by: Tim Chen <tim.c.chen@...ux.intel.com>
> Acked-by: Jason Low <jason.low2@...com>

Peter,

I need to fix up the code that updates the indicator so it sits under
the CONFIG_SMP compile flag, since rq->rd is only defined for SMP
builds.

I've also attached the complete updated patch below.

Thanks.

Tim

diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 6d25f1d..d051712 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -1222,9 +1222,10 @@ static inline void add_nr_running(struct rq *rq, unsigned count)
 	rq->nr_running = prev_nr + count;
 
 	if (prev_nr < 2 && rq->nr_running >= 2) {
+#ifdef CONFIG_SMP
 		if (!rq->rd->overload)
 			rq->rd->overload = true;
-
+#endif
 #ifdef CONFIG_NO_HZ_FULL
 		if (tick_nohz_full_cpu(rq->cpu)) {
 			/* Order rq->nr_running write against the IPI */


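For reference, here is roughly what add_nr_running() looks like with
both guards in place.  This is a sketch pieced together from the hunks
in this mail, not a literal copy of the file; the prev_nr setup at the
top is assumed from the surrounding context.

static inline void add_nr_running(struct rq *rq, unsigned count)
{
	unsigned prev_nr = rq->nr_running;	/* assumed from context */

	rq->nr_running = prev_nr + count;

	if (prev_nr < 2 && rq->nr_running >= 2) {
#ifdef CONFIG_SMP
		/* rq->rd is only defined for SMP builds, hence the guard */
		if (!rq->rd->overload)
			rq->rd->overload = true;
#endif
#ifdef CONFIG_NO_HZ_FULL
		if (tick_nohz_full_cpu(rq->cpu)) {
			/* Order rq->nr_running write against the IPI */
			smp_wmb();
			smp_send_reschedule(rq->cpu);
		}
#endif
	}
}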

The complete updated patch is attached below:
---
From 8716a50c85f98a92d2240da923ef4ae9a9719bbe Mon Sep 17 00:00:00 2001
Message-Id: <8716a50c85f98a92d2240da923ef4ae9a9719bbe.1403625949.git.tim.c.chen@...ux.intel.com>
From: Tim Chen <tim.c.chen@...ux.intel.com>
Date: Thu, 12 Jun 2014 11:28:38 -0700
Subject: [PATCH v5] sched: Fast idling of CPU when system is partially loaded
To: Ingo Molnar <mingo@...e.hu>, Peter Zijlstra <peterz@...radead.org>
Cc: Andrew Morton <akpm@...ux-foundation.org>, Davidlohr Bueso <davidlohr@...com>, Alex Shi <alex.shi@...aro.org>, Andi Kleen <andi@...stfloor.org>, Michel Lespinasse <walken@...gle.com>, Rik van Riel <riel@...hat.com>, Peter Hurley <peter@...leysoftware.com>, Thomas Gleixner <tglx@...utronix.de>, Paul E. McKenney <paulmck@...ux.vnet.ibm.com>, Jason Low <jason.low2@...com>, linux-kernel@...r.kernel.org

When a system is lightly loaded (i.e. no more than 1 job per cpu),
attempting to pull a job to a cpu before putting it to idle is
unnecessary and can be skipped.  This patch adds an indicator so the
scheduler can tell when no CPU in the system has more than 1 active
job, and skip these needless job pulls.

On a 4 socket machine running a request/response kind of workload from
clients, we saw about 0.13 msec of delay whenever we went through a
full load balance to try to pull jobs from all the other cpus.  Since
only 0.1 msec was spent processing the request and generating the
response, the 0.13 msec of load balance overhead exceeded the actual
work being done.  This overhead can be skipped much of the time on
lightly loaded systems.

We tested the patch with a netperf request/response workload that keeps
the server busy on half the cpus of a 4 socket system, and found it
eliminated 75% of the load balance attempts before idling a cpu.

The overhead of setting/clearing the indicator is low, as we already
gather the necessary info when we call add_nr_running and
update_sd_lb_stats.  We switch back to full load balancing immediately
if any cpu gets more than one job on its run queue in add_nr_running,
and we clear the indicator to avoid load balancing once we find that
no cpu has more than one job while scanning the run queues in
update_sg_lb_stats.  That is, we are aggressive about turning load
balancing on and opportunistic about skipping it.

Signed-off-by: Tim Chen <tim.c.chen@...ux.intel.com>
Acked-by: Jason Low <jason.low2@...com>
Acked-by: Rik van Riel <riel@...hat.com>
---
 kernel/sched/fair.c  | 21 ++++++++++++++++++---
 kernel/sched/sched.h | 11 +++++++++--
 2 files changed, 27 insertions(+), 5 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index fea7d33..7dfe2ad 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5867,7 +5867,8 @@ static inline int sg_capacity_factor(struct lb_env *env, struct sched_group *gro
  */
 static inline void update_sg_lb_stats(struct lb_env *env,
 			struct sched_group *group, int load_idx,
-			int local_group, struct sg_lb_stats *sgs)
+			int local_group, struct sg_lb_stats *sgs,
+			bool *overload)
 {
 	unsigned long load;
 	int i;
@@ -5885,6 +5886,10 @@ static inline void update_sg_lb_stats(struct lb_env *env,
 
 		sgs->group_load += load;
 		sgs->sum_nr_running += rq->nr_running;
+
+		if (rq->nr_running > 1)
+			*overload = true;
+
 #ifdef CONFIG_NUMA_BALANCING
 		sgs->nr_numa_running += rq->nr_numa_running;
 		sgs->nr_preferred_running += rq->nr_preferred_running;
@@ -5995,6 +6000,7 @@ static inline void update_sd_lb_stats(struct lb_env *env, struct sd_lb_stats *sd
 	struct sched_group *sg = env->sd->groups;
 	struct sg_lb_stats tmp_sgs;
 	int load_idx, prefer_sibling = 0;
+	bool overload = false;
 
 	if (child && child->flags & SD_PREFER_SIBLING)
 		prefer_sibling = 1;
@@ -6015,7 +6021,8 @@ static inline void update_sd_lb_stats(struct lb_env *env, struct sd_lb_stats *sd
 				update_group_capacity(env->sd, env->dst_cpu);
 		}
 
-		update_sg_lb_stats(env, sg, load_idx, local_group, sgs);
+		update_sg_lb_stats(env, sg, load_idx, local_group, sgs,
+						&overload);
 
 		if (local_group)
 			goto next_group;
@@ -6049,6 +6056,13 @@ next_group:
 
 	if (env->sd->flags & SD_NUMA)
 		env->fbq_type = fbq_classify_group(&sds->busiest_stat);
+
+	if (!env->sd->parent) {
+		/* update overload indicator if we are at root domain */
+		if (env->dst_rq->rd->overload != overload)
+			env->dst_rq->rd->overload = overload;
+	}
+
 }
 
 /**
@@ -6767,7 +6781,8 @@ static int idle_balance(struct rq *this_rq)
 	 */
 	this_rq->idle_stamp = rq_clock(this_rq);
 
-	if (this_rq->avg_idle < sysctl_sched_migration_cost) {
+	if (this_rq->avg_idle < sysctl_sched_migration_cost ||
+	    !this_rq->rd->overload) {
 		rcu_read_lock();
 		sd = rcu_dereference_check_sched_domain(this_rq->sd);
 		if (sd)
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 31cc02e..d051712 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -477,6 +477,9 @@ struct root_domain {
 	cpumask_var_t span;
 	cpumask_var_t online;
 
+	/* Indicate more than one runnable task for any CPU */
+	bool overload;
+
 	/*
 	 * The bit corresponding to a CPU gets set here if such CPU has more
 	 * than one runnable -deadline task (as it is below for RT tasks).
@@ -1218,15 +1221,19 @@ static inline void add_nr_running(struct rq *rq, unsigned count)
 
 	rq->nr_running = prev_nr + count;
 
-#ifdef CONFIG_NO_HZ_FULL
 	if (prev_nr < 2 && rq->nr_running >= 2) {
+#ifdef CONFIG_SMP
+		if (!rq->rd->overload)
+			rq->rd->overload = true;
+#endif
+#ifdef CONFIG_NO_HZ_FULL
 		if (tick_nohz_full_cpu(rq->cpu)) {
 			/* Order rq->nr_running write against the IPI */
 			smp_wmb();
 			smp_send_reschedule(rq->cpu);
 		}
-       }
 #endif
+	}
 }
 
 static inline void sub_nr_running(struct rq *rq, unsigned count)
-- 
1.7.11.7
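
For anyone skimming the patch, the indicator's life cycle condenses to
the following sketch.  The snippets are lifted from the hunks above,
with the surrounding code elided.

/*
 * 1. Set: add_nr_running() marks the root domain overloaded as soon as
 *    any run queue goes from fewer than 2 to 2 or more tasks.
 */
	if (prev_nr < 2 && rq->nr_running >= 2) {
#ifdef CONFIG_SMP
		if (!rq->rd->overload)
			rq->rd->overload = true;
#endif
		/* ... */
	}

/*
 * 2. Clear/update: update_sg_lb_stats() notes whether any cpu in the
 *    group has more than one runnable task, and update_sd_lb_stats()
 *    writes the result back once it reaches the root domain.
 */
	if (rq->nr_running > 1)
		*overload = true;
	/* ... */
	if (!env->sd->parent) {
		/* update overload indicator if we are at root domain */
		if (env->dst_rq->rd->overload != overload)
			env->dst_rq->rd->overload = overload;
	}

/*
 * 3. Consume: idle_balance() skips the full balance attempt when no cpu
 *    is overloaded (or when avg_idle is too small to amortize it).
 */
	if (this_rq->avg_idle < sysctl_sched_migration_cost ||
	    !this_rq->rd->overload) {
		/* bail out without trying to pull work */
	}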



