Date:	Wed, 24 Jun 2015 09:11:12 +0200
From:	Vincent Guittot <vincent.guittot@...aro.org>
To:	yuyang.du@...el.com, linux-kernel@...r.kernel.org,
	mingo@...nel.org, peterz@...radead.org
Cc:	pjt@...gle.com, bsegall@...gle.com, morten.rasmussen@....com,
	dietmar.eggemann@....com, len.brown@...el.com,
	rafael.j.wysocki@...el.com, fengguang.wu@...el.com,
	boqun.feng@...il.com, srikar@...ux.vnet.ibm.com,
	Vincent Guittot <vincent.guittot@...aro.org>
Subject: [PATCH] sched: update blocked load of idle cpus

The load and utilization of idle CPUs must be updated periodically in order
to decay their blocked part.

If CONFIG_FAIR_GROUP_SCHED is not set, the load and utilization of idle CPUs
are not decayed and stay at the last values computed before the CPUs went
idle.

Signed-off-by: Vincent Guittot <vincent.guittot@...aro.org>
---
Hi Yuyang,

While testing your patchset without CONFIG_FAIR_GROUP_SCHED, I noticed that
the load of idle CPUs sometimes stays at a high value even though they have
not been used for a while, because we are not decaying the blocked load.
Furthermore, the periodic load balance was not pulling tasks onto some idle
CPUs because their load stayed high.

This patch fixes the issue.
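
For illustration, here is a rough standalone sketch of the behaviour the patch
restores. Under PELT the blocked load decays geometrically, losing roughly half
of its value every ~32 ms, so an idle CPU that never gets updated keeps
advertising whatever load it had when it went idle. The helper below is made up
for this example (it is not the kernel's decay_load()/__update_load_avg() code)
and it ignores the fractional decay factor:

/* Illustrative only: crude geometric decay of a blocked load value. */
#include <stdio.h>

static unsigned long decay_blocked_load(unsigned long load, unsigned int periods)
{
	/* halve once per 32 elapsed ~1 ms periods, dropping the fractional part */
	while (periods >= 32 && load) {
		load >>= 1;
		periods -= 32;
	}
	return load;
}

int main(void)
{
	unsigned long load = 1024;

	/* after ~128 ms of idle time the blocked load should be about 1/16th */
	printf("blocked load: %lu -> %lu\n", load,
	       decay_blocked_load(load, 128));
	return 0;
}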

Regards,
Vincent

 kernel/sched/fair.c | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index c5f18d9..665cc4b 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5864,6 +5864,17 @@ static unsigned long task_h_load(struct task_struct *p)
 #else
 static inline void update_blocked_averages(int cpu)
 {
+	struct rq *rq = cpu_rq(cpu);
+	struct cfs_rq *cfs_rq = &rq->cfs;
+	unsigned long flags;
+
+	raw_spin_lock_irqsave(&rq->lock, flags);
+	update_rq_clock(rq);
+
+	update_cfs_rq_load_avg(cfs_rq_clock_task(cfs_rq), cfs_rq);
+
+	raw_spin_unlock_irqrestore(&rq->lock, flags);
+
 }
 
 static unsigned long task_h_load(struct task_struct *p)
-- 
1.9.1
