Date:	Thu, 17 Apr 2008 15:06:34 -0400
From:	Gregory Haskins <ghaskins@...ell.com>
To:	suresh.b.siddha@...el.com
Cc:	mingo@...e.hu, rostedt@...dmis.org, chinang.ma@...el.com,
	arjan@...ux.intel.com, willy@...ux.intel.com, ghaskins@...ell.com,
	linux-kernel@...r.kernel.org, linux-rt-users@...r.kernel.org
Subject: [PATCH] sched: push rt tasks only if newly activated tasks have been
	added

SCHED_RR tasks can context-switch many times without the run-queue having
changed.  Trying to push on every context switch is therefore wasted effort:
if the push failed the first time, it will likely fail on subsequent attempts
as well.  Instead, set a flag once we have successfully pushed away as many
tasks as possible, and clear it only when the run-queue enqueues a new task
(effectively making it a run-queue "dirty bit").  When new tasks are added we
try again.  If any remote run-queue lowers its priority in the meantime, it
will pull from us, as it always does.
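
To make the mechanism concrete, here is a minimal user-space sketch of the
"dirty bit" flow the diff below adds.  This is a toy model, not kernel code:
the struct, the printf and the stub return values are illustrative only,
while the flag handling mirrors the patch.

/* Toy user-space model of the "pushed" flag (illustrative only) */
#include <stdio.h>

struct toy_rt_rq {
	int overloaded;		/* more than one runnable RT task present   */
	int pushed;		/* all pushable tasks already pushed away   */
};

static int push_rt_task(struct toy_rt_rq *rt)
{
	if (!rt->overloaded || rt->pushed)
		return 0;	/* nothing has changed since the last push  */
	printf("push attempt\n");
	return 0;		/* pretend no task could actually be moved  */
}

static void push_rt_tasks(struct toy_rt_rq *rt)
{
	while (push_rt_task(rt))
		;
	rt->pushed = 1;		/* run-queue "clean" until the next enqueue */
}

static void enqueue_task_rt(struct toy_rt_rq *rt)
{
	rt->pushed = 0;		/* new task arrived: mark run-queue "dirty" */
}

int main(void)
{
	struct toy_rt_rq rt = { .overloaded = 1, .pushed = 0 };

	push_rt_tasks(&rt);	/* attempts a push, then sets the flag      */
	push_rt_tasks(&rt);	/* skipped: nothing was enqueued in between */
	enqueue_task_rt(&rt);	/* flag cleared                             */
	push_rt_tasks(&rt);	/* attempts a push again                    */
	return 0;
}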

This attempts to address a regression reported by Suresh Siddha et al. in
the 2.6.25 series.  The patch applies to 2.6.25.

Signed-off-by: Gregory Haskins <ghaskins@...ell.com>
CC: suresh.b.siddha@...el.com
CC: mingo@...e.hu
CC: rostedt@...dmis.org
CC: chinang.ma@...el.com
CC: arjan@...ux.intel.com
CC: willy@...ux.intel.com
---

 kernel/sched.c    |    2 ++
 kernel/sched_rt.c |    6 +++++-
 2 files changed, 7 insertions(+), 1 deletions(-)

diff --git a/kernel/sched.c b/kernel/sched.c
index 8dcdec6..806881b 100644
--- a/kernel/sched.c
+++ b/kernel/sched.c
@@ -331,6 +331,7 @@ struct rt_rq {
 #ifdef CONFIG_SMP
 	unsigned long rt_nr_migratory;
 	int overloaded;
+	int pushed;
 #endif
 	int rt_throttled;
 	u64 rt_time;
@@ -7142,6 +7143,7 @@ static void init_rt_rq(struct rt_rq *rt_rq, struct rq *rq)
 #ifdef CONFIG_SMP
 	rt_rq->rt_nr_migratory = 0;
 	rt_rq->overloaded = 0;
+	rt_rq->pushed = 0;
 #endif
 
 	rt_rq->rt_time = 0;
diff --git a/kernel/sched_rt.c b/kernel/sched_rt.c
index 0a6d2e5..3828aa7 100644
--- a/kernel/sched_rt.c
+++ b/kernel/sched_rt.c
@@ -393,6 +393,8 @@ static void enqueue_task_rt(struct rq *rq, struct task_struct *p, int wakeup)
 	 */
 	for_each_sched_rt_entity(rt_se)
 		enqueue_rt_entity(rt_se);
+
+	rq->rt.pushed = 0;
 }
 
 static void dequeue_task_rt(struct rq *rq, struct task_struct *p, int sleep)
@@ -789,7 +791,7 @@ static int push_rt_task(struct rq *rq)
 	int ret = 0;
 	int paranoid = RT_MAX_TRIES;
 
-	if (!rq->rt.overloaded)
+	if (!rq->rt.overloaded || rq->rt.pushed)
 		return 0;
 
 	next_task = pick_next_highest_task_rt(rq, -1);
@@ -863,6 +865,8 @@ static void push_rt_tasks(struct rq *rq)
 	/* push_rt_task will return true if it moved an RT */
 	while (push_rt_task(rq))
 		;
+
+	rq->rt.pushed = 1;
 }
 
 static int pull_rt_task(struct rq *this_rq)
