Message-Id: <1219056429.10800.302.camel@twins>
Date:	Mon, 18 Aug 2008 12:47:09 +0200
From:	Peter Zijlstra <peterz@...radead.org>
To:	Dario Faggioli <raistlin@...ux.it>
Cc:	Stefani Seibold <stefani@...bold.net>,
	linux-kernel@...r.kernel.org, mingo@...hat.com
Subject: [PATCH] sched: rt-bandwidth disable fixes

On Mon, 2008-08-18 at 00:15 +0200, Dario Faggioli wrote:
> On Sat, 2008-08-16 at 23:29 +0200, Stefani Seibold wrote:
> > After disabling kernel support for "Group CPU scheduler" and applying
> > 'echo -1 > /proc/sys/kernel/sched_rt_runtime_us', the behaviour is as
> > expected.

> > So the problem lies first in the new sched_rt_runtime_us default
> > value and second in the "Group CPU scheduler" support.
> Well, if you have group scheduling configured, I think you should do both:
> # echo -1 > /proc/sys/kernel/sched_rt_runtime_us
> # echo -1 > /dev/cgroup/cpu.rt_runtime_us
> 
> if /dev/cgroup is the mount point of the cgroup file system.
> 
> In situations like the one you are describing, this worked for me...
> Hope that it also helps you! :-)
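
For completeness, the same two writes as a tiny C helper (an
illustrative sketch only: the write_knob() helper and the file name
rt-unthrottle.c are made up here, /dev/cgroup is the mount point
assumed in the quoted commands, and it needs root to run, e.g.
gcc -o rt-unthrottle rt-unthrottle.c):

#include <stdio.h>

/* Write a value to a sysctl/cgroup control file; returns 0 on success. */
static int write_knob(const char *path, const char *val)
{
	FILE *f = fopen(path, "w");

	if (!f) {
		perror(path);
		return -1;
	}
	fprintf(f, "%s\n", val);
	return fclose(f);
}

int main(void)
{
	int ret = 0;

	/* global sysctl: -1 disables rt bandwidth accounting globally */
	ret |= write_knob("/proc/sys/kernel/sched_rt_runtime_us", "-1");
	/* root group knob; assumes /dev/cgroup is the cgroup mount point */
	ret |= write_knob("/dev/cgroup/cpu.rt_runtime_us", "-1");

	return ret ? 1 : 0;
}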

Ah, right - I knew I was forgetting something...

(compile tested only)

---
Subject: sched: rt-bandwidth disable fixes
From: Peter Zijlstra <a.p.zijlstra@...llo.nl>
Date: Mon Aug 18 12:39:07 CEST 2008

Currently there is no way to revert to the classical behaviour if
RT_GROUP_SCHED is set. Fix this by introducing rt_bandwidth_enabled(),
which will turn off all the bandwidth accounting if sched_rt_runtime_us
is set to a negative value.

Also fix a bug where we would keep increasing the used time while the
limit was set to RUNTIME_INF, causing a long throttle period once it
was lowered (see the illustrative sketch after the patch).

Signed-off-by: Peter Zijlstra <a.p.zijlstra@...llo.nl>
---
 kernel/sched.c    |    9 ++++++++-
 kernel/sched_rt.c |   16 +++++++++-------
 2 files changed, 17 insertions(+), 8 deletions(-)

Index: linux-2.6/kernel/sched.c
===================================================================
--- linux-2.6.orig/kernel/sched.c
+++ linux-2.6/kernel/sched.c
@@ -204,11 +204,13 @@ void init_rt_bandwidth(struct rt_bandwid
 	rt_b->rt_period_timer.cb_mode = HRTIMER_CB_IRQSAFE_NO_SOFTIRQ;
 }
 
+static inline int rt_bandwidth_enabled(void);
+
 static void start_rt_bandwidth(struct rt_bandwidth *rt_b)
 {
 	ktime_t now;
 
-	if (rt_b->rt_runtime == RUNTIME_INF)
+	if (rt_bandwidth_enabled() && rt_b->rt_runtime == RUNTIME_INF)
 		return;
 
 	if (hrtimer_active(&rt_b->rt_period_timer))
@@ -839,6 +841,11 @@ static inline u64 global_rt_runtime(void
 	return (u64)sysctl_sched_rt_runtime * NSEC_PER_USEC;
 }
 
 
+static inline int rt_bandwidth_enabled(void)
+{
+	return sysctl_sched_rt_runtime >= 0;
+}
+
 #ifndef prepare_arch_switch
 # define prepare_arch_switch(next)	do { } while (0)
 #endif
Index: linux-2.6/kernel/sched_rt.c
===================================================================
--- linux-2.6.orig/kernel/sched_rt.c
+++ linux-2.6/kernel/sched_rt.c
@@ -386,7 +386,7 @@ static int do_sched_rt_period_timer(stru
 	int i, idle = 1;
 	cpumask_t span;
 
-	if (rt_b->rt_runtime == RUNTIME_INF)
+	if (!rt_bandwidth_enabled() || rt_b->rt_runtime == RUNTIME_INF)
 		return 1;
 
 	span = sched_rt_period_mask();
@@ -438,9 +438,6 @@ static int sched_rt_runtime_exceeded(str
 {
 	u64 runtime = sched_rt_runtime(rt_rq);
 
-	if (runtime == RUNTIME_INF)
-		return 0;
-
 	if (rt_rq->rt_throttled)
 		return rt_rq_throttled(rt_rq);
 
@@ -487,13 +484,18 @@ static void update_curr_rt(struct rq *rq
 	curr->se.exec_start = rq->clock;
 	cpuacct_charge(curr, delta_exec);
 
+	if (!rt_bandwidth_enabled())
+		return;
+
 	for_each_sched_rt_entity(rt_se) {
 		rt_rq = rt_rq_of_se(rt_se);
 
 		spin_lock(&rt_rq->rt_runtime_lock);
-		rt_rq->rt_time += delta_exec;
-		if (sched_rt_runtime_exceeded(rt_rq))
-			resched_task(curr);
+		if (sched_rt_runtime(rt_rq) != RUNTIME_INF) {
+			rt_rq->rt_time += delta_exec;
+			if (sched_rt_runtime_exceeded(rt_rq))
+				resched_task(curr);
+		}
 		spin_unlock(&rt_rq->rt_runtime_lock);
 	}
 }
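
For a rough feel for the second bug described in the changelog (an
illustrative userspace sketch, not kernel code): while the limit sits
at RUNTIME_INF, the old update_curr_rt() keeps adding delta_exec to
rt_rq->rt_time, and once a finite limit is set that backlog has to be
paid back at roughly rt_runtime per period before the group is
unthrottled again. A minimal approximation of that replenishment
arithmetic, assuming the default 950000us runtime / 1000000us period
and 60 seconds of accumulated rt_time:

#include <stdio.h>

int main(void)
{
	/* defaults: sched_rt_period_us = 1000000, sched_rt_runtime_us = 950000 */
	unsigned long long period_us  = 1000000;
	unsigned long long runtime_us = 950000;
	/* assume 60s of rt_time piled up while the limit was RUNTIME_INF */
	unsigned long long rt_time_us = 60ULL * 1000000;
	unsigned int periods = 0;

	/*
	 * Rough model of do_sched_rt_period_timer(): each period replenishes
	 * one rt_runtime worth of budget, and the group stays throttled as
	 * long as the accumulated rt_time still exceeds a period's runtime.
	 */
	while (rt_time_us >= runtime_us) {
		rt_time_us -= runtime_us;
		periods++;
	}

	printf("throttled for ~%u periods (~%llu seconds)\n",
	       periods, periods * period_us / 1000000);
	return 0;
}

This prints ~63 periods, i.e. roughly a minute of starvation for the
group. With the update_curr_rt() hunk above, rt_time is simply not
accumulated while sched_rt_runtime(rt_rq) == RUNTIME_INF, so no such
backlog can build up in the first place.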

