Message-Id: <20180907214047.26914-47-jschoenh@amazon.de>
Date:   Fri,  7 Sep 2018 23:40:33 +0200
From:   Jan H. Schönherr <jschoenh@...zon.de>
To:     Ingo Molnar <mingo@...hat.com>,
        Peter Zijlstra <peterz@...radead.org>
Cc:     Jan H. Schönherr <jschoenh@...zon.de>,
        linux-kernel@...r.kernel.org
Subject: [RFC 46/60] cosched: Warn on throttling attempts of non-CPU runqueues

Initially, coscheduling won't support throttling of CFS runqueues that
are not at CPU level. Print a warning to remind us of this fact, and
note down everything currently known to break if we wanted to throttle
higher-level CFS runqueues (which would totally make sense from a
coscheduling perspective).

Signed-off-by: Jan H. Schönherr <jschoenh@...zon.de>
---
 kernel/sched/fair.c | 21 +++++++++++++++++++--
 1 file changed, 19 insertions(+), 2 deletions(-)
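
For reference while reading the hunks below: SCHED_WARN_ON() is the
scheduler's debug-only warning macro from kernel/sched/sched.h, while
hrq_of() and is_cpu_rq() are helpers introduced earlier in this series.
Here is a minimal sketch of their shape -- the hrq_of()/is_cpu_rq()
bodies and field names are illustrative assumptions, not the series'
exact definitions:

  /* From kernel/sched/sched.h: compiles away without CONFIG_SCHED_DEBUG. */
  #ifdef CONFIG_SCHED_DEBUG
  # define SCHED_WARN_ON(x)	WARN_ONCE(x, #x)
  #else
  # define SCHED_WARN_ON(x)	({ (void)(x), 0; })
  #endif

  /*
   * Illustrative sketch: with coscheduling, runqueues form a hierarchy.
   * Per-CPU runqueues sit at the bottom (level 0); higher-level
   * runqueues span groups of CPUs.
   */
  static inline bool is_cpu_rq(struct rq *rq)
  {
  	return !rq->sdrq_data.level;	/* hypothetical field */
  }

  /* The (possibly non-CPU) runqueue this cfs_rq is attached to. */
  static inline struct rq *hrq_of(struct cfs_rq *cfs_rq)
  {
  	return cfs_rq->rq;		/* hypothetical field */
  }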

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 0bba924b40ba..2aa3a60dfca5 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -4493,12 +4493,25 @@ static int tg_throttle_down(struct task_group *tg, void *data)
 
 static void throttle_cfs_rq(struct cfs_rq *cfs_rq)
 {
-	struct rq *rq = rq_of(cfs_rq);
+	struct rq *rq = hrq_of(cfs_rq);
 	struct cfs_bandwidth *cfs_b = tg_cfs_bandwidth(cfs_rq->tg);
 	struct sched_entity *se;
 	long task_delta, dequeue = 1;
 	bool empty;
 
+	/*
+	 * FIXME: We can only handle CPU runqueues at the moment.
+	 *
+	 * rq->nr_running adjustments are incorrect for higher levels, as
+	 * is the tg_throttle_down/up() functionality. Also,
+	 * update_runtime_enabled() and unthrottle_offline_cfs_rqs()
+	 * (used during CPU hotplug) have not been adjusted.
+	 *
+	 * Ideally, we would apply throttling only to is_root runqueues,
+	 * instead of the bottom level.
+	 */
+	SCHED_WARN_ON(!is_cpu_rq(rq));
+
 	se = cfs_rq->my_se;
 
 	/* freeze hierarchy runnable averages while throttled */
@@ -4547,12 +4560,14 @@ static void throttle_cfs_rq(struct cfs_rq *cfs_rq)
 
 void unthrottle_cfs_rq(struct cfs_rq *cfs_rq)
 {
-	struct rq *rq = rq_of(cfs_rq);
+	struct rq *rq = hrq_of(cfs_rq);
 	struct cfs_bandwidth *cfs_b = tg_cfs_bandwidth(cfs_rq->tg);
 	struct sched_entity *se;
 	int enqueue = 1;
 	long task_delta;
 
+	SCHED_WARN_ON(!is_cpu_rq(rq));
+
 	se = cfs_rq->my_se;
 
 	cfs_rq->throttled = 0;
@@ -5171,6 +5186,7 @@ enqueue_task_fair(struct rq *rq, struct task_struct *p, int flags)
 
 	throttled = enqueue_entity_fair(rq, &p->se, flags, 1);
 
+	/* FIXME: assumes that only bottom-level runqueues get throttled */
 	if (!throttled)
 		add_nr_running(rq, 1);
 
@@ -5237,6 +5253,7 @@ static void dequeue_task_fair(struct rq *rq, struct task_struct *p, int flags)
 {
 	bool throttled = dequeue_entity_fair(rq, &p->se, flags, 1);
 
+	/* FIXME: assumes that only bottom-level runqueues get throttled */
 	if (!throttled)
 		sub_nr_running(rq, 1);
 
-- 
2.9.3.1.gcba166c.dirty
