Message-Id: <20180913191938.30526-1-jschoenh@amazon.de>
Date: Thu, 13 Sep 2018 21:19:38 +0200
From: Jan H. Schönherr <jschoenh@...zon.de>
To: Ingo Molnar <mingo@...hat.com>,
Peter Zijlstra <peterz@...radead.org>
Cc: Jan H. Schönherr <jschoenh@...zon.de>,
Nishanth Aravamudan <naravamudan@...italocean.com>,
linux-kernel@...r.kernel.org
Subject: [RFC 61/60] cosched: Accumulated fixes and improvements
Here is an "extra" patch containing the bug fixes and warning removals
that I have accumulated up to this point.
It goes on top of the other 60 patches. (When it is time for v2,
these fixes will be folded into the appropriate patches within
the series.)
The changes are:
1. Avoid a hang with nested scheduled task groups.
2. Get rid of a lockdep warning.
3. Get rid of warnings about missed clock updates.
4. Get rid of "untested code path" warnings/reminders (after testing
said code paths).
This should make experimenting with this patch series a little less bumpy.
Partly-reported-by: Nishanth Aravamudan <naravamudan@...italocean.com>
Signed-off-by: Jan H. Schönherr <jschoenh@...zon.de>
---
kernel/sched/cosched.c | 2 ++
kernel/sched/fair.c | 35 ++++++++++-------------------------
2 files changed, 12 insertions(+), 25 deletions(-)
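A note on the hang addressed by the second fair.c hunk: the old loop broke out
only when *neither* entity belonged to this CPU, so with nested scheduled
groups it could keep iterating even though no branch made progress. The fix
tracks whether an iteration actually did work and bails out otherwise. A toy
user-space sketch of that loop shape (struct toy_se, the owned flag, and
walk_to_common are invented for illustration; owned stands in for
"__leader_of() == cpu"):

```c
#include <stdbool.h>
#include <stddef.h>

/* Toy scheduling entity: a depth and a parent pointer. */
struct toy_se {
	int depth;
	struct toy_se *parent;
	bool owned;	/* stands in for "__leader_of(se) == cpu" */
};

/* Walk both chains upward until they meet, stepping only through
 * entities this CPU owns. Break when an iteration made no progress,
 * instead of testing ownership directly, so a partially-owned
 * hierarchy cannot spin forever. Returns the number of steps taken. */
static int walk_to_common(struct toy_se *se, struct toy_se *pse)
{
	int steps = 0;

	while (se != pse) {
		int se_depth = se->depth;
		int pse_depth = pse->depth;
		bool work = false;

		if (se_depth <= pse_depth && pse->owned) {
			pse = pse->parent;	/* put_prev_entity() analogue */
			steps++;
			work = true;
		}
		if (se_depth >= pse_depth && se->owned) {
			se = se->parent;	/* set_next_entity() analogue */
			steps++;
			work = true;
		}
		if (!work)	/* the fix: break on lack of progress */
			break;
	}
	return steps;
}
```

With every entity owned, the walk converges at the common ancestor; with an
unowned entity in one chain, it terminates early rather than looping.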
diff --git a/kernel/sched/cosched.c b/kernel/sched/cosched.c
index a1f0d3a7b02a..88b7bc6d4bfa 100644
--- a/kernel/sched/cosched.c
+++ b/kernel/sched/cosched.c
@@ -588,6 +588,7 @@ static void sdrq_update_root(struct sdrq *sdrq)
/* Get proper locks */
rq_lock_irqsave(rq, &rf);
+ update_rq_clock(rq);
sdrq->is_root = is_root;
if (is_root)
@@ -644,6 +645,7 @@ static void sdrq_update_root(struct sdrq *sdrq)
}
if (sdrq->cfs_rq->curr) {
rq_lock(prq, &prf);
+ update_rq_clock(prq);
if (sdrq->data->leader == sdrq->sd_parent->data->leader)
put_prev_entity_fair(prq, sdrq->sd_parent->sd_se);
rq_unlock(prq, &prf);
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 8da2033596ff..4c823fa367ad 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -7150,23 +7150,6 @@ pick_next_task_fair(struct rq *rq, struct task_struct *prev, struct rq_flags *rf
se = pick_next_entity(cfs_rq, curr);
cfs_rq = pick_next_cfs(se);
-
-#ifdef CONFIG_COSCHEDULING
- if (cfs_rq && is_sd_se(se) && cfs_rq->sdrq.is_root) {
- WARN_ON_ONCE(1); /* Untested code path */
- /*
- * Race with is_root update.
- *
- * We just moved downwards in the hierarchy via an
- * SD-SE, the CFS-RQ should have is_root set to zero.
- * However, a reconfiguration may be in progress. We
- * basically ignore that reconfiguration.
- *
- * Contrary to the case below, there is nothing to fix
- * as all the set_next_entity() calls are done later.
- */
- }
-#endif
} while (cfs_rq);
if (is_sd_se(se))
@@ -7189,23 +7172,26 @@ pick_next_task_fair(struct rq *rq, struct task_struct *prev, struct rq_flags *rf
while (!(cfs_rq = is_same_group(se, pse))) {
int se_depth = se->depth;
int pse_depth = pse->depth;
+ bool work = false;
- if (se_depth <= pse_depth && leader_of(pse) == cpu) {
+ if (se_depth <= pse_depth && __leader_of(pse) == cpu) {
put_prev_entity(cfs_rq_of(pse), pse);
pse = parent_entity(pse);
+ work = true;
}
- if (se_depth >= pse_depth && leader_of(se) == cpu) {
+ if (se_depth >= pse_depth && __leader_of(se) == cpu) {
set_next_entity(cfs_rq_of(se), se);
se = parent_entity(se);
+ work = true;
}
- if (leader_of(pse) != cpu && leader_of(se) != cpu)
+ if (!work)
break;
}
- if (leader_of(pse) == cpu)
- put_prev_entity(cfs_rq, pse);
- if (leader_of(se) == cpu)
- set_next_entity(cfs_rq, se);
+ if (__leader_of(pse) == cpu)
+ put_prev_entity(cfs_rq_of(pse), pse);
+ if (__leader_of(se) == cpu)
+ set_next_entity(cfs_rq_of(se), se);
}
goto done;
@@ -7243,7 +7229,6 @@ pick_next_task_fair(struct rq *rq, struct task_struct *prev, struct rq_flags *rf
struct sched_entity *se2 = cfs_rq->sdrq.tg_se;
bool top = false;
- WARN_ON_ONCE(1); /* Untested code path */
while (se && se != se2) {
if (!top) {
put_prev_entity(cfs_rq_of(se), se);
--
2.9.3.1.gcba166c.dirty