Message-ID: <89b4f0cd-d324-14bd-3991-576de9849e34@amazon.de>
Date: Thu, 13 Sep 2018 01:18:14 +0200
From: "Jan H. Schönherr" <jschoenh@...zon.de>
To: Nishanth Aravamudan <naravamudan@...italocean.com>
Cc: Ingo Molnar <mingo@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
linux-kernel@...r.kernel.org
Subject: [RFC 00/60] Coscheduling for Linux
On 09/12/2018 09:34 PM, Jan H. Schönherr wrote:
> That said, I see a hang, too. It seems to happen when there is a
> cpu.scheduled!=0 group that is not a direct child of the root task group.
> You seem to have "/sys/fs/cgroup/cpu/machine" as an intermediate group.
> (The case ==0 within !=0 within the root task group works for me.)
>
> I'm going to dive into the code.
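
For reference, the kind of hierarchy described above can be recreated along
these lines; the paths, the group names, and the value written to
cpu.scheduled are only an example:

/*
 * Illustration only: create a cpu.scheduled != 0 group below an
 * intermediate "machine" group instead of directly below the root
 * task group.
 */
#include <stdio.h>
#include <sys/stat.h>
#include <sys/types.h>

static void write_str(const char *path, const char *val)
{
	FILE *f = fopen(path, "w");

	if (f) {
		fputs(val, f);
		fclose(f);
	}
}

int main(void)
{
	/* intermediate group, cpu.scheduled stays 0 */
	mkdir("/sys/fs/cgroup/cpu/machine", 0755);
	/* nested group, turned into a coscheduled (cpu.scheduled != 0) group */
	mkdir("/sys/fs/cgroup/cpu/machine/vm1", 0755);
	write_str("/sys/fs/cgroup/cpu/machine/vm1/cpu.scheduled", "1");
	return 0;
}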
With the patch below (which technically changes patch 55/60), the hang I experienced is gone.
Please let me know if it works for you as well.
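
The crux of the change: an iteration of the merging loop can end up making no
progress at all (one branch skipped by the depth comparison, the other by its
leader check) while the old break condition still sees a matching leader and
therefore does not fire; the loop now simply stops as soon as a round does no
work, and the remaining checks use __leader_of() directly. The following
stand-alone sketch is only a toy model of that termination logic (struct node,
depth, and led_by_this_cpu stand in for sched_entity, se->depth, and the
__leader_of() == cpu test; it is not kernel code):

#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

struct node {
	struct node *parent;
	int depth;
	bool led_by_this_cpu;	/* stand-in for __leader_of(se) == cpu */
};

/* Walk se and pse towards a common level, mirroring the patched loop. */
static void merge_walk(struct node *se, struct node *pse)
{
	while (se->parent != pse->parent) {	/* stand-in for !is_same_group() */
		bool work = false;

		if (se->depth <= pse->depth && pse->led_by_this_cpu) {
			pse = pse->parent;	/* ~ put_prev_entity() + parent_entity() */
			work = true;
		}
		if (se->depth >= pse->depth && se->led_by_this_cpu) {
			se = se->parent;	/* ~ set_next_entity() + parent_entity() */
			work = true;
		}
		if (!work)	/* no progress this round: stop instead of spinning */
			break;
	}
	printf("stopped at depths %d/%d\n", se->depth, pse->depth);
}

int main(void)
{
	/* tiny hypothetical hierarchy: root <- a <- b, and root <- c */
	struct node root = { NULL, 0, true };
	struct node a = { &root, 1, true };
	struct node b = { &a, 2, false };	/* b is not led by this CPU */
	struct node c = { &root, 1, true };

	/*
	 * Neither branch can advance here, but c's leader check still holds,
	 * so the old "both leaders differ" break condition would never fire.
	 */
	merge_walk(&b, &c);
	return 0;
}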
Regards
Jan
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 8da2033596ff..2d8b3f9a275f 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -7189,23 +7189,26 @@ pick_next_task_fair(struct rq *rq, struct task_struct *prev, struct rq_flags *rf
 		while (!(cfs_rq = is_same_group(se, pse))) {
 			int se_depth = se->depth;
 			int pse_depth = pse->depth;
+			bool work = false;
 
-			if (se_depth <= pse_depth && leader_of(pse) == cpu) {
+			if (se_depth <= pse_depth && __leader_of(pse) == cpu) {
 				put_prev_entity(cfs_rq_of(pse), pse);
 				pse = parent_entity(pse);
+				work = true;
 			}
-			if (se_depth >= pse_depth && leader_of(se) == cpu) {
+			if (se_depth >= pse_depth && __leader_of(se) == cpu) {
 				set_next_entity(cfs_rq_of(se), se);
 				se = parent_entity(se);
+				work = true;
 			}
-			if (leader_of(pse) != cpu && leader_of(se) != cpu)
+			if (!work)
 				break;
 		}
 
-		if (leader_of(pse) == cpu)
-			put_prev_entity(cfs_rq, pse);
-		if (leader_of(se) == cpu)
-			set_next_entity(cfs_rq, se);
+		if (__leader_of(pse) == cpu)
+			put_prev_entity(cfs_rq_of(pse), pse);
+		if (__leader_of(se) == cpu)
+			set_next_entity(cfs_rq_of(se), se);
 	}
 
 	goto done;