Message-ID: <56B89AE0.9090603@redhat.com>
Date: Mon, 8 Feb 2016 14:40:48 +0100
From: Jan Stancek <jstancek@...hat.com>
To: Peter Zijlstra <peterz@...radead.org>
Cc: alex shi <alex.shi@...el.com>, guz fnst <guz.fnst@...fujitsu.com>,
mingo@...hat.com, jolsa@...hat.com, riel@...hat.com,
linux-kernel@...r.kernel.org
Subject: Re: [BUG] scheduler doesn't balance thread to idle cpu for 3 seconds
On 01/29/2016 11:33 AM, Jan Stancek wrote:
>>
>> Also note that I don't think failing this test is a bug per se.
>> Undesirable maybe, but within spec, since SIGALRM is process wide, so it
>> being delivered to the SCHED_OTHER task is accepted, and SCHED_OTHER has
>> no timeliness guarantees.
>>
>> That said; if I could reliably reproduce I'd have a go at fixing this, I
>> suspect there's a 'fun' problem at the bottom of this.
>
> Thanks for trying, I'll see if I can find some more reliable way.
I think I have found a more reliable way, however it requires an older
stable kernel: 3.12.53 up to 4.1.17.
Consider the following scenario:
- all tasks on the system have an RT sched class
- the main thread of the reproducer becomes the only SCHED_OTHER task on the system
- when alarm(2) expires, the main thread is woken up on a CPU that is occupied
  by a busy-looping RT thread (low_priority_thread)
- because the main thread was sleeping for 2 seconds, its load has decayed to 0
- the only chance for the main thread to run is to get balanced to an idle CPU
- task_tick_fair() doesn't run, because an RT task is running on this CPU
- the main thread is on the cfs run queue, but its load stays 0
- the load balancer therefore never sees this CPU (group) as busy
Attached are a reproducer and a script which tries to trigger the scenario above.
I can reproduce it with 4.1.17 on bare metal (4 CPU x86_64) with roughly a 1:50 chance.
In this setup the failure state persists for a long time, perhaps indefinitely:
I tried extending RUNTIME to 10 minutes and the main thread still wouldn't run.
One more clue: I could work around this issue by forcing an update_entity_load_avg()
on sched_entities that have not been updated for some time, as part of the
periodic rebalance_domains() call.
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index c7c1d28..1b5fe80 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5264,6 +5264,7 @@ static void update_blocked_averages(int cpu)
 	struct rq *rq = cpu_rq(cpu);
 	struct cfs_rq *cfs_rq;
 	unsigned long flags;
+	struct rb_node *rb;
 
 	raw_spin_lock_irqsave(&rq->lock, flags);
 	update_rq_clock(rq);
@@ -5281,6 +5282,19 @@ static void update_blocked_averages(int cpu)
 	}
 	raw_spin_unlock_irqrestore(&rq->lock, flags);
+
+	cfs_rq = &(cpu_rq(cpu)->cfs);
+	for (rb = rb_first_postorder(&cfs_rq->tasks_timeline); rb; rb = rb_next_postorder(rb)) {
+		struct sched_entity *se = rb_entry(rb, struct sched_entity, run_node);
+
+		/* Task on rq has not been updated for 500ms :-( */
+		if ((cfs_rq_clock_task(cfs_rq) - se->avg.last_runnable_update) > 500L * (1 << 20))
+			update_entity_load_avg(se, 1);
+	}
 }
 
 /*
Regards,
Jan
View attachment "pthread_cond_wait_1_v3.c" of type "text/plain" (5201 bytes)
Download attachment "reproduce_v3.sh" of type "application/x-shellscript" (260 bytes)