Message-Id: <20220822073942.218045-2-ankitja@vmware.com>
Date: Mon, 22 Aug 2022 13:09:39 +0530
From: Ankit Jain <ankitja@...are.com>
To: juri.lelli@...hat.com, bristot@...hat.com, l.stach@...gutronix.de,
suhui_kernel@....com, msimmons@...hat.com, peterz@...radead.org,
glenn@...ora.tech, stable@...r.kernel.org,
linux-kernel@...r.kernel.org, gregkh@...uxfoundation.org
Cc: srivatsab@...are.com, srivatsa@...il.mit.edu, akaher@...are.com,
amakhalov@...are.com, vsirnapalli@...are.com,
sturlapati@...are.com, bordoloih@...are.com, keerthanak@...are.com,
Ankit Jain <ankitja@...are.com>
Subject: [PATCH v5.4.y 1/4] sched/deadline: Unthrottle PI boosted threads while enqueuing

From: Daniel Bristot de Oliveira <bristot@...hat.com>

commit feff2e65efd8d84cf831668e182b2ce73c604bbb upstream.

stress-ng has a test (stress-ng --cyclic) that creates a set of threads
under SCHED_DEADLINE with the following parameters:

    dl_runtime   =  10000 (10 us)
    dl_deadline  = 100000 (100 us)
    dl_period    = 100000 (100 us)

These parameters are very aggressive. When using a system without HRTICK
set, these threads can easily execute longer than the dl_runtime because
the throttling happens with 1/HZ resolution.
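
For context, a thread requests such parameters through the
sched_setattr() syscall, for which glibc provides no wrapper. A minimal,
self-contained sketch (not taken from stress-ng) that puts the calling
thread under SCHED_DEADLINE with the values quoted above:

  #define _GNU_SOURCE
  #include <stdio.h>
  #include <unistd.h>
  #include <sys/syscall.h>
  #include <linux/types.h>

  /* Not exposed by glibc; layout follows include/uapi/linux/sched/types.h */
  struct sched_attr {
          __u32 size;
          __u32 sched_policy;
          __u64 sched_flags;
          __s32 sched_nice;
          __u32 sched_priority;
          __u64 sched_runtime;
          __u64 sched_deadline;
          __u64 sched_period;
  };

  #define SCHED_DEADLINE 6

  int main(void)
  {
          struct sched_attr attr = {
                  .size           = sizeof(attr),
                  .sched_policy   = SCHED_DEADLINE,
                  .sched_runtime  =  10000,  /*  10 us */
                  .sched_deadline = 100000,  /* 100 us */
                  .sched_period   = 100000,  /* 100 us */
          };

          /* pid 0 means the calling thread; requires CAP_SYS_NICE */
          if (syscall(SYS_sched_setattr, 0, &attr, 0))
                  perror("sched_setattr");

          /* the cyclic workload would run here */
          return 0;
  }
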
During the main part of the test, the system works just fine because
the workload does not try to run over the 10 us. The problem happens at
the end of the test, on the exit() path. During exit(), the threads need
to do some cleanups that require real-time mutex locks, mainly those
related to memory management, resulting in this scenario:

Note: locks are rt_mutexes...

 ------------------------------------------------------------------------
    TASK A:                TASK B:                           TASK C:

    activation
                           activation
                                                             activation

    lock(a): OK!           lock(b): OK!
                           <overrun runtime>
                           lock(a)
                           -> block (task A owns it)
                           -> self notice/set throttled
 +--<                      -> arm replenished timer
 |                         switch-out
 |                                                           lock(b)
 |                                                           -> <C prio > B prio>
 |                                                           -> boost TASK B
 |  unlock(a)                                                switch-out
 |  -> handle lock a to B
 |  -> wakeup(B)
 |  -> B is throttled:
 |     -> do not enqueue
 |   switch-out
 |
 |
 +---------------------> replenishment timer
                        -> TASK B is boosted:
                           -> do not enqueue
 ------------------------------------------------------------------------

BOOM: TASK B is runnable but !enqueued, holding TASK C: the system
crashes with hung task C.

This problem is avoided by removing the throttle state from the boosted
thread while boosting it (by TASK A in the example above), allowing it to
be queued and run boosted.

The next replenishment will take care of the runtime overrun, pushing
the deadline further away. See the "while (dl_se->runtime <= 0)" on
replenish_dl_entity() for more information.
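
For reference, the replenishment loop in replenish_dl_entity() reads as
follows in v5.4's kernel/sched/deadline.c: each iteration pushes the
deadline one (possibly PI-inherited) period away and refills one runtime
budget, so even an arbitrarily large overrun is eventually absorbed.

  /*
   * We keep moving the deadline away until we get some
   * available runtime for the entity. This ensures correct
   * handling of situations where the runtime overrun is
   * arbitrary large.
   */
  while (dl_se->runtime <= 0) {
          dl_se->deadline += pi_se->dl_period;
          dl_se->runtime += pi_se->dl_runtime;
  }
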
Reported-by: Mark Simmons <msimmons@...hat.com>
Signed-off-by: Daniel Bristot de Oliveira <bristot@...hat.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>
Reviewed-by: Juri Lelli <juri.lelli@...hat.com>
Tested-by: Mark Simmons <msimmons@...hat.com>
Link: https://lkml.kernel.org/r/5076e003450835ec74e6fa5917d02c4fa41687e6.1600170294.git.bristot@redhat.com
[Ankit: Regenerated the patch for v5.4.y]
Signed-off-by: Ankit Jain <ankitja@...are.com>
---
 kernel/sched/deadline.c | 21 +++++++++++++++++++++
 1 file changed, 21 insertions(+)

diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
index 2bda9fdba31c..fdeb2afffc48 100644
--- a/kernel/sched/deadline.c
+++ b/kernel/sched/deadline.c
@@ -1484,6 +1484,27 @@ static void enqueue_task_dl(struct rq *rq, struct task_struct *p, int flags)
 	 */
 	if (pi_task && dl_prio(pi_task->normal_prio) && p->dl.dl_boosted) {
 		pi_se = &pi_task->dl;
+		/*
+		 * Because of delays in the detection of the overrun of a
+		 * thread's runtime, it might be the case that a thread
+		 * goes to sleep in a rt mutex with negative runtime. As
+		 * a consequence, the thread will be throttled.
+		 *
+		 * While waiting for the mutex, this thread can also be
+		 * boosted via PI, resulting in a thread that is throttled
+		 * and boosted at the same time.
+		 *
+		 * In this case, the boost overrides the throttle.
+		 */
+		if (p->dl.dl_throttled) {
+			/*
+			 * The replenish timer needs to be canceled. No
+			 * problem if it fires concurrently: boosted threads
+			 * are ignored in dl_task_timer().
+			 */
+			hrtimer_try_to_cancel(&p->dl.dl_timer);
+			p->dl.dl_throttled = 0;
+		}
 	} else if (!dl_prio(p->normal_prio)) {
 		/*
 		 * Special case in which we have a !SCHED_DEADLINE task
--
2.34.1