Message-ID: <a767637b-df85-912f-ba69-c90ee00a3fb6@oracle.com>
Date: Mon, 15 May 2017 14:14:13 -0500
From: Dave Kleikamp <dave.kleikamp@...cle.com>
To: LKML <linux-kernel@...r.kernel.org>,
Peter Zijlstra <peterz@...radead.org>,
Ingo Molnar <mingo@...hat.com>
Subject: [PATCH RESEND 1/1] sched/rt: minimize rq->lock contention in
do_sched_rt_period_timer()
With CONFIG_RT_GROUP_SCHED defined, do_sched_rt_period_timer() sequentially
takes each cpu's rq->lock. On a large, busy system, the cumulative time it
takes to acquire each lock can be excessive, even triggering a watchdog
timeout.

If rt_rq->rt_time and rt_rq->rt_nr_running are both zero, this function does
nothing while holding the lock, so don't bother taking it at all.
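
To illustrate the idea outside of the scheduler, here is a minimal
standalone sketch of the same pattern: peek at the per-rt_rq counters under
the cheaper rt_runtime_lock and skip the contended rq->lock entirely when
there is nothing to do. The names below (fake_rt_rq, period_timer_tick) are
made up for the example, and pthread mutexes stand in for the kernel's raw
spinlocks.

#include <pthread.h>
#include <stdbool.h>

struct fake_rt_rq {
	pthread_mutex_t rt_runtime_lock;  /* cheap, per-rt_rq lock */
	pthread_mutex_t rq_lock;          /* expensive, contended per-cpu lock */
	unsigned long long rt_time;       /* accumulated RT runtime this period */
	unsigned int rt_nr_running;       /* RT tasks currently queued */
};

static void period_timer_tick(struct fake_rt_rq *rt_rq)
{
	bool skip;

	/* Cheap peek first: if no runtime was consumed and nothing is
	 * queued, taking the big lock would accomplish nothing. */
	pthread_mutex_lock(&rt_rq->rt_runtime_lock);
	skip = !rt_rq->rt_time && !rt_rq->rt_nr_running;
	pthread_mutex_unlock(&rt_rq->rt_runtime_lock);
	if (skip)
		return;

	/* Only pay for the contended lock when there is real work. */
	pthread_mutex_lock(&rt_rq->rq_lock);
	/* ... replenish runtime, unthrottle and requeue as needed ... */
	pthread_mutex_unlock(&rt_rq->rq_lock);
}

The peek is purely an optimization; the real decisions are still made under
rq->lock below.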
Orabug: 25491970
Signed-off-by: Dave Kleikamp <dave.kleikamp@...cle.com>
Cc: Ingo Molnar <mingo@...hat.com>
Cc: Peter Zijlstra <peterz@...radead.org>
---
kernel/sched/rt.c | 11 +++++++++++
1 file changed, 11 insertions(+)
diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
index 9f3e40226dec..ae4a8c529a02 100644
--- a/kernel/sched/rt.c
+++ b/kernel/sched/rt.c
@@ -840,6 +840,17 @@ static int do_sched_rt_period_timer(struct rt_bandwidth *rt_b, int overrun)
 		int enqueue = 0;
 		struct rt_rq *rt_rq = sched_rt_period_rt_rq(rt_b, i);
 		struct rq *rq = rq_of_rt_rq(rt_rq);
+		int skip;
+
+		/*
+		 * When span == cpu_online_mask, taking each rq->lock
+		 * can be time-consuming. Try to avoid it when possible.
+		 */
+		raw_spin_lock(&rt_rq->rt_runtime_lock);
+		skip = !rt_rq->rt_time && !rt_rq->rt_nr_running;
+		raw_spin_unlock(&rt_rq->rt_runtime_lock);
+		if (skip)
+			continue;
 
 		raw_spin_lock(&rq->lock);
 		if (rt_rq->rt_time) {
--
2.12.2