Date:   Tue, 23 May 2017 01:46:34 -0700
From:   tip-bot for Dave Kleikamp <tipbot@...or.com>
To:     linux-tip-commits@...r.kernel.org
Cc:     peterz@...radead.org, torvalds@...ux-foundation.org, hpa@...or.com,
        dave.kleikamp@...cle.com, tglx@...utronix.de,
        linux-kernel@...r.kernel.org, mingo@...nel.org
Subject: [tip:sched/core] sched/rt: Minimize rq->lock contention in
 do_sched_rt_period_timer()

Commit-ID:  c249f255aab86b9b187ba319b9d2684841ac7c8d
Gitweb:     http://git.kernel.org/tip/c249f255aab86b9b187ba319b9d2684841ac7c8d
Author:     Dave Kleikamp <dave.kleikamp@...cle.com>
AuthorDate: Mon, 15 May 2017 14:14:13 -0500
Committer:  Ingo Molnar <mingo@...nel.org>
CommitDate: Tue, 23 May 2017 10:01:34 +0200

sched/rt: Minimize rq->lock contention in do_sched_rt_period_timer()

With CONFIG_RT_GROUP_SCHED=y, do_sched_rt_period_timer() sequentially
takes each CPU's rq->lock. On a large, busy system, the cumulative time spent
acquiring all of those locks can be excessive, even triggering a watchdog
timeout.

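For orientation, the hunk below sits inside a per-CPU loop; a simplified
sketch of the pre-patch shape (assuming, as the hunk context suggests, a
for_each_cpu() walk over the period's CPU span; a sketch, not the literal
source):

	for_each_cpu(i, span) {
		struct rt_rq *rt_rq = sched_rt_period_rt_rq(rt_b, i);
		struct rq *rq = rq_of_rt_rq(rt_rq);

		/* one (potentially contended) rq->lock acquisition per CPU */
		raw_spin_lock(&rq->lock);
		/* ... replenish rt_time, re-enqueue throttled rt_rqs ... */
		raw_spin_unlock(&rq->lock);
	}
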
If rt_rq->rt_time and rt_rq->rt_nr_running are both zero, the function has
nothing to do while holding the lock, so don't bother taking it at all.

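In simplified form, the fix snapshots that condition under
rt_rq->rt_runtime_lock (which serializes updates to rt_time) and skips the
CPU before rq->lock is ever touched; again a sketch, not the literal source:

	for_each_cpu(i, span) {
		struct rt_rq *rt_rq = sched_rt_period_rt_rq(rt_b, i);
		struct rq *rq = rq_of_rt_rq(rt_rq);
		int skip;

		raw_spin_lock(&rt_rq->rt_runtime_lock);
		skip = !rt_rq->rt_time && !rt_rq->rt_nr_running;
		raw_spin_unlock(&rt_rq->rt_runtime_lock);
		if (skip)
			continue;	/* idle rt_rq: rq->lock never taken */

		raw_spin_lock(&rq->lock);
		/* ... original per-CPU body ... */
		raw_spin_unlock(&rq->lock);
	}
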
Signed-off-by: Dave Kleikamp <dave.kleikamp@...cle.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>
Cc: Linus Torvalds <torvalds@...ux-foundation.org>
Cc: Peter Zijlstra <peterz@...radead.org>
Cc: Thomas Gleixner <tglx@...utronix.de>
Link: http://lkml.kernel.org/r/a767637b-df85-912f-ba69-c90ee00a3fb6@oracle.com
Signed-off-by: Ingo Molnar <mingo@...nel.org>
---
 kernel/sched/rt.c | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
index c18b500..581d5c7 100644
--- a/kernel/sched/rt.c
+++ b/kernel/sched/rt.c
@@ -840,6 +840,17 @@ static int do_sched_rt_period_timer(struct rt_bandwidth *rt_b, int overrun)
 		int enqueue = 0;
 		struct rt_rq *rt_rq = sched_rt_period_rt_rq(rt_b, i);
 		struct rq *rq = rq_of_rt_rq(rt_rq);
+		int skip;
+
+		/*
+		 * When span == cpu_online_mask, taking each rq->lock
+		 * can be time-consuming. Try to avoid it when possible.
+		 */
+		raw_spin_lock(&rt_rq->rt_runtime_lock);
+		skip = !rt_rq->rt_time && !rt_rq->rt_nr_running;
+		raw_spin_unlock(&rt_rq->rt_runtime_lock);
+		if (skip)
+			continue;
 
 		raw_spin_lock(&rq->lock);
 		if (rt_rq->rt_time) {
