Message-Id: <20220325150419.337871326@linuxfoundation.org>
Date:   Fri, 25 Mar 2022 16:05:06 +0100
From:   Greg Kroah-Hartman <gregkh@...uxfoundation.org>
To:     linux-kernel@...r.kernel.org
Cc:     Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
        stable@...r.kernel.org, Tim Murray <timmurray@...gle.com>,
        Joel Fernandes <joelaf@...gle.com>,
        Neeraj Upadhyay <quic_neeraju@...cinc.com>,
        "Uladzislau Rezki (Sony)" <urezki@...il.com>,
        Todd Kjos <tkjos@...gle.com>,
        Sandeep Patil <sspatil@...gle.com>,
        "Paul E. McKenney" <paulmck@...nel.org>
Subject: [PATCH 5.4 26/29] rcu: Don't deboost before reporting expedited quiescent state

From: Paul E. McKenney <paulmck@...nel.org>

commit 10c535787436d62ea28156a4b91365fd89b5a432 upstream.

Currently rcu_preempt_deferred_qs_irqrestore() releases rnp->boost_mtx
before reporting the expedited quiescent state.  Under heavy real-time
load, this can result in this function being preempted before the
quiescent state is reported, which can in turn prevent the expedited grace
period from completing.  Tim Murray reports that the resulting expedited
grace periods can take hundreds of milliseconds and even more than one
second, when they should normally complete in less than a millisecond.

This was fine given that there were no particular response-time
constraints for synchronize_rcu_expedited(), as it was designed
for throughput rather than latency.  However, some users now need
sub-100-millisecond response-time constraints.

This patch therefore follows Neeraj's suggestion (seconded by Tim and
by Uladzislau Rezki) of simply reversing the two operations.
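
[For illustration only, not part of the patch: a minimal userspace analogy
of the ordering hazard, written against pthreads. Every name below
(boost_mtx standing in for rnp->boost_mtx, exp_reported standing in for
rcu_report_exp_rnp(), and both functions) is hypothetical and exists only
to sketch why the order of the two operations matters.]

/*
 * Userspace analogy (pthreads), not kernel code: if a task drops its
 * priority boost before publishing completion, it can be preempted in
 * the window between the two steps, stalling every waiter.
 */
#include <pthread.h>
#include <stdbool.h>

static pthread_mutex_t boost_mtx = PTHREAD_MUTEX_INITIALIZER; /* ~ rnp->boost_mtx */
static pthread_mutex_t exp_lock  = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  exp_done  = PTHREAD_COND_INITIALIZER;
static bool exp_reported;

/* Old ordering, by analogy: unboost first, then report. */
void report_qs_after_unboost(void)
{
	pthread_mutex_unlock(&boost_mtx);	/* priority can drop here... */
	/* ...so an RT task may preempt us in this window, stalling waiters. */
	pthread_mutex_lock(&exp_lock);
	exp_reported = true;			/* ~ rcu_report_exp_rnp() */
	pthread_cond_broadcast(&exp_done);
	pthread_mutex_unlock(&exp_lock);
}

/* Fixed ordering, by analogy: report while still boosted, then unboost. */
void report_qs_before_unboost(void)
{
	pthread_mutex_lock(&exp_lock);
	exp_reported = true;			/* report first... */
	pthread_cond_broadcast(&exp_done);
	pthread_mutex_unlock(&exp_lock);
	pthread_mutex_unlock(&boost_mtx);	/* ...only now give up the boost */
}

int main(void)
{
	pthread_mutex_lock(&boost_mtx);		/* pretend we hold the boost */
	report_qs_before_unboost();
	return 0;
}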

Reported-by: Tim Murray <timmurray@...gle.com>
Reported-by: Joel Fernandes <joelaf@...gle.com>
Reported-by: Neeraj Upadhyay <quic_neeraju@...cinc.com>
Reviewed-by: Neeraj Upadhyay <quic_neeraju@...cinc.com>
Reviewed-by: Uladzislau Rezki (Sony) <urezki@...il.com>
Tested-by: Tim Murray <timmurray@...gle.com>
Cc: Todd Kjos <tkjos@...gle.com>
Cc: Sandeep Patil <sspatil@...gle.com>
Cc: <stable@...r.kernel.org> # 5.4.x
Signed-off-by: Paul E. McKenney <paulmck@...nel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
---
 kernel/rcu/tree_plugin.h |    9 +++++----
 1 file changed, 5 insertions(+), 4 deletions(-)

--- a/kernel/rcu/tree_plugin.h
+++ b/kernel/rcu/tree_plugin.h
@@ -523,16 +523,17 @@ rcu_preempt_deferred_qs_irqrestore(struc
 			raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
 		}
 
-		/* Unboost if we were boosted. */
-		if (IS_ENABLED(CONFIG_RCU_BOOST) && drop_boost_mutex)
-			rt_mutex_futex_unlock(&rnp->boost_mtx);
-
 		/*
 		 * If this was the last task on the expedited lists,
 		 * then we need to report up the rcu_node hierarchy.
 		 */
 		if (!empty_exp && empty_exp_now)
 			rcu_report_exp_rnp(rnp, true);
+
+		/* Unboost if we were boosted. */
+		if (IS_ENABLED(CONFIG_RCU_BOOST) && drop_boost_mutex)
+			rt_mutex_futex_unlock(&rnp->boost_mtx);
+
 	} else {
 		local_irq_restore(flags);
 	}

