Date:   Tue, 19 Oct 2021 02:08:13 +0200
From:   Frederic Weisbecker <>
To:     "Paul E. McKenney" <>
Cc:     LKML <>,
        Frederic Weisbecker <>,
        Sebastian Andrzej Siewior <>,
        Valentin Schneider <>,
        Peter Zijlstra <>,
        Uladzislau Rezki <>,
        Thomas Gleixner <>,
        Valentin Schneider <>,
        Boqun Feng <>,
        Neeraj Upadhyay <>,
        Josh Triplett <>,
        Joel Fernandes <>
Subject: [PATCH 07/10] rcu/nocb: Limit number of softirq callbacks only on softirq

The current condition limiting the number of callbacks executed in a
row checks the offloaded state of the rdp. Not only is that state
volatile, it is also misleading: rcu_core() may well be executing
callbacks concurrently with the NOCB kthreads, and the offloaded state
would then be observed in both cases. As a result, while in the middle
of the (de-)offloading process, the limit would spuriously stop
applying to softirq processing.

Fix and clarify the condition with those constraints in mind:

_ If callbacks are processed either by an rcuc or a NOCB kthread, the
  call to cond_resched_tasks_rcu_qs() is enough to take care of the
  overload.

_ If instead callbacks are processed by softirq:
  * If need_resched(), exit the callback processing.
  * Otherwise, if the CPU is idle, we can continue.
  * Otherwise, exit, because a softirq shouldn't interrupt a task for
    too long nor deprive other pending softirq vectors of the CPU.

Tested-by: Valentin Schneider <>
Tested-by: Sebastian Andrzej Siewior <>
Signed-off-by: Frederic Weisbecker <>
Cc: Valentin Schneider <>
Cc: Peter Zijlstra <>
Cc: Sebastian Andrzej Siewior <>
Cc: Josh Triplett <>
Cc: Joel Fernandes <>
Cc: Boqun Feng <>
Cc: Neeraj Upadhyay <>
Cc: Uladzislau Rezki <>
Cc: Thomas Gleixner <>
Signed-off-by: Paul E. McKenney <>
---
 kernel/rcu/tree.c | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index eaa9c7ce91bb..716dead1509d 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -2535,9 +2535,8 @@ static void rcu_do_batch(struct rcu_data *rdp)
 		/*
 		 * Stop only if limit reached and CPU has something to do.
 		 */
-		if (count >= bl && !offloaded &&
-		    (need_resched() ||
-		     (!is_idle_task(current) && !rcu_is_callbacks_kthread())))
+		if (count >= bl && in_serving_softirq() &&
+		    (need_resched() || !is_idle_task(current)))
 			break;
 		if (unlikely(tlimit)) {
 			/* only call local_clock() every 32 callbacks */
