Message-Id: <20210414121226.2650-4-urezki@gmail.com>
Date:   Wed, 14 Apr 2021 14:12:24 +0200
From:   "Uladzislau Rezki (Sony)" <urezki@...il.com>
To:     LKML <linux-kernel@...r.kernel.org>, RCU <rcu@...r.kernel.org>,
        "Paul E . McKenney" <paulmck@...nel.org>
Cc:     Michal Hocko <mhocko@...e.com>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Daniel Axtens <dja@...ens.net>,
        Frederic Weisbecker <frederic@...nel.org>,
        Neeraj Upadhyay <neeraju@...eaurora.org>,
        Joel Fernandes <joel@...lfernandes.org>,
        Peter Zijlstra <peterz@...radead.org>,
        Thomas Gleixner <tglx@...utronix.de>,
        "Theodore Y . Ts'o" <tytso@....edu>,
        Sebastian Andrzej Siewior <bigeasy@...utronix.de>,
        Uladzislau Rezki <urezki@...il.com>,
        Oleksiy Avramchenko <oleksiy.avramchenko@...ymobile.com>
Subject: [PATCH 4/6] kvfree_rcu: add a bulk-list check when a scheduler is run

The RCU_SCHEDULER_RUNNING state is set once the scheduler is up and
available. That signal is used to check for and queue the "monitor
work" which reclaims objects, if any, that were freed during the
boot-up phase.

We need it because the main path of a kvfree_rcu() call cannot queue
such work until the scheduler is up and running.

Currently that helper checks only "krcp->head" to figure out whether
there are outstanding objects to be released, which covers just one
channel. After the addition of the bulk interface there are two extra
channels that have to be checked as well: "krcp->bkvhead[0]" and
"krcp->bkvhead[1]". So, the "monitor work" has to be queued if _any_
of the corresponding channels is not empty.

Signed-off-by: Uladzislau Rezki (Sony) <urezki@...il.com>
---
 kernel/rcu/tree.c | 10 ++++------
 1 file changed, 4 insertions(+), 6 deletions(-)

diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index 31ee820c3d9e..da3605067cc1 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -3723,13 +3723,11 @@ void __init kfree_rcu_scheduler_running(void)
 		struct kfree_rcu_cpu *krcp = per_cpu_ptr(&krc, cpu);
 
 		raw_spin_lock_irqsave(&krcp->lock, flags);
-		if (!krcp->head || test_and_set_bit(KRC_MONITOR_TODO, &krcp->flags)) {
-			raw_spin_unlock_irqrestore(&krcp->lock, flags);
-			continue;
+		if ((krcp->bkvhead[0] || krcp->bkvhead[1] || krcp->head) &&
+				!test_and_set_bit(KRC_MONITOR_TODO, &krcp->flags)) {
+			schedule_delayed_work_on(cpu, &krcp->monitor_work,
+						 KFREE_DRAIN_JIFFIES);
 		}
-
-		schedule_delayed_work_on(cpu, &krcp->monitor_work,
-					 KFREE_DRAIN_JIFFIES);
 		raw_spin_unlock_irqrestore(&krcp->lock, flags);
 	}
 }
-- 
2.20.1
