Message-Id: <20240802003046.4134043-1-paulmck@kernel.org>
Date: Thu,  1 Aug 2024 17:30:45 -0700
From: "Paul E. McKenney" <paulmck@...nel.org>
To: linux-kernel@...r.kernel.org
Cc: Tejun Heo <tj@...nel.org>,
	Lai Jiangshan <jiangshanlai@...il.com>,
	Breno Leitao <leitao@...ian.org>,
	Rik van Riel <riel@...riel.com>,
	Anhad Jai Singh <ffledgling@...a.com>,
	Oleg Nesterov <oleg@...hat.com>,
	Jens Axboe <axboe@...nel.dk>,
	Christian Brauner <brauner@...nel.org>,
	Andrew Morton <akpm@...ux-foundation.org>,
	"Matthew Wilcox (Oracle)" <willy@...radead.org>,
	Chris Mason <clm@...com>,
	"Paul E. McKenney" <paulmck@...nel.org>
Subject: [PATCH misc 1/2] workqueue: Add check for clocks going backwards to wq_worker_tick()

Experimental, might never go to mainline.

There has been some evidence of clocks going backwards, producing
"workqueue: kfree_rcu_monitor hogged CPU" diagnostics on idle systems
just after a clocksource change.  This diagnostic commit checks for
such backwards motion, ignoring any runtime difference that would be
negative if interpreted as a signed 64-bit integer.
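
For reference, the check added below relies on the usual idiom for
wrap-safe time comparison: subtract the two u64 nanosecond counts and
reinterpret the result as s64, so that a backwards step (within 2^63 ns)
shows up as a negative value.  A minimal stand-alone sketch of that
idiom follows, with illustrative names rather than anything taken from
workqueue.c:

	#include <stdint.h>
	#include <stdio.h>

	/*
	 * Nonzero if @now is earlier than @then, that is, the clock
	 * appears to have run backwards.  The unsigned subtraction wraps
	 * on a backwards step; reinterpreting the difference as signed
	 * 64-bit turns that wrap into a negative value, provided the two
	 * samples lie within 2^63 ns of each other.
	 */
	static int clock_went_backwards(uint64_t now, uint64_t then)
	{
		return (int64_t)(now - then) < 0;
	}

	int main(void)
	{
		printf("%d\n", clock_went_backwards(100, 200)); /* 1: backwards */
		printf("%d\n", clock_went_backwards(200, 100)); /* 0: forwards  */
		return 0;
	}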

Signed-off-by: Paul E. McKenney <paulmck@...nel.org>
Cc: Tejun Heo <tj@...nel.org>
Cc: Lai Jiangshan <jiangshanlai@...il.com>
Cc: Breno Leitao <leitao@...ian.org>
Cc: Rik van Riel <riel@...riel.com>
---
 kernel/workqueue.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 1745ca788ede3..4f7b4b32e6b4e 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -1482,6 +1482,7 @@ void wq_worker_tick(struct task_struct *task)
 	 * If the current worker is concurrency managed and hogged the CPU for
 	 * longer than wq_cpu_intensive_thresh_us, it's automatically marked
 	 * CPU_INTENSIVE to avoid stalling other concurrency-managed work items.
+	 * If the difference is negative, ignore it, assuming a backwards clock.
 	 *
 	 * Set @worker->sleeping means that @worker is in the process of
 	 * switching out voluntarily and won't be contributing to
@@ -1491,6 +1492,7 @@ void wq_worker_tick(struct task_struct *task)
 	 * We probably want to make this prettier in the future.
 	 */
 	if ((worker->flags & WORKER_NOT_RUNNING) || READ_ONCE(worker->sleeping) ||
+	    WARN_ON_ONCE((s64)(worker->task->se.sum_exec_runtime - worker->current_at) < 0) ||
 	    worker->task->se.sum_exec_runtime - worker->current_at <
 	    wq_cpu_intensive_thresh_us * NSEC_PER_USEC)
 		return;
-- 
2.40.1

