Message-ID: <aYMn1iTRdbP3JkcH@pathway.suse.cz>
Date: Wed, 4 Feb 2026 12:04:54 +0100
From: Petr Mladek <pmladek@...e.com>
To: Lance Yang <lance.yang@...ux.dev>
Cc: Aaron Tomlin <atomlin@...mlin.com>, neelx@...e.com, sean@...e.io,
akpm@...ux-foundation.org, mproche@...il.com, chjohnst@...il.com,
nick.lange@...il.com, linux-kernel@...r.kernel.org,
mhiramat@...nel.org, joel.granados@...nel.org,
gregkh@...uxfoundation.org
Subject: [PATCH] hung_task: Increment the global counter immediately

A recent change made it possible to reset the global counter of hung
tasks via the sysctl interface. A potential race with the regular check
was avoided by updating the global counter only once, at the end of
the check.

However, the hung task check can take a significant amount of time,
particularly when task information is being dumped to slow serial
consoles. Some users monitor this global counter to trigger immediate
migration of critical containers. Delaying the increment until the
full check completes postpones these high-priority rescue operations.

Update the global counter as soon as a hung task is detected. Since
the value is only read asynchronously, a relaxed atomic operation is
sufficient.

Reported-by: Lance Yang <lance.yang@...ux.dev>
Closes: https://lore.kernel.org/r/f239e00f-4282-408d-b172-0f9885f4b01b@linux.dev
Signed-off-by: Petr Mladek <pmladek@...e.com>
---
This is a follow-up to
https://lore.kernel.org/r/20260125135848.3356585-1-atomlin@atomlin.com
Note that I could not use commit IDs because the original
patchset is not in a stable tree yet. In fact, it seems
that it is not even in linux-next at the moment.
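
For context, the kind of userspace watcher that benefits from the
earlier increment might look roughly like the sketch below. This is
only a minimal sketch, assuming the counter is exposed as
/proc/sys/kernel/hung_task_detect_count; trigger_rescue() is a
hypothetical stand-in for the site-specific migration action:

#include <stdio.h>
#include <unistd.h>

#define DETECT_COUNT_PATH "/proc/sys/kernel/hung_task_detect_count"

/* Hypothetical hook for the container-migration logic. */
static void trigger_rescue(void)
{
	fprintf(stderr, "hung task detected, starting migration\n");
}

static long read_detect_count(void)
{
	FILE *f = fopen(DETECT_COUNT_PATH, "r");
	long val = -1;

	if (f) {
		if (fscanf(f, "%ld", &val) != 1)
			val = -1;
		fclose(f);
	}
	return val;
}

int main(void)
{
	long prev = read_detect_count();

	for (;;) {
		long cur = read_detect_count();

		/* The counter may also be reset (lowered) via sysctl. */
		if (prev >= 0 && cur > prev)
			trigger_rescue();
		prev = cur;
		sleep(1);
	}
}

With the old behavior, such a watcher only sees the counter jump after
the whole scan (including any slow console dumps) has finished; with
this patch it moves as soon as the first hung task is found.
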
Best Regards,
Petr
kernel/hung_task.c | 23 ++++++++---------------
1 file changed, 8 insertions(+), 15 deletions(-)
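
For readers who want the memory-ordering argument spelled out: the
counter is a standalone statistic, no other data is published through
it, so neither side needs acquire/release semantics. A userspace C11
analogue of the pattern used below (the names are illustrative, not
taken from the kernel code):

#include <stdatomic.h>

static _Atomic long detect_count;

/* Writer side: bump the statistic as soon as a hung task is seen. */
static void note_hung_task(void)
{
	/*
	 * Relaxed is sufficient: the value is not used to publish or
	 * order any other memory accesses, it is only sampled on its own.
	 */
	atomic_fetch_add_explicit(&detect_count, 1, memory_order_relaxed);
}

/* Reader side: e.g. a sysctl handler sampling the current value. */
static long sample_detect_count(void)
{
	return atomic_load_explicit(&detect_count, memory_order_relaxed);
}
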
diff --git a/kernel/hung_task.c b/kernel/hung_task.c
index 350093de0535..8bc043fbe89c 100644
--- a/kernel/hung_task.c
+++ b/kernel/hung_task.c
@@ -302,15 +302,10 @@ static void check_hung_uninterruptible_tasks(unsigned long timeout)
 	int max_count = sysctl_hung_task_check_count;
 	unsigned long last_break = jiffies;
 	struct task_struct *g, *t;
-	unsigned long total_count, this_round_count;
+	unsigned long this_round_count;
 	int need_warning = sysctl_hung_task_warnings;
 	unsigned long si_mask = hung_task_si_mask;
 
-	/*
-	 * The counter might get reset. Remember the initial value.
-	 * Acquire prevents reordering task checks before this point.
-	 */
-	total_count = atomic_long_read_acquire(&sysctl_hung_task_detect_count);
 	/*
 	 * If the system crashed already then all bets are off,
 	 * do not report extra hung tasks:
@@ -330,6 +325,13 @@ static void check_hung_uninterruptible_tasks(unsigned long timeout)
 		}
 
 		if (task_is_hung(t, timeout)) {
+			/*
+			 * Increment the global counter so that userspace could
+			 * start migrating tasks ASAP. But count the current
+			 * round separately because userspace could reset
+			 * the global counter at any time.
+			 */
+			atomic_long_inc(&sysctl_hung_task_detect_count);
 			this_round_count++;
 			hung_task_info(t, timeout, this_round_count);
 		}
@@ -340,15 +342,6 @@ static void check_hung_uninterruptible_tasks(unsigned long timeout)
 	if (!this_round_count)
 		return;
 
-	/*
-	 * Do not count this round when the global counter has been reset
-	 * during this check. Release ensures we see all hang details
-	 * recorded during the scan.
-	 */
-	atomic_long_cmpxchg_release(&sysctl_hung_task_detect_count,
-				    total_count, total_count +
-				    this_round_count);
-
 	if (need_warning || hung_task_call_panic) {
 		si_mask |= SYS_INFO_LOCKS;
 
--
2.52.0