Message-ID: <aV_CGOl_rXziqdHZ@pathway.suse.cz>
Date: Thu, 8 Jan 2026 15:41:28 +0100
From: Petr Mladek <pmladek@...e.com>
To: Aaron Tomlin <atomlin@...mlin.com>
Cc: akpm@...ux-foundation.org, lance.yang@...ux.dev, mhiramat@...nel.org,
	gregkh@...uxfoundation.org, joel.granados@...nel.org, sean@...e.io,
	linux-kernel@...r.kernel.org
Subject: Re: [v5 PATCH 2/2] hung_task: Enable runtime reset of
 hung_task_detect_count

On Tue 2025-12-30 19:41:25, Aaron Tomlin wrote:
> Introduce support for writing to /proc/sys/kernel/hung_task_detect_count.
> 
> Writing a value of zero to this file atomically resets the counter of
> detected hung tasks. This grants system administrators the ability to
> clear the cumulative diagnostic history after resolving an incident,
> simplifying monitoring without requiring a system restart.
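
For reference, a minimal sketch of what such a write path could look
like on the sysctl side (the handler name and the temporary value are
assumptions for illustration only; this is not the code from the patch):

	static int hung_task_detect_count_handler(const struct ctl_table *table,
						  int write, void *buffer,
						  size_t *lenp, loff_t *ppos)
	{
		unsigned long val = atomic_long_read(&sysctl_hung_task_detect_count);
		struct ctl_table tmp = *table;
		int ret;

		tmp.data = &val;
		ret = proc_doulongvec_minmax(&tmp, write, buffer, lenp, ppos);
		if (ret || !write)
			return ret;

		/* In this sketch only a write of zero is accepted; it resets the counter. */
		if (val != 0)
			return -EINVAL;

		atomic_long_set(&sysctl_hung_task_detect_count, 0);
		return 0;
	}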

> --- a/kernel/hung_task.c
> +++ b/kernel/hung_task.c
> @@ -36,7 +37,7 @@ static int __read_mostly sysctl_hung_task_check_count = PID_MAX_LIMIT;
>  /*
>   * Total number of tasks detected as hung since boot:
>   */
> -static unsigned long __read_mostly sysctl_hung_task_detect_count;
> +static atomic_long_t sysctl_hung_task_detect_count = ATOMIC_LONG_INIT(0);
>  
>  /*
>   * Limit number of tasks checked in a batch.
> @@ -246,20 +247,26 @@ static inline void hung_task_diagnostics(struct task_struct *t)
>  }
>  
>  static void check_hung_task(struct task_struct *t, unsigned long timeout,
> -		unsigned long prev_detect_count)
> +			    unsigned long prev_detect_count)
>  {
> -	unsigned long total_hung_task;
> +	unsigned long total_hung_task, cur_detect_count;
>  
>  	if (!task_is_hung(t, timeout))
>  		return;
>  
>  	/*
>  	 * This counter tracks the total number of tasks detected as hung
> -	 * since boot.
> +	 * since boot. If a reset occurred during the scan, we treat the
> +	 * current count as the new delta to avoid an underflow error.
> +	 * Ensure hang details are globally visible before the counter
> +	 * update.
>  	 */
> -	sysctl_hung_task_detect_count++;
> +	cur_detect_count = atomic_long_inc_return_release(&sysctl_hung_task_detect_count);

The _release() feels a bit weird here because the counter might
get incremented several times during one scan.

IMHO, it should be perfectly fine to use the _relaxed variant here
because it sits in the middle of the acquire/release pair, see below.
The important thing is that the load/modify/store operation is done
atomically.
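
I.e., something along these lines (just a sketch of the _relaxed
variant, not a tested change):

	/*
	 * Plain atomic increment; the ordering against the rest of the
	 * scan comes from the acquire read at the start and the barrier
	 * before the final read at the end of the scan.
	 */
	cur_detect_count = atomic_long_inc_return_relaxed(&sysctl_hung_task_detect_count);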

> +	if (cur_detect_count >= prev_detect_count)
> +		total_hung_task = cur_detect_count - prev_detect_count;
> +	else
> +		total_hung_task = cur_detect_count;
>  
> -	total_hung_task = sysctl_hung_task_detect_count - prev_detect_count;
>  	trace_sched_process_hang(t);
>  
>  	if (sysctl_hung_task_panic && total_hung_task >= sysctl_hung_task_panic) {
> @@ -318,10 +325,12 @@ static void check_hung_uninterruptible_tasks(unsigned long timeout)
>  	int max_count = sysctl_hung_task_check_count;
>  	unsigned long last_break = jiffies;
>  	struct task_struct *g, *t;
> -	unsigned long prev_detect_count = sysctl_hung_task_detect_count;
> +	unsigned long cur_detect_count, prev_detect_count, delta;
>  	int need_warning = sysctl_hung_task_warnings;
>  	unsigned long si_mask = hung_task_si_mask;
>  
> +	/* Acquire prevents reordering task checks before this point. */
> +	prev_detect_count = atomic_long_read_acquire(&sysctl_hung_task_detect_count);

This value is read before the scan starts => the _acquire
semantics/barrier fits here.

>  	/*
>  	 * If the system crashed already then all bets are off,
>  	 * do not report extra hung tasks:
> @@ -346,7 +355,14 @@ static void check_hung_uninterruptible_tasks(unsigned long timeout)
>   unlock:
>  	rcu_read_unlock();
>  
> -	if (!(sysctl_hung_task_detect_count - prev_detect_count))
> +	/* Ensures we see all hang details recorded during the scan. */
> +	cur_detect_count = atomic_long_read_acquire(&sysctl_hung_task_detect_count);

This value is read at the end of the scan => the _release
semantics/barrier should be here.
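
There is no _release flavour for a plain read, though, so the intent
would have to be written differently. One possibility (only a sketch,
assuming an explicit full barrier is acceptable on this slow path, not
necessarily the best option):

	/*
	 * Order everything done during the scan before the final read
	 * of the counter.
	 */
	smp_mb();
	cur_detect_count = atomic_long_read(&sysctl_hung_task_detect_count);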

> +	if (cur_detect_count < prev_detect_count)
> +		delta = cur_detect_count;
> +	else
> +		delta = cur_detect_count - prev_detect_count;
> +
> +	if (!delta)
>  		return;
>  
>  	if (need_warning || hung_task_call_panic) {

Otherwise, I do not have anything more to add. I agree with the other
proposals, for example:

   + remove 1st patch
   + split 2nd patch into two
   + changes in the sysctl code proposed by Joel

Best Regards,
Petr
