Date: Thu, 20 Jun 2024 13:20:09 -0700
From: Andrew Morton <akpm@...ux-foundation.org>
To: alexjlzheng@...il.com
Cc: brauner@...nel.org, axboe@...nel.dk, oleg@...hat.com,
 tandersen@...flix.com, willy@...radead.org, mjguzik@...il.com,
 alexjlzheng@...cent.com, linux-kernel@...r.kernel.org, linux-mm@...ck.org
Subject: Re: [PATCH] mm: optimize the redundant loop of
 mm_update_next_owner()

On Thu, 20 Jun 2024 20:21:24 +0800 alexjlzheng@...il.com wrote:

> From: Jinliang Zheng <alexjlzheng@...cent.com>
> 
> When mm_update_next_owner() races with swapoff (try_to_unuse()), /proc,
> ptrace, or page migration (get_task_mm()), the loop can never find a
> task_struct whose mm_struct matches the target mm_struct.
> 
> If the above race is combined with the stress-ng-zombie and stress-ng-dup
> tests, this long loop, which runs with tasklist_lock read-held, can keep a
> CPU spinning in write_lock_irq() long enough to trigger a hard lockup.

This is not an optimization!  A userspace-triggerable hard lockup is a
serious bug.

> Recognize this situation in advance and exit early.
> 
> Signed-off-by: Jinliang Zheng <alexjlzheng@...cent.com>
> ---
>  kernel/exit.c | 2 ++
>  1 file changed, 2 insertions(+)
> 
> diff --git a/kernel/exit.c b/kernel/exit.c
> index f95a2c1338a8..81fcee45d630 100644
> --- a/kernel/exit.c
> +++ b/kernel/exit.c
> @@ -484,6 +484,8 @@ void mm_update_next_owner(struct mm_struct *mm)
>  	 * Search through everything else, we should not get here often.
>  	 */
>  	for_each_process(g) {
> +		if (atomic_read(&mm->mm_users) <= 1)
> +			break;
>  		if (g->flags & PF_KTHREAD)
>  			continue;
>  		for_each_thread(g, c) {
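
For context, here is roughly where the proposed check sits in the
surrounding function.  This is a hedged sketch paraphrased from
mm_update_next_owner() in kernel/exit.c; the earlier child/sibling fast
paths and the owner-assignment tail are elided:

	void mm_update_next_owner(struct mm_struct *mm)
	{
		struct task_struct *g, *c;

		/* ... fast paths over children and siblings elided ... */

		read_lock(&tasklist_lock);	/* held across the entire scan */
		for_each_process(g) {
			/*
			 * The proposed early exit: once nobody else holds
			 * a reference on this mm, no remaining task can be
			 * using it, so the rest of the scan cannot succeed.
			 */
			if (atomic_read(&mm->mm_users) <= 1)
				break;
			if (g->flags & PF_KTHREAD)
				continue;	/* kthreads never own a user mm */
			for_each_thread(g, c) {
				if (c->mm == mm)
					goto assign_new_owner;
				if (c->mm)
					break;	/* threads share one mm; try next process */
			}
		}
		read_unlock(&tasklist_lock);
		/* ... failure path elided; see the sketch further down ... */
		return;

	assign_new_owner:
		/* ... pin c, re-check c->mm == mm under task_lock(c), set mm->owner ... */
	}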

I agree that the patch is an optimization in some cases.  But does it
really fix the issue?  Isn't the problem simply that this search is too
lengthy?

Isn't it still possible for this search to take too much time even before
the new check triggers?

I wonder if this loop really does anything useful.  "we should not get
here often".  Well, under what circumstances *do* we get here?  What
goes wrong if we simply remove the entire loop?
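
For what it's worth, when the scan comes up empty the function already just
gives up and clears the owner.  A hedged sketch of that failure path,
paraphrased from kernel/exit.c (the in-tree comment reads much like the
changelog quoted above):

	/*
	 * Paraphrased failure path: no candidate was found even though
	 * mm_users > 1, i.e. we raced with swapoff, /proc, ptrace or
	 * page migration.  The owner is simply cleared -- which is also
	 * what removing the loop outright would leave behind, minus the
	 * cases where the scan would have found a new owner.
	 */
	mm->owner = NULL;
	return;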
