Message-ID: <0c632df4-7128-405a-bf92-083a335831f0@lucifer.local>
Date: Sun, 18 Jan 2026 12:06:53 +0000
From: Lorenzo Stoakes <lorenzo.stoakes@...cle.com>
To: Deepanshu Kartikey <kartikey406@...il.com>
Cc: akpm@...ux-foundation.org, david@...nel.org, riel@...riel.com,
        Liam.Howlett@...cle.com, vbabka@...e.cz, harry.yoo@...cle.com,
        jannh@...gle.com, linux-mm@...ck.org, linux-kernel@...r.kernel.org,
        syzbot+c27fa543e10a45d4e149@...kaller.appspotmail.com
Subject: Re: [PATCH] mm/rmap: fix unlink_anon_vmas() handling of error case
 from anon_vma_fork

On Sun, Jan 18, 2026 at 04:28:17PM +0530, Deepanshu Kartikey wrote:
> When anon_vma_fork() encounters a memory allocation failure after
> anon_vma_clone() has already succeeded, unlink_anon_vmas() is called
> with vma->anon_vma still NULL but with vma->anon_vma_chain populated,
> and those chain entries are still linked into the anon_vma interval
> trees.
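>
> For orientation, here is (roughly) the linkage involved; the field
> names follow struct anon_vma_chain in include/linux/rmap.h, trimmed
> down to the parts relevant here:
>
> 	struct anon_vma_chain {
> 		struct vm_area_struct *vma;	/* the VMA owning this entry */
> 		struct anon_vma *anon_vma;	/* the anon_vma the entry links into */
> 		struct list_head same_vma;	/* entry on vma->anon_vma_chain */
> 		struct rb_node rb;		/* node in anon_vma->rb_root interval tree */
> 		unsigned long rb_subtree_last;	/* interval tree augmentation */
> 	};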
>
> This happens in the following sequence (sketched in code below):
> 1. anon_vma_clone() succeeds, populating vma->anon_vma_chain and
>    inserting entries into interval trees
> 2. maybe_reuse_anon_vma() does not set vma->anon_vma because reuse
>    conditions are not met (common case for active processes)
> 3. anon_vma_alloc() or anon_vma_chain_alloc() fails due to memory
>    pressure
> 4. Error path invokes unlink_anon_vmas() with vma->anon_vma == NULL
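>
> A condensed sketch of that flow (simplified, not verbatim mm/rmap.c;
> error labels and intermediate steps are elided):
>
> 	int anon_vma_fork(struct vm_area_struct *vma, struct vm_area_struct *pvma)
> 	{
> 		struct anon_vma *anon_vma;
> 		int error;
>
> 		/* (1) attach the child VMA to the parent's anon_vmas */
> 		error = anon_vma_clone(vma, pvma);
> 		if (error)
> 			return error;
>
> 		/* (2) no anon_vma was reused, vma->anon_vma is still NULL */
> 		if (vma->anon_vma)
> 			return 0;
>
> 		/* (3) either allocation can fail under memory pressure */
> 		anon_vma = anon_vma_alloc();
> 		if (!anon_vma)
> 			goto out_error;
> 		...
>  out_error:
> 		/* (4) called with vma->anon_vma == NULL but a populated chain */
> 		unlink_anon_vmas(vma);
> 		return -ENOMEM;
> 	}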
>
> The existing code triggers the VM_WARN_ON_ONCE() and returns without
> performing any cleanup, leaving dangling entries in the interval trees
> and leaking the anon_vma_chain allocations.
>
> Fix this by detecting the condition and properly cleaning up:
> - Iterate through the populated chain
> - Lock each anon_vma
> - Remove entries from interval trees
> - Unlock and free chain entries
>
> This prevents both the warning and the resource leaks.

BTW, this reads rather like it was AI-generated; can you indicate
whether that was the case or not? :) Thanks.

We generally require acknowledgment of substantial AI assistance in
submissions.

Cheers, Lorenzo

>
> Reported-by: syzbot+c27fa543e10a45d4e149@...kaller.appspotmail.com
> Closes: https://syzkaller.appspot.com/bug?extid=c27fa543e10a45d4e149
> Tested-by: syzbot+c27fa543e10a45d4e149@...kaller.appspotmail.com
> Signed-off-by: Deepanshu Kartikey <kartikey406@...il.com>
> ---
>  mm/rmap.c | 26 +++++++++++++++++++++++++-
>  1 file changed, 25 insertions(+), 1 deletion(-)
>
> diff --git a/mm/rmap.c b/mm/rmap.c
> index f13480cb9f2e..acc8df6ad4a7 100644
> --- a/mm/rmap.c
> +++ b/mm/rmap.c
> @@ -477,7 +477,31 @@ void unlink_anon_vmas(struct vm_area_struct *vma)
>
>  	/* Unfaulted is a no-op. */
>  	if (!active_anon_vma) {
> -		VM_WARN_ON_ONCE(!list_empty(&vma->anon_vma_chain));
> +		/*
> +		 * Handle anon_vma_fork() error path where anon_vma_clone()
> +		 * succeeded and populated the chain (with entries in interval
> +		 * trees), but maybe_reuse_anon_vma() didn't set vma->anon_vma
> +		 * because reuse conditions weren't met, and a later allocation
> +		 * failed before we could allocate and assign a new anon_vma.
> +		 *
> +		 * We must properly remove entries from interval trees before
> +		 * freeing to avoid leaving dangling pointers.
> +		 */
> +		if (!list_empty(&vma->anon_vma_chain)) {
> +			struct anon_vma_chain *avc, *next;
> +
> +			list_for_each_entry_safe(avc, next, &vma->anon_vma_chain,
> +						same_vma) {
> +				struct anon_vma *anon_vma = avc->anon_vma;
> +
> +				anon_vma_lock_write(anon_vma);
> +				anon_vma_interval_tree_remove(avc, &anon_vma->rb_root);
> +				anon_vma_unlock_write(anon_vma);
> +				list_del(&avc->same_vma);
> +				anon_vma_chain_free(avc);
> +			}
> +		}
> +
>  		return;
>  	}
>
> --
> 2.43.0
>
