Message-ID: <aWTLGxeFm5BkSb4b@pc636>
Date: Mon, 12 Jan 2026 11:21:15 +0100
From: Uladzislau Rezki <urezki@...il.com>
To: Deepanshu Kartikey <kartikey406@...il.com>
Cc: akpm@...ux-foundation.org, urezki@...il.com, linux-mm@...ck.org,
	linux-kernel@...r.kernel.org,
	syzbot+d8d4c31d40f868eaea30@...kaller.appspotmail.com
Subject: Re: [PATCH] mm/vmalloc: prevent RCU stalls in kasan_release_vmalloc_node

On Mon, Jan 12, 2026 at 02:17:23PM +0530, Deepanshu Kartikey wrote:
> When CONFIG_PAGE_OWNER is enabled, freeing KASAN shadow pages during
> vmalloc cleanup triggers expensive stack unwinding that acquires RCU
> read locks. Processing a large purge_list without rescheduling can
> cause the task to hold the CPU for extended periods (10+ seconds), leading
> to RCU stalls and potential OOM conditions.
> 
> The issue manifests in purge_vmap_node() -> kasan_release_vmalloc_node()
> where iterating through hundreds or thousands of vmap_area entries and
> freeing their associated shadow pages causes:
> 
>   rcu: INFO: rcu_preempt detected stalls on CPUs/tasks:
>   rcu: Tasks blocked on level-0 rcu_node (CPUs 0-1): P6229/1:b..l
>   ...
>   task:kworker/0:17 state:R running task stack:28840 pid:6229
>   ...
>   kasan_release_vmalloc_node+0x1ba/0xad0 mm/vmalloc.c:2299
>   purge_vmap_node+0x1ba/0xad0 mm/vmalloc.c:2299
> 
> Each call to kasan_release_vmalloc() can free many pages, and with
> page_owner tracking, each free triggers save_stack() which performs
> stack unwinding under an RCU read lock. Without yielding, this creates
> an unbounded RCU critical section.
> 
> Add periodic cond_resched() calls within the loop to allow:
> - RCU grace periods to complete
> - Other tasks to run
> - Scheduler to preempt when needed
> 
> The fix uses need_resched() for immediate response under load, with
> a batch count of 32 as a guaranteed upper bound to prevent worst-case
> stalls even under light load.
> 
> Reported-by: syzbot+d8d4c31d40f868eaea30@...kaller.appspotmail.com
> Closes: https://syzkaller.appspot.com/bug?extid=d8d4c31d40f868eaea30
> Signed-off-by: Deepanshu Kartikey <kartikey406@...il.com>
> ---
>  mm/vmalloc.c | 6 ++++++
>  1 file changed, 6 insertions(+)
> 
> diff --git a/mm/vmalloc.c b/mm/vmalloc.c
> index 41dd01e8430c..a9161007cf02 100644
> --- a/mm/vmalloc.c
> +++ b/mm/vmalloc.c
> @@ -2273,6 +2273,7 @@ kasan_release_vmalloc_node(struct vmap_node *vn)
>  {
>  	struct vmap_area *va;
>  	unsigned long start, end;
> +	unsigned int batch_count = 0;
>  
>  	start = list_first_entry(&vn->purge_list, struct vmap_area, list)->va_start;
>  	end = list_last_entry(&vn->purge_list, struct vmap_area, list)->va_end;
> @@ -2282,6 +2283,11 @@ kasan_release_vmalloc_node(struct vmap_node *vn)
>  			kasan_release_vmalloc(va->va_start, va->va_end,
>  				va->va_start, va->va_end,
>  				KASAN_VMALLOC_PAGE_RANGE);
> +
> +			if (need_resched() || (++batch_count >= 32)) {
> +				cond_resched();
> +				batch_count = 0;
> +			}
>  	}
>  
>  	kasan_release_vmalloc(start, end, start, end, KASAN_VMALLOC_TLB_FLUSH);
> -- 
> 2.43.0
> 
Could you introduce a macro to represent the upper bound, instead of hard-coding 32?
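Untested sketch of what I mean; the macro name below is just an example, pick
whatever fits:

	/* Upper bound of freed ranges to process before rescheduling. */
	#define KASAN_RELEASE_BATCH_SIZE	32

and in the loop:

	if (need_resched() || ++batch_count >= KASAN_RELEASE_BATCH_SIZE) {
		cond_resched();
		batch_count = 0;
	}

That way the limit is documented in one place rather than as a magic number.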

Thanks!

--
Uladzislau Rezki
