Message-ID: <404add29-2d3f-45db-9103-0c5b66fb254e@linux.alibaba.com>
Date: Wed, 8 May 2024 21:41:10 +0800
From: Gao Xiang <hsiangkao@...ux.alibaba.com>
To: hailong.liu@...o.com, akpm@...ux-foundation.org,
Michal Hocko <mhocko@...e.com>
Cc: urezki@...il.com, hch@...radead.org, lstoakes@...il.com,
21cnbao@...il.com, linux-mm@...ck.org, linux-kernel@...r.kernel.org,
xiang@...nel.org, chao@...nel.org, Oven <liyangouwen1@...o.com>
Subject: Re: [RFC PATCH] mm/vmalloc: fix vmalloc which may return null if
called with __GFP_NOFAIL
+Cc Michal,
On 2024/5/8 20:58, hailong.liu@...o.com wrote:
> From: "Hailong.Liu" <hailong.liu@...o.com>
>
> Commit a421ef303008 ("mm: allow !GFP_KERNEL allocations for kvmalloc")
> includes support for __GFP_NOFAIL, but it presents a conflict with
> commit dd544141b9eb ("vmalloc: back off when the current task is
> OOM-killed"). A possible scenario is as belows:
>
> process-a
> kvcalloc(n, m, GFP_KERNEL | __GFP_NOFAIL)
> __vmalloc_node_range()
> __vmalloc_area_node()
> vm_area_alloc_pages()
>         --> oom-killer sends SIGKILL to process-a
> if (fatal_signal_pending(current)) break;
> --> return NULL;
>
> To fix this, do not check fatal_signal_pending() in vm_area_alloc_pages()
> if __GFP_NOFAIL is set.
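>
> A simplified sketch of the failure path (condensed from
> vm_area_alloc_pages() and __vmalloc_area_node(); most details elided,
> not the exact mm/vmalloc.c code) showing how the early break surfaces
> as a NULL return even for a __GFP_NOFAIL allocation:
>
> 	/* Sketch only -- simplified from mm/vmalloc.c. */
> 	static unsigned int vm_area_alloc_pages_sketch(gfp_t gfp,
> 			unsigned int nr_pages, struct page **pages)
> 	{
> 		unsigned int nr_allocated = 0;
>
> 		while (nr_allocated < nr_pages) {
> 			/*
> 			 * Before this fix, the check fired for an
> 			 * OOM-killed task even when the caller passed
> 			 * __GFP_NOFAIL, so the loop stopped short...
> 			 */
> 			if (fatal_signal_pending(current))
> 				break;
>
> 			pages[nr_allocated] = alloc_pages(gfp, 0);
> 			if (!pages[nr_allocated])
> 				break;
> 			nr_allocated++;
> 		}
>
> 		/*
> 		 * ...and __vmalloc_area_node() then sees
> 		 * nr_allocated < nr_pages, frees what was allocated
> 		 * and returns NULL to the __GFP_NOFAIL caller.
> 		 */
> 		return nr_allocated;
> 	}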
>
> Reported-by: Oven <liyangouwen1@...o.com>
> Signed-off-by: Hailong.Liu <hailong.liu@...o.com>
Why tag this as RFC here? It seems to be a corner-case fix of
commit a421ef303008.
Thanks,
Gao Xiang
> ---
> mm/vmalloc.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/mm/vmalloc.c b/mm/vmalloc.c
> index 6641be0ca80b..2f359d08bf8d 100644
> --- a/mm/vmalloc.c
> +++ b/mm/vmalloc.c
> @@ -3560,7 +3560,7 @@ vm_area_alloc_pages(gfp_t gfp, int nid,
>
> /* High-order pages or fallback path if "bulk" fails. */
> while (nr_allocated < nr_pages) {
> - if (fatal_signal_pending(current))
> + if (!(gfp & __GFP_NOFAIL) && fatal_signal_pending(current))
> break;
>
> if (nid == NUMA_NO_NODE)
> ---
> This issue occurred during an OPLUS KASAN test. Below is part of the log:
>
> -> send signal
> [65731.222840] [ T1308] oom-kill:constraint=CONSTRAINT_NONE,nodemask=(null),cpuset=/,mems_allowed=0,global_oom,task_memcg=/apps/uid_10198,task=gs.intelligence,pid=32454,uid=10198
>
> [65731.259685] [T32454] Call trace:
> [65731.259698] [T32454] dump_backtrace+0xf4/0x118
> [65731.259734] [T32454] show_stack+0x18/0x24
> [65731.259756] [T32454] dump_stack_lvl+0x60/0x7c
> [65731.259781] [T32454] dump_stack+0x18/0x38
> [65731.259800] [T32454] mrdump_common_die+0x250/0x39c [mrdump]
> [65731.259936] [T32454] ipanic_die+0x20/0x34 [mrdump]
> [65731.260019] [T32454] atomic_notifier_call_chain+0xb4/0xfc
> [65731.260047] [T32454] notify_die+0x114/0x198
> [65731.260073] [T32454] die+0xf4/0x5b4
> [65731.260098] [T32454] die_kernel_fault+0x80/0x98
> [65731.260124] [T32454] __do_kernel_fault+0x160/0x2a8
> [65731.260146] [T32454] do_bad_area+0x68/0x148
> [65731.260174] [T32454] do_mem_abort+0x151c/0x1b34
> [65731.260204] [T32454] el1_abort+0x3c/0x5c
> [65731.260227] [T32454] el1h_64_sync_handler+0x54/0x90
> [65731.260248] [T32454] el1h_64_sync+0x68/0x6c
> [65731.260269] [T32454] z_erofs_decompress_queue+0x7f0/0x2258
> --> be->decompressed_pages = kvcalloc(be->nr_pages, sizeof(struct page *), GFP_KERNEL | __GFP_NOFAIL);
> Kernel panics due to a NULL pointer dereference: erofs assumes that
> kvmalloc() with __GFP_NOFAIL never returns NULL (see the caller sketch
> after the trace).
>
> [65731.260293] [T32454] z_erofs_runqueue+0xf30/0x104c
> [65731.260314] [T32454] z_erofs_readahead+0x4f0/0x968
> [65731.260339] [T32454] read_pages+0x170/0xadc
> [65731.260364] [T32454] page_cache_ra_unbounded+0x874/0xf30
> [65731.260388] [T32454] page_cache_ra_order+0x24c/0x714
> [65731.260411] [T32454] filemap_fault+0xbf0/0x1a74
> [65731.260437] [T32454] __do_fault+0xd0/0x33c
> [65731.260462] [T32454] handle_mm_fault+0xf74/0x3fe0
> [65731.260486] [T32454] do_mem_abort+0x54c/0x1b34
> [65731.260509] [T32454] el0_da+0x44/0x94
> [65731.260531] [T32454] el0t_64_sync_handler+0x98/0xb4
> [65731.260553] [T32454] el0t_64_sync+0x198/0x19c
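>
> To illustrate the broken contract, a minimal caller-side sketch in the
> erofs style (identifiers invented for illustration; nr_pages assumed in
> scope). Per the __GFP_NOFAIL documentation the NULL check is
> legitimately omitted, so a NULL return leads straight to a NULL pointer
> dereference:
>
> 	unsigned int i;
> 	struct page **pages;
>
> 	/* __GFP_NOFAIL is documented to never return NULL... */
> 	pages = kvcalloc(nr_pages, sizeof(*pages),
> 			 GFP_KERNEL | __GFP_NOFAIL);
>
> 	/*
> 	 * ...so callers legitimately skip the NULL check. If vmalloc
> 	 * breaks the contract, the first store below dereferences a
> 	 * NULL pointer and the kernel panics, as in the trace above.
> 	 */
> 	for (i = 0; i < nr_pages; i++)
> 		pages[i] = alloc_page(GFP_KERNEL);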
>
> --
> 2.34.1