Message-ID: <CAGsJ_4zpnpnVYgDrNoMXDuRqNpn36ZTvkC8Ge5681o5ty6WHXA@mail.gmail.com>
Date: Thu, 9 May 2024 14:30:11 +1200
From: Barry Song <21cnbao@...il.com>
To: hailong.liu@...o.com, Michal Hocko <mhocko@...e.com>, vasily.averin@...ux.dev
Cc: akpm@...ux-foundation.org, urezki@...il.com, hch@...radead.org, 
	lstoakes@...il.com, linux-mm@...ck.org, linux-kernel@...r.kernel.org, 
	xiang@...nel.org, chao@...nel.org, Oven <liyangouwen1@...o.com>
Subject: Re: [RFC PATCH] mm/vmalloc: fix vmalloc which may return null if
 called with __GFP_NOFAIL

On Thu, May 9, 2024 at 2:26 PM Barry Song <21cnbao@...il.com> wrote:
>
> On Thu, May 9, 2024 at 2:20 PM Barry Song <21cnbao@...il.com> wrote:
> >
> > On Thu, May 9, 2024 at 12:58 AM <hailong.liu@...o.com> wrote:
> > >
> > > From: "Hailong.Liu" <hailong.liu@...o.com>
> > >
> > > Commit a421ef303008 ("mm: allow !GFP_KERNEL allocations for kvmalloc")
> > > includes support for __GFP_NOFAIL, but it presents a conflict with
> > > commit dd544141b9eb ("vmalloc: back off when the current task is
> > > OOM-killed"). A possible scenario is as below:
> > >
> > > process-a
> > > kvcalloc(n, m, GFP_KERNEL | __GFP_NOFAIL)
> > >     __vmalloc_node_range()
> > >         __vmalloc_area_node()
> > >             vm_area_alloc_pages()
> > >             --> oom-killer send SIGKILL to process-a
> > >             if (fatal_signal_pending(current)) break;
> > > --> return NULL;
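> > >
> > > For illustration, a simplified sketch of the loop shape involved
> > > (not the exact upstream code, which is in the diff below):
> > >
> > >     /*
> > >      * A pending fatal signal makes the loop bail out early, so fewer
> > >      * than nr_pages pages get allocated and the caller ends up
> > >      * returning NULL, even for a __GFP_NOFAIL request.
> > >      */
> > >     while (nr_allocated < nr_pages) {
> > >             if (fatal_signal_pending(current))
> > >                     break;  /* breaks even when __GFP_NOFAIL is set */
> > >             /* ... allocate the next page, bump nr_allocated ... */
> > >     }
> > >     /* nr_allocated < nr_pages here => __vmalloc_area_node() fails */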
> > >
> > > To fix this, do not check fatal_signal_pending() in vm_area_alloc_pages()
> > > if __GFP_NOFAIL is set.
> > >
> > > Reported-by: Oven <liyangouwen1@...o.com>
> > > Signed-off-by: Hailong.Liu <hailong.liu@...o.com>
> > > ---
> > >  mm/vmalloc.c | 2 +-
> > >  1 file changed, 1 insertion(+), 1 deletion(-)
> > >
> > > diff --git a/mm/vmalloc.c b/mm/vmalloc.c
> > > index 6641be0ca80b..2f359d08bf8d 100644
> > > --- a/mm/vmalloc.c
> > > +++ b/mm/vmalloc.c
> > > @@ -3560,7 +3560,7 @@ vm_area_alloc_pages(gfp_t gfp, int nid,
> > >
> > >         /* High-order pages or fallback path if "bulk" fails. */
> > >         while (nr_allocated < nr_pages) {
> > > -               if (fatal_signal_pending(current))
> > > +               if (!(gfp & __GFP_NOFAIL) && fatal_signal_pending(current))
> > >                         break;
> >
> > Why not !nofail?
> >
> > This seems like a correct fix, but it undermines the assumption made in
> > commit dd544141b9eb ("vmalloc: back off when the current task is
> > OOM-killed"):
> >
> > "
> >     This may trigger some hidden problems, when the caller does not handle
> >     vmalloc failures, or when a rollback after a failed vmalloc calls its
> >     own vmallocs inside.  However, all of these scenarios are incorrect:
> >     vmalloc does not guarantee successful allocation, it has never been
> >     called with __GFP_NOFAIL and therefore either should not be used for
> >     any rollbacks or should handle such errors correctly and not lead to
> >     critical failures.
> > "
> >
> > If a large kvmalloc allocation is performed with the NOFAIL flag, it risks
> > undoing the fix for the OOM-killer issue in commit dd544141b9eb.
> > Should we really permit the NOFAIL flag for large kvmalloc allocations?
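> >
> > (For context, a rough sketch of the kvmalloc() fallback shape, simplified
> > from what mm/util.c does and not the exact code: large requests fall
> > through from the slab allocator to vmalloc, which is where the
> > __GFP_NOFAIL handling above applies.)
> >
> >     void *kvmalloc_node(size_t size, gfp_t flags, int node)
> >     {
> >             void *ret;
> >
> >             /* try the slab allocator first */
> >             ret = kmalloc_node(size, flags | __GFP_NOWARN, node);
> >             if (ret || size <= PAGE_SIZE)
> >                     return ret;
> >
> >             /* fall back to vmalloc for larger requests */
> >             return __vmalloc_node(size, 1, flags, node,
> >                                   __builtin_return_address(0));
> >     }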
>
> + Vasily, Michal.

Sorry for my oversight. I have fixed Vasily's email address.

>
> >
> > >
> > >                 if (nid == NUMA_NO_NODE)
> > > ---
> > > This issue occurred during an OPLUS KASAN test. Below is part of the log:
> > >
> > > -> send signal
> > > [65731.222840] [ T1308] oom-kill:constraint=CONSTRAINT_NONE,nodemask=(null),cpuset=/,mems_allowed=0,global_oom,task_memcg=/apps/uid_10198,task=gs.intelligence,pid=32454,uid=10198
> > >
> > > [65731.259685] [T32454] Call trace:
> > > [65731.259698] [T32454]  dump_backtrace+0xf4/0x118
> > > [65731.259734] [T32454]  show_stack+0x18/0x24
> > > [65731.259756] [T32454]  dump_stack_lvl+0x60/0x7c
> > > [65731.259781] [T32454]  dump_stack+0x18/0x38
> > > [65731.259800] [T32454]  mrdump_common_die+0x250/0x39c [mrdump]
> > > [65731.259936] [T32454]  ipanic_die+0x20/0x34 [mrdump]
> > > [65731.260019] [T32454]  atomic_notifier_call_chain+0xb4/0xfc
> > > [65731.260047] [T32454]  notify_die+0x114/0x198
> > > [65731.260073] [T32454]  die+0xf4/0x5b4
> > > [65731.260098] [T32454]  die_kernel_fault+0x80/0x98
> > > [65731.260124] [T32454]  __do_kernel_fault+0x160/0x2a8
> > > [65731.260146] [T32454]  do_bad_area+0x68/0x148
> > > [65731.260174] [T32454]  do_mem_abort+0x151c/0x1b34
> > > [65731.260204] [T32454]  el1_abort+0x3c/0x5c
> > > [65731.260227] [T32454]  el1h_64_sync_handler+0x54/0x90
> > > [65731.260248] [T32454]  el1h_64_sync+0x68/0x6c
> > > [65731.260269] [T32454]  z_erofs_decompress_queue+0x7f0/0x2258
> > > --> be->decompressed_pages = kvcalloc(be->nr_pages, sizeof(struct page *), GFP_KERNEL | __GFP_NOFAIL);
> > >         Kernel panic caused by a NULL pointer dereference:
> > >         erofs assumes kvmalloc() with __GFP_NOFAIL never returns NULL
> > >         (see the sketch after the trace).
> > >
> > > [65731.260293] [T32454]  z_erofs_runqueue+0xf30/0x104c
> > > [65731.260314] [T32454]  z_erofs_readahead+0x4f0/0x968
> > > [65731.260339] [T32454]  read_pages+0x170/0xadc
> > > [65731.260364] [T32454]  page_cache_ra_unbounded+0x874/0xf30
> > > [65731.260388] [T32454]  page_cache_ra_order+0x24c/0x714
> > > [65731.260411] [T32454]  filemap_fault+0xbf0/0x1a74
> > > [65731.260437] [T32454]  __do_fault+0xd0/0x33c
> > > [65731.260462] [T32454]  handle_mm_fault+0xf74/0x3fe0
> > > [65731.260486] [T32454]  do_mem_abort+0x54c/0x1b34
> > > [65731.260509] [T32454]  el0_da+0x44/0x94
> > > [65731.260531] [T32454]  el0t_64_sync_handler+0x98/0xb4
> > > [65731.260553] [T32454]  el0t_64_sync+0x198/0x19c
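> > >
> > > As referenced above, a minimal sketch of the caller pattern that
> > > panics (based on the z_erofs_decompress_queue() line annotated in the
> > > trace; the dereference line is hypothetical, for illustration only):
> > >
> > >     /* __GFP_NOFAIL is documented to never fail, so no NULL check */
> > >     be->decompressed_pages = kvcalloc(be->nr_pages,
> > >                                       sizeof(struct page *),
> > >                                       GFP_KERNEL | __GFP_NOFAIL);
> > >
> > >     /* hypothetical use: dereferences NULL if the guarantee is broken */
> > >     be->decompressed_pages[0] = page;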
> > >
> > > --
> > > 2.34.1
> >
> > Thanks
> > Barry
