Date: Thu, 9 May 2024 20:32:04 +1200
From: Barry Song <21cnbao@...il.com>
To: Hailong Liu <hailong.liu@...o.com>
Cc: Michal Hocko <mhocko@...e.com>, akpm@...ux-foundation.org, urezki@...il.com, 
	hch@...radead.org, lstoakes@...il.com, linux-mm@...ck.org, 
	linux-kernel@...r.kernel.org, xiang@...nel.org, chao@...nel.org, 
	Oven <liyangouwen1@...o.com>
Subject: Re: [RFC PATCH] mm/vmalloc: fix vmalloc which may return null if
 called with __GFP_NOFAIL

On Thu, May 9, 2024 at 8:21 PM Hailong Liu <hailong.liu@...o.com> wrote:
>
> On Thu, 09. May 09:48, Michal Hocko wrote:
> > On Wed 08-05-24 20:58:08, hailong.liu@...o.com wrote:
> > > From: "Hailong.Liu" <hailong.liu@...o.com>
> > >
> > > Commit a421ef303008 ("mm: allow !GFP_KERNEL allocations for kvmalloc")
> > > includes support for __GFP_NOFAIL, but it presents a conflict with
> > > commit dd544141b9eb ("vmalloc: back off when the current task is
> > > OOM-killed"). A possible scenario is as belows:
> > >
> > > process-a
> > > kvcalloc(n, m, GFP_KERNEL | __GFP_NOFAIL)
> > >     __vmalloc_node_range()
> > >     __vmalloc_area_node()
> > >         vm_area_alloc_pages()
> > >             --> oom-killer sends SIGKILL to process-a
> > >             if (fatal_signal_pending(current)) break;
> > > --> return NULL;
> > >
> > > To fix this, do not check fatal_signal_pending() in vm_area_alloc_pages()
> > > if __GFP_NOFAIL is set.
> > >
> > > Reported-by: Oven <liyangouwen1@...o.com>
> > > Signed-off-by: Hailong.Liu <hailong.liu@...o.com>
> > > ---
> > >  mm/vmalloc.c | 2 +-
> > >  1 file changed, 1 insertion(+), 1 deletion(-)
> > >
> > > diff --git a/mm/vmalloc.c b/mm/vmalloc.c
> > > index 6641be0ca80b..2f359d08bf8d 100644
> > > --- a/mm/vmalloc.c
> > > +++ b/mm/vmalloc.c
> > > @@ -3560,7 +3560,7 @@ vm_area_alloc_pages(gfp_t gfp, int nid,
> > >
> > >     /* High-order pages or fallback path if "bulk" fails. */
> > >     while (nr_allocated < nr_pages) {
> > > -           if (fatal_signal_pending(current))
> > > +           if (!(gfp & __GFP_NOFAIL) && fatal_signal_pending(current))
> >
> > Use nofail instead of gfp & __GFP_NOFAIL.
> >
> > Other than that looks good to me. After that is fixed, please feel free
> > to add Acked-by: Michal Hocko <mhocko@...e.com>
> >
> > I believe this should also have Fixes: 9376130c390a ("mm/vmalloc: add support for __GFP_NOFAIL")
> > --
> > Michal Hocko
> > SUSE Labs
>
> Thanks for the review and the Ack!
>
> I will add the Fixes tag in the V2 patch.
>
> IIUC, nofail cannot be used in this case.
>
>         /*
>          * For order-0 pages we make use of bulk allocator, if
>          * the page array is partly or not at all populated due
>          * to fails, fallback to a single page allocator that is
>          * more permissive.
>          */
>         if (!order) {
>                 /* bulk allocator doesn't support nofail req. officially */
>                 xxx
> -> nofail = false;

Isn't it another bug that needs a fix?

>         } else if (gfp & __GFP_NOFAIL) {
>                 /*
>                  * Higher order nofail allocations are really expensive and
>                  * potentially dangerous (pre-mature OOM, disruptive reclaim
>                  * and compaction etc.
>                  */
>                 alloc_gfp &= ~__GFP_NOFAIL;
>                 nofail = true;
>         }
>
>         /* High-order pages or fallback path if "bulk" fails. */
>         while (nr_allocated < nr_pages) {
>
> -> nofail is false here if the bulk allocator fails.
>                 if (fatal_signal_pending(current))
>                         break;
>
> --
>
> Best Regards,
> Hailong.
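Below is a condensed sketch of the vm_area_alloc_pages() flow discussed
above. The _sketch suffix, the elided bulk-allocator call and the
simplified fallback loop (no high-order splitting) are illustration-only
assumptions, not the exact upstream mm/vmalloc.c code:

/*
 * Sketch of how 'nofail' is derived and why checking only 'nofail' in the
 * fallback loop would miss order-0 __GFP_NOFAIL callers.
 */
static unsigned int vm_area_alloc_pages_sketch(gfp_t gfp, int nid,
		unsigned int order, unsigned int nr_pages, struct page **pages)
{
	unsigned int nr_allocated = 0;
	gfp_t alloc_gfp = gfp;
	bool nofail = false;

	if (!order) {
		/*
		 * Order-0 requests try the bulk allocator first. The bulk
		 * allocator doesn't officially support __GFP_NOFAIL, so
		 * 'nofail' stays false on this path even when the caller
		 * passed __GFP_NOFAIL.
		 */
		/* nr_allocated = alloc_pages_bulk_...(...);  (elided) */
	} else if (gfp & __GFP_NOFAIL) {
		/* High-order nofail: retry here instead of in the allocator. */
		alloc_gfp &= ~__GFP_NOFAIL;
		nofail = true;
	}

	/* High-order pages or fallback path if "bulk" fails. */
	while (nr_allocated < nr_pages) {
		struct page *page;

		/*
		 * Testing only 'nofail' here would still let an OOM-killed
		 * order-0 __GFP_NOFAIL caller break out and get NULL back
		 * from vmalloc, hence the patch keys the test off the
		 * caller's gfp flags.
		 */
		if (!(gfp & __GFP_NOFAIL) && fatal_signal_pending(current))
			break;

		page = alloc_pages_node(nid, alloc_gfp, order);
		if (!page) {
			if (!nofail)
				break;
			continue;	/* nofail: keep retrying */
		}
		pages[nr_allocated++] = page;	/* high-order splitting elided */
	}

	return nr_allocated;
}

With the check keyed off the caller's gfp flags, both the order-0 fallback
(where 'nofail' stays false) and the high-order path keep looping for
__GFP_NOFAIL requests instead of returning a short allocation after SIGKILL.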
