Message-ID: <ZAYWrlSKHjq/vMPP@pc636>
Date:   Mon, 6 Mar 2023 17:37:02 +0100
From:   Uladzislau Rezki <urezki@...il.com>
To:     Michal Hocko <mhocko@...e.com>
Cc:     Uladzislau Rezki <urezki@...il.com>,
        Gao Xiang <hsiangkao@...ux.alibaba.com>,
        Andrew Morton <akpm@...ux-foundation.org>, linux-mm@...ck.org,
        linux-kernel@...r.kernel.org,
        Mel Gorman <mgorman@...hsingularity.net>,
        Vlastimil Babka <vbabka@...e.cz>, Baoquan He <bhe@...hat.com>,
        Christoph Hellwig <hch@....de>
Subject: Re: [PATCH] mm, vmalloc: fix high order __GFP_NOFAIL allocations

On Mon, Mar 06, 2023 at 03:03:10PM +0100, Michal Hocko wrote:
> On Mon 06-03-23 13:14:43, Uladzislau Rezki wrote:
> [...]
> > Some questions:
> > 
> > 1. Could you please add a comment explaining why you want the bulk_gfp
> > without __GFP_NOFAIL (bulk path)?
> 
> The bulk allocator is not documented to fully support the __GFP_NOFAIL
> semantics IIRC. While it uses alloc_pages as a fallback, I didn't want
> to make any assumptions based on the current implementation. At least
> that is my recollection. If we do want the bulk allocator to support
> NOFAIL, then we can drop the special casing here.
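
For context, the structure of the current vm_area_alloc_pages() already
reflects that: the bulk path masks __GFP_NOFAIL off, while the single-page
fallback loop below it still uses the caller's gfp, so an order-0 NOFAIL
request is retried there. A condensed sketch, not the exact code:

<snip>
	if (!order) {
		/* bulk allocator is not guaranteed to honour __GFP_NOFAIL */
		gfp_t bulk_gfp = gfp & ~__GFP_NOFAIL;

		while (nr_allocated < nr_pages) {
			/* ... bulk allocation with bulk_gfp ... */
		}
	}

	/* High-order pages or fallback path if "bulk" fails. */
	while (nr_allocated < nr_pages) {
		/* ... alloc_pages(gfp, order), __GFP_NOFAIL preserved ... */
	}
<snip>
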
> 
> > 2. Could you please add a comment explaining why high order pages do not
> > want __GFP_NOFAIL? You have already explained it.
> 
> See below.
> 
> > 3. Looking at the patch:
> > 
> > <snip>
> > +       } else {
> > +               alloc_gfp &= ~__GFP_NOFAIL;
> > +               nofail = true;
> > <snip>
> > 
> > if the user does not want to go with the __GFP_NOFAIL flag, why do you force
> > it when a high order allocation fails and you switch to 0 order allocations?
> 
> Not intended. The above should have been else if (gfp & __GFP_NOFAIL).
> Thanks for catching that!
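
I.e. only callers that actually passed __GFP_NOFAIL should arm the order-0
fallback, something like:

<snip>
	} else if (gfp & __GFP_NOFAIL) {
		alloc_gfp &= ~__GFP_NOFAIL;
		nofail = true;
	}
<snip>
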
> 
> This would be the full patch with the description:
> --- 
> From 3ccfaa15bf2587b8998c129533a0404fedf5a484 Mon Sep 17 00:00:00 2001
> From: Michal Hocko <mhocko@...e.com>
> Date: Mon, 6 Mar 2023 09:15:17 +0100
> Subject: [PATCH] mm, vmalloc: fix high order __GFP_NOFAIL allocations
> 
> Gao Xiang has reported that the page allocator complains about a high
> order __GFP_NOFAIL request coming from the vmalloc core:
> 
>  __alloc_pages+0x1cb/0x5b0 mm/page_alloc.c:5549
>  alloc_pages+0x1aa/0x270 mm/mempolicy.c:2286
>  vm_area_alloc_pages mm/vmalloc.c:2989 [inline]
>  __vmalloc_area_node mm/vmalloc.c:3057 [inline]
>  __vmalloc_node_range+0x978/0x13c0 mm/vmalloc.c:3227
>  kvmalloc_node+0x156/0x1a0 mm/util.c:606
>  kvmalloc include/linux/slab.h:737 [inline]
>  kvmalloc_array include/linux/slab.h:755 [inline]
>  kvcalloc include/linux/slab.h:760 [inline]
> 
> It seems that I have completely missed the case of high order allocations
> backing vmalloc areas when implementing __GFP_NOFAIL support. This means
> that [k]vmalloc et al. can make higher order allocations with
> __GFP_NOFAIL, which can easily trigger the OOM killer for non-costly
> orders or cause a lot of reclaim/compaction activity if those requests
> cannot be satisfied.
> 
> Fix the issue by falling back to zero order allocations for __GFP_NOFAIL
> requests if the high order request fails.
> 
> Fixes: 9376130c390a ("mm/vmalloc: add support for __GFP_NOFAIL")
> Reported-by: Gao Xiang <hsiangkao@...ux.alibaba.com>
> Signed-off-by: Michal Hocko <mhocko@...e.com>
> ---
>  mm/vmalloc.c | 28 +++++++++++++++++++++++-----
>  1 file changed, 23 insertions(+), 5 deletions(-)
> 
> diff --git a/mm/vmalloc.c b/mm/vmalloc.c
> index ef910bf349e1..bef6cf2b4d46 100644
> --- a/mm/vmalloc.c
> +++ b/mm/vmalloc.c
> @@ -2883,6 +2883,8 @@ vm_area_alloc_pages(gfp_t gfp, int nid,
>  		unsigned int order, unsigned int nr_pages, struct page **pages)
>  {
>  	unsigned int nr_allocated = 0;
> +	gfp_t alloc_gfp = gfp;
> +	bool nofail = false;
>  	struct page *page;
>  	int i;
>  
> @@ -2893,6 +2895,7 @@ vm_area_alloc_pages(gfp_t gfp, int nid,
>  	 * more permissive.
>  	 */
>  	if (!order) {
> +		/* bulk allocator doesn't support nofail req. officially */
>  		gfp_t bulk_gfp = gfp & ~__GFP_NOFAIL;
>  
>  		while (nr_allocated < nr_pages) {
> @@ -2931,20 +2934,35 @@ vm_area_alloc_pages(gfp_t gfp, int nid,
>  			if (nr != nr_pages_request)
>  				break;
>  		}
> +	} else if (gfp & __GFP_NOFAIL) {
> +		/*
> +		 * Higher order nofail allocations are really expensive and
> +		 * potentially dangerous (premature OOM, disruptive reclaim
> +		 * and compaction, etc.).
> +		 */
> +		alloc_gfp &= ~__GFP_NOFAIL;
> +		nofail = true;
>  	}
>  
>  	/* High-order pages or fallback path if "bulk" fails. */
> -
>  	while (nr_allocated < nr_pages) {
>  		if (fatal_signal_pending(current))
>  			break;
>  
>  		if (nid == NUMA_NO_NODE)
> -			page = alloc_pages(gfp, order);
> +			page = alloc_pages(alloc_gfp, order);
>  		else
> -			page = alloc_pages_node(nid, gfp, order);
> -		if (unlikely(!page))
> -			break;
> +			page = alloc_pages_node(nid, alloc_gfp, order);
> +		if (unlikely(!page)) {
> +			if (!nofail)
> +				break;
> +
> +			/* fall back to the zero order allocations */
> +			alloc_gfp |= __GFP_NOFAIL;
> +			order = 0;
> +			continue;
> +		}
> +
>  		/*
>  		 * Higher order allocations must be able to be treated as
>  		 * indepdenent small pages by callers (as they can with
> -- 
> 2.30.2
> 
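
For illustration, the new behaviour boils down to roughly the following toy
userspace model (plain flag bits standing in for gfp_t and a fake allocator;
not kernel code): try the high order request without NOFAIL first, then fall
back to order 0 with NOFAIL restored if the caller asked for it:

<snip>
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

#define FAKE_NOFAIL 0x1u	/* stand-in for __GFP_NOFAIL */

/* Toy allocator: high order requests may fail, order-0 ones do not. */
static void *toy_alloc(unsigned int gfp, unsigned int order)
{
	if (order > 0 && (rand() & 1))
		return NULL;
	return malloc((size_t)4096 << order);
}

/* Mirrors the patch: strip NOFAIL for the risky high order attempt and,
 * if that attempt fails, retry at order 0 with NOFAIL put back. */
static void *alloc_with_fallback(unsigned int gfp, unsigned int order)
{
	unsigned int alloc_gfp = gfp;
	bool nofail = false;
	void *p;

	if (order > 0 && (gfp & FAKE_NOFAIL)) {
		alloc_gfp &= ~FAKE_NOFAIL;
		nofail = true;
	}

	for (;;) {
		p = toy_alloc(alloc_gfp, order);
		if (p)
			return p;
		if (!nofail)
			return NULL;

		/* fall back to the zero order allocation */
		alloc_gfp |= FAKE_NOFAIL;
		order = 0;
	}
}

int main(void)
{
	void *p = alloc_with_fallback(FAKE_NOFAIL, 3);

	printf("allocated: %p\n", p);
	free(p);
	return 0;
}
<snip>
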
Reviewed-by: Uladzislau Rezki (Sony) <urezki@...il.com>

--
Uladzislau Rezki
