Message-ID: <20191016110604.GT317@dhcp22.suse.cz>
Date: Wed, 16 Oct 2019 13:06:04 +0200
From: Michal Hocko <mhocko@...nel.org>
To: "Uladzislau Rezki (Sony)" <urezki@...il.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
Daniel Wagner <dwagner@...e.de>,
Sebastian Andrzej Siewior <bigeasy@...utronix.de>,
Thomas Gleixner <tglx@...utronix.de>, linux-mm@...ck.org,
LKML <linux-kernel@...r.kernel.org>,
Peter Zijlstra <peterz@...radead.org>,
Hillf Danton <hdanton@...a.com>,
Matthew Wilcox <willy@...radead.org>,
Oleksiy Avramchenko <oleksiy.avramchenko@...ymobile.com>,
Steven Rostedt <rostedt@...dmis.org>
Subject: Re: [PATCH v3 2/3] mm/vmalloc: respect passed gfp_mask when do preloading

On Wed 16-10-19 11:54:37, Uladzislau Rezki (Sony) wrote:
> alloc_vmap_area() is given a gfp_mask for the page allocator.
> Let's respect that mask and honor it even when doing the regular
> CPU preloading, i.e. in a context that is allowed to sleep.
This explains what the patch does but not why. I would go with:
"
Allocation functions should comply with the given gfp_mask as much as
possible. The preallocation code in alloc_vmap_area doesn't follow that
pattern and uses a hardcoded GFP_KERNEL instead. Although this doesn't
make much practical difference, because vmalloc is not GFP_NOWAIT
compliant in general (e.g. page table allocations are GFP_KERNEL),
there is no reason to spread that bad habit, and it is good to fix the
antipattern.
"
>
> Signed-off-by: Uladzislau Rezki (Sony) <urezki@...il.com>
Acked-by: Michal Hocko <mhocko@...e.com>
> ---
> mm/vmalloc.c | 8 ++++----
> 1 file changed, 4 insertions(+), 4 deletions(-)
>
> diff --git a/mm/vmalloc.c b/mm/vmalloc.c
> index b7b443bfdd92..593bf554518d 100644
> --- a/mm/vmalloc.c
> +++ b/mm/vmalloc.c
> @@ -1064,9 +1064,9 @@ static struct vmap_area *alloc_vmap_area(unsigned long size,
> return ERR_PTR(-EBUSY);
>
> might_sleep();
> + gfp_mask = gfp_mask & GFP_RECLAIM_MASK;
>
> - va = kmem_cache_alloc_node(vmap_area_cachep,
> - gfp_mask & GFP_RECLAIM_MASK, node);
> + va = kmem_cache_alloc_node(vmap_area_cachep, gfp_mask, node);
> if (unlikely(!va))
> return ERR_PTR(-ENOMEM);
>
> @@ -1074,7 +1074,7 @@ static struct vmap_area *alloc_vmap_area(unsigned long size,
> * Only scan the relevant parts containing pointers to other objects
> * to avoid false negatives.
> */
> - kmemleak_scan_area(&va->rb_node, SIZE_MAX, gfp_mask & GFP_RECLAIM_MASK);
> + kmemleak_scan_area(&va->rb_node, SIZE_MAX, gfp_mask);
>
> retry:
> /*
> @@ -1100,7 +1100,7 @@ static struct vmap_area *alloc_vmap_area(unsigned long size,
> * Just proceed as it is. If needed "overflow" path
> * will refill the cache we allocate from.
> */
> - pva = kmem_cache_alloc_node(vmap_area_cachep, GFP_KERNEL, node);
> + pva = kmem_cache_alloc_node(vmap_area_cachep, gfp_mask, node);
>
> spin_lock(&vmap_area_lock);
>
> --
> 2.20.1
>
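[Editor's note: for context, a condensed sketch of the preload pattern
the last hunk touches, heavily simplified from mm/vmalloc.c with the
per-CPU handling and error paths omitted. The fix is in the allocation
call, which used to pass GFP_KERNEL regardless of what the caller asked
for and now passes the filtered caller mask:

	/*
	 * Allocate the preload object before taking the spinlock,
	 * since a sleeping allocation must not happen under the lock.
	 * A failed preload is tolerated because the "overflow" path
	 * can refill the cache later.
	 */
	pva = kmem_cache_alloc_node(vmap_area_cachep, gfp_mask, node);

	spin_lock(&vmap_area_lock);
]
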
--
Michal Hocko
SUSE Labs