Message-ID: <c5a0e1e5-026a-19ba-ac5e-7a0012acd8ee@suse.cz>
Date: Tue, 14 Jun 2022 15:05:28 +0200
From: Vlastimil Babka <vbabka@...e.cz>
To: Daniel Vetter <daniel.vetter@...ll.ch>,
LKML <linux-kernel@...r.kernel.org>
Cc: DRI Development <dri-devel@...ts.freedesktop.org>,
Daniel Vetter <daniel.vetter@...el.com>,
Christoph Lameter <cl@...ux.com>,
Pekka Enberg <penberg@...nel.org>,
David Rientjes <rientjes@...gle.com>,
Joonsoo Kim <iamjoonsoo.kim@....com>,
Andrew Morton <akpm@...ux-foundation.org>,
Roman Gushchin <roman.gushchin@...ux.dev>, linux-mm@...ck.org
Subject: Re: [PATCH 2/3] mm/slab: delete cache_alloc_debugcheck_before()

On 6/5/22 17:25, Daniel Vetter wrote:
> It only does a might_sleep_if() check for blockable (__GFP_DIRECT_RECLAIM)
> allocations, which is already covered by the might_alloc() in
> slab_pre_alloc_hook(), and all callers of cache_alloc_debugcheck_before()
> already call that hook beforehand.
>
> Signed-off-by: Daniel Vetter <daniel.vetter@...el.com>
> Cc: Christoph Lameter <cl@...ux.com>
> Cc: Pekka Enberg <penberg@...nel.org>
> Cc: David Rientjes <rientjes@...gle.com>
> Cc: Joonsoo Kim <iamjoonsoo.kim@....com>
> Cc: Andrew Morton <akpm@...ux-foundation.org>
> Cc: Vlastimil Babka <vbabka@...e.cz>
> Cc: Roman Gushchin <roman.gushchin@...ux.dev>
> Cc: linux-mm@...ck.org
Thanks, added to slab/for-5.20/cleanup as it's slab-specific and independent
of patches 1/3 and 3/3.
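
For reference, might_alloc() in include/linux/sched/mm.h currently looks
roughly like this (quoted here for illustration, not part of the patch), so
the deleted check is indeed subsumed, and on top of it we get the fs_reclaim
lockdep annotations:

static inline void might_alloc(gfp_t gfp_mask)
{
	fs_reclaim_acquire(gfp_mask);
	fs_reclaim_release(gfp_mask);

	might_sleep_if(gfpflags_allow_blocking(gfp_mask));
}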
> ---
> mm/slab.c | 10 ----------
> 1 file changed, 10 deletions(-)
>
> diff --git a/mm/slab.c b/mm/slab.c
> index b04e40078bdf..75779ac5f5ba 100644
> --- a/mm/slab.c
> +++ b/mm/slab.c
> @@ -2973,12 +2973,6 @@ static void *cache_alloc_refill(struct kmem_cache *cachep, gfp_t flags)
> return ac->entry[--ac->avail];
> }
>
> -static inline void cache_alloc_debugcheck_before(struct kmem_cache *cachep,
> - gfp_t flags)
> -{
> - might_sleep_if(gfpflags_allow_blocking(flags));
> -}
> -
> #if DEBUG
> static void *cache_alloc_debugcheck_after(struct kmem_cache *cachep,
> gfp_t flags, void *objp, unsigned long caller)
> @@ -3219,7 +3213,6 @@ slab_alloc_node(struct kmem_cache *cachep, gfp_t flags, int nodeid, size_t orig_
> if (unlikely(ptr))
> goto out_hooks;
>
> - cache_alloc_debugcheck_before(cachep, flags);
> local_irq_save(save_flags);
>
> if (nodeid == NUMA_NO_NODE)
> @@ -3304,7 +3297,6 @@ slab_alloc(struct kmem_cache *cachep, struct list_lru *lru, gfp_t flags,
> if (unlikely(objp))
> goto out;
>
> - cache_alloc_debugcheck_before(cachep, flags);
> local_irq_save(save_flags);
> objp = __do_cache_alloc(cachep, flags);
> local_irq_restore(save_flags);
> @@ -3541,8 +3533,6 @@ int kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t size,
> if (!s)
> return 0;
>
> - cache_alloc_debugcheck_before(s, flags);
> -
> local_irq_disable();
> for (i = 0; i < size; i++) {
> void *objp = kfence_alloc(s, s->object_size, flags) ?: __do_cache_alloc(s, flags);
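
All three removed call sites (slab_alloc_node(), slab_alloc() and
kmem_cache_alloc_bulk()) go through slab_pre_alloc_hook() before reaching
this point; roughly, from mm/slab.h (abridged sketch for illustration, not
part of the patch):

static inline struct kmem_cache *slab_pre_alloc_hook(struct kmem_cache *s,
		struct list_lru *lru, struct obj_cgroup **objcgp,
		size_t size, gfp_t flags)
{
	flags &= gfp_allowed_mask;

	might_alloc(flags);	/* covers the deleted might_sleep_if() */

	if (should_failslab(s, flags))
		return NULL;

	/* ... memcg pre-alloc hook elided ... */

	return s;
}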