Message-ID: <CAB=+i9RU-VOcnWOOuHSYp3ybRjcrxxLqsqN6aSL1=Lac83c-AQ@mail.gmail.com>
Date: Wed, 22 Nov 2023 09:26:13 +0900
From: Hyeonggon Yoo <42.hyeyoo@...il.com>
To: chengming.zhou@...ux.dev
Cc: vbabka@...e.cz, cl@...ux.com, penberg@...nel.org,
rientjes@...gle.com, iamjoonsoo.kim@....com,
akpm@...ux-foundation.org, roman.gushchin@...ux.dev,
linux-mm@...ck.org, linux-kernel@...r.kernel.org,
Chengming Zhou <zhouchengming@...edance.com>
Subject: Re: [PATCH v5 1/9] slub: Reflow ___slab_alloc()
On Thu, Nov 2, 2023 at 12:24 PM <chengming.zhou@...ux.dev> wrote:
>
> From: Chengming Zhou <zhouchengming@...edance.com>
>
> The get_partial() interface used in ___slab_alloc() may return a single
> object in the "kmem_cache_debug(s)" case, in which case we just return
> the "freelist" object.
>
> Move this handling up to prepare for later changes.
>
> And the "pfmemalloc_match()" check is not needed for node partial slabs,
> since we already check this in get_partial_node().
>
> Signed-off-by: Chengming Zhou <zhouchengming@...edance.com>
> Reviewed-by: Vlastimil Babka <vbabka@...e.cz>
> Tested-by: Hyeonggon Yoo <42.hyeyoo@...il.com>
> ---
> mm/slub.c | 31 +++++++++++++++----------------
> 1 file changed, 15 insertions(+), 16 deletions(-)
>
> diff --git a/mm/slub.c b/mm/slub.c
> index 63d281dfacdb..0b0fdc8c189f 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -3216,8 +3216,21 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
> pc.slab = &slab;
> pc.orig_size = orig_size;
> freelist = get_partial(s, node, &pc);
> - if (freelist)
> - goto check_new_slab;
> + if (freelist) {
> + if (kmem_cache_debug(s)) {
> + /*
> + * For debug caches here we had to go through
> + * alloc_single_from_partial() so just store the
> + * tracking info and return the object.
> + */
> + if (s->flags & SLAB_STORE_USER)
> + set_track(s, freelist, TRACK_ALLOC, addr);
> +
> + return freelist;
> + }
> +
> + goto retry_load_slab;
> + }
>
> slub_put_cpu_ptr(s->cpu_slab);
> slab = new_slab(s, gfpflags, node);
> @@ -3253,20 +3266,6 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
>
> inc_slabs_node(s, slab_nid(slab), slab->objects);
>
> -check_new_slab:
> -
> - if (kmem_cache_debug(s)) {
> - /*
> - * For debug caches here we had to go through
> - * alloc_single_from_partial() so just store the tracking info
> - * and return the object
> - */
> - if (s->flags & SLAB_STORE_USER)
> - set_track(s, freelist, TRACK_ALLOC, addr);
> -
> - return freelist;
> - }
> -
> if (unlikely(!pfmemalloc_match(slab, gfpflags))) {
> /*
> * For !pfmemalloc_match() case we don't load freelist so that
Looks good to me,
Reviewed-by: Hyeonggon Yoo <42.hyeyoo@...il.com>
> --
> 2.20.1
>