Message-Id: <20210609113903.1421-8-vbabka@suse.cz>
Date: Wed, 9 Jun 2021 13:38:36 +0200
From: Vlastimil Babka <vbabka@...e.cz>
To: linux-mm@...ck.org, linux-kernel@...r.kernel.org,
Christoph Lameter <cl@...ux.com>,
David Rientjes <rientjes@...gle.com>,
Pekka Enberg <penberg@...nel.org>,
Joonsoo Kim <iamjoonsoo.kim@....com>
Cc: Sebastian Andrzej Siewior <bigeasy@...utronix.de>,
Thomas Gleixner <tglx@...utronix.de>,
Mel Gorman <mgorman@...hsingularity.net>,
Jesper Dangaard Brouer <brouer@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
Jann Horn <jannh@...gle.com>, Vlastimil Babka <vbabka@...e.cz>
Subject: [RFC v2 07/34] mm, slub: extract get_partial() from new_slab_objects()
Later patches will need more fine-grained control over the individual actions in
___slab_alloc(), the only caller of new_slab_objects(). As a first preparatory step
with no functional change, move the get_partial() call from new_slab_objects() into
___slab_alloc() itself.

This adds a goto label that appears unnecessary at this point, but it will become
useful for later changes.
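
To illustrate, the slow path of ___slab_alloc() after this patch reads roughly as
follows (a simplified sketch; error reporting and the surrounding logic are omitted):

	freelist = get_partial(s, gfpflags, node, c);
	if (freelist)
		goto check_new_page;

	freelist = new_slab_objects(s, gfpflags, node, &c);
	if (unlikely(!freelist)) {
		/* out of memory */
		return NULL;
	}

check_new_page:
	page = c->page;
	/* debug and pfmemalloc checks apply to both paths from here on */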
Signed-off-by: Vlastimil Babka <vbabka@...e.cz>
Acked-by: Christoph Lameter <cl@...ux.com>
---
mm/slub.c | 12 ++++++------
1 file changed, 6 insertions(+), 6 deletions(-)
diff --git a/mm/slub.c b/mm/slub.c
index 76af5065baeb..38256e87dde8 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -2553,17 +2553,12 @@ slab_out_of_memory(struct kmem_cache *s, gfp_t gfpflags, int nid)
 static inline void *new_slab_objects(struct kmem_cache *s, gfp_t flags,
 			int node, struct kmem_cache_cpu **pc)
 {
-	void *freelist;
+	void *freelist = NULL;
 	struct kmem_cache_cpu *c = *pc;
 	struct page *page;
 
 	WARN_ON_ONCE(s->ctor && (flags & __GFP_ZERO));
 
-	freelist = get_partial(s, flags, node, c);
-
-	if (freelist)
-		return freelist;
-
 	page = new_slab(s, flags, node);
 	if (page) {
 		c = raw_cpu_ptr(s->cpu_slab);
@@ -2727,6 +2722,10 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
 		goto redo;
 	}
 
+	freelist = get_partial(s, gfpflags, node, c);
+	if (freelist)
+		goto check_new_page;
+
 	freelist = new_slab_objects(s, gfpflags, node, &c);
 
 	if (unlikely(!freelist)) {
@@ -2734,6 +2733,7 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
 		return NULL;
 	}
 
+check_new_page:
 	page = c->page;
 	if (likely(!kmem_cache_debug(s) && pfmemalloc_match(page, gfpflags)))
 		goto load_freelist;
--
2.31.1