Message-Id: <20190308041426.16654-6-tobin@kernel.org>
Date: Fri, 8 Mar 2019 15:14:16 +1100
From: "Tobin C. Harding" <tobin@...nel.org>
To: Andrew Morton <akpm@...ux-foundation.org>
Cc: "Tobin C. Harding" <tobin@...nel.org>,
Christopher Lameter <cl@...ux.com>,
Pekka Enberg <penberg@...helsinki.fi>,
Matthew Wilcox <willy@...radead.org>,
Tycho Andersen <tycho@...ho.ws>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org
Subject: [RFC 05/15] slub: Sort slab cache list
It is advantageous to have all defragmentable slab caches together at the
beginning of the slab cache list so that there is no need to scan the
complete list during defragmentation. To achieve this, add newly created
caches at the tail of the list and move a cache to the front when mobility
is enabled for it.
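
As a rough illustration of why the ordering helps (not part of this patch;
the helper name below is hypothetical), a later defragmentation pass can
stop at the first cache that has no migrate callback instead of walking the
whole list:

	static void defrag_scan_caches(void)
	{
		struct kmem_cache *s;

		mutex_lock(&slab_mutex);
		list_for_each_entry(s, &slab_caches, list) {
			/*
			 * Defragmentable caches are kept at the front, so
			 * the first cache without a migrate callback marks
			 * the end of the interesting part of the list.
			 */
			if (!s->migrate)
				break;
			/* ... shrink/defragment cache s here ... */
		}
		mutex_unlock(&slab_mutex);
	}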
Co-developed-by: Christoph Lameter <cl@...ux.com>
Signed-off-by: Tobin C. Harding <tobin@...nel.org>
---
mm/slab_common.c | 2 +-
mm/slub.c | 6 ++++++
2 files changed, 7 insertions(+), 1 deletion(-)
diff --git a/mm/slab_common.c b/mm/slab_common.c
index 754acdb292e4..1d492b59eee1 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -397,7 +397,7 @@ static struct kmem_cache *create_cache(const char *name,
goto out_free_cache;
s->refcount = 1;
- list_add(&s->list, &slab_caches);
+ list_add_tail(&s->list, &slab_caches);
memcg_link_cache(s);
out:
if (err)
diff --git a/mm/slub.c b/mm/slub.c
index 6ce866b420f1..f37103e22d3f 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -4427,6 +4427,8 @@ void kmem_cache_setup_mobility(struct kmem_cache *s,
return;
}
+ mutex_lock(&slab_mutex);
+
s->isolate = isolate;
s->migrate = migrate;
@@ -4435,6 +4437,10 @@ void kmem_cache_setup_mobility(struct kmem_cache *s,
* to disable fast cmpxchg based processing.
*/
s->flags &= ~__CMPXCHG_DOUBLE;
+
+ list_move(&s->list, &slab_caches); /* Move to top */
+
+ mutex_unlock(&slab_mutex);
}
EXPORT_SYMBOL(kmem_cache_setup_mobility);
--
2.21.0