Message-Id: <20201228130853.1871516-1-jannh@google.com>
Date: Mon, 28 Dec 2020 14:08:53 +0100
From: Jann Horn <jannh@...gle.com>
To: Christoph Lameter <cl@...ux.com>,
Pekka Enberg <penberg@...nel.org>,
David Rientjes <rientjes@...gle.com>,
Joonsoo Kim <iamjoonsoo.kim@....com>,
Andrew Morton <akpm@...ux-foundation.org>
Cc: linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: [PATCH] mm, slub: Consider rest of partial list if acquire_slab() fails

acquire_slab() fails if there is contention on the freelist of the page
(probably because some other CPU is concurrently freeing an object from the
page). In that case, it might make sense to look for a different page
(since there might be more remote frees to the page from other CPUs, and we
don't want contention on struct page).

However, the current code accidentally stops looking at the partial list
completely in that case. Especially on kernels without CONFIG_NUMA set,
this means that get_partial() fails and new_slab_objects() falls back to
new_slab(), allocating new pages. This could lead to an unnecessary
increase in memory fragmentation.

Fixes: 7ced37197196 ("slub: Acquire_slab() avoid loop")
Signed-off-by: Jann Horn <jannh@...gle.com>
---
mm/slub.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/mm/slub.c b/mm/slub.c
index 0c8b43a5b3b0..b1777ba06735 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -1974,7 +1974,7 @@ static void *get_partial_node(struct kmem_cache *s, struct kmem_cache_node *n,
 		t = acquire_slab(s, n, page, object == NULL, &objects);
 		if (!t)
-			break;
+			continue; /* cmpxchg raced */
 
 		available += objects;
 		if (!object) {
base-commit: 5c8fe583cce542aa0b84adc939ce85293de36e5e
--
2.29.2.729.g45daf8777d-goog