Message-ID: <b1808b92-86df-9f53-bfb2-8862a9c554e9@google.com>
Date: Tue, 27 Dec 2022 22:05:48 -0800 (PST)
From: David Rientjes <rientjes@...gle.com>
To: Andrew Morton <akpm@...ux-foundation.org>,
Christoph Lameter <cl@...ux.com>,
Pekka Enberg <penberg@...nel.org>,
Joonsoo Kim <iamjoonsoo.kim@....com>,
Vlastimil Babka <vbabka@...e.cz>
cc: Roman Gushchin <roman.gushchin@...ux.dev>,
Hyeonggon Yoo <42.hyeyoo@...il.com>,
linux-kernel@...r.kernel.org, linux-mm@...ck.org
Subject: [patch] mm, slab: periodically resched in drain_freelist()

drain_freelist() can be called with a very large number of slabs to free,
such as for kmem_cache_shrink(), or when doing periodic reaping, where the
amount of work depends on the settings of the slab cache.

If there is a potentially long list of slabs to drain, call cond_resched()
periodically so that we do not saturate the cpu for too long.
Signed-off-by: David Rientjes <rientjes@...gle.com>
---
 mm/slab.c | 2 ++
 1 file changed, 2 insertions(+)
diff --git a/mm/slab.c b/mm/slab.c
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -2211,6 +2211,8 @@ static int drain_freelist(struct kmem_cache *cache,
 		raw_spin_unlock_irq(&n->list_lock);
 		slab_destroy(cache, slab);
 		nr_freed++;
+
+		cond_resched();
 	}
 out:
 	return nr_freed;