Message-Id: <20220825015722.1697209-1-42.hyeyoo@gmail.com>
Date: Thu, 25 Aug 2022 10:57:22 +0900
From: Hyeonggon Yoo <42.hyeyoo@...il.com>
To: Christoph Lameter <cl@...ux.com>,
Pekka Enberg <penberg@...nel.org>,
David Rientjes <rientjes@...gle.com>,
Joonsoo Kim <iamjoonsoo.kim@....com>,
Andrew Morton <akpm@...ux-foundation.org>,
Vlastimil Babka <vbabka@...e.cz>,
Roman Gushchin <roman.gushchin@...ux.dev>
Cc: Hyeonggon Yoo <42.hyeyoo@...il.com>,
Sebastian Andrzej Siewior <bigeasy@...utronix.de>,
Thomas Gleixner <tglx@...utronix.de>,
Mike Galbraith <efault@....de>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org
Subject: [PATCH] mm/slub: fix comments about fastpath limitation on PREEMPT_RT
On PREEMPT_RT, disabling interrupts is unnecessary, as there are no
users of slab in hardirq context. The limitation of the lockless
fastpath on PREEMPT_RT comes instead from the fact that local_lock
does not disable preemption there.
Fix the comments accordingly.
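To illustrate what the updated comments describe, here is a rough
sketch of the problematic interleaving (not part of the patch; the
variables are simplified stand-ins for the mm/slub.c code). On
PREEMPT_RT the local_lock disables neither interrupts nor preemption,
so a task holding it in the slowpath can be preempted on the same CPU
by a task running the lockless fastpath:

	/* Task A: allocation slowpath on CPU 0 */
	local_lock_irqsave(&s->cpu_slab->lock, flags);
	/* on PREEMPT_RT this is a sleeping lock: preemption stays enabled */
	c->freelist = new_freelist;	/* plain, non-atomic update */

	/* ... task A is preempted here by task B on the same CPU ... */

	/* Task B: lockless allocation fastpath, no lock taken */
	object = c->freelist;
	next_object = get_freepointer_safe(s, object);
	/* this cmpxchg races with task A's plain store above */
	this_cpu_cmpxchg_double(s->cpu_slab->freelist, s->cpu_slab->tid,
				object, tid,
				next_object, next_tid(tid));

On !PREEMPT_RT this interleaving is impossible: local_lock_irqsave()
disables interrupts and preemption, so nothing else can run the
fastpath on that CPU while the lock is held. That is why the fastpath
must itself take the local_lock on PREEMPT_RT, as the hunks below
spell out.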
Signed-off-by: Hyeonggon Yoo <42.hyeyoo@...il.com>
---
mm/slub.c | 23 ++++++++++++++---------
1 file changed, 14 insertions(+), 9 deletions(-)
diff --git a/mm/slub.c b/mm/slub.c
index 30c2ee9e8a29..aa42ac6013b8 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -100,7 +100,7 @@
* except the stat counters. This is a percpu structure manipulated only by
* the local cpu, so the lock protects against being preempted or interrupted
* by an irq. Fast path operations rely on lockless operations instead.
- * On PREEMPT_RT, the local lock does not actually disable irqs (and thus
+ * On PREEMPT_RT, the local lock does not actually disable preemption (and thus
* prevent the lockless operations), so fastpath operations also need to take
* the lock and are no longer lockless.
*
@@ -3185,10 +3185,12 @@ static __always_inline void *slab_alloc_node(struct kmem_cache *s, struct list_l
slab = c->slab;
/*
* We cannot use the lockless fastpath on PREEMPT_RT because if a
- * slowpath has taken the local_lock_irqsave(), it is not protected
- * against a fast path operation in an irq handler. So we need to take
- * the slow path which uses local_lock. It is still relatively fast if
- * there is a suitable cpu freelist.
+ * slowpath has taken the local_lock, which does not disable preemption
+ * on PREEMPT_RT, it is not protected against a fast path operation in
+ * another task, because the fast path does not take the local_lock.
+ *
+ * So we need to take the slow path, which uses the local_lock. It is
+ * still relatively fast if there is a suitable cpu freelist.
*/
if (IS_ENABLED(CONFIG_PREEMPT_RT) ||
unlikely(!object || !slab || !node_match(slab, node))) {
@@ -3457,10 +3459,13 @@ static __always_inline void do_slab_free(struct kmem_cache *s,
#else /* CONFIG_PREEMPT_RT */
/*
* We cannot use the lockless fastpath on PREEMPT_RT because if
- * a slowpath has taken the local_lock_irqsave(), it is not
- * protected against a fast path operation in an irq handler. So
- * we need to take the local_lock. We shouldn't simply defer to
- * __slab_free() as that wouldn't use the cpu freelist at all.
+ * a slowpath has taken the local_lock, which does not disable
+ * preemption on PREEMPT_RT, it is not protected against a
+ * fast path operation in another task, because the fast path
+ * does not take the local_lock.
+ *
+ * So we need to take the local_lock. We shouldn't simply defer
+ * to __slab_free() as that wouldn't use the cpu freelist at all.
*/
void **freelist;
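The hunk is truncated in this archive view; for context, the locked
free path that this comment introduces looks roughly as follows. This
is a condensed sketch of the PREEMPT_RT branch of do_slab_free() as it
reads in mm/slub.c around this release, not part of the diff itself:

	local_lock(&s->cpu_slab->lock);
	c = this_cpu_ptr(s->cpu_slab);
	if (unlikely(slab != c->slab)) {
		local_unlock(&s->cpu_slab->lock);
		goto redo;
	}
	tid = c->tid;
	freelist = c->freelist;

	/* splice the freed objects onto the cpu freelist */
	set_freepointer(s, tail_obj, freelist);
	c->freelist = head;
	c->tid = next_tid(tid);

	local_unlock(&s->cpu_slab->lock);

Taking the local_lock still lets the free land on the cpu freelist,
which is why this is preferable to deferring straight to __slab_free().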
--
2.32.0