Message-ID: <eff58088-e291-6960-cbfd-b6cc428f5f06@suse.cz>
Date: Thu, 25 Aug 2022 15:11:32 +0200
From: Vlastimil Babka <vbabka@...e.cz>
To: Hyeonggon Yoo <42.hyeyoo@...il.com>,
Christoph Lameter <cl@...ux.com>,
Pekka Enberg <penberg@...nel.org>,
David Rientjes <rientjes@...gle.com>,
Joonsoo Kim <iamjoonsoo.kim@....com>,
Andrew Morton <akpm@...ux-foundation.org>,
Roman Gushchin <roman.gushchin@...ux.dev>
Cc: Sebastian Andrzej Siewior <bigeasy@...utronix.de>,
Thomas Gleixner <tglx@...utronix.de>,
Mike Galbraith <efault@....de>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH] mm/slub: fix comments about fastpath limitation on
PREEMPT_RT
On 8/25/22 03:57, Hyeonggon Yoo wrote:
> On PREEMPT_RT, disabling interrupts is unnecessary as there is no
> user of slab in hardirq context.
>
> The limitation of the lockless fastpath on PREEMPT_RT comes from the
> fact that local_lock does not disable preemption there.
>
> Fix comments accordingly.
>
> Signed-off-by: Hyeonggon Yoo <42.hyeyoo@...il.com>
Just FTR, as the "slub: Make PREEMPT_RT support less convoluted" patch
has already dealt with these comments, there's now nothing left to apply
from below.
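That said, since the question of *why* the lockless fastpath is unsafe
on RT keeps coming up, here is a rough userspace analogy of the pattern
the comments below describe: C11 atomics for the cmpxchg fastpath and a
pthread mutex standing in for the local_lock. All names here
(fastpath_pop(), slowpath_alloc()) are made up for illustration, this is
not mm/slub.c, and it leaves out the transaction id that the real
fastpath pairs with the pointer to fend off ABA:

/*
 * Rough userspace analogy (C11), not kernel code: a cmpxchg-based
 * "fastpath" pop paired with a mutex-protected "slowpath" refill.
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>
#include <stdlib.h>

struct object { struct object *next; };

static _Atomic(struct object *) freelist;       /* c->freelist stand-in */
static pthread_mutex_t cpu_lock = PTHREAD_MUTEX_INITIALIZER; /* "local_lock" */

/*
 * Lockless fastpath: pop the head with compare-and-swap. This is only
 * safe as long as nobody rewrites the list non-atomically in parallel.
 */
static struct object *fastpath_pop(void)
{
        struct object *old = atomic_load(&freelist);

        while (old && !atomic_compare_exchange_weak(&freelist, &old,
                                                    old->next))
                ;
        return old;
}

/*
 * Slowpath: refill under the lock. In real SLUB the slowpath rewrites
 * the percpu state non-atomically under this lock. On !PREEMPT_RT,
 * local_lock_irqsave() also shuts out the only other local user (an
 * irq handler); on PREEMPT_RT the lock leaves preemption enabled, so a
 * preempting thread could still run the lockless fastpath against that
 * half-updated state. Hence RT takes the lock on the allocation path
 * too.
 */
static struct object *slowpath_alloc(void)
{
        struct object *obj;

        pthread_mutex_lock(&cpu_lock);
        obj = fastpath_pop();
        if (!obj)
                obj = calloc(1, sizeof(*obj)); /* "new slab" stand-in */
        pthread_mutex_unlock(&cpu_lock);
        return obj;
}

int main(void)
{
        struct object *obj = slowpath_alloc();

        printf("allocated %p\n", (void *)obj);
        free(obj);
        return 0;
}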
> ---
> mm/slub.c | 23 ++++++++++++++---------
> 1 file changed, 14 insertions(+), 9 deletions(-)
>
> diff --git a/mm/slub.c b/mm/slub.c
> index 30c2ee9e8a29..aa42ac6013b8 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -100,7 +100,7 @@
> * except the stat counters. This is a percpu structure manipulated only by
> * the local cpu, so the lock protects against being preempted or interrupted
> * by an irq. Fast path operations rely on lockless operations instead.
> - * On PREEMPT_RT, the local lock does not actually disable irqs (and thus
> + * On PREEMPT_RT, the local lock does not actually disable preemption (and thus
> * prevent the lockless operations), so fastpath operations also need to take
> * the lock and are no longer lockless.
> *
> @@ -3185,10 +3185,12 @@ static __always_inline void *slab_alloc_node(struct kmem_cache *s, struct list_l
> slab = c->slab;
> /*
> * We cannot use the lockless fastpath on PREEMPT_RT because if a
> - * slowpath has taken the local_lock_irqsave(), it is not protected
> - * against a fast path operation in an irq handler. So we need to take
> - * the slow path which uses local_lock. It is still relatively fast if
> - * there is a suitable cpu freelist.
> + * slowpath has taken the local_lock which does not disable preemption
> + * on PREEMPT_RT, it is not protected against a fast path operation in
> + * another thread that does not take the local_lock.
> + *
> + * So we need to take the slow path which uses local_lock. It is still
> + * relatively fast if there is a suitable cpu freelist.
> */
> if (IS_ENABLED(CONFIG_PREEMPT_RT) ||
> unlikely(!object || !slab || !node_match(slab, node))) {
> @@ -3457,10 +3459,13 @@ static __always_inline void do_slab_free(struct kmem_cache *s,
> #else /* CONFIG_PREEMPT_RT */
> /*
> * We cannot use the lockless fastpath on PREEMPT_RT because if
> - * a slowpath has taken the local_lock_irqsave(), it is not
> - * protected against a fast path operation in an irq handler. So
> - * we need to take the local_lock. We shouldn't simply defer to
> - * __slab_free() as that wouldn't use the cpu freelist at all.
> + * a slowpath has taken the local_lock which does not disable
> + * preemption on PREEMPT_RT, it is not protected against a
> + * fast path operation in another thread that does not take
> + * the local_lock.
> + *
> + * So we need to take the local_lock. We shouldn't simply defer
> + * to __slab_free() as that wouldn't use the cpu freelist at all.
> */
> void **freelist;
>
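The second hunk's reasoning can be sketched the same way: on
!PREEMPT_RT the freed object can be pushed back with a bare cmpxchg,
while the RT path takes the lock first instead of deferring to
__slab_free(), which would bypass the cpu freelist entirely. Again a
self-contained userspace analogy with invented names (fastpath_free(),
rt_free()), not the actual mm/slub.c code:

/*
 * Free-side analogy (C11): push an object back onto the freelist.
 * fastpath_free() mirrors the !PREEMPT_RT lockless path; rt_free()
 * mirrors the RT path that takes the lock before touching the cpu
 * freelist.
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdlib.h>

struct object { struct object *next; };

static _Atomic(struct object *) freelist;
static pthread_mutex_t cpu_lock = PTHREAD_MUTEX_INITIALIZER;

/*
 * Lockless push: fine when an irq handler is the only possible
 * concurrent local user, i.e. the !PREEMPT_RT case.
 */
static void fastpath_free(struct object *obj)
{
        struct object *old = atomic_load(&freelist);

        do {
                obj->next = old;
        } while (!atomic_compare_exchange_weak(&freelist, &old, obj));
}

/*
 * RT-style free: serialize against a slowpath that may hold the lock
 * while rewriting the list, yet still recycle through the cpu
 * freelist instead of falling back to the slower shared slab path.
 */
static void rt_free(struct object *obj)
{
        pthread_mutex_lock(&cpu_lock);
        fastpath_free(obj);
        pthread_mutex_unlock(&cpu_lock);
}

int main(void)
{
        struct object *obj = calloc(1, sizeof(*obj));

        rt_free(obj);
        free(atomic_load(&freelist)); /* reclaim the demo object */
        return 0;
}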