Message-ID: <7e9ccf34-57d1-786b-2dfd-3b9ba78e1b32@suse.cz>
Date: Tue, 17 Aug 2021 17:56:49 +0200
From: Vlastimil Babka <vbabka@...e.cz>
To: Andrew Morton <akpm@...ux-foundation.org>,
Christoph Lameter <cl@...ux.com>,
David Rientjes <rientjes@...gle.com>,
Pekka Enberg <penberg@...nel.org>,
Joonsoo Kim <iamjoonsoo.kim@....com>
Cc: linux-mm@...ck.org, linux-kernel@...r.kernel.org,
Mike Galbraith <efault@....de>,
Sebastian Andrzej Siewior <bigeasy@...utronix.de>,
Thomas Gleixner <tglx@...utronix.de>,
Mel Gorman <mgorman@...hsingularity.net>,
Jesper Dangaard Brouer <brouer@...hat.com>,
Jann Horn <jannh@...gle.com>
Subject: Re: [PATCH v4 35/35] mm, slub: convert kmem_cpu_slab protection to
local_lock
On 8/5/21 5:20 PM, Vlastimil Babka wrote:
> Embed local_lock into struct kmem_cache_cpu and use the irq-safe versions of
> local_lock instead of plain local_irq_save/restore. On !PREEMPT_RT that's
> equivalent, with better lockdep visibility. On PREEMPT_RT that means better
> preemption.
>
> However, the cost on PREEMPT_RT is the loss of the lockless fast paths, which
> only work with the cpu freelist. Those are designed to detect and recover from
> being preempted by other conflicting operations (both fast and slow paths),
> but the slow path operations assume they cannot be preempted by a fast path
> operation, which is naturally guaranteed with disabled irqs. With local locks
> on PREEMPT_RT, the fast paths now also need to take the local lock to avoid
> races.
>
> In the allocation fastpath slab_alloc_node() we can just defer to the slowpath
> __slab_alloc(), which also works with the cpu freelist, but under the local
> lock. In the free fastpath do_slab_free() we have to add a new local lock
> protected version of freeing to the cpu freelist, as the existing slowpath
> only works with the page freelist.
>
> Also update the comment describing the locking scheme in SLUB to reflect the
> changes done by this series.
>
> [ Mike Galbraith <efault@....de>: use local_lock() without irq in PREEMPT_RT
> scope; debugging of RT crashes resulting in put_cpu_partial() locking changes ]
> Signed-off-by: Vlastimil Babka <vbabka@...e.cz>
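For anyone joining the thread without the series at hand, the shape of the
conversion is roughly the following. This is a condensed sketch, not the
literal hunks; the page == c->page rechecks, retry loops and stat accounting
are omitted (see the patch itself for those):

/* The per-CPU slab state gains an embedded local lock: */
struct kmem_cache_cpu {
	void **freelist;	/* pointer to next available object */
	unsigned long tid;	/* globally unique transaction id */
	struct page *page;	/* slab from which we are allocating */
	local_lock_t lock;	/* protects the fields above */
};

/*
 * Slow paths: plain local_irq_save()/local_irq_restore() pairs become
 * the irq-safe local lock. Equivalent on !PREEMPT_RT, a per-CPU
 * sleeping lock on PREEMPT_RT.
 */
local_lock_irqsave(&s->cpu_slab->lock, flags);
/* ... manipulate c->freelist, c->page, c->tid ... */
local_unlock_irqrestore(&s->cpu_slab->lock, flags);

/*
 * Free fast path on PREEMPT_RT (do_slab_free()): instead of the
 * lockless this_cpu_cmpxchg_double() transaction, take the local lock
 * around the cpu freelist update:
 */
local_lock(&s->cpu_slab->lock);
c = this_cpu_ptr(s->cpu_slab);
tid = c->tid;
set_freepointer(s, tail_obj, c->freelist);
this_cpu_write(s->cpu_slab->freelist, head);
this_cpu_write(s->cpu_slab->tid, next_tid(tid));
local_unlock(&s->cpu_slab->lock);

The allocation fast path needs no such variant: slab_alloc_node() simply falls
back to __slab_alloc(), which already runs under the local lock.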
Meanwhile, improvements in RT land have made the following fixup/cleanup
possible.
----8<----
From 8b87e5de5d79a9d3ab4627f5530f1888fa7824f8 Mon Sep 17 00:00:00 2001
From: Vlastimil Babka <vbabka@...e.cz>
Date: Tue, 17 Aug 2021 17:51:54 +0200
Subject: [PATCH] mm, slub: simplify lockdep_assert_held() in ___slab_alloc()
Sebastian reports [1] that the special PREEMPT_RT version of
lockdep_assert_held() on a local lock is no longer necessary, and we can
simplify.
[1] https://lore.kernel.org/linux-mm/20210817153937.hxnuh7mqp6vuiyws@linutronix.de/
This is a fixup for mmotm patch
mm-slub-convert-kmem_cpu_slab-protection-to-local_lock.patch
Reported-by: Sebastian Andrzej Siewior <bigeasy@...utronix.de>
Signed-off-by: Vlastimil Babka <vbabka@...e.cz>
---
mm/slub.c | 4 ----
1 file changed, 4 deletions(-)
diff --git a/mm/slub.c b/mm/slub.c
index be57687062aa..df1ac8aff86f 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -2913,11 +2913,7 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
 
 load_freelist:
 
-#ifdef CONFIG_PREEMPT_RT
-	lockdep_assert_held(this_cpu_ptr(&s->cpu_slab->lock.lock));
-#else
 	lockdep_assert_held(this_cpu_ptr(&s->cpu_slab->lock));
-#endif
 
 	/*
 	 * freelist is pointing to the list of objects to be used.
--
2.32.0
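
For completeness, the reason the #ifdef can go: as I understand Sebastian's
rework referenced in [1], both flavours of local_lock_t now expose a dep_map
that lockdep_assert_held() can check directly. Abridged from
include/linux/local_lock_internal.h (my reading, not the literal file):

#ifndef CONFIG_PREEMPT_RT
typedef struct {
#ifdef CONFIG_DEBUG_LOCK_ALLOC
	struct lockdep_map	dep_map;	/* what lockdep_assert_held() checks */
	struct task_struct	*owner;
#endif
} local_lock_t;
#else
/*
 * On PREEMPT_RT the local lock is a per-CPU spinlock_t, which carries
 * its own dep_map, so no inner .lock member needs dereferencing.
 */
typedef spinlock_t local_lock_t;
#endif

So lockdep_assert_held(this_cpu_ptr(&s->cpu_slab->lock)) compiles and checks
the right lock in both configurations.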