Message-Id: <20210805152000.12817-29-vbabka@suse.cz>
Date: Thu, 5 Aug 2021 17:19:53 +0200
From: Vlastimil Babka <vbabka@...e.cz>
To: Andrew Morton <akpm@...ux-foundation.org>,
Christoph Lameter <cl@...ux.com>,
David Rientjes <rientjes@...gle.com>,
Pekka Enberg <penberg@...nel.org>,
Joonsoo Kim <iamjoonsoo.kim@....com>
Cc: linux-mm@...ck.org, linux-kernel@...r.kernel.org,
Mike Galbraith <efault@....de>,
Sebastian Andrzej Siewior <bigeasy@...utronix.de>,
Thomas Gleixner <tglx@...utronix.de>,
Mel Gorman <mgorman@...hsingularity.net>,
Jesper Dangaard Brouer <brouer@...hat.com>,
Jann Horn <jannh@...gle.com>, Vlastimil Babka <vbabka@...e.cz>
Subject: [PATCH v4 28/35] mm, slab: make flush_slab() possible to call with irqs enabled
Currently flush_slab() is always called with IRQs disabled, when disabling is
needed at all. The following patches will change that, so add a parameter to
control IRQ disabling within the function. The IRQ-disabled section protects
only the kmem_cache_cpu manipulation, not the call to deactivate_slab(), which
doesn't need it.
Signed-off-by: Vlastimil Babka <vbabka@...e.cz>
---
mm/slub.c | 24 ++++++++++++++++++------
1 file changed, 18 insertions(+), 6 deletions(-)
diff --git a/mm/slub.c b/mm/slub.c
index c10f2c9b9352..dceb289cb052 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -2477,16 +2477,28 @@ static void put_cpu_partial(struct kmem_cache *s, struct page *page, int drain)
 #endif	/* CONFIG_SLUB_CPU_PARTIAL */
 }
 
-static inline void flush_slab(struct kmem_cache *s, struct kmem_cache_cpu *c)
+static inline void flush_slab(struct kmem_cache *s, struct kmem_cache_cpu *c,
+			      bool lock)
 {
-	void *freelist = c->freelist;
-	struct page *page = c->page;
+	unsigned long flags;
+	void *freelist;
+	struct page *page;
+
+	if (lock)
+		local_irq_save(flags);
+
+	freelist = c->freelist;
+	page = c->page;
 
 	c->page = NULL;
 	c->freelist = NULL;
 	c->tid = next_tid(c->tid);
 
-	deactivate_slab(s, page, freelist);
+	if (lock)
+		local_irq_restore(flags);
+
+	if (page)
+		deactivate_slab(s, page, freelist);
 
 	stat(s, CPUSLAB_FLUSH);
 }
@@ -2496,7 +2508,7 @@ static inline void __flush_cpu_slab(struct kmem_cache *s, int cpu)
 	struct kmem_cache_cpu *c = per_cpu_ptr(s->cpu_slab, cpu);
 
 	if (c->page)
-		flush_slab(s, c);
+		flush_slab(s, c, false);
 
 	unfreeze_partials_cpu(s, c);
 }
@@ -2512,7 +2524,7 @@ static void flush_cpu_slab(void *d)
 	struct kmem_cache_cpu *c = this_cpu_ptr(s->cpu_slab);
 
 	if (c->page)
-		flush_slab(s, c);
+		flush_slab(s, c, false);
 
 	unfreeze_partials(s);
 }
--
2.32.0