Message-ID: <028651d9-5e3b-8348-00af-e6acf8ea4ced@suse.cz>
Date: Mon, 7 Jun 2021 14:32:48 +0200
From: Vlastimil Babka <vbabka@...e.cz>
To: Muchun Song <songmuchun@...edance.com>, cl@...ux.com,
penberg@...nel.org, rientjes@...gle.com, iamjoonsoo.kim@....com,
akpm@...ux-foundation.org
Cc: linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] mm: slub: replace local_irq_save with local_irq_disable
On 6/6/21 6:17 AM, Muchun Song wrote:
> The caller of slub_cpu_dead cannot have irqs disabled (because slab_mutex
> is held during the processing), so there is no need to use irq_save. Just
> use irq_disable directly.
Well, we shouldn't need to disable irqs at all. We are cleaning up for a dead
cpu, so there's nobody else accessing the data. irq save/disable will protect
only the local cpu's data, not that of the cpu we are flushing. But we can't
simply remove the irq disable/enable, because there are some nested
VM_BUG_ON(!irqs_disabled()) checks under __flush_cpu_slab(). We basically only
disable irqs here to avoid triggering those; a sketch of the kind of check
involved follows below.
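For illustration, a rough sketch (hedged, not the verbatim upstream code) of
the kind of assertion sitting on the flush path, e.g. in the
__cmpxchg_double_slab() slow path; the helper name below is made up for the
sketch:

	/*
	 * Hedged sketch: helpers reached from __flush_cpu_slab() assert
	 * that they run with irqs disabled, even though for a dead cpu
	 * there is nothing to protect against.
	 */
	static inline bool cmpxchg_double_slab_sketch(struct kmem_cache *s,
						      struct page *page)
	{
		VM_BUG_ON(!irqs_disabled());	/* this is what would trip */

		/* ... locked update of page->freelist / page->counters ... */
		return true;
	}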
My series [1] addresses this completely (among other things), but it's an
early-stage RFC (v2 should follow soon). Your patch is not wrong, but also not
urgent or perf critical. So with that context I'll leave the decision to
others :)
[1] https://lore.kernel.org/lkml/20210524233946.20352-1-vbabka@suse.cz/
> Signed-off-by: Muchun Song <songmuchun@...edance.com>
> ---
> mm/slub.c | 5 ++---
> 1 file changed, 2 insertions(+), 3 deletions(-)
>
> diff --git a/mm/slub.c b/mm/slub.c
> index ee51857d8e9b..fbf592ef14ff 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -2529,13 +2529,12 @@ static void flush_all(struct kmem_cache *s)
> static int slub_cpu_dead(unsigned int cpu)
> {
> struct kmem_cache *s;
> - unsigned long flags;
>
> mutex_lock(&slab_mutex);
> list_for_each_entry(s, &slab_caches, list) {
> - local_irq_save(flags);
> + local_irq_disable();
> __flush_cpu_slab(s, cpu);
> - local_irq_restore(flags);
> + local_irq_enable();
> }
> mutex_unlock(&slab_mutex);
> return 0;
>
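For completeness, a hedged sketch (not the actual code from [1]) of what
slub_cpu_dead() would reduce to once those nested assertions no longer stand
in the way and the irq toggling can be dropped entirely:

	/*
	 * Sketch only: flushing a dead cpu's slabs needs no irq
	 * protection, because nobody else can access that cpu's
	 * data anymore.
	 */
	static int slub_cpu_dead(unsigned int cpu)
	{
		struct kmem_cache *s;

		mutex_lock(&slab_mutex);
		list_for_each_entry(s, &slab_caches, list)
			__flush_cpu_slab(s, cpu);
		mutex_unlock(&slab_mutex);
		return 0;
	}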