Message-ID: <4fa0e9016f746e070150f6db78202d744a3f9c4c.camel@gmx.de>
Date: Tue, 10 Aug 2021 03:07:01 +0200
From: Mike Galbraith <efault@....de>
To: Vlastimil Babka <vbabka@...e.cz>,
Qian Cai <quic_qiancai@...cinc.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Christoph Lameter <cl@...ux.com>,
David Rientjes <rientjes@...gle.com>,
Pekka Enberg <penberg@...nel.org>,
Joonsoo Kim <iamjoonsoo.kim@....com>
Cc: linux-mm@...ck.org, linux-kernel@...r.kernel.org,
Sebastian Andrzej Siewior <bigeasy@...utronix.de>,
Thomas Gleixner <tglx@...utronix.de>,
Mel Gorman <mgorman@...hsingularity.net>,
Jesper Dangaard Brouer <brouer@...hat.com>,
Jann Horn <jannh@...gle.com>
Subject: Re: [PATCH v4 29/35] mm: slub: Move flush_cpu_slab() invocations
__free_slab() invocations out of IRQ context
On Mon, 2021-08-09 at 22:08 +0200, Vlastimil Babka wrote:
> On 8/9/2021 8:44 PM, Mike Galbraith wrote:
> > >
> > > slab_mutex -> flush_lock
> >
> > Bugger. That chain ending with cpu_hotplug_lock makes slub_cpu_dead()
> > taking slab_mutex a non-starter for cpu hotplug as well. That ordering
> > is established early by kernel_init_freeable()..kmem_cache_destroy() as
> > well as by slab_mem_going_offline_callback().
>
> I suck at reading lockdep splats, so I don't yet see how the "existing
> reverse order" occurs - I do understand the order in the "lsbug".
> What I also wonder is why this didn't occur in the older RT trees with
> this patch.
Apparently (oops) nobody got around to hotplug+lockdep testing, RT or
otherwise. I know I didn't, my goldfish-like attention span having been
used up by explosion testing ;-)
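
FWIW, the inversion reduces to two paths taking the same pair of locks
in opposite order. Below is a minimal userspace sketch of that AB-BA
pattern; plain pthread mutexes stand in for slab_mutex and
cpu_hotplug_lock (which is really a percpu rwsem), and the thread names
are only an analogue of the kernel paths, not the kernel code itself:

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t slab_mutex = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t cpu_hotplug_lock = PTHREAD_MUTEX_INITIALIZER;

/* Mirrors kmem_cache_destroy(): slab_mutex first, then the hotplug
 * lock taken inside the flush. */
static void *destroy_path(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&slab_mutex);
	pthread_mutex_lock(&cpu_hotplug_lock);
	pthread_mutex_unlock(&cpu_hotplug_lock);
	pthread_mutex_unlock(&slab_mutex);
	return NULL;
}

/* Mirrors cpu hotplug: hotplug lock first, then slub_cpu_dead()
 * wanting slab_mutex - the reverse order lockdep flags. */
static void *hotplug_path(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&cpu_hotplug_lock);
	pthread_mutex_lock(&slab_mutex);
	pthread_mutex_unlock(&slab_mutex);
	pthread_mutex_unlock(&cpu_hotplug_lock);
	return NULL;
}

int main(void)
{
	pthread_t a, b;

	pthread_create(&a, NULL, destroy_path, NULL);
	pthread_create(&b, NULL, hotplug_path, NULL);
	pthread_join(a, NULL);
	pthread_join(b, NULL);
	puts("lucky: no deadlock this run");
	return 0;
}

Run it in a loop and it eventually wedges with each thread holding the
lock the other wants - the same dependency cycle the splat reports.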
-Mike