Message-ID: <1569427262.5576.225.camel@lca.pw>
Date: Wed, 25 Sep 2019 12:01:02 -0400
From: Qian Cai <cai@....pw>
To: David Hildenbrand <david@...hat.com>,
Michal Hocko <mhocko@...nel.org>
Cc: linux-kernel@...r.kernel.org, linux-mm@...ck.org,
Andrew Morton <akpm@...ux-foundation.org>,
Oscar Salvador <osalvador@...e.de>,
Pavel Tatashin <pasha.tatashin@...een.com>,
Dan Williams <dan.j.williams@...el.com>,
Thomas Gleixner <tglx@...utronix.de>
Subject: Re: [PATCH v1] mm/memory_hotplug: Don't take the cpu_hotplug_lock
On Wed, 2019-09-25 at 09:02 +0200, David Hildenbrand wrote:
> On 24.09.19 20:54, Qian Cai wrote:
> > On Tue, 2019-09-24 at 17:11 +0200, Michal Hocko wrote:
> > > On Tue 24-09-19 11:03:21, Qian Cai wrote:
> > > [...]
> > > > While at it, it might be a good time to rethink the whole locking over there, as
> > > > right now reading files under /sys/kernel/slab/ could trigger a possible
> > > > deadlock anyway.
> > > >
> > >
> > > [...]
> > > > [ 442.452090][ T5224] -> #0 (mem_hotplug_lock.rw_sem){++++}:
> > > > [ 442.459748][ T5224] validate_chain+0xd10/0x2bcc
> > > > [ 442.464883][ T5224] __lock_acquire+0x7f4/0xb8c
> > > > [ 442.469930][ T5224] lock_acquire+0x31c/0x360
> > > > [ 442.474803][ T5224] get_online_mems+0x54/0x150
> > > > [ 442.479850][ T5224] show_slab_objects+0x94/0x3a8
> > > > [ 442.485072][ T5224] total_objects_show+0x28/0x34
> > > > [ 442.490292][ T5224] slab_attr_show+0x38/0x54
> > > > [ 442.495166][ T5224] sysfs_kf_seq_show+0x198/0x2d4
> > > > [ 442.500473][ T5224] kernfs_seq_show+0xa4/0xcc
> > > > [ 442.505433][ T5224] seq_read+0x30c/0x8a8
> > > > [ 442.509958][ T5224] kernfs_fop_read+0xa8/0x314
> > > > [ 442.515007][ T5224] __vfs_read+0x88/0x20c
> > > > [ 442.519620][ T5224] vfs_read+0xd8/0x10c
> > > > [ 442.524060][ T5224] ksys_read+0xb0/0x120
> > > > [ 442.528586][ T5224] __arm64_sys_read+0x54/0x88
> > > > [ 442.533634][ T5224] el0_svc_handler+0x170/0x240
> > > > [ 442.538768][ T5224] el0_svc+0x8/0xc
> > >
> > > I believe the lock is not really needed here. We do not deallocate the
> > > pgdat of a hotremoved node nor destroy the slab state because existing
> > > slabs would prevent hotremove from continuing in the first place.
> > >
> > > There are likely details to be checked of course but the lock just seems
> > > bogus.
> >
> > Check 03afc0e25f7f ("slab: get_online_mems for
> > kmem_cache_{create,destroy,shrink}"). It actually talks about the races during
> > memory as well as cpu hotplug, so it might even be that removing the
> > cpu_hotplug_lock is problematic?
> >
>
> Which removal are you referring to? get_online_mems() does not mess with
> the cpu hotplug lock (and therefore this patch).
The one in your patch. I suspect there might be races among NUMA node hotplug,
kmem_cache_create(), and show_slab_objects(). See bfc8c90139eb
("mem-hotplug: implement get/put_online_mems"):
"kmem_cache_{create,destroy,shrink} need to get a stable value of cpu/node
online mask, because they init/destroy/access per-cpu/node kmem_cache parts,
which can be allocated or destroyed on cpu/mem hotplug."
Both online_pages() and show_slab_objects() need to get a stable value of
cpu/node online mask.
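
To make that requirement concrete, here is a rough sketch of the pattern
show_slab_objects() relies on (not the actual mm/slub.c code; the function
name is made up, the per-cpu and SO_ALL/SO_OBJECTS handling are omitted,
and the field read from kmem_cache_node is simplified):

	/*
	 * Rough sketch only -- not the real mm/slub.c implementation.
	 * It only illustrates why the node online mask must stay stable
	 * while the per-node structures are walked.
	 */
	static ssize_t total_objects_sketch(struct kmem_cache *s, char *buf)
	{
		unsigned long total = 0;
		int node;

		get_online_mems();	/* pin the node online mask ... */
		for_each_node_state(node, N_NORMAL_MEMORY) {
			struct kmem_cache_node *n = get_node(s, node);

			if (n)	/* ... so the per-node data stays valid */
				total += atomic_long_read(&n->nr_slabs);
		}
		put_online_mems();	/* node hotplug may proceed again */

		return sprintf(buf, "%lu\n", total);
	}

If the node online mask were not stable across that walk, the iteration
could race with the node hotplug paths the commit message above is worried
about.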