Message-ID: <17ba6fc6-72ce-992b-7cc4-812acbdedbeb@redhat.com>
Date: Thu, 26 Sep 2019 09:26:13 +0200
From: David Hildenbrand <david@...hat.com>
To: Qian Cai <cai@....pw>, Michal Hocko <mhocko@...nel.org>
Cc: linux-kernel@...r.kernel.org, linux-mm@...ck.org,
Andrew Morton <akpm@...ux-foundation.org>,
Oscar Salvador <osalvador@...e.de>,
Pavel Tatashin <pasha.tatashin@...een.com>,
Dan Williams <dan.j.williams@...el.com>,
Thomas Gleixner <tglx@...utronix.de>
Subject: Re: [PATCH v1] mm/memory_hotplug: Don't take the cpu_hotplug_lock
On 25.09.19 22:32, Qian Cai wrote:
> On Wed, 2019-09-25 at 21:48 +0200, David Hildenbrand wrote:
>> On 25.09.19 20:20, Qian Cai wrote:
>>> On Wed, 2019-09-25 at 19:48 +0200, Michal Hocko wrote:
>>>> On Wed 25-09-19 12:01:02, Qian Cai wrote:
>>>>> On Wed, 2019-09-25 at 09:02 +0200, David Hildenbrand wrote:
>>>>>> On 24.09.19 20:54, Qian Cai wrote:
>>>>>>> On Tue, 2019-09-24 at 17:11 +0200, Michal Hocko wrote:
>>>>>>>> On Tue 24-09-19 11:03:21, Qian Cai wrote:
>>>>>>>> [...]
>>>>>>>>> While at it, it might be a good time to rethink the whole locking over there,
>>>>>>>>> as right now reading files under /sys/kernel/slab/ could trigger a possible
>>>>>>>>> deadlock anyway.
>>>>>>>>>
>>>>>>>>
>>>>>>>> [...]
>>>>>>>>> [ 442.452090][ T5224] -> #0 (mem_hotplug_lock.rw_sem){++++}:
>>>>>>>>> [ 442.459748][ T5224] validate_chain+0xd10/0x2bcc
>>>>>>>>> [ 442.464883][ T5224] __lock_acquire+0x7f4/0xb8c
>>>>>>>>> [ 442.469930][ T5224] lock_acquire+0x31c/0x360
>>>>>>>>> [ 442.474803][ T5224] get_online_mems+0x54/0x150
>>>>>>>>> [ 442.479850][ T5224] show_slab_objects+0x94/0x3a8
>>>>>>>>> [ 442.485072][ T5224] total_objects_show+0x28/0x34
>>>>>>>>> [ 442.490292][ T5224] slab_attr_show+0x38/0x54
>>>>>>>>> [ 442.495166][ T5224] sysfs_kf_seq_show+0x198/0x2d4
>>>>>>>>> [ 442.500473][ T5224] kernfs_seq_show+0xa4/0xcc
>>>>>>>>> [ 442.505433][ T5224] seq_read+0x30c/0x8a8
>>>>>>>>> [ 442.509958][ T5224] kernfs_fop_read+0xa8/0x314
>>>>>>>>> [ 442.515007][ T5224] __vfs_read+0x88/0x20c
>>>>>>>>> [ 442.519620][ T5224] vfs_read+0xd8/0x10c
>>>>>>>>> [ 442.524060][ T5224] ksys_read+0xb0/0x120
>>>>>>>>> [ 442.528586][ T5224] __arm64_sys_read+0x54/0x88
>>>>>>>>> [ 442.533634][ T5224] el0_svc_handler+0x170/0x240
>>>>>>>>> [ 442.538768][ T5224] el0_svc+0x8/0xc
>>>>>>>>
>>>>>>>> I believe the lock is not really needed here. We do not deallocate
>>>>>>>> the pgdat of a hot-removed node nor destroy the slab state, because
>>>>>>>> existing slabs would prevent hotremove from continuing in the first place.
>>>>>>>>
>>>>>>>> There are likely details to be checked of course but the lock just seems
>>>>>>>> bogus.
>>>>>>>
>>>>>>> Check 03afc0e25f7f ("slab: get_online_mems for
>>>>>>> kmem_cache_{create,destroy,shrink}"). It actually talks about the races during
>>>>>>> memory as well as cpu hotplug, so it might even be that the cpu_hotplug_lock
>>>>>>> removal is problematic?
>>>>>>>
>>>>>>
>>>>>> Which removal are you referring to? get_online_mems() does not mess with
>>>>>> the cpu hotplug lock (and therefore this patch).
>>>>>
>>>>> The one in your patch. I suspect there might be races among the whole NUMA node
>>>>> hotplug, kmem_cache_create, and show_slab_objects(). See bfc8c90139eb ("mem-
>>>>> hotplug: implement get/put_online_mems")
>>>>>
>>>>> "kmem_cache_{create,destroy,shrink} need to get a stable value of cpu/node
>>>>> online mask, because they init/destroy/access per-cpu/node kmem_cache parts,
>>>>> which can be allocated or destroyed on cpu/mem hotplug."
>>>>
>>>> I still have to grasp that code but if the slub allocator really needs
>>>> a stable cpu mask then it should be using the explicit cpu hotplug
>>>> locking rather than rely on side effect of memory hotplug locking.
>>>>
>>>>> Both online_pages() and show_slab_objects() need to get a stable value of
>>>>> cpu/node online mask.
>>>>
>>>> Could you be more specific why online_pages() needs a stable cpu online
>>>> mask? I do not think that show_slab_objects() is a real problem because a
>>>> potential race shouldn't be critical.
>>>
>>> build_all_zonelists()
>>> __build_all_zonelists()
>>> for_each_online_cpu(cpu)
>>>
>>
>> Two things:
>>
>> a) We currently always hold the device hotplug lock when onlining memory
>> and when onlining cpus (for CPUs at least via user space - we would have
>> to double check other call paths). So theoretically, that should guard
>> us from something like that already.
>>
>> b)
>>
>> commit 11cd8638c37f6c400cc472cc52b6eccb505aba6e
>> Author: Michal Hocko <mhocko@...e.com>
>> Date: Wed Sep 6 16:20:34 2017 -0700
>>
>> mm, page_alloc: remove stop_machine from build_all_zonelists
>>
>> Tells me:
>>
>> "Updates of the zonelists happen very seldom, basically only when a zone
>> becomes populated during memory online or when it loses all the memory
>> during offline. A racing iteration over zonelists could either miss a
>> zone or try to work on one zone twice. Both of these are something we
>> can live with occasionally because there will always be at least one
>> zone visible so we are not likely to fail allocation too easily for
>> example."
>>
>> Sounds like even if there were a race, we could live with it, if I am not
>> getting that totally wrong.
>>
>
> What's the problem you are trying to solve? Why is it more important to live
> with races than to keep the code correct?
I am trying to understand, fix, clean up, and document the locking mess we
have in the memory hotplug code.
The cpu hotplug lock is one of those things nobody really knows why it is
still needed. It imposes a locking order (e.g., it has to be taken before
the memory hotplug lock), and we take the cpu hotplug lock even when we do
add_memory()/remove_memory(), not only when onlining pages.
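For reference, the nesting in question looks roughly like this today (a
simplified sketch of mem_hotplug_begin()/mem_hotplug_done(); a kernel
fragment, not meant to compile standalone):

```c
/* Simplified sketch: the cpu hotplug lock is currently taken first,
 * unconditionally, for every memory hotplug operation -- which is
 * exactly the ordering constraint described above. */
void mem_hotplug_begin(void)
{
	cpus_read_lock();			/* cpu hotplug lock, taken first */
	percpu_down_write(&mem_hotplug_lock);	/* ... then the memory hotplug lock */
}

void mem_hotplug_done(void)
{
	percpu_up_write(&mem_hotplug_lock);
	cpus_read_unlock();
}
```

The patch under discussion would drop the cpus_read_lock()/cpus_read_unlock()
pair from this path.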
So if we agree that we need it here, I'll add documentation - especially
to build_all_zonelists(). If we agree it can go, I'll add documentation
why we don't need it in build_all_zonelists().
I am not yet convinced that we need the lock here. As I said, we do hold
the device_hotplug_lock, which all sysfs
/sys/devices/system/whatever/online modifications take, and Michal even
documented why we can live with very rare races (again, if they are
possible at all).
I'd like to hear what Michal thinks. If we do want the cpu hotplug lock,
we can at least restrict it to the call paths (e.g., online_pages())
where the lock is really needed and document that.
--
Thanks,
David / dhildenb