Message-ID: <91f48b11-b6ff-39ab-947e-341920771e0f@suse.cz>
Date: Mon, 11 Jan 2021 18:55:26 +0100
From: Vlastimil Babka <vbabka@...e.cz>
To: Christoph Lameter <cl@...ux.com>
Cc: linux-mm@...ck.org, linux-kernel@...r.kernel.org,
Pekka Enberg <penberg@...nel.org>,
David Rientjes <rientjes@...gle.com>,
Joonsoo Kim <iamjoonsoo.kim@....com>,
Vladimir Davydov <vdavydov.dev@...il.com>,
Qian Cai <cai@...hat.com>,
David Hildenbrand <david@...hat.com>,
Michal Hocko <mhocko@...nel.org>
Subject: Re: [RFC 0/3] mm, slab, slub: remove cpu and memory hotplug locks

On 1/6/21 8:09 PM, Christoph Lameter wrote:
> On Wed, 6 Jan 2021, Vlastimil Babka wrote:
>
>> rather accept some wasted memory in scenarios that should be rare anyway (full
>> memory hot remove), as we do the same in other contexts already. It's all RFC
>> for now, as I might have missed some reason why it's not safe.
>
> Looks good to me. My only concern is the kernel that has hotplug disabled.
> Current code allows the online/offline checks to be optimized away.
>
> Can this patch be enhanced to do the same?

Thanks, indeed I didn't think about that.
But on closer inspection, there doesn't seem to be a need for such an enhancement:
- Patch 1 adds the new slab_nodes nodemask, which is basically a copy of
N_NORMAL_MEMORY. Without memory hotplug, the callbacks that would update it
don't occur (maybe they are even eliminated as dead code?), and other code that
uses the nodemask is unaffected wrt performance; it just iterates a different
nodemask for the same operations (see the nodemask sketch below the list). The
extra memory usage of adding the nodemask is negligible and not worth
complicating the code to distinguish between the new nodemask and
N_NORMAL_MEMORY depending on whether hotplug is disabled or enabled.
- Patch 1 also restores the slab_mutex lock in kmem_cache_shrink(). Commit
03afc0e25f7f removed it because the memory hotplug lock was deemed a sufficient
replacement, but probably didn't consider the case where hotplug is disabled
and thus the hotplug lock is a no-op. Maybe it's safe not to take slab_mutex in
kmem_cache_shrink() in that case, but kmem_cache_shrink() is only called from a
sysfs handler, thus a very cold path anyway.
But I found out that lockdep complains about it, so I have to rethink this part
anyway... (when kmem_cache_shrink() is called from a write to the 'shrink' file
we already hold the kernfs lock "kn->active#28" and try to take slab_mutex, but
there's an existing dependency in the reverse order: kmem_cache_create() starts
with slab_mutex and sysfs_slab_add() takes the kernfs lock; see the lock order
sketch below the list. I wonder how this wasn't a problem before 03afc0e25f7f.)
- Patch 2 just removes calls to the cpu hotplug lock.
- Patch 3 only affects memory hotplug callbacks, so there is nothing to enhance
if hotplug is disabled.
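
To illustrate the patch 1 idea, a simplified sketch (not the exact hunks from
the series; names roughly follow mm/slub.c, and the real callback and
iteration sites handle more than this):

	/* SLUB's own nodemask, seeded from N_NORMAL_MEMORY at init */
	static nodemask_t slab_nodes;

	static int slab_mem_going_online_callback(void *arg)
	{
		struct memory_notify *marg = arg;
		int nid = marg->status_change_nid_normal;

		/* no node is gaining its first normal memory, nothing to track */
		if (nid < 0)
			return 0;

		mutex_lock(&slab_mutex);
		node_set(nid, slab_nodes);
		mutex_unlock(&slab_mutex);
		return 0;
	}

	/*
	 * Users then iterate slab_nodes instead of doing
	 * for_each_node_state(node, N_NORMAL_MEMORY), e.g. when setting
	 * up per-node structures for a cache:
	 */
	for_each_node_mask(node, slab_nodes) {
		/* allocate and init the struct kmem_cache_node for 'node' */
	}

Without memory hotplug the callback never fires, so the only difference is
which nodemask gets iterated, i.e. the same amount of work.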
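
And for reference, the lock ordering lockdep complains about, roughly (call
chains from my reading of the splat, exact frames may differ):

	write to /sys/kernel/slab/<cache>/shrink    kmem_cache_create()
	  kernfs_fop_write()                          mutex_lock(&slab_mutex)
	    takes kernfs active ref "kn->active"      sysfs_slab_add()
	    shrink_store()                              kernfs takes "kn->active"
	      kmem_cache_shrink()
	        mutex_lock(&slab_mutex)

i.e. kn->active -> slab_mutex in one path and slab_mutex -> kn->active in the
other, the classic AB-BA ordering that lockdep reports.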