Message-Id: <20210106174029.12654-1-vbabka@suse.cz>
Date: Wed, 6 Jan 2021 18:40:26 +0100
From: Vlastimil Babka <vbabka@...e.cz>
To: linux-mm@...ck.org
Cc: linux-kernel@...r.kernel.org, Christoph Lameter <cl@...ux.com>,
Pekka Enberg <penberg@...nel.org>,
David Rientjes <rientjes@...gle.com>,
Joonsoo Kim <iamjoonsoo.kim@....com>,
Vladimir Davydov <vdavydov.dev@...il.com>,
Qian Cai <cai@...hat.com>,
David Hildenbrand <david@...hat.com>,
Michal Hocko <mhocko@...nel.org>,
Vlastimil Babka <vbabka@...e.cz>
Subject: [RFC 0/3] mm, slab, slub: remove cpu and memory hotplug locks
Hi,

some related work caused me to look at how we use get/put_online_mems() and
get/put_online_cpus() during kmem cache creation/destruction/shrinking, and to
realize that it should actually be safe to remove all of that with rather small
effort (as e.g. Michal Hocko already suspected in some past discussions). This
has the benefit of avoiding rather heavy locks that have caused lock ordering
issues in the past.

So this is the result: Patches 1 and 2 remove the memory hotplug and cpu
hotplug locking, respectively. Patch 3 is due to the realization that some
races in fact exist despite the locks (even if they were not removed), but the
sanest solution is not to introduce more locking; instead we accept some wasted
memory in scenarios that should be rare anyway (full memory hot remove), as we
already do the same in other contexts.

It's all RFC for now, as I might have missed some reason why it's not safe.
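For reference, below is a simplified sketch (not the actual kernel source) of
the locking pattern around kmem cache creation that patches 1 and 2 remove;
sketch_create_cache() is a hypothetical stand-in for the real creation code
under slab_mutex, and the surrounding hotplug get/put calls are the heavy
locks discussed above:

	/*
	 * Simplified sketch of the pre-series cache creation path.  The
	 * cpu/memory hotplug locks only pin the set of online cpus/nodes
	 * while the cache and its per-cpu/per-node structures are set up
	 * under slab_mutex (slab_mutex is internal, from mm/slab.h).
	 */
	struct kmem_cache *sketch_kmem_cache_create(const char *name,
						    unsigned int size,
						    slab_flags_t flags)
	{
		struct kmem_cache *s;

		get_online_cpus();	/* cpu hotplug lock - removed by patch 2 */
		get_online_mems();	/* memory hotplug lock - removed by patch 1 */

		mutex_lock(&slab_mutex);
		/* hypothetical helper standing in for the real creation code */
		s = sketch_create_cache(name, size, flags);
		mutex_unlock(&slab_mutex);

		put_online_mems();
		put_online_cpus();

		return s;
	}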
Vlastimil Babka (3):
mm, slab, slub: stop taking memory hotplug lock
mm, slab, slub: stop taking cpu hotplug lock
mm, slub: stop freeing kmem_cache_node structures on node offline
mm/slab_common.c | 20 ++++--------------
mm/slub.c | 54 ++++++++++++++++++++++++++++++++----------------
2 files changed, 40 insertions(+), 34 deletions(-)
--
2.29.2