Message-ID: <aa8a8deb-0fdb-9408-48d4-adadb5602d72@redhat.com>
Date: Wed, 8 Dec 2021 14:00:01 -0500
From: Nico Pache <npache@...hat.com>
To: Michal Hocko <mhocko@...e.com>,
Kirill Tkhai <ktkhai@...tuozzo.com>,
David Hildenbrand <david@...hat.com>
Cc: linux-kernel@...r.kernel.org, linux-mm@...ck.org,
akpm@...ux-foundation.org, shakeelb@...gle.com,
shy828301@...il.com, guro@...com, vbabka@...e.cz,
vdavydov.dev@...il.com, raquini@...hat.com
Subject: Re: [RFC PATCH 2/2] mm/vmscan.c: Prevent allocating shrinker_info on
offlined nodes
On 12/6/21 08:24, Michal Hocko wrote:
> On Mon 06-12-21 16:19:12, Kirill Tkhai wrote:
>> On 06.12.2021 13:45, David Hildenbrand wrote:
>>>> This doesn't seem complete. Slab shrinkers are used in the reclaim
>>>> context. Previously offline nodes could be onlined later and this would
>>>> lead to NULL ptr because there is no hook to allocate new shrinker
>>>> infos. This would also be really impractical because this would have to
>>>> update all existing memcgs...
>>>
>>> Instead of going through the trouble of updating...
>>>
>>> ... maybe just keep for_each_node() and check if the target node is
>>> offline. If it's offline, just allocate from the first online node.
>>> After all, we're not using __GFP_THISNODE, so there are no guarantees
>>> either way ...
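
Just to make sure I follow: that would presumably be a small change at the top
of alloc_mem_cgroup_per_node_info(), something like the below (untested,
first_online_node used as the fallback target):

	struct mem_cgroup_per_node *pn;
	int tmp = node;

	/*
	 * Never hand an offlined node to the allocator; fall back to the
	 * first online node instead.  We do not pass __GFP_THISNODE, so
	 * there is no strict placement guarantee to begin with.
	 */
	if (!node_online(node))
		tmp = first_online_node;
	else if (!node_state(node, N_NORMAL_MEMORY))
		tmp = -1;

	pn = kzalloc_node(sizeof(*pn), GFP_KERNEL, tmp);
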
>>
>> Hm, can't we add shrinker maps allocation to __try_online_node() in addition
>> to this patch?
>
> Either that or through hotplug notifier (which would be a better
> solution). But allocating a new shrinker map for each memcg would have
> to be done as has been mentioned earlier.
I took a stab at this approach. It may be incomplete, but please let me know
what you think. This would go on top of this series.
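
As an aside, if we went the hotplug-notifier route Michal mentions, I imagine
it would look roughly like the untested sketch below (the names are made up;
it reuses the same per-memcg loop and assumes alloc_mem_cgroup_per_node_info()
is made non-static as in the patch). The actual patch, which hooks
register_one_node() instead, follows.

/*
 * Sketch only: when a node comes online, allocate the per-node info for
 * every existing memcg.  Error handling is kept minimal.
 */
static int memcg_memory_callback(struct notifier_block *self,
				 unsigned long action, void *arg)
{
	struct memory_notify *mn = arg;
	struct mem_cgroup *memcg;
	int nid = mn->status_change_nid;

	if (action != MEM_ONLINE || nid == NUMA_NO_NODE)
		return NOTIFY_OK;

	/* Walk the memcg hierarchy and allocate the new node's info */
	memcg = mem_cgroup_iter(NULL, NULL, NULL);
	do {
		if (alloc_mem_cgroup_per_node_info(memcg, nid)) {
			mem_cgroup_iter_break(NULL, memcg);
			return notifier_from_errno(-ENOMEM);
		}
	} while ((memcg = mem_cgroup_iter(NULL, memcg, NULL)) != NULL);

	return NOTIFY_OK;
}

static struct notifier_block memcg_memory_nb = {
	.notifier_call = memcg_memory_callback,
};

/* registered once, e.g. from mem_cgroup_init(): */
	register_memory_notifier(&memcg_memory_nb);

The notifier has the advantage of only touching nodes that actually come
online, rather than pre-allocating for every possible node, at the cost of
iterating all memcgs from the hotplug path.
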
diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 0c5c403f4be6..6c842382fa73 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -520,6 +520,7 @@ static inline struct mem_cgroup *page_memcg_check(struct page *page)
return (struct mem_cgroup *)(memcg_data & ~MEMCG_DATA_FLAGS_MASK);
}
+int alloc_mem_cgroup_per_node_info(struct mem_cgroup *memcg, int node);
#ifdef CONFIG_MEMCG_KMEM
/*
* folio_memcg_kmem - Check if the folio has the memcg_kmem flag set.
diff --git a/include/linux/node.h b/include/linux/node.h
index bb21fd631b16..5e8c737ea751 100644
--- a/include/linux/node.h
+++ b/include/linux/node.h
@@ -19,7 +19,7 @@
#include <linux/cpumask.h>
#include <linux/list.h>
#include <linux/workqueue.h>
-
+#include <linux/memcontrol.h>
/**
* struct node_hmem_attrs - heterogeneous memory performance attributes
*
@@ -118,6 +118,7 @@ extern int __register_one_node(int nid);
/* Registers an online node */
static inline int register_one_node(int nid)
{
+ struct mem_cgroup *memcg;
int error = 0;
if (node_online(nid)) {
@@ -130,6 +131,14 @@ static inline int register_one_node(int nid)
return error;
/* link memory sections under this node */
link_mem_sections(nid, start_pfn, end_pfn, MEMINIT_EARLY);
+ /* Allocate per-node info for every existing memcg */
+ memcg = mem_cgroup_iter(NULL, NULL, NULL);
+ do {
+ if (alloc_mem_cgroup_per_node_info(memcg, nid)) {
+ mem_cgroup_iter_break(NULL, memcg);
+ return -ENOMEM;
+ }
+ } while ((memcg = mem_cgroup_iter(NULL, memcg, NULL)) != NULL);
}
return error;
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 6863a834ed42..2d55fad3229b 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -5041,18 +5041,11 @@ struct mem_cgroup *mem_cgroup_from_id(unsigned short id)
return idr_find(&mem_cgroup_idr, id);
}
-static int alloc_mem_cgroup_per_node_info(struct mem_cgroup *memcg, int node)
+int alloc_mem_cgroup_per_node_info(struct mem_cgroup *memcg, int node)
{
struct mem_cgroup_per_node *pn;
int tmp = node;
- /*
- * This routine is called against possible nodes.
- * But it's BUG to call kmalloc() against offline node.
- *
- * TODO: this routine can waste much memory for nodes which will
- * never be onlined. It's better to use memory hotplug callback
- * function.
- */
+
if (!node_state(node, N_NORMAL_MEMORY))
tmp = -1;
pn = kzalloc_node(sizeof(*pn), GFP_KERNEL, tmp);
@@ -5130,7 +5123,7 @@ static struct mem_cgroup *mem_cgroup_alloc(void)
if (!memcg->vmstats_percpu)
goto fail;
- for_each_node(node)
+ for_each_online_node(node)
if (alloc_mem_cgroup_per_node_info(memcg, node))
goto fail;