Message-ID: <20230720070825.992023-2-yosryahmed@google.com>
Date: Thu, 20 Jul 2023 07:08:18 +0000
From: Yosry Ahmed <yosryahmed@...gle.com>
To: Andrew Morton <akpm@...ux-foundation.org>,
Johannes Weiner <hannes@...xchg.org>,
Michal Hocko <mhocko@...nel.org>,
Roman Gushchin <roman.gushchin@...ux.dev>,
Shakeel Butt <shakeelb@...gle.com>
Cc: Muchun Song <muchun.song@...ux.dev>,
"Matthew Wilcox (Oracle)" <willy@...radead.org>,
Tejun Heo <tj@...nel.org>, Zefan Li <lizefan.x@...edance.com>,
Yu Zhao <yuzhao@...gle.com>,
Luis Chamberlain <mcgrof@...nel.org>,
Kees Cook <keescook@...omium.org>,
Iurii Zaikin <yzaikin@...gle.com>,
"T.J. Mercier" <tjmercier@...gle.com>,
Greg Thelen <gthelen@...gle.com>, linux-kernel@...r.kernel.org,
linux-mm@...ck.org, cgroups@...r.kernel.org,
Yosry Ahmed <yosryahmed@...gle.com>
Subject: [RFC PATCH 1/8] memcg: refactor updating memcg->moving_account
memcg->moving_account is used to signal that a memcg move is taking
place, so that folio_memcg_lock() acquires the per-memcg move lock
instead of just entering an RCU read-side critical section.
Refactor incrementing and decrementing memcg->moving_account, together
with the RCU synchronization and the elaborate comment, into helpers to
allow for reuse by incoming patches.
Signed-off-by: Yosry Ahmed <yosryahmed@...gle.com>
---
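For context, the consumer side of this flag looks roughly as follows (a
simplified sketch of folio_memcg_lock() in mm/memcontrol.c; details are
elided, so this is illustrative rather than the verbatim mainline code):

	rcu_read_lock();
again:
	memcg = folio_memcg(folio);
	if (atomic_read(&memcg->moving_account) <= 0)
		return;		/* no move in flight: RCU alone suffices */

	/* a move is in flight: serialize against the mover */
	spin_lock_irqsave(&memcg->move_lock, flags);
	if (memcg != folio_memcg(folio)) {
		/* the folio moved under us; retry against the new memcg */
		spin_unlock_irqrestore(&memcg->move_lock, flags);
		goto again;
	}

The synchronize_rcu() in the new mem_cgroup_start_move_charge() helper
guarantees that any reader that sampled moving_account as zero, and
therefore relied on RCU alone, has exited its read section before the
move actually starts.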
mm/memcontrol.c | 18 ++++++++++++++----
1 file changed, 14 insertions(+), 4 deletions(-)
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index e8ca4bdcb03c..ffdb848f4003 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -6305,16 +6305,26 @@ static const struct mm_walk_ops charge_walk_ops = {
.pmd_entry = mem_cgroup_move_charge_pte_range,
};
-static void mem_cgroup_move_charge(void)
+static void mem_cgroup_start_move_charge(struct mem_cgroup *memcg)
{
- lru_add_drain_all();
/*
* Signal folio_memcg_lock() to take the memcg's move_lock
* while we're moving its pages to another memcg. Then wait
* for already started RCU-only updates to finish.
*/
- atomic_inc(&mc.from->moving_account);
+ atomic_inc(&memcg->moving_account);
synchronize_rcu();
+}
+
+static void mem_cgroup_end_move_charge(struct mem_cgroup *memcg)
+{
+ atomic_dec(&memcg->moving_account);
+}
+
+static void mem_cgroup_move_charge(void)
+{
+ lru_add_drain_all();
+ mem_cgroup_start_move_charge(mc.from);
retry:
if (unlikely(!mmap_read_trylock(mc.mm))) {
/*
@@ -6334,7 +6344,7 @@ static void mem_cgroup_move_charge(void)
*/
walk_page_range(mc.mm, 0, ULONG_MAX, &charge_walk_ops, NULL);
mmap_read_unlock(mc.mm);
- atomic_dec(&mc.from->moving_account);
+ mem_cgroup_end_move_charge(mc.from);
}
static void mem_cgroup_move_task(void)
--
2.41.0.255.g8b1d071c50-goog