Message-ID: <20230830175335.1536008-4-yosryahmed@google.com>
Date: Wed, 30 Aug 2023 17:53:34 +0000
From: Yosry Ahmed <yosryahmed@...gle.com>
To: Andrew Morton <akpm@...ux-foundation.org>
Cc: Johannes Weiner <hannes@...xchg.org>,
Michal Hocko <mhocko@...nel.org>,
Roman Gushchin <roman.gushchin@...ux.dev>,
Shakeel Butt <shakeelb@...gle.com>,
Muchun Song <muchun.song@...ux.dev>,
Ivan Babrou <ivan@...udflare.com>, Tejun Heo <tj@...nel.org>,
"Michal Koutný" <mkoutny@...e.com>,
Waiman Long <longman@...hat.com>, linux-mm@...ck.org,
cgroups@...r.kernel.org, linux-kernel@...r.kernel.org,
Yosry Ahmed <yosryahmed@...gle.com>
Subject: [PATCH v3 3/4] mm: memcg: let non-unified root stats flushes help
unified flushes
Unified flushing of memcg stats keeps track of the magnitude of pending
updates, and only allows a flush if that magnitude exceeds a threshold.
It also keeps track of the time at which ratelimited flushing should be
allowed as flush_next_time.
A non-unified flush on the root memcg has the same effect as a unified
flush, so let it help unified flushing by resetting pending updates and
kicking flush_next_time forward. Move the logic into the common
do_stats_flush() helper, and do it for all root flushes, unified or
not.
There is a subtle change here: we reset stats_flush_threshold before a
flush rather than after it. This is probably okay because:
(a) For flushers: only unified flushers check stats_flush_threshold, and
those flushers skip anyway if there is another unified flush ongoing.
Having them also skip if there is an ongoing non-unified root flush is
actually more consistent.
(b) For updaters: Resetting stats_flush_threshold early may lead to more
atomic updates of stats_flush_threshold, as we start updating it
earlier. This should not be significant in practice because we stop
updating stats_flush_threshold when it reaches the threshold anyway. If
we start early and stop early, the number of atomic updates remains the
same. The only difference is the scenario where we reset
stats_flush_threshold early, start doing atomic updates early, and then
the periodic flusher kicks in before we reach the threshold. In this
case, we will have done more atomic updates. However, since the
threshold wasn't reached, we did not do a lot of updates anyway.
Suggested-by: Michal Koutný <mkoutny@...e.com>
Signed-off-by: Yosry Ahmed <yosryahmed@...gle.com>
---
mm/memcontrol.c | 8 +++++---
1 file changed, 5 insertions(+), 3 deletions(-)
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 8c046feeaae7..94d5a6751a9e 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -647,6 +647,11 @@ static inline void memcg_rstat_updated(struct mem_cgroup *memcg, int val)
*/
static void do_stats_flush(struct mem_cgroup *memcg)
{
+ /* for unified flushing, root non-unified flushing can help as well */
+ if (mem_cgroup_is_root(memcg)) {
+ WRITE_ONCE(flush_next_time, jiffies_64 + 2*FLUSH_TIME);
+ atomic_set(&stats_flush_threshold, 0);
+ }
cgroup_rstat_flush(memcg->css.cgroup);
}
@@ -665,11 +670,8 @@ static void do_unified_stats_flush(void)
atomic_xchg(&stats_unified_flush_ongoing, 1))
return;
- WRITE_ONCE(flush_next_time, jiffies_64 + 2*FLUSH_TIME);
-
do_stats_flush(root_mem_cgroup);
- atomic_set(&stats_flush_threshold, 0);
atomic_set(&stats_unified_flush_ongoing, 0);
}
--
2.42.0.rc2.253.gd59a3bf2b4-goog