Message-ID: <ZNNTgZVPZipTL/UM@dhcp22.suse.cz>
Date: Wed, 9 Aug 2023 10:51:13 +0200
From: Michal Hocko <mhocko@...e.com>
To: Yosry Ahmed <yosryahmed@...gle.com>
Cc: Johannes Weiner <hannes@...xchg.org>,
Roman Gushchin <roman.gushchin@...ux.dev>,
Shakeel Butt <shakeelb@...gle.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Muchun Song <muchun.song@...ux.dev>, cgroups@...r.kernel.org,
linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] mm: memcg: provide accurate stats for userspace reads

On Wed 09-08-23 04:58:10, Yosry Ahmed wrote:
> Over time, the memcg code added multiple optimizations to the stats
> flushing path that introduce a tradeoff between accuracy and
> performance. In some contexts (e.g. dirty throttling, refaults, etc), a
> full rstat flush of the stats in the tree can be too expensive. Such
> optimizations include [1]:
> (a) Introducing a periodic background flusher to keep the size of the
> update tree from growing unbounded.
> (b) Allowing only one thread to flush at a time, and other concurrent
> flushers just skip the flush. This avoids a thundering herd problem
> when multiple reclaim/refault threads attempt to flush the stats at
> once.
> (c) Only executing a flush if the magnitude of the stats updates exceeds
> a certain threshold.
>
> These optimizations were necessary to make flushing feasible in
> performance-critical paths, and they come at the cost of some accuracy
> that we choose to live without. On the other hand, for flushes invoked
> when userspace is reading the stats, the tradeoff is less appealing.
> This code path is not performance-critical, and the inaccuracies can
> affect userspace behavior. For example, skipping flushing when there is
> another ongoing flush is essentially a coin flip. We don't know if the
> ongoing flush is done with the subtree of interest or not.
I am not convinced by this much TBH. What kind of precision do you
really need, and how far off is what we currently provide?

A more expensive read of the stats from userspace is quite easy to
notice and is usually reported as a regression. So you should have a
convincing argument that the extra time spent is really worth it. AFAIK
there are many monitoring (top-like) tools which simply read those files
regularly just to show numbers, and they certainly do not need a high
level of precision.
[...]
> @@ -639,17 +639,24 @@ static inline void memcg_rstat_updated(struct mem_cgroup *memcg, int val)
> }
> }
>
> -static void do_flush_stats(void)
> +static void do_flush_stats(bool full)
> {
> + if (!atomic_read(&stats_flush_ongoing) &&
> + !atomic_xchg(&stats_flush_ongoing, 1))
> + goto flush;
> +
> /*
> - * We always flush the entire tree, so concurrent flushers can just
> - * skip. This avoids a thundering herd problem on the rstat global lock
> - * from memcg flushers (e.g. reclaim, refault, etc).
> + * We always flush the entire tree, so concurrent flushers can choose to
> + * skip if accuracy is not critical. Otherwise, wait for the ongoing
> + * flush to complete. This avoids a thundering herd problem on the rstat
> + * global lock from memcg flushers (e.g. reclaim, refault, etc).
> */
> - if (atomic_read(&stats_flush_ongoing) ||
> - atomic_xchg(&stats_flush_ongoing, 1))
> - return;
> -
> + while (full && atomic_read(&stats_flush_ongoing) == 1) {
> + if (!cond_resched())
> + cpu_relax();
You are reinventing a mutex with a spinning waiter. Why don't you simply
make stats_flush_ongoing a real mutex and use mutex_trylock for the
!full flush and a normal mutex_lock otherwise?
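Something along these lines (an untested sketch; I am only keeping the
parts of do_flush_stats visible in this hunk, and the mutex name is
made up):

static DEFINE_MUTEX(stats_flush_mutex);

static void do_flush_stats(bool full)
{
	/*
	 * Callers that do not need accuracy (!full) can just skip if
	 * somebody else is already flushing, preserving the existing
	 * thundering herd avoidance. Full flushers sleep on the mutex
	 * until the ongoing flush completes and then perform their own
	 * flush, so they never return with a stale tree.
	 */
	if (full)
		mutex_lock(&stats_flush_mutex);
	else if (!mutex_trylock(&stats_flush_mutex))
		return;

	WRITE_ONCE(flush_next_time, jiffies_64 + 2*FLUSH_TIME);
	cgroup_rstat_flush(root_mem_cgroup->css.cgroup);
	mutex_unlock(&stats_flush_mutex);
}

cgroup_rstat_flush can sleep, so sleeping on the mutex from these
contexts should be fine, and you get rid of the open-coded spinning
loop entirely.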
> + }
> + return;
> +flush:
> WRITE_ONCE(flush_next_time, jiffies_64 + 2*FLUSH_TIME);
>
> cgroup_rstat_flush(root_mem_cgroup->css.cgroup);
[...]
--
Michal Hocko
SUSE Labs