Message-ID: <CALvZod6j5E2MXFz463LRcrP6XY8FedzLUKW88c=ZY39B6mYyMA@mail.gmail.com>
Date: Thu, 14 Sep 2023 10:36:28 -0700
From: Shakeel Butt <shakeelb@...gle.com>
To: Yosry Ahmed <yosryahmed@...gle.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
Johannes Weiner <hannes@...xchg.org>,
Michal Hocko <mhocko@...nel.org>,
Roman Gushchin <roman.gushchin@...ux.dev>,
Muchun Song <muchun.song@...ux.dev>,
Ivan Babrou <ivan@...udflare.com>, Tejun Heo <tj@...nel.org>,
Michal Koutný <mkoutny@...e.com>,
Waiman Long <longman@...hat.com>, kernel-team@...udflare.com,
Wei Xu <weixugc@...gle.com>, Greg Thelen <gthelen@...gle.com>,
linux-mm@...ck.org, cgroups@...r.kernel.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH 3/3] mm: memcg: optimize stats flushing for latency and accuracy
On Wed, Sep 13, 2023 at 12:38 AM Yosry Ahmed <yosryahmed@...gle.com> wrote:
>
> Stats flushing for memcg currently follows the following rules:
> - Always flush the entire memcg hierarchy (i.e. flush the root).
> - Only one flusher is allowed at a time. If someone else tries to flush
> concurrently, they skip and return immediately.
> - A periodic flusher flushes all the stats every 2 seconds.
>
> This approach is followed because all flushes are
> serialized by a global rstat spinlock. On the memcg side, flushing is
> invoked from userspace reads as well as in-kernel flushers (e.g.
> reclaim, refault, etc). This approach aims to avoid serializing all
> flushers on the global lock, which can cause a significant performance
> hit under high concurrency.
>
> This approach has the following problems:
> - Occasionally a userspace read of the stats of a non-root cgroup will
> be too expensive as it has to flush the entire hierarchy [1].
This is a real-world workload exhibiting the issue, which is good.
> - Sometimes stats accuracy is compromised if there is an ongoing
> flush, and we skip and return before the subtree of interest is
> actually flushed. This is more visible when reading stats from
> userspace, but can also affect in-kernel flushers.
Please provide similar data/justification for the above. In addition:
1. How much staleness/delay in the stats have you observed on real-world workloads?
2. What is acceptable staleness in the stats for your use-case?
3. What is your use-case?
4. Does your use-case care about staleness of all the stats in
memory.stat or some specific stats?
5. If only some specific stats in memory.stat, does it make sense to
decouple them from rstat and just pay the price up front to maintain
them accurately?
Most importantly, please please please be concise in your responses.
I know I am going back on some of the previous agreements, but this
whole locking back-and-forth has made me question the original
motivation.
thanks,
Shakeel