Message-ID: <CAJD7tkaTa8XTVjga0jijjajXzRAGFN4QMvEspiJ8M1f9YA_yBA@mail.gmail.com>
Date: Wed, 29 Mar 2023 11:45:05 -0700
From: Yosry Ahmed <yosryahmed@...gle.com>
To: Michal Hocko <mhocko@...e.com>
Cc: Tejun Heo <tj@...nel.org>, Josef Bacik <josef@...icpanda.com>,
Jens Axboe <axboe@...nel.dk>,
Zefan Li <lizefan.x@...edance.com>,
Johannes Weiner <hannes@...xchg.org>,
Roman Gushchin <roman.gushchin@...ux.dev>,
Shakeel Butt <shakeelb@...gle.com>,
Muchun Song <muchun.song@...ux.dev>,
Andrew Morton <akpm@...ux-foundation.org>,
Michal Koutný <mkoutny@...e.com>,
Vasily Averin <vasily.averin@...ux.dev>,
cgroups@...r.kernel.org, linux-block@...r.kernel.org,
linux-kernel@...r.kernel.org, linux-mm@...ck.org,
bpf@...r.kernel.org
Subject: Re: [PATCH v2 5/9] memcg: replace stats_flush_lock with an atomic
On Wed, Mar 29, 2023 at 8:58 AM Michal Hocko <mhocko@...e.com> wrote:
>
> On Tue 28-03-23 22:16:40, Yosry Ahmed wrote:
> > As Johannes notes in [1], stats_flush_lock is currently used to:
> > (a) Protect updates to stats_flush_threshold.
> > (b) Protect updates to flush_next_time.
> > (c) Serialize calls to cgroup_rstat_flush() based on those ratelimits.
> >
> > However:
> >
> > 1. stats_flush_threshold is already an atomic
> >
> > 2. flush_next_time is not atomic. The writer is locked, but the reader
> > is lockless. If the reader races with a flush, you could see this:
> >
> >                                         if (time_after(jiffies, flush_next_time))
> >         spin_trylock()
> >         flush_next_time = now + delay
> >         flush()
> >         spin_unlock()
> >                                         spin_trylock()
> >                                         flush_next_time = now + delay
> >                                         flush()
> >                                         spin_unlock()
> >
> > which means we can already get flushes at a higher frequency than
> > FLUSH_TIME during races. But it isn't really a problem.
> >
> > The reader could also see garbled partial updates, so it needs at
> > least READ_ONCE and WRITE_ONCE protection.
>
> Just a nit: this sounds more serious than it actually is. It would only
> happen if the compiler decides to split the write.
Thanks for the note, Michal. I honestly quoted Johannes here as I do
not have much expertise when it comes to this. I will add "if the
compiler decides to split the write" to the commit log if I respin.
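
For the record, the concern here is plain load/store tearing:
flush_next_time is a u64, and without the annotations the compiler is
free to split a plain access (e.g. into two 32-bit stores on a 32-bit
machine), so a lockless reader could observe a half-written value.
Quoting the relevant hunks of the patch below, the usual pair handles it:

	/* writer: publish the next deadline with a single store */
	WRITE_ONCE(flush_next_time, jiffies_64 + 2*FLUSH_TIME);

	/* lockless reader: load the deadline with a single load */
	if (time_after64(jiffies_64, READ_ONCE(flush_next_time)))
		mem_cgroup_flush_stats();

READ_ONCE()/WRITE_ONCE() only stop the compiler from tearing, fusing,
or re-loading the access; they add no memory ordering, which is fine
here since an occasionally stale deadline is harmless.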
>
> > 3. Serializing cgroup_rstat_flush() calls against the ratelimit
> > factors is currently broken because of the race in 2. But the race
> > is actually harmless; all we might get is the occasional earlier
> > flush. If there is no delta, the flush won't do much. And if there
> > is, the flush is justified.
> >
> > So the lock can be removed altogether. However, the lock also served
> > the purpose of preventing a thundering herd problem for concurrent
> > flushers, see [2]. Use an atomic instead to serve that same purpose
> > and unify concurrent flushers.
> >
> > [1] https://lore.kernel.org/lkml/20230323172732.GE739026@cmpxchg.org/
> > [2] https://lore.kernel.org/lkml/20210716212137.1391164-2-shakeelb@google.com/
> >
> > Signed-off-by: Yosry Ahmed <yosryahmed@...gle.com>
> > Acked-by: Johannes Weiner <hannes@...xchg.org>
>
> Acked-by: Michal Hocko <mhocko@...e.com>
>
> > ---
> > mm/memcontrol.c | 18 +++++++++++-------
> > 1 file changed, 11 insertions(+), 7 deletions(-)
> >
> > diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> > index ff39f78f962e..65750f8b8259 100644
> > --- a/mm/memcontrol.c
> > +++ b/mm/memcontrol.c
> > @@ -585,8 +585,8 @@ mem_cgroup_largest_soft_limit_node(struct mem_cgroup_tree_per_node *mctz)
> > */
> > static void flush_memcg_stats_dwork(struct work_struct *w);
> > static DECLARE_DEFERRABLE_WORK(stats_flush_dwork, flush_memcg_stats_dwork);
> > -static DEFINE_SPINLOCK(stats_flush_lock);
> > static DEFINE_PER_CPU(unsigned int, stats_updates);
> > +static atomic_t stats_flush_ongoing = ATOMIC_INIT(0);
> > static atomic_t stats_flush_threshold = ATOMIC_INIT(0);
> > static u64 flush_next_time;
> >
> > @@ -636,15 +636,19 @@ static inline void memcg_rstat_updated(struct mem_cgroup *memcg, int val)
> >
> > static void __mem_cgroup_flush_stats(void)
> > {
> > - unsigned long flag;
> > -
> > - if (!spin_trylock_irqsave(&stats_flush_lock, flag))
> > + /*
> > + * We always flush the entire tree, so concurrent flushers can just
> > + * skip. This avoids a thundering herd problem on the rstat global lock
> > + * from memcg flushers (e.g. reclaim, refault, etc).
> > + */
> > + if (atomic_read(&stats_flush_ongoing) ||
> > + atomic_xchg(&stats_flush_ongoing, 1))
> > return;
> >
> > - flush_next_time = jiffies_64 + 2*FLUSH_TIME;
> > + WRITE_ONCE(flush_next_time, jiffies_64 + 2*FLUSH_TIME);
> > cgroup_rstat_flush_atomic(root_mem_cgroup->css.cgroup);
> > atomic_set(&stats_flush_threshold, 0);
> > - spin_unlock_irqrestore(&stats_flush_lock, flag);
> > + atomic_set(&stats_flush_ongoing, 0);
> > }
> >
> > void mem_cgroup_flush_stats(void)
> > @@ -655,7 +659,7 @@ void mem_cgroup_flush_stats(void)
> >
> > void mem_cgroup_flush_stats_ratelimited(void)
> > {
> > - if (time_after64(jiffies_64, flush_next_time))
> > + if (time_after64(jiffies_64, READ_ONCE(flush_next_time)))
> > mem_cgroup_flush_stats();
> > }
> >
> > --
> > 2.40.0.348.gf938b09366-goog
>
> --
> Michal Hocko
> SUSE Labs
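
One more note for the archives: the atomic_read() before the
atomic_xchg() above is deliberate. xchg always performs a write, so
calling it unconditionally would bounce the cache line between
concurrent flushers; the plain read lets the common "a flush is
already in progress" case bail out without taking the line exclusive.
The fast path from the patch boils down to the classic
test-and-test-and-set pattern:

	if (atomic_read(&stats_flush_ongoing) ||	/* cheap shared read */
	    atomic_xchg(&stats_flush_ongoing, 1))	/* else race to set 0 -> 1 */
		return;					/* lost the race: someone else flushes */
	/* ... flush ... */
	atomic_set(&stats_flush_ongoing, 0);		/* let the next flusher in */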