Message-ID: <CAJD7tkaOfsKC=F1inymxz8C0UT5=Sjo830bYLsoPd6WOOShyDQ@mail.gmail.com>
Date: Tue, 3 Oct 2023 01:03:53 -0700
From: Yosry Ahmed <yosryahmed@...gle.com>
To: Michal Hocko <mhocko@...e.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
Shakeel Butt <shakeelb@...gle.com>,
Johannes Weiner <hannes@...xchg.org>,
Roman Gushchin <roman.gushchin@...ux.dev>,
Muchun Song <muchun.song@...ux.dev>,
Michal Koutný <mkoutny@...e.com>,
linux-mm@...ck.org, cgroups@...r.kernel.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2 0/2] mm: memcg: fix tracking of pending stats updates values
On Tue, Oct 3, 2023 at 12:57 AM Michal Hocko <mhocko@...e.com> wrote:
>
> On Mon 25-09-23 10:11:05, Yosry Ahmed wrote:
> > On Mon, Sep 25, 2023 at 6:50 AM Michal Hocko <mhocko@...e.com> wrote:
> > >
> > > On Fri 22-09-23 17:57:38, Yosry Ahmed wrote:
> > > > While working on adjacent code [1], I realized that the values passed
> > > > into memcg_rstat_updated() to keep track of the magnitude of pending
> > > > updates are inconsistent. They are mostly in pages, but sometimes they
> > > > can be in bytes or KBs. Fix that.
> > >
> > > What kind of practical difference does this change make? Is it worth
> > > additional code?
> >
> > As explained in patch 2's commit message, the value passed into
> > memcg_rstat_updated() is used for the "only flush if it's worth it"
> > heuristic. As we have discussed in different threads in the past few
> > weeks, unnecessary flushes can cause increased global lock contention
> > and/or latency.
> >
> > Byte-sized paths (percpu, slab, zswap, ...) feed bytes into the
> > heuristic, but those bytes are interpreted as pages, which means we
> > will flush much earlier than we should. This was noticed by code
> > inspection. How much does this matter in practice? I would say it
> > depends on the workload: how many percpu/slab allocations are being
> > made vs. how many flushes are requested.
> >
> > On a system with 100 cpus, ~25MB worth of stat updates are normally
> > needed to trigger a flush, but ~6KB worth of slab/percpu updates will
> > also (mistakenly) cause a flush.
>
> This surely depends on the workload, and that is understandable. But it
> would be really nice to provide some numbers for typical workloads
> which exercise slab heavily.
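For context, the heuristic in question works roughly like this (a
simplified sketch in the style of mm/memcontrol.c, not the exact
upstream code; MEMCG_CHARGE_BATCH matches the kernel definition, but
the surrounding details are abbreviated):

	static DEFINE_PER_CPU(unsigned int, stats_updates);
	static atomic_t stats_flush_threshold = ATOMIC_INIT(0);

	static inline void memcg_rstat_updated(struct mem_cgroup *memcg, int val)
	{
		unsigned int x;

		if (!val)
			return;

		cgroup_rstat_updated(memcg->css.cgroup, smp_processor_id());

		/*
		 * 'val' is supposed to be a magnitude in pages. Byte-sized
		 * callers (percpu, slab, zswap) currently pass bytes here,
		 * so a single PAGE_SIZE-byte update is counted as PAGE_SIZE
		 * pages, crossing the batch threshold far too early.
		 */
		x = __this_cpu_add_return(stats_updates, abs(val));
		if (x >= MEMCG_CHARGE_BATCH) {
			atomic_add(x / MEMCG_CHARGE_BATCH, &stats_flush_threshold);
			__this_cpu_write(stats_updates, 0);
		}
	}

A flush is then only considered worth it once stats_flush_threshold
exceeds num_online_cpus(), i.e. after roughly
MEMCG_CHARGE_BATCH * num_online_cpus() page-sized updates system-wide.
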
If you have a workload in mind, I can run it and see how many flushes
we get with and without this patch. The first thing that pops into my
head is creating a bunch of empty files, but I don't know if that's the
best workload to get numbers from.
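
To make the numbers above concrete (assuming 4K pages and
MEMCG_CHARGE_BATCH == 64, as in the current kernel):

	threshold = MEMCG_CHARGE_BATCH * num_online_cpus()
	          = 64 * 100 = 6400 update "units"

	page-sized callers: 6400 * 4KB = ~25MB of updates per flush
	byte-sized callers: 6400 * 1B  = ~6KB  of updates per flush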
> --
> Michal Hocko
> SUSE Labs