Message-ID: <CALvZod71oM0=B_u6-fSMFBqwP2tEOgxKxygHkcshmNF-BMCwzg@mail.gmail.com>
Date: Wed, 28 Jul 2021 13:37:28 -0700
From: Shakeel Butt <shakeelb@...gle.com>
To: Andrew Morton <akpm@...ux-foundation.org>
Cc: Michal Hocko <mhocko@...e.com>,
Johannes Weiner <hannes@...xchg.org>,
Roman Gushchin <guro@...com>,
Cgroups <cgroups@...r.kernel.org>, Linux MM <linux-mm@...ck.org>,
LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] memcg: cleanup racy sum avoidance code
On Wed, Jul 28, 2021 at 12:43 PM Andrew Morton
<akpm@...ux-foundation.org> wrote:
>
> On Tue, 27 Jul 2021 18:22:43 -0700 Shakeel Butt <shakeelb@...gle.com> wrote:
>
> > We used to have per-cpu memcg and lruvec stats, and readers had to
> > traverse and sum the stats from each cpu. This summing was racy and
> > could expose transient negative values, so an explicit check was
> > added to avoid such scenarios. Now these stats have been moved to the
> > rstat infrastructure and are no longer per-cpu, so we can remove the
> > fixup for transient negative values.
>
> We can't do anything about the same code in lruvec_page_state_local()?
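
For context, the reader side in question follows roughly this pattern
(a simplified sketch, not the exact upstream source; 'pn' is the
mem_cgroup_per_node backing the lruvec):

/*
 * Simplified sketch of the per-cpu summing under discussion. Concurrent
 * updaters can make the partial sums go transiently negative, hence the
 * clamp under CONFIG_SMP.
 */
static unsigned long sketch_page_state_local(struct mem_cgroup_per_node *pn,
					     enum node_stat_item idx)
{
	long x = 0;
	int cpu;

	for_each_possible_cpu(cpu)
		x += per_cpu(pn->lruvec_stat_local->count[idx], cpu);
#ifdef CONFIG_SMP
	if (x < 0)	/* hide transient negative sums from readers */
		x = 0;
#endif
	return x;
}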
lruvec_page_state_local() is used by cgroup v1's memory.numa_stat for
the cgroup-local (non-hierarchical) stats, which are still per-cpu. To
make them non-per-cpu, we would have to add 'long
state_local[NR_VM_NODE_STAT_ITEMS]' to 'struct lruvec_stats' and do the
aggregation at rstat flush time, something like the sketch below. So we
would be trading the cpu traversal cost for more memory usage, and I am
not sure if it is worth it.
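
Roughly like this (hypothetical sketch; 'state_local' does not exist
upstream, and the surrounding fields are assumed):

/*
 * Hypothetical layout: keep a second, cgroup-local array next to the
 * aggregated one and fold the per-cpu deltas into it at rstat flush
 * time, so readers no longer walk every possible cpu.
 */
struct lruvec_stats {
	/* aggregated (hierarchical) counters, maintained via rstat */
	long state[NR_VM_NODE_STAT_ITEMS];

	/* hypothetical: cgroup-local counters, filled at flush time */
	long state_local[NR_VM_NODE_STAT_ITEMS];
};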