Message-ID: <CAJD7tkZpO1nEvdh7qPWt4Pg=FU1KZfEd3vA9ucEpqdc-7kF0Jg@mail.gmail.com>
Date: Mon, 28 Oct 2024 10:08:26 -0700
From: Yosry Ahmed <yosryahmed@...gle.com>
To: Usama Arif <usamaarif642@...il.com>
Cc: Nhat Pham <nphamcs@...il.com>, Barry Song <21cnbao@...il.com>, akpm@...ux-foundation.org,
linux-mm@...ck.org, linux-kernel@...r.kernel.org,
Barry Song <v-songbaohua@...o.com>, Chengming Zhou <chengming.zhou@...ux.dev>,
Johannes Weiner <hannes@...xchg.org>, David Hildenbrand <david@...hat.com>, Hugh Dickins <hughd@...gle.com>,
Matthew Wilcox <willy@...radead.org>, Shakeel Butt <shakeel.butt@...ux.dev>,
Andi Kleen <ak@...ux.intel.com>, Baolin Wang <baolin.wang@...ux.alibaba.com>,
Chris Li <chrisl@...nel.org>, "Huang, Ying" <ying.huang@...el.com>,
Kairui Song <kasong@...cent.com>, Ryan Roberts <ryan.roberts@....com>, joshua.hahnjy@...il.com
Subject: Re: [PATCH RFC] mm: count zeromap read and set for swapout and swapin

On Mon, Oct 28, 2024 at 10:00 AM Usama Arif <usamaarif642@...il.com> wrote:
>
>
>
> On 28/10/2024 16:33, Nhat Pham wrote:
> > On Mon, Oct 28, 2024 at 5:23 AM Usama Arif <usamaarif642@...il.com> wrote:
> >>
> >> I wonder if instead of having counters, it might be better to keep track
> >> of the number of zeropages currently stored in zeromap, similar to how
> >> zswap_same_filled_pages did it. It will be more complicated than this
> >> patch, but it would give more insight into the current state of the system.
> >>
> >> Joshua (in CC) was going to have a look at that.
> >
> > I don't think one can substitute for the other.
>
> Yes agreed, they have separate uses and provide different information, but
> maybe it is wasteful to have both types of counters? They are just counters,
> so they probably don't consume too many resources, but I think we should
> still think about it...

Not for or against here, but I would say that statement is debatable
at best for memcg stats :)

Each new counter consumes 2 longs per-memcg per-CPU (see
memcg_vmstats_percpu), about 16 bytes. That is not a lot on its own,
but it adds up quickly with a large number of CPUs, memcgs, and stats.
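
As a rough, back-of-the-envelope sketch (a simplified struct and
made-up CPU/memcg counts, not the actual kernel definitions), the cost
of one extra stat looks something like this:

/* Hypothetical, simplified view of the per-CPU stat storage. */
#define NR_MEMCG_STATS	60	/* made-up number of stats/events */

struct memcg_vmstats_percpu_sketch {
	long state[NR_MEMCG_STATS];		/* pending per-CPU deltas */
	long state_prev[NR_MEMCG_STATS];	/* values at the last flush */
};

/*
 * One extra stat adds 2 longs (~16 bytes) to this struct, which is
 * allocated per-memcg per-CPU. With, say, 256 CPUs and 1000 memcgs
 * that is 16 * 256 * 1000 bytes, i.e. roughly 4 MB for a single stat.
 */
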
Also, when flushing the stats we iterate over all of them to propagate
updates from the per-CPU counters. Flushing is already a slowpath, so
adding one stat is not a big deal, but because we iterate over all
stats on multiple CPUs (and sometimes on each node as well), the
overall flush latency can become a concern.
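
Here is a simplified, userspace-style sketch (not the kernel's actual
flushing code) of why the flush cost scales with nr_stats * nr_cpus,
reusing the hypothetical per-CPU struct from the sketch above:

static void flush_sketch(long *global,
			 struct memcg_vmstats_percpu_sketch *pcpu,
			 int nr_cpus)
{
	/* O(nr_cpus * NR_MEMCG_STATS) work on every flush. */
	for (int cpu = 0; cpu < nr_cpus; cpu++) {
		for (int i = 0; i < NR_MEMCG_STATS; i++) {
			long delta = pcpu[cpu].state[i] -
				     pcpu[cpu].state_prev[i];

			if (delta) {
				global[i] += delta;
				pcpu[cpu].state_prev[i] = pcpu[cpu].state[i];
			}
		}
	}
}

Every new stat adds one more iteration of that inner loop on every CPU
(and, for node stats, on every node), on every flush.
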
All of that is not to say we shouldn't add more memcg stats, but we
have to be mindful of the resources.