Message-ID: <CAF6N3nXPw8qv-Rmg6CX1afpkc7DmTQEL06LeDvY=Hcj0AnVx_w@mail.gmail.com>
Date: Tue, 12 Nov 2024 16:19:41 -0800
From: Kinsey Ho <kinseyho@...gle.com>
To: Andrew Morton <akpm@...ux-foundation.org>
Cc: Johannes Weiner <hannes@...xchg.org>, Michal Hocko <mhocko@...nel.org>,
Roman Gushchin <roman.gushchin@...ux.dev>, Shakeel Butt <shakeel.butt@...ux.dev>,
Muchun Song <muchun.song@...ux.dev>, Pasha Tatashin <pasha.tatashin@...een.com>,
David Rientjes <rientjes@...gle.com>, willy@...radead.org, Vlastimil Babka <vbabka@...e.cz>,
David Hildenbrand <david@...hat.com>, Joel Granados <joel.granados@...nel.org>,
Kaiyang Zhao <kaiyang2@...cmu.edu>, Sourav Panda <souravpanda@...gle.com>,
linux-kernel@...r.kernel.org, cgroups@...r.kernel.org, linux-mm@...ck.org
Subject: Re: [PATCH mm-unstable v1 0/2] Track pages allocated for struct swap_cgroup

Hi Andrew,

Thank you for the review and comments!

On Fri, Nov 1, 2024 at 6:57 AM Andrew Morton <akpm@...ux-foundation.org> wrote:
>
> hm.
>
> On Thu, 31 Oct 2024 22:45:49 +0000 Kinsey Ho <kinseyho@...gle.com> wrote:
>
> > We noticed high overhead for pages allocated for struct swap_cgroup in
> > our fleet.
>
> This is scanty. Please describe the problem further.

In our fleet, we had machines configured with multiple large swap
files, and we noticed that we had not been accounting for the overhead
of the pages allocated for struct swap_cgroup. In some cases we saw a
couple of GiB of overhead from these pages, so this patchset's goal is
to expose that overhead value for easier detection.
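
To put a rough number on that (illustrative arithmetic on my part, not
figures from our fleet): mm/swap_cgroup.c keeps one 2-byte struct
swap_cgroup per 4 KiB swap slot, so 4 TiB of configured swap works out
to 2^30 slots * 2 bytes = 2 GiB of swap_cgroup arrays.
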
> And: "the existing use case" is OK with a global counter, but what about
> future use cases?
>
> And: what are the future use cases?

Global counting already exists for memmap/memmap_boot pages, so the
generic global counter interface was simply an attempt to consolidate
that global counting code while introducing another user. However, the
number of pages allocated for swap_cgroup can be derived from
/proc/swaps (a rough sketch follows below), so adding a new entry to
vmstat doesn't seem warranted.
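
For reference, here is one way to do that derivation from userspace (my
own illustration, not part of the patchset; it assumes 4 KiB pages and
the 2-byte struct swap_cgroup used by mm/swap_cgroup.c as of v6.12):

#!/usr/bin/env python3
# Rough estimate of the pages allocated for struct swap_cgroup,
# derived from /proc/swaps. Assumes 4 KiB pages and a 2-byte
# struct swap_cgroup per swap slot (mm/swap_cgroup.c, ~v6.12).

PAGE_SIZE = 4096
SC_SIZE = 2                          # sizeof(struct swap_cgroup)
SC_PER_PAGE = PAGE_SIZE // SC_SIZE   # entries per backing page

total_sc_pages = 0
with open("/proc/swaps") as f:
    next(f)                          # skip the header line
    for line in f:
        fields = line.split()
        size_kib = int(fields[-3])   # "Size" column, in KiB
        slots = size_kib * 1024 // PAGE_SIZE
        # swap_cgroup_swapon() allocates one page per SC_PER_PAGE slots
        total_sc_pages += (slots + SC_PER_PAGE - 1) // SC_PER_PAGE

mib = total_sc_pages * PAGE_SIZE / (1 << 20)
print(f"~{total_sc_pages} pages ({mib:.1f} MiB) of swap_cgroup arrays")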

Given that, we've decided to drop this patchset. Thanks again!