Message-ID: <CAKEwX=NFtcoiqiLa2ov-AR1coYnJE-gXVf32DihJcTYTOJcQdQ@mail.gmail.com>
Date: Sat, 26 Oct 2024 19:45:42 -0700
From: Nhat Pham <nphamcs@...il.com>
To: Barry Song <21cnbao@...il.com>
Cc: akpm@...ux-foundation.org, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, Barry Song <v-songbaohua@...o.com>,
Usama Arif <usamaarif642@...il.com>, Chengming Zhou <chengming.zhou@...ux.dev>,
Yosry Ahmed <yosryahmed@...gle.com>, Johannes Weiner <hannes@...xchg.org>,
David Hildenbrand <david@...hat.com>, Hugh Dickins <hughd@...gle.com>,
Matthew Wilcox <willy@...radead.org>, Shakeel Butt <shakeel.butt@...ux.dev>,
Andi Kleen <ak@...ux.intel.com>, Baolin Wang <baolin.wang@...ux.alibaba.com>,
Chris Li <chrisl@...nel.org>, "Huang, Ying" <ying.huang@...el.com>,
Kairui Song <kasong@...cent.com>, Ryan Roberts <ryan.roberts@....com>
Subject: Re: [PATCH RFC] mm: count zeromap read and set for swapout and swapin
On Sat, Oct 26, 2024 at 6:20 PM Barry Song <21cnbao@...il.com> wrote:
>
> From: Barry Song <v-songbaohua@...o.com>
>
> When the proportion of folios from the zero map is small, missing their
> accounting may not significantly impact profiling. However, it’s easy
> to construct a scenario where this becomes an issue—for example,
> allocating 1 GB of memory, writing zeros from userspace, followed by
> MADV_PAGEOUT, and then swapping it back in. In this case, the swap-out
> and swap-in counts seem to vanish into a black hole, potentially
> causing semantic ambiguity.
I agree. It also makes developing around this area more challenging.
I'm working on the swap abstraction, and sometimes I can't tell
whether I screwed up somewhere or whether a proportion of these
allocated entries goes toward this optimization...

Thanks for taking a stab at fixing this, Barry!
>
> We have two ways to address this:
>
> 1. Add a separate counter specifically for the zero map.
> 2. Continue using the current accounting, treating the zero map like
> a normal backend. (This aligns with the current behavior of zRAM
> when supporting same-page fills at the device level.)
Hmm, my understanding of the pswpout/pswpin counters is that they only
apply to I/O done directly to the backend device, no? That's why we
have a separate set of counters for zswap, and do not count zswap
stores and loads towards pswp(in|out).

For users who have swap files on physical disks, the performance
difference between reading directly from the swapfile and going
through these optimizations could be really large. I think it makes
sense to have a separate set of counters for zero-mapped pages
(ideally, both at the host level and at the cgroup level?).