Message-ID: <CAJD7tkYPB=2c23LMi1+=qrPO+rcr5zJB4+2TPrcjAZHhsm=Vsw@mail.gmail.com>
Date: Mon, 28 Oct 2024 15:32:33 -0700
From: Yosry Ahmed <yosryahmed@...gle.com>
To: Barry Song <21cnbao@...il.com>
Cc: Usama Arif <usamaarif642@...il.com>, Nhat Pham <nphamcs@...il.com>, akpm@...ux-foundation.org,
linux-mm@...ck.org, linux-kernel@...r.kernel.org,
Barry Song <v-songbaohua@...o.com>, Chengming Zhou <chengming.zhou@...ux.dev>,
Johannes Weiner <hannes@...xchg.org>, David Hildenbrand <david@...hat.com>, Hugh Dickins <hughd@...gle.com>,
Matthew Wilcox <willy@...radead.org>, Shakeel Butt <shakeel.butt@...ux.dev>,
Andi Kleen <ak@...ux.intel.com>, Baolin Wang <baolin.wang@...ux.alibaba.com>,
Chris Li <chrisl@...nel.org>, "Huang, Ying" <ying.huang@...el.com>,
Kairui Song <kasong@...cent.com>, Ryan Roberts <ryan.roberts@....com>, joshua.hahnjy@...il.com
Subject: Re: [PATCH RFC] mm: count zeromap read and set for swapout and swapin
[..]
> > > By the way, I recently had an idea: if we can conduct the zeromap check
> > > earlier - for example - before allocating swap slots and pageout(), could
> > > we completely eliminate swap slot occupation and allocation/release
> > > for zeromap data? For example, we could use a special swap
> > > entry value in the PTE to indicate zero content and directly fill it with
> > > zeros when swapping back. We've observed that swap slot allocation and
> > > freeing can consume a lot of CPU and slow down functions like
> > > zap_pte_range and swap-in. If we can entirely skip these steps, it
> > > could improve performance. However, I'm uncertain about the benefits we
> > > would gain if we only have 1-2% zeromap data.
> >
> > If I remember correctly, this was one of the ideas floated around in the
> > initial version of the zeromap series, but it was evaluated as considerably
> > more complicated than what the current zeromap code does. But I
> > think it's definitely worth looking into!
Yup, I did suggest this on the first version:
https://lore.kernel.org/linux-mm/CAJD7tkYcTV_GOZV3qR6uxgFEvYXw1rP-h7WQjDnsdwM=g9cpAw@mail.gmail.com/
and Usama took a stab at implementing it in the second version:
https://lore.kernel.org/linux-mm/20240604105950.1134192-1-usamaarif642@gmail.com/
David and Shakeel pointed out a few problems. I think they are
fixable, but the complexity/benefit tradeoff was getting unclear at
that point.
If we can make it work without too much complexity, that would be
great of course.
>
> Sorry for the noise. I didn't review the initial discussion. But my feeling
> is that it might be valuable considering the report from Zhiguo:
>
> https://lore.kernel.org/linux-mm/20240805153639.1057-1-justinjiang@vivo.com/
>
> In fact, our recent benchmark also indicates that swap free can account
> for a significant portion of the time spent in do_swap_page().
As Shakeel mentioned in a reply to Usama's patch linked above, we
would need to check the contents of the page after it's unmapped. So
we would likely need to: allocate a swap slot, walk the rmap and unmap,
check the contents, walk the rmap again to update the PTEs, then free
the swap slot. So the swap slot free is essentially moved from the
fault path to the reclaim path, not eliminated. It may still be worth
it, I am not sure. We also need to make sure we keep the rmap intact
after the first walk and unmap, in case we need to go back and update
the PTEs again.
Overall, I think the complexity is unlikely to be low.