Message-ID: <Zzq8jsAQNYgDKSGN@casper.infradead.org>
Date: Mon, 18 Nov 2024 04:03:26 +0000
From: Matthew Wilcox <willy@...radead.org>
To: Chen Ridong <chenridong@...weicloud.com>
Cc: akpm@...ux-foundation.org, mhocko@...e.com, hannes@...xchg.org,
yosryahmed@...gle.com, yuzhao@...gle.com, david@...hat.com,
ryan.roberts@....com, baohua@...nel.org, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, chenridong@...wei.com,
wangweiyang2@...wei.com, xieym_ict@...mail.com
Subject: Re: [RFC PATCH v2 1/1] mm/vmscan: move the written-back folios to
the tail of LRU after shrinking
On Sat, Nov 16, 2024 at 09:16:58AM +0000, Chen Ridong wrote:
> 2. In the shrink_page_list function, if folioN is a THP (2MB), it may be
> split and added to the swap cache folio by folio. After each folio is
> added to the swap cache, IO is submitted to write it back to swap, which
> is asynchronous. When shrink_page_list finishes, the isolated folio list
> is moved back to the head of the inactive LRU. The inactive LRU may then
> look like this, with 512 folios having been moved to the head of the
> inactive LRU.
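For anyone following along, the path being described is roughly the one
below, paraphrased from shrink_page_list()/shrink_folio_list() in
mm/vmscan.c; error handling and most of the surrounding reclaim logic
are omitted, so treat it as a sketch rather than the exact code:

	/*
	 * Paraphrased sketch of the split-then-swap fallback; the
	 * labels jumped to exist in the real function, not here.
	 */
	if (folio_test_anon(folio) && folio_test_swapbacked(folio) &&
	    !folio_test_swapcache(folio)) {
		if (!add_to_swap(folio)) {
			/*
			 * No large swap entry: split the THP into 512
			 * order-0 folios and retry them individually.
			 */
			if (folio_test_large(folio) &&
			    split_folio_to_list(folio, folio_list))
				goto activate_locked;
			if (!add_to_swap(folio))
				goto activate_locked_split;
		}
	}
	/*
	 * pageout() then submits asynchronous writeback for each folio;
	 * once shrink_page_list() returns, the folios that are still
	 * isolated are put back at the head of the inactive LRU.
	 */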
I was hoping that we'd be able to stop splitting the folio when adding
to the swap cache. Ideally, we'd add the whole 2MB and write it back
as a single unit.
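Something along these lines, where swapout_whole_folio() is a made-up
name and the calls below are meant to illustrate the shape of the idea,
not to claim this is how the current APIs compose:

	/*
	 * Hypothetical: keep the 2MB folio intact, back it with one
	 * contiguous run of swap slots, and submit one large write
	 * instead of 512 4kB IOs. Every name here is illustrative;
	 * this is not a proposed patch.
	 */
	static int swapout_whole_folio(struct folio *folio,
				       struct writeback_control *wbc)
	{
		swp_entry_t entry;

		/* Needs an order-aware allocation of contiguous slots. */
		entry = folio_alloc_swap(folio);
		if (!entry.val)
			return -ENOMEM;	/* caller could still split and retry */

		/* One swap-cache insertion for the whole folio... */
		if (add_to_swap_cache(folio, entry, GFP_ATOMIC, NULL)) {
			put_swap_folio(folio, entry);
			return -EEXIST;
		}
		folio_mark_dirty(folio);

		/* ...and one writeback submission for the whole 2MB. */
		return swap_writepage(&folio->page, wbc);
	}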
This is going to become much more important with memdescs. We'd have to
allocate 512 struct folios to do this, which would be about 10 4kB pages,
and if we're trying to swap out memory, we're probably low on memory.
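To spell out that estimate (assuming sizeof(struct folio) is about 80
bytes; the exact size is config-dependent, so the constant below is an
assumption, not a measured value):

	#include <stdio.h>

	int main(void)
	{
		/* One 2MB THP split into 512 order-0 folios. */
		const unsigned long nr_folios = 512;
		/* Assumed size of struct folio; config-dependent. */
		const unsigned long folio_sz = 80;
		const unsigned long page_sz = 4096;
		unsigned long bytes = nr_folios * folio_sz;	/* 40960 */

		printf("%lu bytes ~= %lu 4kB pages\n",
		       bytes, (bytes + page_sz - 1) / page_sz);	/* ~10 */
		return 0;
	}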
So I don't like this solution you have at all because it doesn't help us
get to the solution we're going to need in about a year's time.