Message-ID: <CAGsJ_4x0OrdhorQdz8PyLD84GOYVZJ7kLfGV_5yupLG_ZQ_B3w@mail.gmail.com>
Date: Mon, 18 Nov 2024 17:14:14 +1300
From: Barry Song <21cnbao@...il.com>
To: Matthew Wilcox <willy@...radead.org>
Cc: Chen Ridong <chenridong@...weicloud.com>, akpm@...ux-foundation.org, mhocko@...e.com,
hannes@...xchg.org, yosryahmed@...gle.com, yuzhao@...gle.com,
david@...hat.com, ryan.roberts@....com, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, chenridong@...wei.com, wangweiyang2@...wei.com,
xieym_ict@...mail.com
Subject: Re: [RFC PATCH v2 1/1] mm/vmscan: move the written-back folios to the
tail of LRU after shrinking
On Mon, Nov 18, 2024 at 5:03 PM Matthew Wilcox <willy@...radead.org> wrote:
>
> On Sat, Nov 16, 2024 at 09:16:58AM +0000, Chen Ridong wrote:
> > 2. In the shrink_page_list function, if folioN is a THP (2M), it may be
> > split and added to the swap cache folio by folio. After being added to
> > the swap cache, IO is submitted to write each folio back to swap, which
> > is asynchronous. When shrink_page_list finishes, the list of isolated
> > folios is moved back to the head of the inactive LRU. The inactive LRU
> > may then look like this, with 512 folios having been moved to its head.
>
> I was hoping that we'd be able to stop splitting the folio when adding
> to the swap cache. Ideally, we'd add the whole 2MB and write it back
> as a single unit.
This is already the case: adding to the swap cache does not itself require
splitting a THP; it is the failure to allocate 2MB of contiguous swap slots
that forces the split.
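
For reference, the relevant fallback in shrink_folio_list() looks roughly
like this (a simplified sketch of mm/vmscan.c; the exact code varies by
kernel version):

    /* Anonymous folio not yet in the swap cache */
    if (folio_test_anon(folio) && folio_test_swapbacked(folio) &&
        !folio_test_swapcache(folio)) {
            if (!add_to_swap(folio)) {
                    if (!folio_test_large(folio))
                            goto activate_locked_split;
                    /*
                     * add_to_swap() failed, i.e. no contiguous run of
                     * swap slots for the whole large folio: fall back
                     * to splitting and swapping the base pages.
                     */
                    if (split_folio_to_list(folio, folio_list))
                            goto activate_locked;
                    if (!add_to_swap(folio))
                            goto activate_locked_split;
            }
    }

So the split is taken only on the swap-slot allocation failure path, not as
a precondition for entering the swap cache.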
>
> This is going to become much more important with memdescs. We'd have to
> allocate 512 struct folios to do this, which would be about 10 4kB pages,
> and if we're trying to swap out memory, we're probably low on memory.
>
> So I don't like this solution you have at all because it doesn't help us
> get to the solution we're going to need in about a year's time.
>
Ridong might need to clarify why this splitting is occurring. If it’s due to the
failure to allocate swap slots, we still need a solution to address it.
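If that is the path being hit, the failing step is folio_alloc_swap()
returning a null entry because swap has no free run of slots of the folio's
order; sketched (again simplified, signatures differ across versions):

    /*
     * A 2MB THP needs 512 contiguous swap slots.  On fragmented swap,
     * folio_alloc_swap() returns a null entry, add_to_swap() fails,
     * and shrink_folio_list() takes the split fallback shown above.
     */
    swp_entry_t entry = folio_alloc_swap(folio);
    if (!entry.val)
            return false;   /* caller will split_folio_to_list() */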
Thanks,
Barry