[<prev] [next>] [<thread-prev] [day] [month] [year] [list]
Message-ID: <7e617fe7-388f-43a1-b0fa-e2998194b90c@huawei.com>
Date: Mon, 25 Nov 2024 09:19:11 +0800
From: chenridong <chenridong@...wei.com>
To: Matthew Wilcox <willy@...radead.org>, Barry Song <21cnbao@...il.com>,
Chris Li <chrisl@...nel.org>
CC: Chen Ridong <chenridong@...weicloud.com>, <akpm@...ux-foundation.org>,
<mhocko@...e.com>, <hannes@...xchg.org>, <yosryahmed@...gle.com>,
<yuzhao@...gle.com>, <david@...hat.com>, <ryan.roberts@....com>,
<linux-mm@...ck.org>, <linux-kernel@...r.kernel.org>,
<wangweiyang2@...wei.com>, <xieym_ict@...mail.com>, Chris Li
<chrisl@...nel.org>
Subject: Re: [RFC PATCH v2 1/1] mm/vmscan: move the written-back folios to the
tail of LRU after shrinking
On 2024/11/18 12:21, Matthew Wilcox wrote:
> On Mon, Nov 18, 2024 at 05:14:14PM +1300, Barry Song wrote:
>> On Mon, Nov 18, 2024 at 5:03 PM Matthew Wilcox <willy@...radead.org> wrote:
>>>
>>> On Sat, Nov 16, 2024 at 09:16:58AM +0000, Chen Ridong wrote:
>>>> 2. In shrink_page_list function, if folioN is THP(2M), it may be splited
>>>> and added to swap cache folio by folio. After adding to swap cache,
>>>> it will submit io to writeback folio to swap, which is asynchronous.
>>>> When shrink_page_list is finished, the isolated folios list will be
>>>> moved back to the head of inactive lru. The inactive lru may just look
>>>> like this, with 512 filioes have been move to the head of inactive lru.
>>>
>>> I was hoping that we'd be able to stop splitting the folio when adding
>>> to the swap cache. Ideally. we'd add the whole 2MB and write it back
>>> as a single unit.
>>
>> This is already the case: adding to the swapcache doesn’t require splitting
>> THPs, but failing to allocate 2MB of contiguous swap slots will.
>
> Agreed we need to understand why this is happening. As I've said a few
> times now, we need to stop requiring contiguity. Real filesystems don't
> need the contiguity (they become less efficient, but they can scatter a
> single 2MB folio to multiple places).
>
> Maybe Chris has a solution to this in the works?
>
Hi, Chris, do you have a better idea to solve this issue?
Best regards,
Ridong
Powered by blists - more mailing lists