Message-ID: <3d8087e4-ff84-48cc-823a-a6ce2a3c76b4@linux.alibaba.com>
Date: Wed, 12 Jun 2024 14:23:13 +0800
From: Baolin Wang <baolin.wang@...ux.alibaba.com>
To: Hugh Dickins <hughd@...gle.com>
Cc: akpm@...ux-foundation.org, willy@...radead.org, david@...hat.com,
wangkefeng.wang@...wei.com, chrisl@...nel.org, ying.huang@...el.com,
21cnbao@...il.com, ryan.roberts@....com, shy828301@...il.com,
ziy@...dia.com, ioworker0@...il.com, da.gomez@...sung.com,
p.raghav@...sung.com, linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 0/7] support large folio swap-out and swap-in for shmem
Hi Hugh,
On 2024/6/12 13:46, Hugh Dickins wrote:
> On Thu, 6 Jun 2024, Baolin Wang wrote:
>
>> Shmem will support large folio allocation [1] [2] to get better performance;
>> however, memory reclaim still splits the precious large folios when trying
>> to swap out shmem, which may lead to memory fragmentation and fails to
>> take advantage of large folios for shmem.
>>
>> Moreover, the swap code already supports swapping out large folios without
>> splitting them, and the large folio swap-in [3] series is queued into the
>> mm-unstable branch. Hence this patch set also supports large folio swap-out
>> and swap-in for shmem.
>>
>> [1] https://lore.kernel.org/all/cover.1717495894.git.baolin.wang@linux.alibaba.com/
>> [2] https://lore.kernel.org/all/20240515055719.32577-1-da.gomez@samsung.com/
>> [3] https://lore.kernel.org/all/20240508224040.190469-6-21cnbao@gmail.com/T/
>>
>> Changes from RFC:
>> - Rebased to the latest mm-unstable.
>> - Drop the counter name fixing patch, which was queued into mm-hotfixes-stable
>> branch.
>>
>> Baolin Wang (7):
>>   mm: vmscan: add validation before splitting shmem large folio
>>   mm: swap: extend swap_shmem_alloc() to support batch SWAP_MAP_SHMEM
>>     flag setting
>>   mm: shmem: support large folio allocation for shmem_replace_folio()
>>   mm: shmem: extend shmem_partial_swap_usage() to support large folio
>>     swap
>>   mm: add new 'orders' parameter for find_get_entries() and
>>     find_lock_entries()
>>   mm: shmem: use swap_free_nr() to free shmem swap entries
>>   mm: shmem: support large folio swap out
>>
>>  drivers/gpu/drm/i915/gem/i915_gem_shmem.c |  1 +
>>  include/linux/swap.h                       |  4 +-
>>  include/linux/writeback.h                  |  1 +
>>  mm/filemap.c                               | 27 ++++++-
>>  mm/internal.h                              |  4 +-
>>  mm/shmem.c                                 | 58 ++++++++------
>>  mm/swapfile.c                              | 98 ++++++++++++-----------
>>  mm/truncate.c                              |  8 +-
>>  mm/vmscan.c                                | 22 ++++-
>>  9 files changed, 140 insertions(+), 83 deletions(-)
>
> I wanted to have some tests running, while looking through these
> and your shmem mTHP patches; but I wasted too much time on that by
> applying these on top and hitting crashes, OOMs and dreadful thrashing -
> testing did not get very far at all.
Thanks for testing. I am sorry I did not catch these issues in my own testing.
> Perhaps all easily fixed, but I don't have more time to spend on it,
> and think this series cannot expect to go into 6.11: I'll have another
> try with it next cycle.
>
> I really must turn my attention to your shmem mTHP series: no doubt
> I'll have minor adjustments to ask there - but several other people
> are also waiting for me to respond (or given up on me completely).
Sure. Thanks.
>
> The little crash fix needed in this series appears to be:
>
> --- a/mm/shmem.c
> +++ b/mm/shmem.c
> @@ -2053,7 +2053,8 @@ static int shmem_swapin_folio(struct ino
>  			goto failed;
>  	}
>  
> -	error = shmem_add_to_page_cache(folio, mapping, index,
> +	error = shmem_add_to_page_cache(folio, mapping,
> +					round_down(index, nr_pages),
>  					swp_to_radix_entry(swap), gfp);
>  	if (error)
>  		goto failed;
Good catch. I missed this.
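
For context, here is a minimal user-space sketch of why the index needs
rounding; the constants are made up for illustration, only
shmem_add_to_page_cache() and round_down() come from the diff above, and
round_down() is simplified to the power-of-two case:

#include <stdio.h>

/* Simplified round_down(): valid when "align" is a power of two. */
#define round_down(x, align)    ((x) & ~((unsigned long)(align) - 1))

int main(void)
{
        /*
         * A large folio of nr_pages pages occupies nr_pages consecutive
         * page-cache slots starting at an nr_pages-aligned index, so the
         * faulting index must be rounded down to the folio boundary
         * before the folio is inserted into the mapping.
         */
        unsigned long nr_pages = 512;   /* e.g. a PMD-sized folio */
        unsigned long index = 1000;     /* hypothetical faulting index */

        printf("insert at index %lu, not %lu\n",
               round_down(index, nr_pages), index);    /* 512, not 1000 */
        return 0;
}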
> Then the OOMs and dreadful thrashing are due to refcount confusion:
> I did not even glance at these patches to work out what's wanted,
> but a printk in __remove_mapping() showed that folio->_refcount was
> 1024 where 513 was expected, so reclaim was freeing none of them.
I will look into this issue and continue to do more testing before sending
out a new version. Thanks.
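
As a side note on the numbers above: the 513 Hugh expected is the usual
reclaim accounting of one reference per page-cache/swap-cache slot plus one
held by the isolating caller, which __remove_mapping() checks before it can
free a folio. A rough sketch of that arithmetic (the helper below is
illustrative, not the actual vmscan code):

#include <stdio.h>

/*
 * References reclaim expects to freeze before it may free a folio:
 * one per cache slot the folio occupies, plus one for the caller that
 * isolated it (cf. __remove_mapping() in mm/vmscan.c).
 */
static unsigned long expected_refs(unsigned long nr_pages)
{
        return nr_pages + 1;
}

int main(void)
{
        /* An order-9 (512-page) folio: 512 + 1 = 513 expected refs. */
        printf("expected %lu refs, the report observed 1024\n",
               expected_refs(512));
        return 0;
}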