Message-ID: <f899d6b3-e607-480b-9acc-d64dfbc755b5@linux.alibaba.com>
Date: Wed, 19 Feb 2025 18:04:57 +0800
From: Baolin Wang <baolin.wang@...ux.alibaba.com>
To: Zi Yan <ziy@...dia.com>, Matthew Wilcox <willy@...radead.org>,
linux-mm@...ck.org, linux-fsdevel@...r.kernel.org
Cc: Andrew Morton <akpm@...ux-foundation.org>, Hugh Dickins
<hughd@...gle.com>, Kairui Song <kasong@...cent.com>,
Miaohe Lin <linmiaohe@...wei.com>, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2 2/2] mm/shmem: use xas_try_split() in
shmem_split_large_entry()
Hi Zi,
Sorry for the late reply; I've been busy with other things :)
On 2025/2/19 07:54, Zi Yan wrote:
> During shmem_split_large_entry(), large swap entries cover n slots
> and an order-0 folio needs to be inserted.
>
> Instead of splitting all n slots, only the 1 slot covered by the folio
> needs to be split, and the remaining n-1 shadow entries can be retained with
> orders ranging from 0 to n-1. This method only requires
> (n/XA_CHUNK_SHIFT) new xa_nodes instead of (n % XA_CHUNK_SHIFT) *
> (n/XA_CHUNK_SHIFT) new xa_nodes, compared to the original
> xas_split_alloc() + xas_split() one.
>
> For example, to split an order-9 large swap entry (assuming XA_CHUNK_SHIFT
> is 6), 1 xa_node is needed instead of 8.
>
> xas_try_split_min_order() is used to reduce the number of calls to
> xas_try_split() during split.
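If I understand the new interface correctly, the split loop ends up
looking roughly like the sketch below. This is only my simplified reading,
not the actual patch: locking, the re-store of the per-slot swap offsets
and the error unwinding are omitted, and the helper name is made up.

#include <linux/xarray.h>

/*
 * Sketch: walk an order-N multi-order entry down to order 0, one
 * xas_try_split() step at a time.  Each step only splits the levels
 * that cover xas->xa_index, so far fewer xa_nodes are needed than a
 * full xas_split_alloc() + xas_split().
 */
static void split_entry_to_order0(struct xa_state *xas, void *old,
				  unsigned int order)
{
	unsigned int cur_order = order;

	while (cur_order > 0) {
		/* Lowest order reachable in a single xas_try_split(). */
		unsigned int split_order = xas_try_split_min_order(cur_order);

		xas_set_order(xas, xas->xa_index, split_order);
		xas_try_split(xas, old, cur_order);
		if (xas_error(xas))
			return;

		cur_order = split_order;
	}
}
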
For shmem swapin, if we cannot swap in the whole large folio by skipping
the swap cache, we will split the large swap entry stored in the shmem
mapping into order-0 swap entries, rather than splitting it into other
orders of swap entries. This is because, the next time we swap in a shmem
folio through shmem_swapin_cluster(), it will still be an order-0 folio.
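To make that concrete with the sketch above, the swapin path would simply
drive the split all the way down to order 0, for example (again only an
illustration; the helper is the made-up one from the sketch, and the
xa_lock handling is omitted):

	/*
	 * Illustration only: shmem_swapin_cluster() brings the data back
	 * as order-0 folios, so the large swap entry in the mapping is
	 * split until the slot at @index holds an order-0 swap entry.
	 */
	XA_STATE(xas, &mapping->i_pages, index);
	void *old = xas_load(&xas);

	if (xa_is_value(old) && xas_get_order(&xas) > 0)
		split_entry_to_order0(&xas, old, xas_get_order(&xas));
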
Moreover, I did a quick test swapping in order-6 shmem folios; however,
my test hung, and the console was continuously flooded with the following
messages. It seems there are some issues with the shmem swapin handling.
Anyway, I need more time to debug and test.
[ 1037.364644] Huh VM_FAULT_OOM leaked out to the #PF handler. Retrying PF
[ 1037.364650] Huh VM_FAULT_OOM leaked out to the #PF handler. Retrying PF
[ 1037.364652] Huh VM_FAULT_OOM leaked out to the #PF handler. Retrying PF
[ 1037.364654] Huh VM_FAULT_OOM leaked out to the #PF handler. Retrying PF
[ 1037.364656] Huh VM_FAULT_OOM leaked out to the #PF handler. Retrying PF
[ 1037.364658] Huh VM_FAULT_OOM leaked out to the #PF handler. Retrying PF
[ 1037.364659] Huh VM_FAULT_OOM leaked out to the #PF handler. Retrying PF
[ 1037.364661] Huh VM_FAULT_OOM leaked out to the #PF handler. Retrying PF
[ 1037.364663] Huh VM_FAULT_OOM leaked out to the #PF handler. Retrying PF
[ 1037.364665] Huh VM_FAULT_OOM leaked out to the #PF handler. Retrying PF
[ 1042.368539] pagefault_out_of_memory: 9268696 callbacks suppressed
.......