Message-ID: <87o7ab2j94.fsf@yhuang6-desk2.ccr.corp.intel.com>
Date: Mon, 15 Apr 2024 15:11:03 +0800
From: "Huang, Ying" <ying.huang@...el.com>
To: Barry Song <21cnbao@...il.com>
Cc: akpm@...ux-foundation.org, linux-mm@...ck.org,
baolin.wang@...ux.alibaba.com, chrisl@...nel.org, david@...hat.com,
hanchuanhua@...o.com, hannes@...xchg.org, hughd@...gle.com,
kasong@...cent.com, ryan.roberts@....com, surenb@...gle.com,
v-songbaohua@...o.com, willy@...radead.org, xiang@...nel.org,
yosryahmed@...gle.com, yuzhao@...gle.com, ziy@...dia.com,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2 2/5] mm: swap: make should_try_to_free_swap() support
large-folio
Barry Song <21cnbao@...il.com> writes:
> From: Chuanhua Han <hanchuanhua@...o.com>
>
> The function should_try_to_free_swap() operates under the assumption that
> swap-in always occurs at the normal page granularity, i.e., folio_nr_pages
~~~~~~~~~~~~~~
nit: spelling this as folio_nr_pages() would make it clearer that a
function is meant.
Otherwise LGTM, thanks!
Reviewed-by: "Huang, Ying" <ying.huang@...el.com>
> = 1. However, in reality, for large folios, add_to_swap_cache() will
> invoke folio_ref_add(folio, nr). To accommodate large folio swap-in,
> this patch eliminates this assumption.
>
> Signed-off-by: Chuanhua Han <hanchuanhua@...o.com>
> Co-developed-by: Barry Song <v-songbaohua@...o.com>
> Signed-off-by: Barry Song <v-songbaohua@...o.com>
> Acked-by: Chris Li <chrisl@...nel.org>
> Reviewed-by: Ryan Roberts <ryan.roberts@....com>
> ---
> mm/memory.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/mm/memory.c b/mm/memory.c
> index 78422d1c7381..2702d449880e 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -3856,7 +3856,7 @@ static inline bool should_try_to_free_swap(struct folio *folio,
> * reference only in case it's likely that we'll be the exclusive user.
> */
> return (fault_flags & FAULT_FLAG_WRITE) && !folio_test_ksm(folio) &&
> - folio_ref_count(folio) == 2;
> + folio_ref_count(folio) == (1 + folio_nr_pages(folio));
> }
>
> static vm_fault_t pte_marker_clear(struct vm_fault *vmf)
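For anyone skimming the thread, a tiny userspace sketch of the refcount
arithmetic behind the change (expected_refs() is purely illustrative,
not a kernel helper):

#include <stdio.h>

/*
 * Illustration only, not kernel code. For a folio sitting in the swap
 * cache, add_to_swap_cache() takes folio_nr_pages() references via
 * folio_ref_add(folio, nr), and the faulting task holds one more, so a
 * likely-exclusive folio is expected to hold exactly 1 + nr references.
 */
static long expected_refs(long nr_pages)
{
	return 1 + nr_pages;	/* task ref + swap-cache refs */
}

int main(void)
{
	/* order-0 folio: matches the old folio_ref_count() == 2 check */
	printf("order-0 folio : %ld refs\n", expected_refs(1));
	/* a 16-page mTHP: the old check could never see it as exclusive */
	printf("16-page folio : %ld refs\n", expected_refs(16));
	return 0;
}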