Message-ID: <b6fb68dc-7203-47a2-8316-5a5db6978ccf@redhat.com>
Date: Tue, 18 Jun 2024 11:54:36 +0200
From: David Hildenbrand <david@...hat.com>
To: Barry Song <21cnbao@...il.com>, akpm@...ux-foundation.org,
linux-mm@...ck.org
Cc: baolin.wang@...ux.alibaba.com, chrisl@...nel.org,
linux-kernel@...r.kernel.org, mhocko@...e.com, ryan.roberts@....com,
shy828301@...il.com, surenb@...gle.com, v-songbaohua@...o.com,
willy@...radead.org, ying.huang@...el.com, yosryahmed@...gle.com,
yuzhao@...gle.com, Shuai Yuan <yuanshuai@...o.com>
Subject: Re: [PATCH v2 2/3] mm: use folio_add_new_anon_rmap() if
folio_test_anon(folio)==false
On 18.06.24 01:11, Barry Song wrote:
> From: Barry Song <v-songbaohua@...o.com>
>
> For the !folio_test_anon(folio) case, we can now call
> folio_add_new_anon_rmap() with the rmap flags set to either EXCLUSIVE
> or non-EXCLUSIVE. This avoids hitting the VM_WARN_ON_FOLIO check in
> __folio_add_anon_rmap() as we bring up mTHP swapin:
>
> static __always_inline void __folio_add_anon_rmap(struct folio *folio,
> 		struct page *page, int nr_pages, struct vm_area_struct *vma,
> 		unsigned long address, rmap_t flags, enum rmap_level level)
> {
> 	...
> 	if (unlikely(!folio_test_anon(folio))) {
> 		VM_WARN_ON_FOLIO(folio_test_large(folio) &&
> 				 level != RMAP_LEVEL_PMD, folio);
> 	}
> 	...
> }
>
> It also improves the code’s readability. Currently, all new anonymous
> folios passed to folio_add_anon_rmap_ptes() are order-0, so a new
> folio can never be partially exclusive: it is either entirely
> exclusive or entirely shared.
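>
> The resulting call pattern, sketched here with the unuse_pte() names
> (see the diff below for the real context), is:
>
> 	if (pte_swp_exclusive(old_pte))
> 		rmap_flags |= RMAP_EXCLUSIVE;
>
> 	if (!folio_test_anon(folio)) {
> 		/* A new (!anon) folio is fully exclusive or fully shared. */
> 		VM_WARN_ON_ONCE(folio_test_large(folio));
> 		folio_add_new_anon_rmap(folio, vma, addr, rmap_flags);
> 	} else {
> 		folio_add_anon_rmap_pte(folio, page, vma, addr, rmap_flags);
> 	}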
>
> Suggested-by: David Hildenbrand <david@...hat.com>
> Signed-off-by: Barry Song <v-songbaohua@...o.com>
> Tested-by: Shuai Yuan <yuanshuai@...o.com>
> ---
> mm/memory.c   |  8 ++++++++
> mm/swapfile.c | 13 +++++++++++--
> 2 files changed, 19 insertions(+), 2 deletions(-)
>
> diff --git a/mm/memory.c b/mm/memory.c
> index 1f24ecdafe05..620654c13b2f 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -4339,6 +4339,14 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
>  	if (unlikely(folio != swapcache && swapcache)) {
>  		folio_add_new_anon_rmap(folio, vma, address, RMAP_EXCLUSIVE);
>  		folio_add_lru_vma(folio, vma);
> +	} else if (!folio_test_anon(folio)) {
> +		/*
> +		 * We currently only expect small !anon folios, for which we now
s/now/know/
> +		 * that they are either fully exclusive or fully shared. If we
> +		 * ever get large folios here, we have to be careful.
> +		 */
> +		VM_WARN_ON_ONCE(folio_test_large(folio));
> +		folio_add_new_anon_rmap(folio, vma, address, rmap_flags);
>  	} else {
>  		folio_add_anon_rmap_ptes(folio, page, nr_pages, vma, address,
>  					 rmap_flags);
> diff --git a/mm/swapfile.c b/mm/swapfile.c
> index ae1d2700f6a3..69efa1a57087 100644
> --- a/mm/swapfile.c
> +++ b/mm/swapfile.c
> @@ -1908,8 +1908,17 @@ static int unuse_pte(struct vm_area_struct *vma, pmd_t *pmd,
>  		VM_BUG_ON_FOLIO(folio_test_writeback(folio), folio);
>  		if (pte_swp_exclusive(old_pte))
>  			rmap_flags |= RMAP_EXCLUSIVE;
> -
> -		folio_add_anon_rmap_pte(folio, page, vma, addr, rmap_flags);
> +		/*
> +		 * We currently only expect small !anon folios, for which we now that
s/now/know/
> +		 * they are either fully exclusive or fully shared. If we ever get
> +		 * large folios here, we have to be careful.
> +		 */
> +		if (!folio_test_anon(folio)) {
> +			VM_WARN_ON_ONCE(folio_test_large(folio));
> +			folio_add_new_anon_rmap(folio, vma, addr, rmap_flags);
> +		} else {
> +			folio_add_anon_rmap_pte(folio, page, vma, addr, rmap_flags);
> +		}
>  	} else { /* ksm created a completely new copy */
>  		folio_add_new_anon_rmap(folio, vma, addr, RMAP_EXCLUSIVE);
>  		folio_add_lru_vma(folio, vma);

Thanks!
Acked-by: David Hildenbrand <david@...hat.com>
--
Cheers,
David / dhildenb