Message-ID: <CAGsJ_4zwUpL7LihRBOefg-cmY2mgNjMm-MPkq9VFBdXS_4b=uQ@mail.gmail.com>
Date: Tue, 23 Jan 2024 14:49:08 +0800
From: Barry Song <21cnbao@...il.com>
To: David Hildenbrand <david@...hat.com>
Cc: ryan.roberts@....com, akpm@...ux-foundation.org, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, mhocko@...e.com, shy828301@...il.com,
wangkefeng.wang@...wei.com, willy@...radead.org, xiang@...nel.org,
ying.huang@...el.com, yuzhao@...gle.com, surenb@...gle.com,
steven.price@....com, Barry Song <v-songbaohua@...o.com>,
Chuanhua Han <hanchuanhua@...o.com>
Subject: Re: [PATCH RFC 5/6] mm: rmap: weaken the WARN_ON in __folio_add_anon_rmap()
On Thu, Jan 18, 2024 at 7:54 PM David Hildenbrand <david@...hat.com> wrote:
>
> On 18.01.24 12:10, Barry Song wrote:
> > From: Barry Song <v-songbaohua@...o.com>
> >
> > In do_swap_page(), while supporting large folio swap-in, we are using the helper
> > folio_add_anon_rmap_ptes(). This is triggering a WARN_ON in __folio_add_anon_rmap().
> > We can silence the warning in two ways:
> > 1. in do_swap_page(), call folio_add_new_anon_rmap() if we are sure the large
> > folio is a newly allocated one; call folio_add_anon_rmap_ptes() if we find the
> > large folio in the swapcache.
> > 2. always call folio_add_anon_rmap_ptes() in do_swap_page() but weaken the
> > WARN_ON in __folio_add_anon_rmap() by making it less sensitive.
> >
> > Option 2 seems better for do_swap_page() as it can use unified code for
> > all cases.
> >
> > Signed-off-by: Barry Song <v-songbaohua@...o.com>
> > Tested-by: Chuanhua Han <hanchuanhua@...o.com>
> > ---
> > mm/rmap.c | 5 ++++-
> > 1 file changed, 4 insertions(+), 1 deletion(-)
> >
> > diff --git a/mm/rmap.c b/mm/rmap.c
> > index f5d43edad529..469fcfd32317 100644
> > --- a/mm/rmap.c
> > +++ b/mm/rmap.c
> > @@ -1304,7 +1304,10 @@ static __always_inline void __folio_add_anon_rmap(struct folio *folio,
> >  		 * page.
> >  		 */
> >  		VM_WARN_ON_FOLIO(folio_test_large(folio) &&
> > -				 level != RMAP_LEVEL_PMD, folio);
> > +				 level != RMAP_LEVEL_PMD &&
> > +				 (!IS_ALIGNED(address, nr_pages * PAGE_SIZE) ||
> > +				  (folio_test_swapcache(folio) && !IS_ALIGNED(folio->index, nr_pages)) ||
> > +				  page != &folio->page), folio);
> >  		__folio_set_anon(folio, vma, address,
> >  				 !!(flags & RMAP_EXCLUSIVE));
> >  	} else if (likely(!folio_test_ksm(folio))) {
>
>
> I have on my todo list to move all that !anon handling out of
> folio_add_anon_rmap_ptes(), and instead make swapin code call add
> folio_add_new_anon_rmap(), where we'll have to pass an exclusive flag
> then (-> whole new folio exclusive).
>
> That's the cleaner approach.
>
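If I understand the proposal correctly, the swap-in path would then do
something like the untested sketch below. Note the flags argument to
folio_add_new_anon_rmap() is hypothetical here; the current helper takes
no flags and always treats the new folio as exclusive:

	/*
	 * Rough sketch of the "cleaner approach": the swap-in path itself
	 * decides exclusivity for the whole new folio up front, instead of
	 * relying on folio_add_anon_rmap_ptes() to handle the !anon case.
	 */
	folio_add_new_anon_rmap(folio, vma, vmf->address,
				exclusive ? RMAP_EXCLUSIVE : RMAP_NONE);
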
One tricky thing is that sometimes it is hard to know who is the first
one to add the rmap and thus should call folio_add_new_anon_rmap().
Especially once we want to support swapin_readahead(), the one who
allocated the large folio might not be the one who first adds the rmap.
Would it be acceptable to do the below in do_swap_page()?
if (!folio_test_anon(folio))
	folio_add_new_anon_rmap()
else
	folio_add_anon_rmap_ptes()
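
With the arguments filled in it would look roughly like the untested
sketch below, where start_addr, nr_pages and rmap_flags stand for
whatever the large-folio swap-in path has already computed (the names
are only placeholders):

	if (!folio_test_anon(folio)) {
		/* nobody has added an anon rmap to this folio yet */
		folio_add_new_anon_rmap(folio, vma, start_addr);
	} else {
		/* e.g. readahead already made the folio anon */
		folio_add_anon_rmap_ptes(folio, page, nr_pages, vma,
					 start_addr, rmap_flags);
	}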
> --
> Cheers,
>
> David / dhildenb
>
Thanks
Barry