Message-ID: <Y2K+y7wnhC4vbnP2@x1n>
Date: Wed, 2 Nov 2022 15:02:35 -0400
From: Peter Xu <peterx@...hat.com>
To: Matthew Wilcox <willy@...radead.org>
Cc: "Vishal Moola (Oracle)" <vishal.moola@...il.com>,
linux-mm@...ck.org, linux-kernel@...r.kernel.org,
linux-fsdevel@...r.kernel.org, akpm@...ux-foundation.org,
Hugh Dickins <hughd@...gle.com>,
Axel Rasmussen <axelrasmussen@...gle.com>
Subject: Re: [PATCH 3/5] userfaultfd: Replace lru_cache functions with
folio_add functions

On Tue, Nov 01, 2022 at 06:31:26PM +0000, Matthew Wilcox wrote:
> On Tue, Nov 01, 2022 at 10:53:24AM -0700, Vishal Moola (Oracle) wrote:
> > Replaces lru_cache_add() and lru_cache_add_inactive_or_unevictable()
> > with folio_add_lru() and folio_add_lru_vma(). This is in preparation for
> > the removal of lru_cache_add().
>
> Ummmmm. Reviewing this patch reveals a bug (not introduced by your
> patch). Look:
>
> mfill_atomic_install_pte:
> 	bool page_in_cache = page->mapping;
>
> mcontinue_atomic_pte:
> 	ret = shmem_get_folio(inode, pgoff, &folio, SGP_NOALLOC);
> 	...
> 	page = folio_file_page(folio, pgoff);
> 	ret = mfill_atomic_install_pte(dst_mm, dst_pmd, dst_vma, dst_addr,
> 				       page, false, wp_copy);
>
> That says pretty plainly that mfill_atomic_install_pte() can be passed
> a tail page from shmem, and if it is ...
>
> 	if (page_in_cache) {
> 		...
> 	} else {
> 		page_add_new_anon_rmap(page, dst_vma, dst_addr);
> 		lru_cache_add_inactive_or_unevictable(page, dst_vma);
> 	}
>
> it'll get put on the rmap as an anon page!

Hmm yeah.. thanks Matthew!

Does the patch attached look reasonable to you?

Copying Axel too.
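In short, the idea is to detect page cache pages with page_mapping() instead
of testing page->mapping directly: on a tail page that field does not hold the
address_space pointer (only the head page's does), so the shmem tail page
coming out of mcontinue_atomic_pte() can be misclassified. Roughly (the
attached patch has the exact change):

-	bool page_in_cache = page->mapping;
+	bool page_in_cache = page_mapping(page);

page_mapping() resolves the head page first, so the page cache check keeps
working for tail pages and they still go through page_add_file_rmap() rather
than the anon rmap path.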
>
> > Signed-off-by: Vishal Moola (Oracle) <vishal.moola@...il.com>
> > ---
> > mm/userfaultfd.c | 6 ++++--
> > 1 file changed, 4 insertions(+), 2 deletions(-)
> >
> > diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
> > index e24e8a47ce8a..2560973b00d8 100644
> > --- a/mm/userfaultfd.c
> > +++ b/mm/userfaultfd.c
> > @@ -66,6 +66,7 @@ int mfill_atomic_install_pte(struct mm_struct *dst_mm, pmd_t *dst_pmd,
> >  	bool vm_shared = dst_vma->vm_flags & VM_SHARED;
> >  	bool page_in_cache = page->mapping;
> >  	spinlock_t *ptl;
> > +	struct folio *folio;
> >  	struct inode *inode;
> >  	pgoff_t offset, max_off;
> >
> > @@ -113,14 +114,15 @@ int mfill_atomic_install_pte(struct mm_struct *dst_mm, pmd_t *dst_pmd,
> >  	if (!pte_none_mostly(*dst_pte))
> >  		goto out_unlock;
> >
> > +	folio = page_folio(page);
> >  	if (page_in_cache) {
> >  		/* Usually, cache pages are already added to LRU */
> >  		if (newly_allocated)
> > -			lru_cache_add(page);
> > +			folio_add_lru(folio);
> >  		page_add_file_rmap(page, dst_vma, false);
> >  	} else {
> >  		page_add_new_anon_rmap(page, dst_vma, dst_addr);
> > -		lru_cache_add_inactive_or_unevictable(page, dst_vma);
> > +		folio_add_lru_vma(folio, dst_vma);
> >  	}
> >
> >  	/*
> > --
> > 2.38.1
> >
> >
>
--
Peter Xu
View attachment "0001-mm-shmem-Use-page_mapping-to-detect-page-cache-for-u.patch" of type "text/plain" (1899 bytes)