Message-ID: <CAMgjq7AwFiDb7cAMkWMWb3vkccie1-tocmZfT7m4WRb_UKPghg@mail.gmail.com>
Date: Tue, 28 Nov 2023 19:22:10 +0800
From: Kairui Song <ryncsn@...il.com>
To: Chris Li <chrisl@...nel.org>
Cc: linux-mm@...ck.org, Andrew Morton <akpm@...ux-foundation.org>,
"Huang, Ying" <ying.huang@...el.com>,
David Hildenbrand <david@...hat.com>,
Hugh Dickins <hughd@...gle.com>,
Johannes Weiner <hannes@...xchg.org>,
Matthew Wilcox <willy@...radead.org>,
Michal Hocko <mhocko@...e.com>, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 18/24] mm/swap: introduce a helper non fault swapin
On Wed, Nov 22, 2023 at 12:41 Chris Li <chrisl@...nel.org> wrote:
>
> On Sun, Nov 19, 2023 at 11:49 AM Kairui Song <ryncsn@...il.com> wrote:
> >
> > From: Kairui Song <kasong@...cent.com>
> >
> > There are two places where swapin is not directly caused by a page
> > fault: shmem swapin is invoked through the shmem mapping, and swapoff
> > causes swapin by walking the page table. Both used to construct a
> > pseudo vmfault struct for the swapin function.
> >
> > Shmem recently dropped the pseudo vmfault in commit ddc1a5cbc05d
> > ("mempolicy: alloc_pages_mpol() for NUMA policy without vma"), but the
> > swapoff path is still using one.
> >
> > Introduce a helper for both of them; this helps save stack usage on
> > the swapoff path, and helps apply a unified swapin cache and readahead
> > policy check.
> >
> > It also prepares for follow-up commits.
> >
> > Signed-off-by: Kairui Song <kasong@...cent.com>
> > ---
> > mm/shmem.c | 51 ++++++++++++++++---------------------------------
> > mm/swap.h | 11 +++++++++++
> > mm/swap_state.c | 38 ++++++++++++++++++++++++++++++++++++
> > mm/swapfile.c | 23 +++++++++++-----------
> > 4 files changed, 76 insertions(+), 47 deletions(-)
> >
> > diff --git a/mm/shmem.c b/mm/shmem.c
> > index f9ce4067c742..81d129aa66d1 100644
> > --- a/mm/shmem.c
> > +++ b/mm/shmem.c
> > @@ -1565,22 +1565,6 @@ static inline struct mempolicy *shmem_get_sbmpol(struct shmem_sb_info *sbinfo)
> > static struct mempolicy *shmem_get_pgoff_policy(struct shmem_inode_info *info,
> > pgoff_t index, unsigned int order, pgoff_t *ilx);
> >
> > -static struct folio *shmem_swapin_cluster(swp_entry_t swap, gfp_t gfp,
> > - struct shmem_inode_info *info, pgoff_t index)
> > -{
> > - struct mempolicy *mpol;
> > - pgoff_t ilx;
> > - struct page *page;
> > -
> > - mpol = shmem_get_pgoff_policy(info, index, 0, &ilx);
> > - page = swap_cluster_readahead(swap, gfp, mpol, ilx);
> > - mpol_cond_put(mpol);
> > -
> > - if (!page)
> > - return NULL;
> > - return page_folio(page);
> > -}
> > -
>
> Nice. Thank you.
>
> > /*
> > * Make sure huge_gfp is always more limited than limit_gfp.
> > * Some of the flags set permissions, while others set limitations.
> > @@ -1854,9 +1838,12 @@ static int shmem_swapin_folio(struct inode *inode, pgoff_t index,
> > {
> > struct address_space *mapping = inode->i_mapping;
> > struct shmem_inode_info *info = SHMEM_I(inode);
> > - struct swap_info_struct *si;
> > + enum swap_cache_result result;
> > struct folio *folio = NULL;
> > + struct mempolicy *mpol;
> > + struct page *page;
> > swp_entry_t swap;
> > + pgoff_t ilx;
> > int error;
> >
> > VM_BUG_ON(!*foliop || !xa_is_value(*foliop));
> > @@ -1866,34 +1853,30 @@ static int shmem_swapin_folio(struct inode *inode, pgoff_t index,
> > if (is_poisoned_swp_entry(swap))
> > return -EIO;
> >
> > - si = get_swap_device(swap);
> > - if (!si) {
> > + mpol = shmem_get_pgoff_policy(info, index, 0, &ilx);
> > + page = swapin_page_non_fault(swap, gfp, mpol, ilx, fault_mm, &result);
Hi Chris,

I've been trying to address these issues in V2. Most issues in the other
patches have a straightforward solution, and some could be discussed in a
separate series, but I've come up with some thoughts here:
>
> Notice this "result" CAN be outdated. e.g. after this call, the swap
> cache can be changed by another thread generating a swap page fault
> and installing the folio into the swap cache, or removing it.
This is true, and it seems a potential race also exists before this
series for the direct (no swapcache) swapin path (do_swap_page), if I
understand it correctly:

In the do_swap_page path, multiple processes could swap in the page at
the same time (a page mapped only once can still be shared by sub
threads), and they could get different folios. The later pte lock and
pte_same check are not enough, because while one process is not holding
the pte lock, another process could read the page in, swap_free the
entry, then swap the page out again reusing the same entry: an ABA
problem. The race is unlikely to happen in reality, but it is possible
in theory.
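
To make the window concrete, here is a hypothetical interleaving (the
ordering is illustrative, not something I have observed):

  CPU0 (do_swap_page)                CPU1 (same entry E)
  -------------------                -------------------
  read swap entry E from the pte
  folio A = swap in E
                                     swap in E, map it, swap_free(E)
                                     (E is now free and can be reused)
                                     page is swapped out again and
                                     happens to get entry E back
  lock the pte
  pte_same() passes: the pte holds
  E again, but E now refers to
  CPU1's newer data, not folio A
  map stale folio A  <-- ABA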
The same issue exists for shmem here: there are the later
shmem_confirm_swap/shmem_add_to_page_cache checks to prevent
re-installing into the shmem mapping for direct swapin, but they are
not enough either. Another process could read the page in and swap it
out again using the same entry, so the mapping entry appears unchanged
during the time window. Again, very unlikely to happen in reality, but
not impossible.
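
The check that gets fooled is only a value compare, roughly (a
simplified sketch of the re-check in shmem_swapin_folio(), error
handling elided):

	/*
	 * shmem_confirm_swap() only compares the swap entry *value*
	 * stored in the mapping, so an entry that was freed and
	 * re-allocated with the same value passes the check even
	 * though the swapped-out data is newer.
	 */
	if (!shmem_confirm_swap(mapping, index, swap)) {
		error = -EEXIST;
		goto unlock;
	}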
When the swapcache is used there is no such issue, since the swap lock
and swap_map are used to sync all readers: while one reader is still
holding the folio, the entry stays locked through the swapcache; and if
a folio has been removed from the swapcache, folio_test_swapcache will
fail and the reader can retry.
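
The stabilizing pattern I mean is roughly what do_swap_page already does
once it holds the folio lock (a simplified sketch; error paths and the
swapin-on-miss fallback are elided):

	folio = swap_cache_get_folio(entry, vma, addr);
	/* ... swap the folio in if it was not in the swapcache ... */
	folio_lock(folio);
	if (!folio_test_swapcache(folio) || folio->swap.val != entry.val) {
		/* entry was freed/reused while we slept: retry the fault */
		folio_unlock(folio);
		folio_put(folio);
		goto out;
	}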
I'm trying to come up with a better locking scheme for the direct swapin
path. Am I missing anything here? Correct me if I got it wrong...
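
For readers following the thread, the helper under discussion has roughly
this shape (a sketch reconstructed from the call site quoted above and
from the removed shmem_swapin_cluster(); the real implementation in
mm/swap_state.c also checks the swapcache, chooses a readahead policy,
and fills in *result, all elided here):

	struct page *swapin_page_non_fault(swp_entry_t entry, gfp_t gfp_mask,
					   struct mempolicy *mpol, pgoff_t ilx,
					   struct mm_struct *mm,
					   enum swap_cache_result *result)
	{
		struct swap_info_struct *si;
		struct page *page = NULL;

		/* pin the swap device, as the old open-coded callers did */
		si = get_swap_device(entry);
		if (si) {
			page = swap_cluster_readahead(entry, gfp_mask, mpol, ilx);
			put_swap_device(si);
		}
		return page;
	}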