Message-ID: <CAHbLzkrYBpbDN4QHGP_HYwcoxOxOpEK1Q=mUxcos3MtdZ5fEzw@mail.gmail.com>
Date: Tue, 12 Oct 2021 20:00:31 -0700
From: Yang Shi <shy828301@...il.com>
To: Peter Xu <peterx@...hat.com>
Cc: HORIGUCHI NAOYA(堀口 直也)
<naoya.horiguchi@....com>, Hugh Dickins <hughd@...gle.com>,
"Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>,
Matthew Wilcox <willy@...radead.org>,
Oscar Salvador <osalvador@...e.de>,
Andrew Morton <akpm@...ux-foundation.org>,
Linux MM <linux-mm@...ck.org>,
Linux FS-devel Mailing List <linux-fsdevel@...r.kernel.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>
Subject: Re: [v3 PATCH 4/5] mm: shmem: don't truncate page if memory failure happens
On Tue, Oct 12, 2021 at 3:27 PM Peter Xu <peterx@...hat.com> wrote:
>
> On Tue, Oct 12, 2021 at 12:17:33PM -0700, Yang Shi wrote:
> > On Mon, Oct 11, 2021 at 6:57 PM Peter Xu <peterx@...hat.com> wrote:
> > >
> > > On Thu, Sep 30, 2021 at 02:53:10PM -0700, Yang Shi wrote:
> > > > diff --git a/mm/shmem.c b/mm/shmem.c
> > > > index 88742953532c..75c36b6a405a 100644
> > > > --- a/mm/shmem.c
> > > > +++ b/mm/shmem.c
> > > > @@ -2456,6 +2456,7 @@ shmem_write_begin(struct file *file, struct address_space *mapping,
> > > > struct inode *inode = mapping->host;
> > > > struct shmem_inode_info *info = SHMEM_I(inode);
> > > > pgoff_t index = pos >> PAGE_SHIFT;
> > > > + int ret = 0;
> > > >
> > > > /* i_rwsem is held by caller */
> > > > if (unlikely(info->seals & (F_SEAL_GROW |
> > > > @@ -2466,7 +2467,17 @@ shmem_write_begin(struct file *file, struct address_space *mapping,
> > > > return -EPERM;
> > > > }
> > > >
> > > > - return shmem_getpage(inode, index, pagep, SGP_WRITE);
> > > > + ret = shmem_getpage(inode, index, pagep, SGP_WRITE);
> > > > +
> > > > + if (*pagep) {
> > > > + if (PageHWPoison(*pagep)) {
> > > > + unlock_page(*pagep);
> > > > + put_page(*pagep);
> > > > + ret = -EIO;
> > > > + }
> > > > + }
> > > > +
> > > > + return ret;
> > > > }
> > > >
> > > > static int
> > > > @@ -2555,6 +2566,11 @@ static ssize_t shmem_file_read_iter(struct kiocb *iocb, struct iov_iter *to)
> > > > unlock_page(page);
> > > > }
> > > >
> > > > + if (page && PageHWPoison(page)) {
> > > > + error = -EIO;
> > > > + break;
> > > > + }
> > > > +
> > > > /*
> > > > * We must evaluate after, since reads (unlike writes)
> > > > * are called without i_rwsem protection against truncate
> > >
> > > [...]
> > >
> > > > @@ -4193,6 +4216,10 @@ struct page *shmem_read_mapping_page_gfp(struct address_space *mapping,
> > > > page = ERR_PTR(error);
> > > > else
> > > > unlock_page(page);
> > > > +
> > > > + if (PageHWPoison(page))
> > > > + page = ERR_PTR(-EIO);
> > > > +
> > > > return page;
> > > > #else
> > > > /*
> > > > diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
> > > > index 7a9008415534..b688d5327177 100644
> > > > --- a/mm/userfaultfd.c
> > > > +++ b/mm/userfaultfd.c
> > > > @@ -233,6 +233,11 @@ static int mcontinue_atomic_pte(struct mm_struct *dst_mm,
> > > > goto out;
> > > > }
> > > >
> > > > + if (PageHWPoison(page)) {
> > > > + ret = -EIO;
> > > > + goto out_release;
> > > > + }
> > > > +
> > > > ret = mfill_atomic_install_pte(dst_mm, dst_pmd, dst_vma, dst_addr,
> > > > page, false, wp_copy);
> > > > if (ret)
> > > > --
> > > > 2.26.2
> > > >
> > >
> > > These are shmem_getpage_gfp() call sites:
> > >
> > > shmem_getpage[151] return shmem_getpage_gfp(inode, index, pagep, sgp,
> > > shmem_fault[2112] err = shmem_getpage_gfp(inode, vmf->pgoff, &vmf->page, SGP_CACHE,
> > > shmem_read_mapping_page_gfp[4188] error = shmem_getpage_gfp(inode, index, &page, SGP_CACHE,
> > >
> > > These are further shmem_getpage() call sites:
> > >
> > > collapse_file[1735] if (shmem_getpage(mapping->host, index, &page,
> > > shmem_undo_range[965] shmem_getpage(inode, start - 1, &page, SGP_READ);
> > > shmem_undo_range[980] shmem_getpage(inode, end, &page, SGP_READ);
> > > shmem_write_begin[2467] return shmem_getpage(inode, index, pagep, SGP_WRITE);
> > > shmem_file_read_iter[2544] error = shmem_getpage(inode, index, &page, sgp);
> > > shmem_fallocate[2733] error = shmem_getpage(inode, index, &page, SGP_FALLOC);
> > > shmem_symlink[3079] error = shmem_getpage(inode, 0, &page, SGP_WRITE);
> > > shmem_get_link[3120] error = shmem_getpage(inode, 0, &page, SGP_READ);
> > > mcontinue_atomic_pte[235] ret = shmem_getpage(inode, pgoff, &page, SGP_READ);
> > >
> > > Wondering whether this patch covered all of them.
> >
> > No, it doesn't need to. Not all places care about a hwpoison page,
> > for example, truncate, hole punch, etc. Only the APIs which return
> > data back to userspace or write it back to disk need to care about
> > whether the data is corrupted or not. This has been elaborated in
> > the cover letter.
>
> I see, sorry I missed that. However there are still two entries in the
> above list that I'm unsure this patch covers (besides fault path,
> truncate, ...):
>
> collapse_file[1735] if (shmem_getpage(mapping->host, index, &page,
> shmem_get_link[3120] error = shmem_getpage(inode, 0, &page, SGP_READ);
>
> IIUC the 1st one is when we want to collapse a file THP; should we stop
> the attempt if we see a hwpoison small page?
The page refcount could stop khugepaged from collapsing a hwpoison page.
One could argue khugepaged could bail out earlier by checking the
hwpoison flag, but that is definitely not a must. So it relies on the
refcount for now.
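For illustration only (a sketch, not part of this series, and the names
below are just my reading of collapse_file()): memory_failure() keeps an
extra reference on the poisoned page, so the existing
page_ref_freeze(page, 3) step fails and the collapse is abandoned with
SCAN_PAGE_COUNT anyway. An explicit check would only make the bail-out
earlier, roughly:

	/*
	 * Hypothetical early bail-out in collapse_file()'s per-page
	 * loop -- not something this patch adds.  Today the extra
	 * refcount pinned by memory_failure() already makes the later
	 * page_ref_freeze(page, 3) fail, so khugepaged gives up on the
	 * range without any new code.
	 */
	if (PageHWPoison(page)) {
		result = SCAN_FAIL;	/* error path/label names assumed */
		goto out_unlock;
	}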
>
> The 2nd one should be the case where we have a shmem symlink file and
> the 1st page, which stores the link target, got corrupted. Should we
> fail the get_link() then?
Thanks for catching this. Yeah, it seems this one was overlooked. I
didn't realize that reading a symlink needs to copy the first page. Will
fix it in the next version.
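If it helps the discussion, one possible shape of the fix in
shmem_get_link() might look like the sketch below (just my guess, not
the actual next version; the error code choice is debatable):

	error = shmem_getpage(inode, 0, &page, SGP_READ);
	if (error)
		return ERR_PTR(error);
	/*
	 * The symlink target lives in the first page of the file, so a
	 * hwpoisoned page means the link data may be corrupted; drop
	 * the reference and fail the lookup instead of exposing it.
	 */
	if (page && PageHWPoison(page)) {
		unlock_page(page);
		put_page(page);
		return ERR_PTR(-EIO);
	}
	unlock_page(page);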
>
> --
> Peter Xu
>