Message-ID: <20210827035739.GA3247360@hori.linux.bs1.fc.nec.co.jp>
Date: Fri, 27 Aug 2021 03:57:39 +0000
From: HORIGUCHI NAOYA(堀口 直也)
<naoya.horiguchi@....com>
To: Yang Shi <shy828301@...il.com>
CC: "osalvador@...e.de" <osalvador@...e.de>,
"hughd@...gle.com" <hughd@...gle.com>,
"kirill.shutemov@...ux.intel.com" <kirill.shutemov@...ux.intel.com>,
"akpm@...ux-foundation.org" <akpm@...ux-foundation.org>,
"linux-mm@...ck.org" <linux-mm@...ck.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] mm: hwpoison: deal with page cache THP
On Thu, Aug 26, 2021 at 03:03:57PM -0700, Yang Shi wrote:
> On Thu, Aug 26, 2021 at 1:03 PM Yang Shi <shy828301@...il.com> wrote:
> >
> > On Wed, Aug 25, 2021 at 11:17 PM HORIGUCHI NAOYA(堀口 直也)
> > <naoya.horiguchi@....com> wrote:
> > >
> > > On Tue, Aug 24, 2021 at 03:13:22PM -0700, Yang Shi wrote:
...
> > >
> > > There was a discussion about another approach of keeping error pages in the
> > > page cache for filesystems without backing storage.
> > > https://lore.kernel.org/lkml/alpine.LSU.2.11.2103111312310.7859@eggly.anvils/
> > > This approach seems less complicated to me, but one concern is that this
> > > change affects the user-visible behavior of memory errors. Keeping error
> > > pages in the page cache means that the errors persist until the next system
> > > reboot, so we might need to define a way to clear the errors so that the
> > > affected file can continue to be used. The current implementation just
> > > sends SIGBUS to the mapping processes (at least once) and then forgets
> > > about the error, so it has no such issue.
> > >
> > > Another possible solution might be to send SIGBUS immediately when a memory
> > > error happens on a shmem THP. We can find all the mapping processes before
> > > splitting the shmem THP, so we can send SIGBUS first, then split it and
> > > contain the error page. This is not elegant (it gives up any optional
> > > actions), but at least we avoid silent data loss, as the sketch below shows.
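> > >
> > > A rough sketch of that ordering (collect_procs() and kill_procs() are the
> > > existing static helpers in mm/memory-failure.c, whose exact signatures vary
> > > between kernel versions; kill_then_split_shmem_thp() is a hypothetical name,
> > > not part of any posted patch):
> > >
> > > /*
> > >  * Sketch only: signal the mapping processes before splitting, so no
> > >  * one silently consumes the poisoned subpage afterwards.  The caller
> > >  * is assumed to hold a reference on the head page.
> > >  */
> > > static int kill_then_split_shmem_thp(struct page *head, unsigned long pfn,
> > > 				     int flags)
> > > {
> > > 	LIST_HEAD(tokill);
> > > 	int ret;
> > >
> > > 	/* Find every process mapping the huge page while it is intact. */
> > > 	collect_procs(head, &tokill, flags & MF_ACTION_REQUIRED);
> > >
> > > 	/* Deliver SIGBUS first, giving up on any optional actions. */
> > > 	kill_procs(&tokill, flags & MF_MUST_KILL, false, pfn, flags);
> > >
> > > 	/* Now split the THP so the 4kB error page can be contained. */
> > > 	lock_page(head);
> > > 	ret = split_huge_page(head);
> > > 	unlock_page(head);
> > > 	return ret;
> > > }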
> >
> > Thanks a lot. I apologize that I didn't notice you had already posted a
> > similar patch before.
> >
> > Yes, I think I focused too much on the soft offline part and missed the
> > uncorrected error part, and I admit I underestimated the problem.
> >
> > I think Hugh's suggestion makes sense if we treat tmpfs as a regular
> > filesystem (just memory backed). AFAIK, some filesystems, e.g. btrfs,
> > verify a checksum after reading a block from storage and return an error
> > if the checksum does not match, since a mismatch may indicate a hardware
> > failure on disk. The syscall then returns an error, or the page fault
> > raises SIGBUS.
> >
> > So in the shmem/tmpfs case, if a hwpoisoned page is encountered, just
> > return an error (-EIO or whatever) for a syscall, or SIGBUS for a page
> > fault. That aligns with the behavior of other filesystems, and it is
> > definitely the applications' responsibility to check the return values
> > of read/write syscalls.
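> >
> > A minimal sketch of that check in a shmem lookup path (illustrative only;
> > shmem_get_page_checked() is a hypothetical helper, while
> > shmem_read_mapping_page() and PageHWPoison() are existing interfaces):
> >
> > /* Look up a shmem page and refuse poisoned ones with -EIO. */
> > static struct page *shmem_get_page_checked(struct address_space *mapping,
> > 					   pgoff_t index)
> > {
> > 	struct page *page = shmem_read_mapping_page(mapping, index);
> >
> > 	if (!IS_ERR(page) && PageHWPoison(page)) {
> > 		/* Error page kept in cache: fail the access instead. */
> > 		put_page(page);
> > 		return ERR_PTR(-EIO);
> > 	}
> > 	return page;
> > }
> >
> > The fault path would make the same PageHWPoison() check and return
> > VM_FAULT_HWPOISON so that the faulting process gets SIGBUS.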
>
> BTW, IIUC dirty regular page cache pages (storage backed) would be left in
> the page cache too, while clean page cache pages would be truncated since
> they can simply be reread from storage, right?
A dirty page cache page is also removed on error (me_pagecache_dirty()
falls through to me_pagecache_clean(), which then calls
truncate_error_page()).
The main purpose of this is to detach the error page from existing data
structures to minimize the risk of later accesses (for example via a race
or a bug).
But we can change this behavior for specific filesystems by overriding the
error_remove_page() callback in their address_space_operations.
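
For example, a sketch (not a proposal; the "example_" names are
hypothetical): a filesystem that wants to keep poisoned pages in its page
cache could provide a callback that reports success without truncating,
instead of using generic_error_remove_page():

/*
 * Keep the PageHWPoison page in the page cache; returning 0 tells
 * truncate_error_page() that the error was handled, so later accesses
 * must check PageHWPoison() themselves.
 */
static int example_error_remove_page(struct address_space *mapping,
				     struct page *page)
{
	return 0;
}

static const struct address_space_operations example_aops = {
	/* ... the usual callbacks ... */
	.error_remove_page = example_error_remove_page,
};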
Honestly, it seems to me that how dirty data is lost does not depend on the
filesystem, and I'm still not sure that this is really the right approach
for the current issue.
Thanks,
Naoya Horiguchi