Message-ID: <20210902030728.GA1860112@hori.linux.bs1.fc.nec.co.jp>
Date:   Thu, 2 Sep 2021 03:07:28 +0000
From:   HORIGUCHI NAOYA(堀口 直也) 
        <naoya.horiguchi@....com>
To:     Yang Shi <shy828301@...il.com>
CC:     "osalvador@...e.de" <osalvador@...e.de>,
        "hughd@...gle.com" <hughd@...gle.com>,
        "kirill.shutemov@...ux.intel.com" <kirill.shutemov@...ux.intel.com>,
        "akpm@...ux-foundation.org" <akpm@...ux-foundation.org>,
        "linux-mm@...ck.org" <linux-mm@...ck.org>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] mm: hwpoison: deal with page cache THP

On Mon, Aug 30, 2021 at 04:44:06PM -0700, Yang Shi wrote:
> On Thu, Aug 26, 2021 at 10:02 PM Yang Shi <shy828301@...il.com> wrote:
> >
> > On Thu, Aug 26, 2021 at 8:57 PM HORIGUCHI NAOYA(堀口 直也)
> > <naoya.horiguchi@....com> wrote:
> > >
> > > On Thu, Aug 26, 2021 at 03:03:57PM -0700, Yang Shi wrote:
> > > > On Thu, Aug 26, 2021 at 1:03 PM Yang Shi <shy828301@...il.com> wrote:
> > > > >
> > > > > On Wed, Aug 25, 2021 at 11:17 PM HORIGUCHI NAOYA(堀口 直也)
> > > > > <naoya.horiguchi@....com> wrote:
> > > > > >
> > > > > > On Tue, Aug 24, 2021 at 03:13:22PM -0700, Yang Shi wrote:
> > > ...
> > > > > >
> > > > > > There was a discussion about another approach: keeping error pages in the page
> > > > > > cache for filesystems without backing storage.
> > > > > > https://lore.kernel.org/lkml/alpine.LSU.2.11.2103111312310.7859@eggly.anvils/
> > > > > > That approach seems less complicated to me, but one concern is that it
> > > > > > changes the user-visible behavior of memory errors.  Keeping error pages
> > > > > > in the page cache means that the errors persist until the next system reboot,
> > > > > > so we might need to define a way to clear the errors so the affected file
> > > > > > can continue to be used.  The current implementation just sends SIGBUS to the
> > > > > > mapping processes (at least once), then forgets about the error, so there is
> > > > > > no such issue.
> > > > > >
> > > > > > Another possible solution might be to send SIGBUS immediately when a memory
> > > > > > error happens on a shmem THP.  We can find all the mapping processes before
> > > > > > splitting the shmem THP, so we send SIGBUS first, then split it and contain
> > > > > > the error page.  This is not elegant (it gives up any optional actions), but
> > > > > > at least we avoid silent data loss.
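> > > > > >
> > > > > > A minimal sketch of that ordering (assuming the collect_procs()/kill_procs()
> > > > > > helpers in mm/memory-failure.c; exact signatures vary by kernel version):
> > > > > >
> > > > > >         LIST_HEAD(tokill);
> > > > > >
> > > > > >         /* Gather mapping tasks while the THP's rmap is still intact. */
> > > > > >         collect_procs(head, &tokill, flags & MF_ACTION_REQUIRED);
> > > > > >         /* SIGBUS first, so no mapper can silently read stale/zero data. */
> > > > > >         kill_procs(&tokill, flags & MF_MUST_KILL, false, pfn, flags);
> > > > > >         /* Only then split the THP and contain the single bad subpage. */
> > > > > >         split_huge_page(head);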
> > > > >
> > > > > Thanks a lot. I apologize that I didn't notice you had already posted a
> > > > > similar patch before.
> > > > >
> > > > > Yes, I think I focused too much on the soft offline part and missed
> > > > > the uncorrected error part, and I admit I underestimated the
> > > > > problem.
> > > > >
> > > > > I think Hugh's suggestion makes sense if we treat tmpfs as a regular
> > > > > filesystem (just memory backed). AFAIK some filesystems, e.g. btrfs,
> > > > > verify a checksum after reading a block from storage and return an
> > > > > error if the checksum doesn't match, since a mismatch may indicate a
> > > > > hardware failure on disk. The syscalls then return an error, or the
> > > > > page fault raises SIGBUS.
> > > > >
> > > > > So in the shmem/tmpfs case, if a hwpoisoned page is encountered, just
> > > > > return an error (-EIO or whatever) for a syscall, or SIGBUS for a page
> > > > > fault. That aligns with the behavior of other filesystems. It is
> > > > > definitely the application's responsibility to check the return value
> > > > > of read/write syscalls.
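> > > > >
> > > > > For example, a userspace reader is expected to handle that error path
> > > > > itself; a minimal sketch (the file path here is made up):
> > > > >
> > > > >         #include <errno.h>
> > > > >         #include <fcntl.h>
> > > > >         #include <stdio.h>
> > > > >         #include <unistd.h>
> > > > >
> > > > >         int main(void)
> > > > >         {
> > > > >                 char buf[4096];
> > > > >                 int fd = open("/dev/shm/data", O_RDONLY);
> > > > >
> > > > >                 if (fd < 0)
> > > > >                         return 1;
> > > > >                 /* A read that lands on a poisoned page would fail with EIO. */
> > > > >                 if (read(fd, buf, sizeof(buf)) < 0 && errno == EIO)
> > > > >                         fprintf(stderr, "read hit a poisoned page\n");
> > > > >                 close(fd);
> > > > >                 return 0;
> > > > >         }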
> > > >
> > > > BTW, IIUC a dirty regular page cache page (storage backed) would be left
> > > > in the page cache too, while clean page cache pages would be truncated
> > > > since they can just be reread from storage, right?
> > >
> > > A dirty page cache page is also removed on error (me_pagecache_dirty()
> > > falls through to me_pagecache_clean(), and then truncate_error_page() is
> > > called).  The main purpose of this is to detach the error page from
> > > existing data structures to minimize the risk of later accesses (by race
> > > or bug).  But we can change this behavior for specific filesystems by
> > > updating the error_remove_page() callback in address_space_operations.
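> > >
> > > For example, a filesystem opts in through its address_space_operations
> > > (a sketch; "foo" is a placeholder, and most in-tree filesystems simply
> > > point the callback at the generic helper):
> > >
> > >         static const struct address_space_operations foo_aops = {
> > >                 /* ... readpage/writepage etc. ... */
> > >                 .error_remove_page = generic_error_remove_page,
> > >         };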
> >
> > Yeah, if the fs's error_remove_page() is defined. It seems the filesystems
> > that define error_remove_page() all just use generic_error_remove_page(),
> > except hugetlbfs. And the generic implementation just clears the dirty
> > flag and removes the page from the page cache.
> >
> > If error_remove_page() is not defined, the page stays in the page
> > cache since invalidate_inode_page() can't remove dirty pages.
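> >
> > For reference, the generic helper is roughly the following (paraphrasing
> > mm/truncate.c from memory; truncate_inode_page() is what actually cancels
> > the dirty state and drops the page):
> >
> >         int generic_error_remove_page(struct address_space *mapping,
> >                                       struct page *page)
> >         {
> >                 if (!mapping)
> >                         return -EINVAL;
> >                 /* Only punch for normal data pages for now. */
> >                 if (!S_ISREG(mapping->host->i_mode))
> >                         return -EIO;
> >                 return truncate_inode_page(mapping, page);
> >         }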
> >
> > >
> > > Honestly, it seems to me that how dirty data is lost does not depend on
> > > the file system, and I'm still not sure this is really the right approach
> > > for the current issue.
> >
> > IMHO the biggest problem is that applications may silently see
> > obsolete/inconsistent data, right? Keeping the
> > corrupted page in the page cache would let applications know
> > that they are accessing inconsistent data.
> 
> The remove-from-page-cache behavior may be much worse for shmem/tmpfs
> since it actually removes the data blocks for the file. The user
> will get all zeroes if the corrupted blocks are read, without any
> notification.
> 
> The more I stare at the code and run tests, the more I think we
> should keep the corrupted page in the page cache and notify the users.
> 
> It seems easier for read-only filesystems: just remove the page from
> the page cache, since the data can always be reread from disk. This is
> also the current behavior.
> 
> For shmem, the page could be kept in page cache with dirty flag set
> since it won't be written back.
> 
> For regular filesystems that do writeback, things are a little
> more complicated since we need to prevent writeback by
> clearing the dirty flag. Besides writeback, we also need to distinguish
> a cache drop from truncation/hole punch/unlink. We don't want a cache drop
> (e.g. echo 1 > /proc/sys/vm/drop_caches) to drop the corrupted page, but
> truncate/hole punch/unlink should be free to remove the page since the
> underlying data blocks will be gone too.
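>
> A rough sketch of the writeback side, assuming the poisoned page is left
> in the page cache (real code would also need to fix up dirty accounting):
>
>         /* Keep the page in the tree, but make writeback skip it. */
>         SetPageHWPoison(page);
>         ClearPageDirty(page);   /* never write corrupted data back to disk */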
> 
> Thanks to the refcount pin taken by memory failure, a cache drop can't
> drop the page, since that path checks whether the refcount matches the
> expected value. Truncate/hole punch/unlink don't check the refcount, so
> they can proceed. But inode eviction (the slab shrinking path) may call
> truncate, so the corrupted page may still be removed from the page cache
> while the underlying data blocks still exist, IIUC. There might be other
> paths in filesystems that truncate the page cache while the underlying
> data blocks are still present.
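>
> That refcount guard lives in __remove_mapping() (mm/vmscan.c), roughly:
>
>         /* Expected refs: page cache plus our caller (more for THP tails). */
>         refcount = 1 + compound_nr(page);
>         if (!page_ref_freeze(page, refcount))
>                 goto cannot_free;       /* extra pin (e.g. memory failure) wins */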
> 
> The read/write syscalls also need to check the hwpoison flag. I'm not
> sure whether I'm missing other syscalls.
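>
> A sketch of that read-side check, assuming the page stays in the page
> cache with PageHWPoison set (using the generic pagecache lookup API):
>
>         page = find_get_page(mapping, index);
>         if (page && PageHWPoison(page)) {
>                 put_page(page);
>                 return -EIO;    /* surface the corruption instead of zeroes */
>         }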

Thanks for the analysis. I think that filesystems supporting writeback might
call filemap_write_and_wait() when evicting an inode, which can report the
error via the AS_EIO flag in struct address_space.  But that does nothing for
no-writeback filesystems like shmem, so we need better error reporting for them.
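
The usual pattern is something like (a sketch):

        /* The writeback path records the failure ... */
        mapping_set_error(mapping, -EIO);       /* sets AS_EIO */

        /* ... and a later fsync()/evict observes (and clears) it. */
        err = filemap_write_and_wait(mapping);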

> 
> I'm not a filesystem expert, so I'm not sure whether I'm missing something
> else, but I suppose most cases should be covered.
> 
> I'd like to start with shmem/tmpfs since it is relatively easier, and
> this could also unblock shmem THP hwpoison support. Any comment is
> welcome.

I think that keeping corrupted pages in the page cache can be a better solution.
So if you plan to write a patchset, I'd be glad to review/test it.  Starting with
shmem/tmpfs sounds good to me.

Thanks,
Naoya Horiguchi
