Open Source and information security mailing list archives
Message-ID: <alpine.LSU.2.11.2103111312310.7859@eggly.anvils>
Date:   Thu, 11 Mar 2021 14:00:40 -0800 (PST)
From:   Hugh Dickins <hughd@...gle.com>
To:     Jue Wang <juew@...gle.com>
cc:     HORIGUCHI NAOYA(堀口 直也) 
        <naoya.horiguchi@....com>, Hugh Dickins <hughd@...gle.com>,
        Naoya Horiguchi <nao.horiguchi@...il.com>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Michal Hocko <mhocko@...nel.org>,
        Oscar Salvador <osalvador@...e.de>,
        Tony Luck <tony.luck@...el.com>,
        Matthew Wilcox <willy@...radead.org>,
        "Aneesh Kumar K.V" <aneesh.kumar@...ux.vnet.ibm.com>,
        Greg Thelen <gthelen@...gle.com>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
        "linux-mm@...ck.org" <linux-mm@...ck.org>
Subject: Re: [PATCH v1] mm, hwpoison: enable error handling on shmem thp

On Thu, 11 Mar 2021, Jue Wang wrote:
> On Thu, Mar 11, 2021 at 7:14 AM HORIGUCHI NAOYA(堀口 直也)
> <naoya.horiguchi@....com> wrote:
> > On Wed, Mar 10, 2021 at 11:22:18PM -0800, Hugh Dickins wrote:
> > >
> > > I'm not much into memory-failure myself, but Jue discovered that the
> > > SIGBUS never arrives: because split_huge_page() on a shmem or file
> > > THP unmaps all its pmds and ptes, and (unlike with anon) leaves them
> > > unmapped - in normal circumstances, to be faulted back on demand.
> > > So the page_mapped() check in hwpoison_user_mappings() fails,
> > > and the intended SIGBUS is not delivered.
> >
> > Thanks for the information.  The split behaves quite differently
> > for anon THP and for shmem THP.  I saw some unexpected behavior in
> > my testing; maybe that's due to the difference.
> >
> > >
> > > (Or, is it acceptable that the SIGBUS is not delivered to those who
> > > have the huge page mapped: should it get delivered later, to anyone
> > > who faults back in the bad 4k?)
> >
> > A later access should report the error at page fault time, so the
> > worst scenario of consuming corrupted data does not happen; but the
> > precautionary signal does not work, so it's not acceptable.

On the other hand, if split_huge_page() does succeed, then there is an
argument that it would be better not to SIGBUS all mappers of parts of
the THP, but to wait and signal only those that re-access the one bad
4k page.

> In our experiment with shmem THPs, later accesses resulted in a zero
> page being allocated, instead of a SIGBUS with BUS_MCEERR_AR reported
> by the page fault handler. Fixing that path might be an opportunity
> to prevent some silent data corruption.

Thanks for filling in more detail, Jue: I understand better now.

Maybe mm/shmem.c is wrong to be using generic_error_remove_page(),
the function which punches a hole on memory-failure.

That works well for filesystems backed by storage (at least when the
page had not been modified), because it does not (I think) actually
punch a hole in the stored object; and the next touch at that offset of
the file will allocate a new cache page to be filled from good storage.

But in the case of shmem (if we ignore the less likely swap cache case)
there is no storage to read back good data from, so the next touch just
fills a new cache page with zeroes (as you report above).

I don't know enough of the philosophy of memory-failure to say, but
I can see there's an argument for leaving the bad page in cache, to
give SIGBUS or EFAULT or EIO (whether by observation of PageHWPoison,
or by another MCE) to whoever accesses it - until the file, or that
part of it, is deleted (after which that page is never returned for
use again).

Hugh
