Message-ID: <20210818155245.GE12664@magnolia>
Date:   Wed, 18 Aug 2021 08:52:45 -0700
From:   "Darrick J. Wong" <djwong@...nel.org>
To:     Jane Chu <jane.chu@...cle.com>
Cc:     Shiyang Ruan <ruansy.fnst@...itsu.com>,
        linux-kernel@...r.kernel.org, linux-xfs@...r.kernel.org,
        nvdimm@...ts.linux.dev, linux-mm@...ck.org,
        linux-fsdevel@...r.kernel.org, dm-devel@...hat.com,
        dan.j.williams@...el.com, david@...morbit.com, hch@....de,
        agk@...hat.com, snitzer@...hat.com
Subject: Re: [PATCH RESEND v6 1/9] pagemap: Introduce ->memory_failure()

On Tue, Aug 17, 2021 at 11:08:40PM -0700, Jane Chu wrote:
> 
> On 8/17/2021 10:43 PM, Jane Chu wrote:
> > More information -
> > 
> > On 8/16/2021 10:20 AM, Jane Chu wrote:
> > > Hi, ShiYang,
> > > 
> > > So I applied the v6 patch series to my 5.14-rc3, since that is what
> > > you indicated v6 was based on, and injected a hardware poison.
> > > 
> > > I'm seeing the same problem that was reported a while ago: after the
> > > poison was consumed, the si_addr is missing from the SIGBUS payload:
> > > 
> > > ** SIGBUS(7): canjmp=1, whichstep=0, **
> > > ** si_addr(0x(nil)), si_lsb(0xC), si_code(0x4, BUS_MCEERR_AR) **
> > > 
> > > The si_addr ought to be 0x7f6568000000 - the vaddr of the first page
> > > in this case.
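
For reference, a minimal sketch of the kind of SA_SIGINFO handler such a
test presumably installs to dump the SIGBUS payload (the test program
itself is not part of this thread, so the details here are assumptions):

#define _GNU_SOURCE
#include <signal.h>
#include <stdio.h>
#include <unistd.h>

static void sigbus_handler(int sig, siginfo_t *si, void *ctx)
{
        /*
         * si_addr should be the faulting vaddr, si_addr_lsb the log2 of
         * the blast radius, and si_code 0x4 is BUS_MCEERR_AR (poison
         * consumed by a load).
         */
        printf("** SIGBUS(%d) ** si_addr(%p), si_lsb(0x%x), si_code(0x%x)\n",
               sig, si->si_addr, (unsigned)si->si_addr_lsb,
               (unsigned)si->si_code);
        _exit(1);       /* the real test siglongjmp()s out (canjmp=1) */
}

int main(void)
{
        struct sigaction act = { 0 };

        act.sa_sigaction = sigbus_handler;
        act.sa_flags = SA_SIGINFO;
        sigaction(SIGBUS, &act, NULL);

        /* ... mmap the dax file and load from the poisoned page ... */
        return 0;
}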
> > 
> > The failure came from here:
> > 
> > [PATCH RESEND v6 6/9] xfs: Implement ->notify_failure() for XFS
> > 
> > +static int
> > +xfs_dax_notify_failure(
> > ...
> > +    if (!xfs_sb_version_hasrmapbt(&mp->m_sb)) {
> > +        xfs_warn(mp, "notify_failure() needs rmapbt enabled!");
> > +        return -EOPNOTSUPP;
> > +    }
> > 
> > I am not familiar with XFS, but I have a few questions I hope to get
> > answers -
> > 
> > 1) What does it take, and what does it cost, to make
> >     xfs_sb_version_hasrmapbt(&mp->m_sb) return true?

mkfs.xfs -m rmapbt=1

> > 2) For a running environment that fails the above check, is it
> >     okay to leave the poison handling in limbo, and why?
> > 
> > 3) If the above regression is not acceptable, any potential remedy?
> 
> How about moving the check to before the notifier registration, and
> registering only if the check passes?  That seems better than the
> alternative, which is to fall back to the legacy memory_failure
> handling when the filesystem returns -EOPNOTSUPP.
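
A minimal sketch of that registration-time check (the wrapper name and
the registration point are illustrative, not code from the posted
series):

/* hypothetical helper, called once when the dax device is set up */
static void
xfs_dax_register_notify_failure(
        struct xfs_mount        *mp)
{
        if (!xfs_sb_version_hasrmapbt(&mp->m_sb)) {
                xfs_warn(mp,
        "rmapbt disabled, not registering for memory failure notification");
                /* leave the legacy memory_failure handling in place */
                return;
        }

        /* ... register ->notify_failure() with the dax device ... */
}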

"return -EOPNOTSUPP;" is the branching point where a future patch could
probe the (DRAM) buffer cache to bwrite the contents to restore the pmem
contents.  Right now the focus should be on landing the core code
changes without drawing any more NAKs from Dan.

--D

> thanks,
> -jane
> 
> > 
> > thanks!
> > -jane
> > 
> > 
> > > 
> > > Something is not right...
> > > 
> > > thanks,
> > > -jane
> > > 
> > > 
> > > On 8/5/2021 6:17 PM, Jane Chu wrote:
> > > > The filesystem part of the pmem failure handling is built, at
> > > > minimum, on PAGE_SIZE granularity - an inheritance from the general
> > > > memory_failure handling.  However, with Intel's DCPMEM technology
> > > > the error blast radius is no more than 256 bytes, it might get
> > > > smaller with future hardware generations, and there is also an
> > > > advanced atomic 64B write to clear the poison.  But I don't see how
> > > > any of that could be incorporated, given that the filesystem is
> > > > notified of a corruption with a pfn rather than an exact address.
> > > > 
> > > > So I guess this question is also for Dan: how to avoid unnecessarily
> > > > repairing a PMD range for a 256B corrupt range going forward?
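
To make the granularity mismatch concrete (addresses below are
hypothetical): a pfn names a whole page, so a 256-byte poisoned range is
rounded out to at least PAGE_SIZE - or to the whole 2M extent for a dax
PMD mapping - by the time the filesystem is notified:

#include <stdio.h>

#define PAGE_SHIFT      12
#define PAGE_SIZE       (1UL << PAGE_SHIFT)
#define PMD_SIZE        (1UL << 21)

int main(void)
{
        unsigned long poison_start = 0x7f6568000140UL;  /* hypothetical */
        unsigned long blast_radius = 256;               /* DCPMEM error unit */
        unsigned long pfn = poison_start >> PAGE_SHIFT;

        printf("actually corrupt:  [%#lx, %#lx)\n",
               poison_start, poison_start + blast_radius);
        printf("reported via pfn:  [%#lx, %#lx)\n",
               pfn << PAGE_SHIFT, (pfn << PAGE_SHIFT) + PAGE_SIZE);
        printf("PMD-mapped repair: [%#lx, %#lx)\n",
               poison_start & ~(PMD_SIZE - 1),
               (poison_start & ~(PMD_SIZE - 1)) + PMD_SIZE);
        return 0;
}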
> > > > 
> > > > thanks,
> > > > -jane
> > > > 
> > > > 
> > > > On 7/30/2021 3:01 AM, Shiyang Ruan wrote:
> > > > > When a memory failure occurs, we call this function, which is
> > > > > implemented by each kind of device.  For the fsdax case, the pmem
> > > > > device driver implements it.  The pmem device driver will find the
> > > > > filesystem in which the corrupted page is located and finally call
> > > > > the filesystem handler to deal with the error.
> > > > > 
> > > > > The filesystem will try to recover the corrupted data if necessary.
> > > > 
> > > 
> > 
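
Regarding the ->memory_failure() hook described in the quoted commit
message: here is a minimal sketch of the general shape such a pmem-side
implementation could take; the prototype and helper names below are
assumptions for illustration, not necessarily what this series uses:

#include <linux/memremap.h>     /* struct dev_pagemap */
#include <linux/dax.h>          /* dax_holder_notify_failure() */
#include "pmem.h"               /* struct pmem_device */

/* sketch only - not the patch under review */
static int pmem_pagemap_memory_failure(struct dev_pagemap *pgmap,
                unsigned long pfn, unsigned long nr_pages, int mf_flags)
{
        struct pmem_device *pmem =
                        container_of(pgmap, struct pmem_device, pgmap);
        u64 offset = PFN_PHYS(pfn) - pmem->phys_addr - pmem->data_offset;

        /*
         * Hand the byte range to whoever holds the dax device (the
         * filesystem), which decides how to recover - e.g. by unmapping
         * the files that map the poisoned range.
         */
        return dax_holder_notify_failure(pmem->dax_dev, offset,
                        nr_pages << PAGE_SHIFT, mf_flags);
}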
