Message-ID: <OSBPR01MB2920AD0C7FD02E238D0C387AF4FF9@OSBPR01MB2920.jpnprd01.prod.outlook.com>
Date:   Wed, 18 Aug 2021 07:52:01 +0000
From:   "ruansy.fnst@...itsu.com" <ruansy.fnst@...itsu.com>
To:     Jane Chu <jane.chu@...cle.com>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
        "linux-xfs@...r.kernel.org" <linux-xfs@...r.kernel.org>,
        "nvdimm@...ts.linux.dev" <nvdimm@...ts.linux.dev>,
        "linux-mm@...ck.org" <linux-mm@...ck.org>,
        "linux-fsdevel@...r.kernel.org" <linux-fsdevel@...r.kernel.org>,
        "dm-devel@...hat.com" <dm-devel@...hat.com>
CC:     "djwong@...nel.org" <djwong@...nel.org>,
        "dan.j.williams@...el.com" <dan.j.williams@...el.com>,
        "david@...morbit.com" <david@...morbit.com>,
        "hch@....de" <hch@....de>, "agk@...hat.com" <agk@...hat.com>,
        "snitzer@...hat.com" <snitzer@...hat.com>
Subject: RE: [PATCH RESEND v6 1/9] pagemap: Introduce ->memory_failure()



> -----Original Message-----
> From: Jane Chu <jane.chu@...cle.com>
> Subject: Re: [PATCH RESEND v6 1/9] pagemap: Introduce ->memory_failure()
> 
> 
> On 8/17/2021 10:43 PM, Jane Chu wrote:
> > More information -
> >
> > On 8/16/2021 10:20 AM, Jane Chu wrote:
> >> Hi, ShiYang,
> >>
> >> So I applied the v6 patch series to my 5.14-rc3, since you indicated
> >> that is what v6 was based on, and injected a hardware poison.
> >>
> >> After the poison was consumed, I'm seeing the same problem that was
> >> reported a while ago - in the SIGBUS payload, the si_addr is missing:
> >>
> >> ** SIGBUS(7): canjmp=1, whichstep=0, **
> >> ** si_addr(0x(nil)), si_lsb(0xC), si_code(0x4, BUS_MCEERR_AR) **
> >>
> >> The si_addr ought to be 0x7f6568000000 - the vaddr of the first page
> >> in this case.
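(As an aside, a minimal userspace sketch - mine, not from this thread -
of how that payload is observed: with SA_SIGINFO set, the handler
receives si_addr, si_addr_lsb and si_code directly, so the bug above
shows up as a NULL si_addr here.)

#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/*
 * Print the SIGBUS payload.  fprintf() is not async-signal-safe,
 * but that is fine for a throwaway demo.
 */
static void bus_handler(int sig, siginfo_t *si, void *ctx)
{
	fprintf(stderr, "SIGBUS: si_addr=%p si_lsb=%d si_code=%d\n",
		si->si_addr, si->si_addr_lsb, si->si_code);
	_exit(1);
}

int main(void)
{
	struct sigaction sa;

	memset(&sa, 0, sizeof(sa));
	sa.sa_sigaction = bus_handler;
	sa.sa_flags = SA_SIGINFO;
	sigaction(SIGBUS, &sa, NULL);

	/* ... mmap a DAX file here and touch the poisoned page ... */
	return 0;
}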
> >
> > The failure came from here:
> >
> > [PATCH RESEND v6 6/9] xfs: Implement ->notify_failure() for XFS
> >
> > +static int
> > +xfs_dax_notify_failure(
> > ...
> > +    if (!xfs_sb_version_hasrmapbt(&mp->m_sb)) {
> > +        xfs_warn(mp, "notify_failure() needs rmapbt enabled!");
> > +        return -EOPNOTSUPP;
> > +    }
> >
> > I am not familiar with XFS, but I have a few questions I hope to get
> > answers to -
> >
> > 1) What does it take, and what does it cost, to make
> >     xfs_sb_version_hasrmapbt(&mp->m_sb) return true?

Enable the rmapbt feature when making the xfs filesystem:
   `mkfs.xfs -m rmapbt=1 /path/to/device`
BTW, reflink is enabled by default.

> >
> > 2) For a running environment that fails the above check, is it
> >     okay to leave the poison handling in limbo, and why?
It will fall back to the old handler.  I think you already know this.
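Roughly, I mean something like the sketch below (the helper names are
hypothetical and only approximate this series - they are not actual
kernel API): if no capable holder handles the notification, the caller
falls back to the legacy per-page handling.

/*
 * Hypothetical sketch of the fallback path described above.
 */
static int pmem_pagemap_memory_failure(struct dev_pagemap *pgmap,
				       unsigned long pfn, int flags)
{
	int rc;

	/* Ask the filesystem (via the dax holder) to handle it. */
	rc = dax_holder_notify_failure(pgmap, pfn, flags);
	if (rc == -EOPNOTSUPP)
		/* No capable holder: legacy per-page handling. */
		rc = legacy_memory_failure(pfn, flags);

	return rc;
}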

> >
> > 3) If the above regression is not acceptable, any potential remedy?
> 
> How about moving the check to before the notifier registration, and
> registering only if the check passes?  This seems better than the
> alternative, which is to fall back to the legacy memory_failure
> handling when the filesystem returns -EOPNOTSUPP.

Sounds like a nice solution.  I think I can add an is_notify_supported() interface in dax_holder_ops and check it when registering the dax_holder.
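Something along these lines (illustrative only - these names and
signatures are my assumption, not an existing kernel API):

struct dax_holder_operations {
	int (*notify_failure)(struct dax_device *dax_dev,
			      u64 offset, u64 len, int mf_flags);
	/* New: lets the holder say up front whether notification works. */
	bool (*is_notify_supported)(struct dax_device *dax_dev);
};

int dax_register_holder(struct dax_device *dax_dev, void *holder,
			const struct dax_holder_operations *ops)
{
	/*
	 * Refuse registration up front (e.g. XFS without rmapbt), so
	 * the memory-failure path never sees -EOPNOTSUPP later on.
	 */
	if (ops->is_notify_supported && !ops->is_notify_supported(dax_dev))
		return -EOPNOTSUPP;

	/* ... store holder and ops on the dax_device ... */
	return 0;
}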

--
Thanks,
Ruan.
> 
> thanks,
> -jane
> 
> >
> > thanks!
> > -jane
> >
> >
> >>
> >> Something is not right...
> >>
> >> thanks,
> >> -jane
> >>
> >>
> >> On 8/5/2021 6:17 PM, Jane Chu wrote:
> >>> The filesystem part of the pmem failure handling is built on, at
> >>> minimum, PAGE_SIZE granularity - an inheritance from general
> >>> memory_failure handling.  However, with Intel's DCPMEM technology,
> >>> the error blast radius is no more than 256 bytes, and might get
> >>> smaller with future hardware generations; there is also an advanced
> >>> atomic 64B write to clear the poison.
> >>> But I don't see how any of that could be incorporated, given that
> >>> the filesystem is notified of a corruption with a pfn, rather than
> >>> an exact address.
> >>>
> >>> So I guess this question is also for Dan: how to avoid unnecessarily
> >>> repairing a PMD range for a 256B corrupt range going forward?
> >>>
> >>> thanks,
> >>> -jane
> >>>
> >>>
> >>> On 7/30/2021 3:01 AM, Shiyang Ruan wrote:
> >>>> When a memory failure occurs, we call this function, which is
> >>>> implemented by each kind of device.  For the fsdax case, the pmem
> >>>> device driver implements it.  The pmem device driver will find the
> >>>> filesystem in which the corrupted page is located, and finally
> >>>> call the filesystem handler to deal with the error.
> >>>>
> >>>> The filesystem will try to recover the corrupted data if necessary.
> >>>
> >>
> >
