Message-ID: <Yzx64zGt2kTiDYaP@magnolia>
Date:   Tue, 4 Oct 2022 11:26:43 -0700
From:   "Darrick J. Wong" <djwong@...nel.org>
To:     Gotou, Yasunori/五島 康文 
        <y-goto@...itsu.com>
Cc:     Yang, Xiao/杨 晓 <yangx.jy@...itsu.com>,
        Brian Foster <bfoster@...hat.com>,
        "hch@...radead.org" <hch@...radead.org>,
        Ruan, Shiyang/阮 世阳 
        <ruansy.fnst@...itsu.com>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
        "linux-xfs@...r.kernel.org" <linux-xfs@...r.kernel.org>,
        "nvdimm@...ts.linux.dev" <nvdimm@...ts.linux.dev>,
        "linux-fsdevel@...r.kernel.org" <linux-fsdevel@...r.kernel.org>,
        "david@...morbit.com" <david@...morbit.com>, zwisler@...nel.org,
        Jeff Moyer <jmoyer@...hat.com>, dm-devel@...hat.com,
        toshi.kani@....com
Subject: Re: [PATCH] xfs: fail dax mount if reflink is enabled on a partition

On Mon, Oct 03, 2022 at 09:12:46PM -0700, Gotou, Yasunori/五島 康文 wrote:
> On 2022/10/03 17:12, Darrick J. Wong wrote:
> > On Fri, Sep 30, 2022 at 09:56:41AM +0900, Gotou, Yasunori/五島 康文 wrote:
> > > Hello everyone,
> > > 
> > > On 2022/09/20 11:38, Yang, Xiao/杨 晓 wrote:
> > > > Hi Darrick, Brian and Christoph
> > > > 
> > > > Ping. I hope to get your feedback.
> > > > 
> > > > 1) I have confirmed that the following patch set did not change the test
> > > > result of generic/470 with thin-volume. Besides, I didn't see any
> > > > failure when running generic/470 based on a normal PMEM device instead
> > > > of a thin-volume.
> > > > https://lore.kernel.org/linux-xfs/20211129102203.2243509-1-hch@lst.de/
> > > > 
> > > > 2) I can reproduce the failure of generic/482 without thin-volume.
> > > > 
> > > > 3) Is it necessary to make thin-volume support DAX? Is there any use
> > > > case for this requirement?
> > > 
> > > 
> > > Though I asked this elsewhere(*), I really want to know the use case for
> > > dm-thin-volume with DAX and reflink.
> > > 
> > > 
> > > In my understanding, dm-thin-volume seems to provide a feature similar to
> > > xfs reflink. Both features provide COW updates to reduce space usage, and
> > > a snapshot feature, right?
> > > 
> > > I found that docker seems to select one of them (or another feature which
> > > supports COW), so users don't need to use thin-volume and reflink at the
> > > same time.
> > > 
> > > A database which uses FS-DAX may want to take snapshots of its data, but
> > > its users seem to be satisfied with either reflink or thin-volume.
> > > 
> > > So I could not find a use case in which users would want to use
> > > dm-thin-volume and reflink at the same time.
> > > 
> > > The only possibility is that the user has mistakenly configured dm-thinpool
> > > and reflink to be used at the same time, but if that is the case, it seems
> > > to be better for the user to disable one or the other.
> > > 
> > > I really wonder why dm-thin-volume must be used with reflink and FS-DAX.
> > 
> > There isn't a hard requirement between fsdax and dm-thinp.  The /test/
> > needs dm-logwrites to check that write page faults on a MAP_SYNC
> > mmapping are persisted directly to disk.  dm-logwrites requires a fast
> > way to zero an entire device for correct operation of the replay step,
> > and thinp is the only way to guarantee that.
> 
> Thank you for your answer. But I still feel something is strange.
> Though dm-thinp may be a good way to execute the test correctly,

Yep.

> I suppose it is likely just a kind of workaround to pass the test;
> it may not really be required for actual users.

Exactly correct.  Real users should /never/ set up this kind of (test
scaffolding|insanity) to use fsdax.

> Could you tell me why passing the test via such a workaround is so necessary?

Notice this line in generic/470:

$XFS_IO_PROG -t -c "truncate $LEN" -c "mmap -S 0 $LEN" -c "mwrite 0 $LEN" \
	-c "log_writes -d $LOGWRITES_NAME -m preunmap" \
	-f $SCRATCH_MNT/test

The second xfs_io command creates a MAP_SYNC mmap of the
SCRATCH_MNT/test file, and the third command memcpy's bytes to the
mapping to invoke the write page fault handler.
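
As an aside, a quick standalone way to check that a file really gets a
MAP_SYNC mapping (and hence the fsdax write fault path) is to run the same
commands by hand; on a non-dax file the mmap step should fail with
EOPNOTSUPP.  A sketch, with a made-up file name:

$XFS_IO_PROG -t -f -c "truncate 1m" -c "mmap -S 0 1m" -c "mwrite 0 1m" \
	$SCRATCH_MNT/maptest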

The fourth command tells the dm-logwrites driver for $LOGWRITES_NAME
(aka the block device containing the mounted XFS filesystem) to create a
mark called "preunmap".  This mark captures the exact state of the block
device immediately after the write faults complete, so that we can come
back to it later.  There are a few things to note here:

  (1) We did not tell the fs to persist anything;
  (2) We can't use dm-snapshot here, because dm-snapshot will flush the
      fs (I think?); and
  (3) The fs is still mounted, so the state of the block device at the
      mark reflects a dirty XFS with a log that must be replayed.
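
Under the hood, the log_writes command in xfs_io is, as far as I know, just
a convenient way to send the dm-log-writes "mark" message from the same
process that still holds the mapping; done by hand it would look roughly
like this:

dmsetup message $LOGWRITES_NAME 0 mark preunmap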

The next thing the test does is unmount the fs, remove the dm-logwrites
driver to stop recording, and check the fs:

_log_writes_unmount
_log_writes_remove
_dmthin_check_fs

This ensures that the post-umount fs is consistent.  Now we want to roll
back to the place we marked to see if the mwrite data made it to pmem.
It *should* have, since we asked for a MAP_SYNC mapping on a fsdax
filesystem recorded on a pmem device:

# check pre-unmap state
_log_writes_replay_log preunmap $DMTHIN_VOL_DEV
_dmthin_mount

dm-logwrites can't actually roll backwards in time to a mark, since it
only records new disk contents.  It /can/ however roll forward from
whatever point it began recording writes to the mark, so that's what it
does.
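
Concretely, _log_writes_replay_log boils down to running the replay-log
tool that fstests ships in src/log-writes/, along the lines of (variable
names approximate):

$here/src/log-writes/replay-log --log $LOGWRITES_DEV \
	--replay $DMTHIN_VOL_DEV --end-mark preunmap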

However -- remember note (3) from earlier.  When we _dmthin_mount after
replaying the log to the "preunmap" mark, XFS will see the dirty XFS log
and try to recover the XFS log.  This is where the replay problems crop
up.  The XFS log records a monotonically increasing sequence number
(LSN) with every log update, and when updates are written into the
filesystem, that LSN is also written into the filesystem block.  Log
recovery also replays updates into the filesystem, but with the added
behavior that it skips a block replay if the block's LSN is higher than
the transaction being replayed.  IOWs, we never replay older block
contents over newer block contents.
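
If you want to look at these LSNs yourself, xfs_logprint can dump the log
records and xfs_db can show the LSN stamped into a metadata block.  Exact
field names may vary by xfsprogs version, and the device path below is made
up:

xfs_logprint /dev/mapper/thin-vol                  # log records + their LSNs
xfs_db -c "agf 0" -c "p lsn" /dev/mapper/thin-vol  # LSN stamped into AGF 0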

For dm-logwrites this is a major problem, because there could be more
filesystem updates written to the XFS log after the mark is made.  LSNs
will then be handed out like this:

mkfs_lsn                 preunmap_lsn             umount_lsn
  |                           |                      |
  |--------------------------||----------|-----------|
                             |           |
                         xxx_lsn     yyy_lsn

Let's say that a new metadata block "BBB" was created in update "xxx"
immediately before the preunmap mark was made.  Per (1), we didn't flush
the filesystem before taking the mark, which means that the new block's
contents exist only in the log at this point.

Let us further say that the new block was again changed in update "yyy",
where preunmap_lsn < yyy_lsn <= umount_lsn.  Clearly, yyy_lsn > xxx_lsn.
yyy_lsn is written to the block at unmount, because unmounting flushes
the log clean before it completes.  This is the first time that BBB ever
gets written.

_log_writes_replay_log begins replaying the block device from mkfs_lsn
towards preunmap_lsn.  When it's done, it will have a log that reflects
all the changes up to preunmap_lsn.  Recall however that BBB isn't
written until after the preunmap mark, which means that dm-logwrites has
no record of BBB before preunmap_lsn, so dm-logwrites replay won't touch
BBB.  At this point, the block header for BBB has a UUID that matches
the filesystem, but an LSN (yyy_lsn) that is beyond preunmap_lsn.

XFS log recovery starts up, and finds transaction xxx.  It will read BBB
from disk, but then it will see that it has an LSN of yyy_lsn.  This is
larger than xxx_lsn, so it concludes that BBB is newer than the log and
moves on to the next log item.  No other log items touch BBB, so
recovery finishes, and now we have a filesystem containing one metadata
block (BBB) from the future.  This is an inconsistent filesystem, and
has caused failures in the tests that use logwrites.

To work around this problem, all we really need to do is reinitialize
the entire block device to known contents at mkfs time.  This can be
done expensively by writing zeroes to the entire block device, or it can
be done cheaply by (a) issuing DISCARD to the whole block device at
the start of the test and (b) ensuring that reads after a discard always
produce zeroes.  mkfs.xfs already does (a), so the test merely has to
ensure (b).
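
In shell terms, (a) plus (b) amount to something like this, with a made-up
device path and blkdiscard standing in for the discard that mkfs.xfs issues
internally:

blkdiscard /dev/mapper/thin-vol
cmp -n 65536 /dev/mapper/thin-vol /dev/zero && \
	echo "discarded blocks read back as zeroes"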

dm-thinp is the only software solution that provides (b), so that's why
this test layers dm-logwrites on top of dm-thinp on top of $SCRATCH_DEV.
This combination used to work, but with the pending pmem/blockdev
divorce, this strategy is no longer feasible.
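
For reference, the dm-thinp layer that the _dmthin_* helpers build can be
set up by hand roughly like this (sizes are in 512-byte sectors and all the
names are made up); unprovisioned thin blocks always read back as zeroes,
which is exactly property (b):

dmsetup create pool --table \
	"0 $POOL_SECTORS thin-pool $META_DEV $DATA_DEV 128 0"
dmsetup message /dev/mapper/pool 0 "create_thin 0"
dmsetup create thin-vol --table \
	"0 $VOL_SECTORS thin /dev/mapper/pool 0"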

I think the only way to fix this test is (a) revert all of Christoph's
changes so far and scuttle the divorce; or (b) change this test like so:

 1. Create a large sparse file on $TEST_DIR and losetup that sparse
    file.  The resulting loop device will not have dax capability.

 2. Set up the dmthin/dmlogwrites stack on top of this loop device.

 3. Call mkfs.xfs with the SCRATCH_DEV (which hopefully is a pmem
    device) as the realtime device, and set the daxinherit and rtinherit
    flags on the root directory.  The result is a filesystem with a data
    section that the kernel will treat as a regular block device, a
    realtime section backed by pmem, and the necessary flags to make
    sure that the test file will actually get fsdax mode (see the sketch
    after this list).

 4. Acknowledge that we no longer have any way to test MAP_SYNC
    functionality on ext4, which means that generic/470 has to move to
    tests/xfs/.
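
Steps 1-3 might look roughly like the following.  This is only a sketch:
the file name and sizes are made up, and the rtinherit/daxinherit mkfs
suboptions assume a recent xfsprogs (older versions would set the flags
with xfs_io chattr +t / +x on the root directory instead):

# 1) sparse backing file on the test fs; the loop device has no dax
#    capability
truncate -s 32g $TEST_DIR/470.img
LOOP_DEV=$(losetup -f --show $TEST_DIR/470.img)

# 2) build the dmthin + dmlogwrites stack on top of $LOOP_DEV, e.g. with
#    the existing _dmthin_init / _log_writes_init helpers

# 3) data section on the non-dax logwrites device, realtime section on the
#    pmem scratch device; reflink has to stay off while an rt section is
#    present
mkfs.xfs -f -m reflink=0 -d rtinherit=1,daxinherit=1 \
	-r rtdev=$SCRATCH_DEV /dev/mapper/$LOGWRITES_NAME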

--D

> Thanks,
> 
> 
> > 
> > --D
> > 
> > > If my understanding is something wrong, please correct me.
> > > 
> > > (*)https://lore.kernel.org/all/TYWPR01MB1008258F474CA2295B4CD3D9B90549@TYWPR01MB10082.jpnprd01.prod.outlook.com/
