Message-ID: <OSBPR01MB2920C085D788A9F918107F33F4539@OSBPR01MB2920.jpnprd01.prod.outlook.com>
Date: Tue, 11 May 2021 05:53:51 +0000
From: "ruansy.fnst@...itsu.com" <ruansy.fnst@...itsu.com>
To: "Darrick J. Wong" <djwong@...nel.org>
CC: "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"linux-xfs@...r.kernel.org" <linux-xfs@...r.kernel.org>,
"linux-nvdimm@...ts.01.org" <linux-nvdimm@...ts.01.org>,
"linux-fsdevel@...r.kernel.org" <linux-fsdevel@...r.kernel.org>,
"darrick.wong@...cle.com" <darrick.wong@...cle.com>,
"dan.j.williams@...el.com" <dan.j.williams@...el.com>,
"willy@...radead.org" <willy@...radead.org>,
"viro@...iv.linux.org.uk" <viro@...iv.linux.org.uk>,
"david@...morbit.com" <david@...morbit.com>,
"hch@....de" <hch@....de>, "rgoldwyn@...e.de" <rgoldwyn@...e.de>
Subject: RE: [PATCH v5 0/7] fsdax,xfs: Add reflink&dedupe support for fsdax
> -----Original Message-----
> From: Darrick J. Wong <djwong@...nel.org>
> Sent: Tuesday, May 11, 2021 11:57 AM
> Subject: Re: [PATCH v5 0/7] fsdax,xfs: Add reflink&dedupe support for fsdax
>
> On Tue, May 11, 2021 at 11:09:26AM +0800, Shiyang Ruan wrote:
> > This patchset is an attempt to add CoW support for fsdax, taking XFS,
> > which has both the reflink and fsdax features, as an example.
>
> Slightly off topic, but I noticed all my pmem disappeared once I rolled forward to
> 5.13-rc1. Am I the only lucky one? Qemu 4.2, with fake memory devices
> backed by tmpfs files -- info qtree says they're there, but the kernel doesn't show
> anything in /proc/iomem.
I have the same situation on 5.13-rc1 too (Qemu 5.2.0, fake memory devices backed by files).
I tested this code on v5.12-rc8 and then rebased it onto v5.13-rc1... It's my bad for not testing again after the rebase.
--
Thanks,
Ruan Shiyang.
>
> --D
>
> > Changes from V4:
> > - Fix the mistake of breaking dax layout for two inodes
> > - Add a CONFIG_FS_DAX check for the fsdax code in remap_range.c
> > - Fix other small problems and mistakes
> >
> > Changes from V3:
> > - Take out the first 3 patches as a cleanup patchset[1], which was
> >   sent yesterday.
> > - Fix usage of code in dax_iomap_cow_copy()
> > - Add comments for macro definitions
> > - Fix other code style problems and mistakes
> >
> > One of the key mechanisms that needs to be implemented in fsdax is CoW:
> > copy the data from the srcmap before we actually write data to the
> > destination iomap, and copy only the ranges whose data will not be changed.
> >
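To make the "copy only what the write will not touch" part concrete, here is a
minimal userspace sketch of the idea. It is illustrative only, not the patch
code: the helper name, the extent size and the offsets are all made up.

/*
 * Before an unaligned write lands in a freshly allocated destination
 * extent, copy the untouched head and tail from the source extent so
 * that the destination ends up fully populated.
 */
#include <stdio.h>
#include <string.h>

#define EXTENT_SIZE 16

/* Copy only the ranges that the write will NOT overwrite. */
static void cow_copy_around_write(const char *src, char *dst,
                                  size_t pos, size_t len)
{
        if (pos > 0)                            /* head: [0, pos) */
                memcpy(dst, src, pos);
        if (pos + len < EXTENT_SIZE)            /* tail: [pos + len, end) */
                memcpy(dst + pos + len, src + pos + len,
                       EXTENT_SIZE - (pos + len));
}

int main(void)
{
        char src[EXTENT_SIZE + 1];              /* shared (source) extent */
        char dst[EXTENT_SIZE + 1] = { 0 };      /* newly allocated extent */
        const char *data = "NEW";
        size_t pos = 6, len = strlen(data);

        memset(src, 'A', EXTENT_SIZE);
        src[EXTENT_SIZE] = '\0';

        cow_copy_around_write(src, dst, pos, len);
        memcpy(dst + pos, data, len);           /* the actual write */

        printf("%s\n", dst);                    /* prints AAAAAANEWAAAAAAA */
        return 0;
}

In the actual fsdax path the source would be the shared (reflinked) extent and
the destination the newly allocated one, with the copy going through the
DAX-mapped addresses rather than ordinary buffers.
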
> > Another mechanism is range comparison. In the page cache case, readpage()
> > is used to load data from disk into the page cache so that it can be
> > compared. In the fsdax case, readpage() does not work, so we need another
> > way to compare data that uses direct access.
> >
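A rough userspace illustration of the direct-access comparison follows; again
this is purely a sketch and not the patch code, and the names and the file
length assumption are invented. The kernel side would map the two ranges
through the dax machinery (dax_direct_access() and friends) rather than
mmap(), but the shape is the same: compare the mapped bytes in place instead
of reading them into a buffer first.

/*
 * Compare two file ranges through a mapping instead of through a
 * bounce buffer (the readpage()/page-cache analogue).  Assumes both
 * files are at least CMP_LEN bytes long.
 */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define CMP_LEN 4096

/* Return 1 if the first CMP_LEN bytes of both files are identical. */
static int ranges_identical(const char *patha, const char *pathb)
{
        int same = 0;
        int fda = open(patha, O_RDONLY);
        int fdb = open(pathb, O_RDONLY);

        if (fda >= 0 && fdb >= 0) {
                char *a = mmap(NULL, CMP_LEN, PROT_READ, MAP_SHARED, fda, 0);
                char *b = mmap(NULL, CMP_LEN, PROT_READ, MAP_SHARED, fdb, 0);

                /* Compare the mapped bytes directly, no intermediate copy. */
                if (a != MAP_FAILED && b != MAP_FAILED)
                        same = (memcmp(a, b, CMP_LEN) == 0);

                if (a != MAP_FAILED)
                        munmap(a, CMP_LEN);
                if (b != MAP_FAILED)
                        munmap(b, CMP_LEN);
        }
        if (fda >= 0)
                close(fda);
        if (fdb >= 0)
                close(fdb);
        return same;
}

int main(int argc, char **argv)
{
        if (argc != 3) {
                fprintf(stderr, "usage: %s <file-a> <file-b>\n", argv[0]);
                return 2;
        }
        return ranges_identical(argv[1], argv[2]) ? 0 : 1;
}
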
> > With the two mechanisms implemented in fsdax, we are able to make
> > reflink and fsdax work together in XFS.
> >
> > Some of the patches are picked up from Goldwyn's patchset; I made
> > some changes to adapt them to this patchset.
> >
> >
> > (Rebased on v5.13-rc1 and patchset[1])
> > [1]: https://lkml.org/lkml/2021/4/22/575
> >
> > Shiyang Ruan (7):
> > fsdax: Introduce dax_iomap_cow_copy()
> > fsdax: Replace mmap entry in case of CoW
> > fsdax: Add dax_iomap_cow_copy() for dax_iomap_zero
> > iomap: Introduce iomap_apply2() for operations on two files
> > fsdax: Dedup file range to use a compare function
> > fs/xfs: Handle CoW for fsdax write() path
> > fs/xfs: Add dax dedupe support
> >
> >  fs/dax.c               | 206 +++++++++++++++++++++++++++++++++++------
> >  fs/iomap/apply.c       |  52 +++++++++++
> >  fs/iomap/buffered-io.c |   2 +-
> >  fs/remap_range.c       |  57 ++++++++++--
> >  fs/xfs/xfs_bmap_util.c |   3 +-
> >  fs/xfs/xfs_file.c      |  11 +--
> >  fs/xfs/xfs_inode.c     |  66 ++++++++++++-
> >  fs/xfs/xfs_inode.h     |   1 +
> >  fs/xfs/xfs_iomap.c     |  61 +++++++++++-
> >  fs/xfs/xfs_iomap.h     |   4 +
> >  fs/xfs/xfs_iops.c      |   7 +-
> >  fs/xfs/xfs_reflink.c   |  15 +--
> >  include/linux/dax.h    |   7 +-
> >  include/linux/fs.h     |  12 ++-
> >  include/linux/iomap.h  |   7 +-
> >  15 files changed, 449 insertions(+), 62 deletions(-)
> >
> > --
> > 2.31.1
> >
> >
> >