Message-ID: <985f720c-0cf0-5ada-4df4-3405d5969b8d@linux.ibm.com>
Date: Tue, 23 Mar 2021 21:38:59 +0530
From: Ritesh Harjani <riteshh@...ux.ibm.com>
To: Shiyang Ruan <ruansy.fnst@...itsu.com>,
linux-kernel@...r.kernel.org, linux-xfs@...r.kernel.org,
linux-nvdimm@...ts.01.org, linux-fsdevel@...r.kernel.org
Cc: darrick.wong@...cle.com, dan.j.williams@...el.com,
willy@...radead.org, jack@...e.cz, viro@...iv.linux.org.uk,
linux-btrfs@...r.kernel.org, ocfs2-devel@....oracle.com,
david@...morbit.com, hch@....de, rgoldwyn@...e.de
Subject: Re: [PATCH v3 04/10] fsdax: Introduce dax_iomap_cow_copy()
On 3/19/21 7:22 AM, Shiyang Ruan wrote:
> In the case where the iomap is for a write operation and the iomap is
> not equal to the srcmap after iomap_begin, we consider it a CoW
> operation.
>
> The destination extent which the iomap indicates is a newly allocated
> extent, so the data needs to be copied from the srcmap to the newly
> allocated extent.
> In theory, it would be better to copy only the head and tail ranges that
> lie outside the (non-aligned) written range instead of copying the whole
> aligned range. But in the dax page fault case the range will always be
> aligned, so we have to copy the whole range there.
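Just to restate this with a concrete example (numbers made up, only to
check my understanding): with a 4k align_size, a write at pos = 0x1100 of
length = 0xe00 covers 0x1100 - 0x1f00, so the CoW copy only needs the
head 0x1000 - 0x1100 and the tail 0x1f00 - 0x2000 from srcmap, while the
rest of the newly allocated 0x1000 - 0x2000 page gets the written data.
In the page fault case pos and length are already page aligned, so the
whole page is copied from srcmap instead.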
>
> Signed-off-by: Shiyang Ruan <ruansy.fnst@...itsu.com>
> Reviewed-by: Christoph Hellwig <hch@....de>
> ---
> fs/dax.c | 71 ++++++++++++++++++++++++++++++++++++++++++++++++++++----
> 1 file changed, 66 insertions(+), 5 deletions(-)
>
> diff --git a/fs/dax.c b/fs/dax.c
> index a70e6aa285bb..181aad97136a 100644
> --- a/fs/dax.c
> +++ b/fs/dax.c
> @@ -1037,6 +1037,51 @@ static int dax_iomap_direct_access(struct iomap *iomap, loff_t pos, size_t size,
> return rc;
> }
>
> +/*
> + * Copy the head and tail part of the pages not included in the write but
> + * required for CoW, because pos/pos+length are not page aligned. But in dax
> + * page fault case, the range is page aligned, we need to copy the whole range
> + * of data. Use copy_edge to distinguish these cases.
> + */
Is this version better? Feel free to update/change.

/*
 * dax_iomap_cow_copy(): This can be called from two places. Either during
 * a DAX write fault, to copy @length size of data to @daddr; or from the
 * normal DAX write path, where dax_iomap_actor() might call this to copy
 * the unaligned head and/or tail of the range. In that case the copy of
 * the remaining aligned ranges is taken care of by dax_iomap_actor()
 * itself.
 * Also note that a DAX fault will always result in an aligned pos and
 * pos + length.
 *
 * @pos:	address to do the copy from.
 * @length:	size of the copy operation.
 * @align_size:	aligned w.r.t. align_size (either PMD_SIZE or PAGE_SIZE)
 * @srcmap:	iomap srcmap
 * @daddr:	destination address to copy to.
 */
> +static int dax_iomap_cow_copy(loff_t pos, loff_t length, size_t align_size,
> + struct iomap *srcmap, void *daddr, bool copy_edge)
Do we need bool copy_edge here?
Can't we detect the non-aligned case directly inside the function, i.e.
when head_off != 0 or pg_end != end? (rough sketch below, after the
quoted function body)
> +{
> + loff_t head_off = pos & (align_size - 1);
> + size_t size = ALIGN(head_off + length, align_size);
> + loff_t end = pos + length;
> + loff_t pg_end = round_up(end, align_size);
> + void *saddr = 0;
> + int ret = 0;
> +
> + ret = dax_iomap_direct_access(srcmap, pos, size, &saddr, NULL);
> + if (ret)
> + return ret;
> +
> + if (!copy_edge)
> + return copy_mc_to_kernel(daddr, saddr, length);
> +
> + /* Copy the head part of the range. Note: we pass offset as length. */
> + if (head_off) {
> + if (saddr)
> + ret = copy_mc_to_kernel(daddr, saddr, head_off);
> + else
> + memset(daddr, 0, head_off);
> + }
> + /* Copy the tail part of the range */
> + if (end < pg_end) {
> + loff_t tail_off = head_off + length;
> + loff_t tail_len = pg_end - end;
> +
> + if (saddr)
> + ret = copy_mc_to_kernel(daddr + tail_off,
> + saddr + tail_off, tail_len);
> + else
> + memset(daddr + tail_off, 0, tail_len);
> + }
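To show what I mean about dropping copy_edge: a rough, untested sketch,
reusing the names from your patch (the head/tail copy itself stays as you
have it):

	static int dax_iomap_cow_copy(loff_t pos, loff_t length,
			size_t align_size, struct iomap *srcmap, void *daddr)
	{
		loff_t head_off = pos & (align_size - 1);
		size_t size = ALIGN(head_off + length, align_size);
		loff_t end = pos + length;
		loff_t pg_end = round_up(end, align_size);
		/* a fully aligned range has no head/tail edges to copy */
		bool copy_all = (head_off == 0 && end == pg_end);
		void *saddr = 0;
		int ret;

		ret = dax_iomap_direct_access(srcmap, pos, size, &saddr, NULL);
		if (ret)
			return ret;

		if (copy_all)
			return copy_mc_to_kernel(daddr, saddr, length);

		/* ... head/tail copy as in your patch ... */
		return ret;
	}

That way the callers don't need to know whether they are on the fault
path or the actor path.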
Can you please help me understand in which case saddr will be 0, so that
we fall back to memset()?
I was also wondering: shouldn't such a restriction be handled inside
copy_mc_to_kernel() itself, in general?
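(Just to illustrate the idea with a tiny hypothetical helper, name made
up, in case the check cannot live in copy_mc_to_kernel() itself:

	/* hypothetical: copy from src if it is mapped, else zero-fill dst */
	static unsigned long dax_copy_or_zero(void *dst, const void *src,
					      size_t len)
	{
		if (!src) {
			memset(dst, 0, len);
			return 0;
		}
		return copy_mc_to_kernel(dst, src, len);
	}

but that is orthogonal to the question above.)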
-ritesh