Message-ID: <20180701062901.GA27398@lst.de>
Date: Sun, 1 Jul 2018 08:29:01 +0200
From: Christoph Hellwig <hch@....de>
To: Andreas Gruenbacher <agruenba@...hat.com>
Cc: Christoph Hellwig <hch@....de>,
cluster-devel <cluster-devel@...hat.com>,
linux-ext4 <linux-ext4@...r.kernel.org>,
linux-fsdevel <linux-fsdevel@...r.kernel.org>
Subject: Re: [PATCH 1/1] iomap: Direct I/O for inline data

On Fri, Jun 29, 2018 at 04:40:40PM +0200, Andreas Gruenbacher wrote:
> On 29 June 2018 at 10:56, Christoph Hellwig <hch@....de> wrote:
> > This looks generally fine. But I think it might be worth refactoring
> > iomap_dio_actor a bit first, e.g. something like this new patch
> > before yours, which would also nicely solve your alignment concern
> > (entirely untested for now):
>
> This looks correct. I've rebased my patches on top of it and ran the
> xfstests auto group on both gfs2 and xfs.

As I've just been rebasing the iomap work, I've done this already.
Does the version below work for you?

---
From 5e8a0f157629bb8850b8d8fe049bb896730f0da7 Mon Sep 17 00:00:00 2001
From: Andreas Gruenbacher <agruenba@...hat.com>
Date: Sun, 1 Jul 2018 08:26:22 +0200
Subject: iomap: support direct I/O to inline data

Add support to iomap_dio_rw for reading from and writing to inline
data.  This saves filesystems from having to implement fallback code
for this case.

The inline data is actually cached in the inode, so the I/O is only
direct in the sense that it doesn't go through the page cache.

Signed-off-by: Andreas Gruenbacher <agruenba@...hat.com>
---
 fs/iomap.c | 29 +++++++++++++++++++++++++++++
 1 file changed, 29 insertions(+)

diff --git a/fs/iomap.c b/fs/iomap.c
index 4d8ff0f5ecc9..98a1fdd5c091 100644
--- a/fs/iomap.c
+++ b/fs/iomap.c
@@ -1450,6 +1450,33 @@ iomap_dio_hole_actor(loff_t length, struct iomap_dio *dio)
 	return length;
 }
 
+static loff_t
+iomap_dio_inline_actor(struct inode *inode, loff_t pos, loff_t length,
+		struct iomap_dio *dio, struct iomap *iomap)
+{
+	struct iov_iter *iter = dio->submit.iter;
+	size_t copied;
+
+	BUG_ON(pos + length > PAGE_SIZE - offset_in_page(iomap->inline_data));
+
+	if (dio->flags & IOMAP_DIO_WRITE) {
+		loff_t size = inode->i_size;
+
+		if (pos > size)
+			memset(iomap->inline_data + size, 0, pos - size);
+		copied = copy_from_iter(iomap->inline_data + pos, length, iter);
+		if (copied) {
+			if (pos + copied > size)
+				i_size_write(inode, pos + copied);
+			mark_inode_dirty(inode);
+		}
+	} else {
+		copied = copy_to_iter(iomap->inline_data + pos, length, iter);
+	}
+	dio->size += copied;
+	return copied;
+}
+
 static loff_t
 iomap_dio_actor(struct inode *inode, loff_t pos, loff_t length,
 		void *data, struct iomap *iomap)
@@ -1467,6 +1494,8 @@ iomap_dio_actor(struct inode *inode, loff_t pos, loff_t length,
 		return iomap_dio_bio_actor(inode, pos, length, dio, iomap);
 	case IOMAP_MAPPED:
 		return iomap_dio_bio_actor(inode, pos, length, dio, iomap);
+	case IOMAP_INLINE:
+		return iomap_dio_inline_actor(inode, pos, length, dio, iomap);
 	default:
 		WARN_ON_ONCE(1);
 		return -EIO;
--
2.18.0
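
For anyone wiring a filesystem up to this: iomap_dio_actor switches on
the mapping type, so a filesystem opts in by returning an IOMAP_INLINE
mapping from its ->iomap_begin.  Below is a minimal sketch of that,
assuming a made-up filesystem that keeps small files in the in-core
inode; the example_* names, EXAMPLE_INLINE_SIZE, and the i_inline
field are all hypothetical, purely for illustration:

#include <linux/fs.h>
#include <linux/iomap.h>

#define EXAMPLE_INLINE_SIZE	128	/* hypothetical inline capacity */

/* hypothetical in-core inode that caches the inline data */
struct example_inode {
	struct inode	vfs_inode;
	void		*i_inline;	/* NULL if the file is block-mapped */
};

static inline struct example_inode *EXAMPLE_I(struct inode *inode)
{
	return container_of(inode, struct example_inode, vfs_inode);
}

static int example_iomap_begin(struct inode *inode, loff_t pos,
		loff_t length, unsigned flags, struct iomap *iomap)
{
	struct example_inode *ei = EXAMPLE_I(inode);

	if (ei->i_inline) {
		/*
		 * Inline file: hand the cached buffer to iomap.  The
		 * mapping covers the whole inline area, so pos + length
		 * as seen by the actor can never cross it, which is
		 * exactly what the BUG_ON in iomap_dio_inline_actor
		 * checks.
		 */
		iomap->type = IOMAP_INLINE;
		iomap->inline_data = ei->i_inline;
		iomap->offset = 0;
		iomap->length = EXAMPLE_INLINE_SIZE;
		return 0;
	}

	/* ... block-mapped lookup elided ... */
	return -EIO;
}

With such a mapping, iomap_dio_rw ends up in iomap_dio_inline_actor
above and just copies between the iov_iter and the cached buffer, so
the filesystem no longer needs its own detect-inline-and-fall-back
path for direct I/O.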