Message-Id: <20201109125026.570966436@linuxfoundation.org>
Date: Mon, 9 Nov 2020 13:56:20 +0100
From: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
To: linux-kernel@...r.kernel.org, stable@...r.kernel.org
Cc: Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
Andy Strohman <astroh@...zon.com>,
"Darrick J. Wong" <darrick.wong@...cle.com>
Subject: [PATCH 5.4 83/85] xfs: flush for older, xfs specific ioctls
From: Andy Strohman <astroh@...zon.com>
Commit 837a6e7f5cdb ("fs: add generic UNRESVSP and ZERO_RANGE ioctl handlers")
changed the XFS_IOC_UNRESVSP, XFS_IOC_UNRESVSP64 and XFS_IOC_ZERO_RANGE ioctls
to be generic instead of xfs-specific.

Because of this change, the backport of 36f11775da75 ("xfs: properly serialise
fallocate against AIO+DIO") needed adaptation, as 5.4 still uses the
xfs-specific ioctls.

Without this, xfstests xfs/242 and xfs/290 fail; both of these tests exercise
XFS_IOC_ZERO_RANGE.
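
For reference, the tests drive this path through the xfs-specific ioctl. The
following is a minimal userspace sketch (illustrative only, not part of the
patch): it assumes the xfsprogs development headers are installed so that
<xfs/xfs_fs.h> provides XFS_IOC_ZERO_RANGE and struct xfs_flock64, and the
offset/length values are arbitrary examples.

/*
 * Illustrative sketch: zero a byte range with the xfs-specific ioctl,
 * roughly what xfs/242 and xfs/290 exercise.  Assumes <xfs/xfs_fs.h>
 * from xfsprogs; error handling is trimmed to the bare minimum.
 */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <xfs/xfs_fs.h>

int main(int argc, char **argv)
{
	struct xfs_flock64 bf = {
		.l_whence = 0,		/* SEEK_SET: l_start is an absolute offset */
		.l_start  = 65536,	/* start of the range to zero */
		.l_len    = 131072,	/* length of the range to zero */
	};
	int fd;

	if (argc < 2)
		return 1;

	fd = open(argv[1], O_RDWR);
	if (fd < 0 || ioctl(fd, XFS_IOC_ZERO_RANGE, &bf) < 0) {
		perror("XFS_IOC_ZERO_RANGE");
		return 1;
	}

	close(fd);
	return 0;
}

The l_whence/l_start/l_len fields are the same values that xfs_ioc_space()
interprets in the hunks below.
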
Fixes: 36f11775da75 ("xfs: properly serialise fallocate against AIO+DIO")
Tested-by: Andy Strohman <astroh@...zon.com>
Signed-off-by: Darrick J. Wong <darrick.wong@...cle.com>
Reviewed-by: Darrick J. Wong <darrick.wong@...cle.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
---
fs/xfs/xfs_ioctl.c | 26 +++++++++++++++++++++++++-
1 file changed, 25 insertions(+), 1 deletion(-)
--- a/fs/xfs/xfs_ioctl.c
+++ b/fs/xfs/xfs_ioctl.c
@@ -622,7 +622,6 @@ xfs_ioc_space(
 	error = xfs_break_layouts(inode, &iolock, BREAK_UNMAP);
 	if (error)
 		goto out_unlock;
-	inode_dio_wait(inode);
 
 	switch (bf->l_whence) {
 	case 0: /*SEEK_SET*/
@@ -668,6 +667,31 @@ xfs_ioc_space(
 		goto out_unlock;
 	}
 
+	/*
+	 * Must wait for all AIO to complete before we continue as AIO can
+	 * change the file size on completion without holding any locks we
+	 * currently hold. We must do this first because AIO can update both
+	 * the on disk and in memory inode sizes, and the operations that follow
+	 * require the in-memory size to be fully up-to-date.
+	 */
+	inode_dio_wait(inode);
+
+	/*
+	 * Now that AIO and DIO have drained we can flush and (if necessary)
+	 * invalidate the cached range over the first operation we are about to
+	 * run. We include zero range here because it starts with a hole punch
+	 * over the target range.
+	 */
+	switch (cmd) {
+	case XFS_IOC_ZERO_RANGE:
+	case XFS_IOC_UNRESVSP:
+	case XFS_IOC_UNRESVSP64:
+		error = xfs_flush_unmap_range(ip, bf->l_start, bf->l_len);
+		if (error)
+			goto out_unlock;
+		break;
+	}
+
 	switch (cmd) {
 	case XFS_IOC_ZERO_RANGE:
 		flags |= XFS_PREALLOC_SET;
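
For context, the xfs_flush_unmap_range() helper called above writes back and
then drops the page cache over the affected range before the hole punch /
zeroing runs. The following is a paraphrased sketch of the 5.4-era helper in
fs/xfs/xfs_bmap_util.c, not a verbatim copy of the source:

/*
 * Paraphrased sketch of xfs_flush_unmap_range(): write back dirty pages
 * over the target range, then drop them from the page cache so the
 * operation that follows cannot race against stale cached data.
 */
int
xfs_flush_unmap_range(
	struct xfs_inode	*ip,
	xfs_off_t		offset,
	xfs_off_t		len)
{
	struct xfs_mount	*mp = ip->i_mount;
	struct inode		*inode = VFS_I(ip);
	xfs_off_t		rounding, start, end;
	int			error;

	/* Round the range out to filesystem block / page granularity. */
	rounding = max_t(xfs_off_t, mp->m_sb.sb_blocksize, PAGE_SIZE);
	start = round_down(offset, rounding);
	end = round_up(offset + len, rounding) - 1;

	error = filemap_write_and_wait_range(inode->i_mapping, start, end);
	if (error)
		return error;
	truncate_pagecache_range(inode, start, end);
	return 0;
}

Rounding the range out to the larger of the filesystem block size and the page
size ensures that partially cached blocks at either edge of the range are
written back and invalidated as well.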