Message-ID: <20170412170723.GQ29622@ZenIV.linux.org.uk>
Date:   Wed, 12 Apr 2017 18:07:23 +0100
From:   Al Viro <viro@...IV.linux.org.uk>
To:     Dave Jones <davej@...emonkey.org.uk>,
        Linux Kernel <linux-kernel@...r.kernel.org>
Subject: Re: iov_iter_pipe warning.

On Wed, Apr 12, 2017 at 12:27:09PM -0400, Dave Jones wrote:

> [ 1010.317696] asked to read 2097152, claims to have read 7025
> [ 1010.329471] actual size of data in pipe 65536 
> [ 1010.341162] [0:4096
> [ 1010.353232] ,1:4096
> [ 1010.364402] ,2:4096
> [ 1010.375608] ,3:4096
> [ 1010.386346] ,4:4096
> [ 1010.397027] ,5:4096
> [ 1010.407611] ,6:4096
> [ 1010.418010] ,7:4096
> [ 1010.428533] ,8:4096
> [ 1010.438885] ,9:4096
> [ 1010.449269] ,10:4096
> [ 1010.459462] ,11:4096
> [ 1010.469519] ,12:4096
> [ 1010.479326] ,13:4096
> [ 1010.489093] ,14:4096
> [ 1010.498711] ,15:4096
> [ 1010.508217] ]
> [ 1010.517570] f_op: ffffffffa0242980, f_flags: 311298, pos: 11/7036, size: 7036

	OK, I see what's going on.  Could you check if the following stops
the warnings?  It's not the final variant of the fix - there's no need to
copy the entire iov_iter, it's just that the primitive needed to deal with
that in a cleaner way is still not in mainline - davem has pulled it into
net.git, but that was after the latest pull from net.git into mainline.

	For now it should at least tell whether there's something else
going on, though:

diff --git a/fs/nfs/direct.c b/fs/nfs/direct.c
index aab32fc3d6a8..d1633753a1a8 100644
--- a/fs/nfs/direct.c
+++ b/fs/nfs/direct.c
@@ -568,6 +568,7 @@ ssize_t nfs_file_direct_read(struct kiocb *iocb, struct iov_iter *iter)
 	struct nfs_lock_context *l_ctx;
 	ssize_t result = -EINVAL;
 	size_t count = iov_iter_count(iter);
+	struct iov_iter data;
 	nfs_add_stats(mapping->host, NFSIOS_DIRECTREADBYTES, count);
 
 	dfprintk(FILE, "NFS: direct read(%pD2, %zd@%Ld)\n",
@@ -600,14 +601,17 @@ ssize_t nfs_file_direct_read(struct kiocb *iocb, struct iov_iter *iter)
 	nfs_start_io_direct(inode);
 
 	NFS_I(inode)->read_io += count;
-	result = nfs_direct_read_schedule_iovec(dreq, iter, iocb->ki_pos);
+	data = *iter;
+	result = nfs_direct_read_schedule_iovec(dreq, &data, iocb->ki_pos);
 
 	nfs_end_io_direct(inode);
 
 	if (!result) {
 		result = nfs_direct_wait(dreq);
-		if (result > 0)
+		if (result > 0) {
+			iov_iter_advance(iter, result);
 			iocb->ki_pos += result;
+		}
 	}
 
 out_release:
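
	The shape of the workaround, as a minimal sketch (do_transfer() is a
hypothetical stand-in for whatever primitive consumes the iterator; the
iov_iter struct copy and iov_iter_advance() are the same ones the patch
uses): operate on a private copy of the iterator so a short or failed
transfer cannot leave the caller's iov_iter out of sync with the byte
count actually returned, then advance the original by only what was
really read.

#include <linux/fs.h>
#include <linux/uio.h>

/* hypothetical transfer primitive, for illustration only */
static ssize_t do_transfer(struct kiocb *iocb, struct iov_iter *i);

static ssize_t read_with_private_iter(struct kiocb *iocb, struct iov_iter *iter)
{
	struct iov_iter data = *iter;	/* private copy; caller's iter untouched */
	ssize_t result = do_transfer(iocb, &data);

	if (result > 0)
		iov_iter_advance(iter, result);	/* advance by bytes actually read */
	return result;
}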
