Message-ID: <20170807201818.kykqzexce6ap6aik@codemonkey.org.uk>
Date: Mon, 7 Aug 2017 16:18:18 -0400
From: Dave Jones <davej@...emonkey.org.uk>
To: Al Viro <viro@...IV.linux.org.uk>
Cc: Linux Kernel <linux-kernel@...r.kernel.org>
Subject: Re: iov_iter_pipe warning.
On Fri, Apr 28, 2017 at 06:20:25PM +0100, Al Viro wrote:
> On Fri, Apr 28, 2017 at 12:50:24PM -0400, Dave Jones wrote:
> > currently running v4.11-rc8-75-gf83246089ca0
> >
> > sunrpc bit is for the other unrelated problem I'm chasing.
> >
> > note also, I saw the backtrace without the fs/splice.c changes.
>
> Interesting... Could you add this and see if that triggers?
>
> diff --git a/fs/splice.c b/fs/splice.c
> index 540c4a44756c..12a12d9c313f 100644
> --- a/fs/splice.c
> +++ b/fs/splice.c
> @@ -306,6 +306,9 @@ ssize_t generic_file_splice_read(struct file *in, loff_t *ppos,
> kiocb.ki_pos = *ppos;
> ret = call_read_iter(in, &kiocb, &to);
> if (ret > 0) {
> + if (WARN_ON(iov_iter_count(&to) != len - ret))
> +		printk(KERN_ERR "ops %p: was %zu, left %zu, returned %zd\n",
> +				in->f_op, len, iov_iter_count(&to), ret);
> *ppos = kiocb.ki_pos;
> file_accessed(in);
> } else if (ret < 0) {
Hey Al,
Due to a git stash screw-up on my part, I've had this leftover WARN_ON
in my tree for the last couple of months. (That screw-up might turn out to
be serendipitous if this is a real bug...)
Today I decided to change things up and beat up on xfs for a change, and
was able to trigger this again.
Is this check no longer valid, or am I triggering the same bug we chased
down in nfs, but now in xfs? (None of the other detritus from that debugging
back in April made it in; just those three lines above.)
Dave
WARNING: CPU: 1 PID: 18377 at fs/splice.c:309 generic_file_splice_read+0x3e4/0x430
CPU: 1 PID: 18377 Comm: trinity-c17 Not tainted 4.13.0-rc4-think+ #1
task: ffff88045d2855c0 task.stack: ffff88045ca28000
RIP: 0010:generic_file_splice_read+0x3e4/0x430
RSP: 0018:ffff88045ca2f900 EFLAGS: 00010206
RAX: 000000000000001f RBX: ffff88045c36e200 RCX: 0000000000000000
RDX: 0000000000000fe1 RSI: dffffc0000000000 RDI: ffff88045ca2f960
RBP: ffff88045ca2fa38 R08: ffff88046b26b880 R09: 000000000000001f
R10: ffff88045ca2f540 R11: 0000000000000000 R12: ffff88045ca2f9b0
R13: ffff88045ca2fa10 R14: 1ffff1008b945f26 R15: ffff88045c36e228
FS: 00007f5580594700(0000) GS:ffff88046b200000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007f5580594698 CR3: 000000045d3ef000 CR4: 00000000001406e0
Call Trace:
? pipe_to_user+0xa0/0xa0
? lockdep_init_map+0xb2/0x2b0
? rw_verify_area+0x9d/0x150
do_splice_to+0xab/0xc0
splice_direct_to_actor+0x1ac/0x480
? generic_pipe_buf_nosteal+0x10/0x10
? do_splice_to+0xc0/0xc0
? rw_verify_area+0x9d/0x150
do_splice_direct+0x1b9/0x230
? splice_direct_to_actor+0x480/0x480
? retint_kernel+0x10/0x10
? rw_verify_area+0x9d/0x150
do_sendfile+0x428/0x840
? do_compat_pwritev64+0xb0/0xb0
? copy_user_generic_unrolled+0x83/0xb0
SyS_sendfile64+0xa4/0x120
? SyS_sendfile+0x150/0x150
? mark_held_locks+0x23/0xb0
? do_syscall_64+0xc0/0x3e0
? SyS_sendfile+0x150/0x150
do_syscall_64+0x1bc/0x3e0
? syscall_return_slowpath+0x240/0x240
? mark_held_locks+0x23/0xb0
? return_from_SYSCALL_64+0x2d/0x7a
? trace_hardirqs_on_caller+0x182/0x260
? trace_hardirqs_on_thunk+0x1a/0x1c
entry_SYSCALL64_slow_path+0x25/0x25
RIP: 0033:0x7f557febf219
RSP: 002b:00007ffc25086db8 EFLAGS: 00000246
ORIG_RAX: 0000000000000028
RAX: ffffffffffffffda RBX: 0000000000000028 RCX: 00007f557febf219
RDX: 00007f557e559000 RSI: 0000000000000187 RDI: 0000000000000199
RBP: 00007ffc25086e60 R08: 0000000000000100 R09: 0000000000006262
R10: 0000000000001000 R11: 0000000000000246 R12: 0000000000000002
R13: 00007f5580516058 R14: 00007f5580594698 R15: 00007f5580516000
---[ end trace e2f2217aba545e92 ]---
ops ffffffffa09e4920: was 4096, left 0, returned 31
$ grep ffffffffa09e4920 /proc/kallsyms
ffffffffa09e4920 r xfs_file_operations [xfs]