Message-ID: <20170428152955.mafs3f22srmm34aw@codemonkey.org.uk>
Date: Fri, 28 Apr 2017 11:29:55 -0400
From: Dave Jones <davej@...emonkey.org.uk>
To: Al Viro <viro@...IV.linux.org.uk>
Cc: Linux Kernel <linux-kernel@...r.kernel.org>
Subject: Re: iov_iter_pipe warning.
On Fri, Apr 21, 2017 at 06:54:30PM +0100, Al Viro wrote:
> On Wed, Apr 12, 2017 at 03:03:18PM -0400, Dave Jones wrote:
>
> > Well it's been running an hour without incident, which looks promising.
> > I'll leave it run, but I'd say you're on the right track given how quick
> > it reproduced so far.
>
> Could you try this and see if it works? What happens is that unlike
> e.g. generic_file_read_iter/generic_file_write_iter, NFS O_DIRECT handling
> does not make sure that iov_iter had been advanced by the amount
> actually transferred - it is left advanced by the amount *requested*.
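If I'm reading that right, the missing piece is the usual short-transfer rewind: the O_DIRECT path advances the iterator by the full requested length while it builds the requests, so on a short transfer something has to pull it back by the part that never moved. A rough sketch of that pattern (made-up names, not your actual patch):

#include <linux/fs.h>
#include <linux/uio.h>

/* stand-in for the real submission/wait; assumed to advance 'iter'
 * by the full requested length even when the transfer comes up short */
static ssize_t do_the_transfer(struct kiocb *iocb, struct iov_iter *iter);

static ssize_t example_dio_read(struct kiocb *iocb, struct iov_iter *iter)
{
        size_t requested = iov_iter_count(iter);
        ssize_t transferred;

        transferred = do_the_transfer(iocb, iter);

        if (transferred >= 0 && (size_t)transferred < requested)
                /* rewind by whatever was never actually transferred */
                iov_iter_revert(iter, requested - transferred);

        return transferred;
}
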
Crap. So it seems I never ran it long enough. I can still hit that trace.
I re-added one of your earlier debug patches, and got this...
WARNING: CPU: 1 PID: 20100 at fs/splice.c:297 test_it+0x7d/0x120
CPU: 1 PID: 20100 Comm: trinity-c49 Not tainted 4.11.0-rc8-think+ #3
Call Trace:
dump_stack+0x68/0x93
__warn+0xcb/0xf0
warn_slowpath_null+0x1d/0x20
test_it+0x7d/0x120
generic_file_splice_read+0x19a/0x1e0
do_splice_to+0x79/0x90
splice_direct_to_actor+0xc6/0x280
? generic_pipe_buf_nosteal+0x10/0x10
do_splice_direct+0x9e/0xd0
do_sendfile+0x1d7/0x3c0
SyS_sendfile64+0xc9/0xe0
do_syscall_64+0x66/0x1d0
entry_SYSCALL64_slow_path+0x25/0x25
RIP: 0033:0x7f8b83b59099
RSP: 002b:00007ffd7967da68 EFLAGS: 00000246 ORIG_RAX: 0000000000000028
RAX: ffffffffffffffda RBX: 0000000000000028 RCX: 00007f8b83b59099
RDX: 0000000000000000 RSI: 0000000000000131 RDI: 0000000000000195
RBP: 00007f8b840d1000 R08: 0000000000000073 R09: fffffffffffffffc
R10: 000000000000040f R11: 0000000000000246 R12: 0000000000000002
R13: 00007f8b840d1048 R14: 00007f8b8422fad8 R15: 00007f8b840d1000
---[ end trace 70d344adaede0734 ]---
asked to read 1039, claims to have read 62
actual size of data in pipe 977
[0:977
]
f_op: ffffffffa02dd980, f_flags: 313346, pos: 4478229749/4478229811, size: 4478229811
------------[ cut here ]------------
WARNING: CPU: 1 PID: 20100 at fs/splice.c:1017 splice_direct_to_actor+0x13f/0x280
CPU: 1 PID: 20100 Comm: trinity-c49 Tainted: G W 4.11.0-rc8-think+ #3
Call Trace:
dump_stack+0x68/0x93
__warn+0xcb/0xf0
warn_slowpath_null+0x1d/0x20
splice_direct_to_actor+0x13f/0x280
? generic_pipe_buf_nosteal+0x10/0x10
do_splice_direct+0x9e/0xd0
do_sendfile+0x1d7/0x3c0
SyS_sendfile64+0xc9/0xe0
do_syscall_64+0x66/0x1d0
entry_SYSCALL64_slow_path+0x25/0x25
RIP: 0033:0x7f8b83b59099
RSP: 002b:00007ffd7967da68 EFLAGS: 00000246 ORIG_RAX: 0000000000000028
RAX: ffffffffffffffda RBX: 0000000000000028 RCX: 00007f8b83b59099
RDX: 0000000000000000 RSI: 0000000000000131 RDI: 0000000000000195
RBP: 00007f8b840d1000 R08: 0000000000000073 R09: fffffffffffffffc
R10: 000000000000040f R11: 0000000000000246 R12: 0000000000000002
R13: 00007f8b840d1048 R14: 00007f8b8422fad8 R15: 00007f8b840d1000
---[ end trace 70d344adaede0735 ]---
in->f_op = ffffffffa02dd980, ->splice_write = ffffffff812b1db0
$ grep ffffffffa02dd980 /proc/kallsyms
ffffffffa02dd980 r nfs4_file_operations [nfsv4]
$ grep ffffffff812b1db0 /proc/kallsyms
ffffffff812b1db0 T iter_file_splice_write
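
If I'm reading that debug output right, the numbers just don't add up: the splice read asked for 1039 bytes, ->read_iter claims it returned 62, yet the pipe ended up holding 977 bytes in a single buffer (offset 0, len 977). I assume the check simply totals up what is actually sitting in the pipe and compares it against the claimed return value; roughly something like this (a guess at your debug patch, not the real thing, using the curbuf/nrbufs layout 4.11 still has):

#include <linux/pipe_fs_i.h>

/* total up the payload currently held in the pipe's occupied buffers */
static size_t bytes_in_pipe(struct pipe_inode_info *pipe)
{
        size_t total = 0;
        int idx = pipe->curbuf;
        int i;

        for (i = 0; i < pipe->nrbufs; i++) {
                struct pipe_buffer *buf =
                        pipe->bufs + ((idx + i) & (pipe->buffers - 1));
                total += buf->len;
        }
        return total;
}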