Message-ID: <af911546-72e4-5525-6b31-1ad1f555799e@gmail.com>
Date: Mon, 3 May 2021 11:45:26 +0100
From: Pavel Begunkov <asml.silence@...il.com>
To: Palash Oswal <oswalpalash@...il.com>
Cc: Jens Axboe <axboe@...nel.dk>, io-uring@...r.kernel.org,
LKML <linux-kernel@...r.kernel.org>,
syzbot+11bf59db879676f59e52@...kaller.appspotmail.com
Subject: Re: INFO: task hung in io_uring_cancel_sqpoll
On 5/3/21 5:41 AM, Palash Oswal wrote:
> On Mon, May 3, 2021 at 3:42 AM Pavel Begunkov <asml.silence@...il.com> wrote:
>> The test might be very useful. Would you send a patch to
>> liburing? Or mind the repro being taken as a test?
>
> Pavel,
>
> I'm working on a PR for liburing with this test. Do you know how I can

Perfect, thanks

> address this behavior?

As the hang is in sqpoll cancellations, it's most probably
fixed in 5.13 and again awaits being taken for stable.
Don't know about the segfaults, but it has been like that
for a long time; syz reproducers often segfault for me and
exit with 0 in the end. So I wouldn't worry about it.
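
For reference, a minimal sketch (not the actual sqpoll-cancel-hang
test; entries and idle time are just placeholders) of what a liburing
test hitting this path looks like: set up a ring with
IORING_SETUP_SQPOLL, submit something, then tear the ring down so the
kernel has to cancel the SQ thread on exit:

#include <stdio.h>
#include <string.h>
#include <liburing.h>

int main(void)
{
	struct io_uring ring;
	struct io_uring_params p;
	struct io_uring_sqe *sqe;
	int ret;

	memset(&p, 0, sizeof(p));
	p.flags = IORING_SETUP_SQPOLL;	/* kernel-side SQ polling thread */
	p.sq_thread_idle = 100;		/* let the SQ thread go idle quickly */

	/* pre-5.11 kernels need privileges for SQPOLL; skip if refused */
	ret = io_uring_queue_init_params(4, &ring, &p);
	if (ret < 0) {
		fprintf(stderr, "queue_init: %s\n", strerror(-ret));
		return 0;
	}

	sqe = io_uring_get_sqe(&ring);
	if (sqe) {
		io_uring_prep_nop(sqe);
		io_uring_submit(&ring);
	}

	/*
	 * Tearing the ring down forces sqpoll cancellation in the
	 * kernel; on affected kernels this is where the SQ thread
	 * task hung.
	 */
	io_uring_queue_exit(&ring);
	return 0;
}
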
> root@...kaller:~/liburing/test# ./runtests.sh sqpoll-cancel-hang
> Running test sqp[ 15.310997] Running test sqpoll-cancel-hang:
> oll-cancel-hang:
> [ 15.333348] sqpoll-cancel-h[305]: segfault at 0 ip 000055ad00e265e3 sp]
> [ 15.334940] Code: 89 d8 8d 34 90 8b 45 04 ba 03 00 00 00 c1 e0 04 03 46
> All tests passed
>
> root@...kaller:~/liburing/test# ./sqpoll-cancel-hang
> [ 13.572639] sqpoll-cancel-h[298]: segfault at 0 ip 00005634c4a455e3 sp]
> [ 13.576506] Code: 89 d8 8d 34 90 8b 45 04 ba 03 00 00 00 c1 e0 04 03 46
> [ 23.350459] random: crng init done
> [ 23.352837] random: 7 urandom warning(s) missed due to ratelimiting
> [ 243.090865] INFO: task iou-sqp-298:299 blocked for more than 120 secon.
> [ 243.095187] Not tainted 5.12.0 #142
> [ 243.099800] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disable.
> [ 243.105928] task:iou-sqp-298 state:D stack: 0 pid: 299 ppid: 4
> [ 243.111044] Call Trace:
> [ 243.112855] __schedule+0xb1d/0x1130
> [ 243.115549] ? __sched_text_start+0x8/0x8
> [ 243.118328] ? io_wq_worker_sleeping+0x145/0x500
> [ 243.121790] schedule+0x131/0x1c0
> [ 243.123698] io_uring_cancel_sqpoll+0x288/0x350
> [ 243.125977] ? io_sq_thread_unpark+0xd0/0xd0
> [ 243.128966] ? mutex_lock+0xbb/0x130
> [ 243.132572] ? init_wait_entry+0xe0/0xe0
> [ 243.135429] ? wait_for_completion_killable_timeout+0x20/0x20
> [ 243.138303] io_sq_thread+0x174c/0x18c0
> [ 243.140162] ? io_rsrc_put_work+0x380/0x380
> [ 243.141613] ? init_wait_entry+0xe0/0xe0
> [ 243.143686] ? _raw_spin_lock_irq+0xa5/0x180
> [ 243.145619] ? _raw_spin_lock_irqsave+0x190/0x190
> [ 243.147671] ? calculate_sigpending+0x6b/0xa0
> [ 243.149036] ? io_rsrc_put_work+0x380/0x380
> [ 243.150694] ret_from_fork+0x22/0x30
>
> Palash
>
--
Pavel Begunkov