Message-ID: <c7b9c600-724b-6df1-84ba-b74999d6f4a6@kernel.dk>
Date: Mon, 18 Nov 2019 21:34:22 -0700
From: Jens Axboe <axboe@...nel.dk>
To: Eric Biggers <ebiggers@...nel.org>
Cc: syzbot <syzbot+0f1cc17f85154f400465@...kaller.appspotmail.com>,
andriy.shevchenko@...ux.intel.com, davem@...emloft.net,
f.fainelli@...il.com, gregkh@...uxfoundation.org,
idosch@...lanox.com, kimbrownkd@...il.com,
linux-fsdevel@...r.kernel.org, linux-kernel@...r.kernel.org,
netdev@...r.kernel.org, petrm@...lanox.com,
syzkaller-bugs@...glegroups.com, tglx@...utronix.de,
viro@...iv.linux.org.uk, wanghai26@...wei.com,
yuehaibing@...wei.com
Subject: Re: INFO: task hung in io_wq_destroy
On 11/18/19 8:15 PM, Jens Axboe wrote:
> On 11/18/19 7:23 PM, Eric Biggers wrote:
>> Hi Jens,
>>
>> On Mon, Oct 28, 2019 at 03:00:08PM -0600, Jens Axboe wrote:
>>> This has been fixed in my for-next branch for a few days at least;
>>> unfortunately, linux-next is still on the old one. The next version
>>> should be better.
>>
>> This is still occurring on linux-next. Here's a report on next-20191115 from
>> https://syzkaller.appspot.com/text?tag=CrashReport&x=16fa3d1ce00000
>
> Hmm, I'll take a look. Looking at the reproducer, it's got a massive
> sleep at the end. I take it this triggers before that time actually
> passes? Because that's around 11.5 days of sleep.
>
> No luck reproducing this so far, I'll try on linux-next.
I see what it is - if the io-wq is set up and torn down before the
manager thread gets to run, the manager never creates the workers we
were already counting on. The manager thread just exits without doing
anything, but teardown still waits for those expected workers to exit
before it is allowed to proceed. That never happens, so the task hangs.
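
To make the ordering concrete, here is a minimal userspace sketch of
the same shape, using pthreads instead of kernel threads. All the names
in it (struct wq, manager_fn, wq_destroy, expected_workers) are
hypothetical stand-ins that only model the race, not the actual io-wq
code. Built with -pthread, it hangs in wq_destroy() whenever teardown
beats the manager to the lock:

#include <pthread.h>
#include <stdio.h>

struct wq {
	pthread_mutex_t lock;
	pthread_cond_t cond;
	int expected_workers;	/* workers teardown will wait for */
	int exited_workers;	/* workers that have come and gone */
	int shutting_down;
};

static void *worker_fn(void *arg)
{
	struct wq *wq = arg;

	/* A real worker would process work; this model exits at once. */
	pthread_mutex_lock(&wq->lock);
	wq->exited_workers++;
	pthread_cond_broadcast(&wq->cond);
	pthread_mutex_unlock(&wq->lock);
	return NULL;
}

static void *manager_fn(void *arg)
{
	struct wq *wq = arg;
	pthread_t t;
	int i;

	pthread_mutex_lock(&wq->lock);
	if (wq->shutting_down) {
		/*
		 * The bug being modeled: the manager bails out without
		 * creating the workers teardown is already counting on,
		 * and without adjusting expected_workers or waking the
		 * waiter in wq_destroy().
		 */
		pthread_mutex_unlock(&wq->lock);
		return NULL;
	}
	pthread_mutex_unlock(&wq->lock);

	for (i = 0; i < wq->expected_workers; i++) {
		pthread_create(&t, NULL, worker_fn, wq);
		pthread_detach(t);
	}
	return NULL;
}

static void wq_destroy(struct wq *wq, pthread_t manager)
{
	pthread_mutex_lock(&wq->lock);
	wq->shutting_down = 1;
	/* Hangs forever if the manager never created the workers. */
	while (wq->exited_workers < wq->expected_workers)
		pthread_cond_wait(&wq->cond, &wq->lock);
	pthread_mutex_unlock(&wq->lock);
	pthread_join(manager, NULL);
}

int main(void)
{
	struct wq wq = {
		.lock = PTHREAD_MUTEX_INITIALIZER,
		.cond = PTHREAD_COND_INITIALIZER,
		.expected_workers = 2,
	};
	pthread_t manager;

	pthread_create(&manager, NULL, manager_fn, &wq);
	/*
	 * Tear down immediately: if this runs before manager_fn takes
	 * the lock, wq_destroy() waits for workers that will never
	 * exist - the "task hung in io_wq_destroy" from the report.
	 */
	wq_destroy(&wq, manager);
	printf("teardown completed\n");
	return 0;
}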
I've got a patch for this, but I'll test it a bit and send it out
tomorrow.
--
Jens Axboe