Message-ID: <d8b18ba8-ea12-b617-6b5e-455a1d7b5e21@linux.microsoft.com>
Date: Thu, 29 Sep 2022 15:18:21 +0200
From: Thilo Fromm <t-lo@...ux.microsoft.com>
To: Jan Kara <jack@...e.cz>
Cc: jack@...e.com, tytso@....edu, Ye Bin <yebin10@...wei.com>,
linux-ext4@...r.kernel.org
Subject: Re: [syzbot] possible deadlock in jbd2_journal_lock_updates
Hello Honza,
Thank you very much for your thorough feedback. We were unaware of the
backtrace issue and will look into it right away.
>>> So this seems like a real issue. Essentially, the problem is that
>>> ext4_bmap() acquires inode->i_rwsem while its caller
>>> jbd2_journal_flush() is holding journal->j_checkpoint_mutex. This
>>> looks like a real deadlock possibility.
>>
>> Flatcar Container Linux users have reported a kernel issue which might be
>> caused by commit 51ae846cff5. The issue is triggered under I/O load in
>> certain conditions and leads to a complete system hang. I've pasted a
>> typical kernel log below; please refer to
>> https://github.com/flatcar/Flatcar/issues/847 for more details.
>>
>> The issue can be triggered on Flatcar release 3227.2.2 / kernel version
>> 5.15.63 (we ship LTS kernels) but not on release 3227.2.1 / kernel 5.15.58.
>> 51ae846cff5 was introduced to 5.15 in 5.15.61.
>
> Well, so far your stacktraces do not really show anything pointing to that
> particular commit. So we need to understand that hang some more.
This makes sense and I agree. Sorry for the garbled stack traces.
In other news, one of our users - who can reliably trigger the issue in
their set-up - ran tests with kernel 5.15.63 with and without commit
51ae846cff5. Without the commit, the kernel hang did not occur (see
https://github.com/flatcar/Flatcar/issues/847#issuecomment-1261967920).
We'll now focus on un-garbling our traces to get to the bottom of this.
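
To make sure we read the inversion you describe correctly, here is a toy
user-space sketch of the ABBA pattern, with two pthread mutexes standing in
for journal->j_checkpoint_mutex and inode->i_rwsem. This is only an
illustration of the lock-ordering problem; the thread names and the mapping
onto the real ext4/jbd2 call paths are our assumption, not taken from the
kernel sources.

/* Toy ABBA illustration only -- not the actual ext4/jbd2 code.
 * "flush_path" mimics the jbd2_journal_flush() side: checkpoint mutex
 * first, then the inode lock. "write_path" mimics a task that holds the
 * inode lock and then needs the checkpoint mutex. Run together, the two
 * threads deadlock on each other's lock.
 */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t checkpoint_mutex = PTHREAD_MUTEX_INITIALIZER; /* ~ j_checkpoint_mutex */
static pthread_mutex_t inode_rwsem      = PTHREAD_MUTEX_INITIALIZER; /* ~ i_rwsem (simplified) */

static void *flush_path(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&checkpoint_mutex);  /* lock A */
	usleep(1000);                           /* widen the race window */
	pthread_mutex_lock(&inode_rwsem);       /* lock B: blocks if the other thread holds it */
	pthread_mutex_unlock(&inode_rwsem);
	pthread_mutex_unlock(&checkpoint_mutex);
	return NULL;
}

static void *write_path(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&inode_rwsem);       /* lock B first */
	usleep(1000);
	pthread_mutex_lock(&checkpoint_mutex);  /* lock A: blocks -> ABBA deadlock */
	pthread_mutex_unlock(&checkpoint_mutex);
	pthread_mutex_unlock(&inode_rwsem);
	return NULL;
}

int main(void)
{
	pthread_t a, b;

	pthread_create(&a, NULL, flush_path, NULL);
	pthread_create(&b, NULL, write_path, NULL);
	pthread_join(a, NULL);                  /* never returns once the ABBA hits */
	pthread_join(b, NULL);
	puts("no deadlock this run");           /* only reached if the timing was lucky */
	return 0;
}

This is just to confirm the shape of what we should be looking for once our
traces are usable.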
>> ( Kernel log of a crash follows; more info here:
>> https://github.com/flatcar/Flatcar/issues/847 )
>>
[...]
>> [1282119.190346] ret_from_fork+0x22/0x30
>
> Hrm, so your backtraces seem to be strange. For example in this stacktrace
> we should have kjournald2() somewhere instead of
> jbd2_journal_check_available_features() which can hardly be there. So
> somehow stack unwinding or symbol resolution is strangely confused with
> this kernel. Compiling with any unusual config or compiler?
We're building with GCC 10.3.0 and will review our build process for
anything that could confuse the unwinder or symbol resolution. We'll get
back to this thread as soon as we have news.
Thanks again for pointing this out!
> So far it seems that most tasks are waiting for transaction to commit, jbd2
> thread committing the transaction waits for someone to drop its transaction
> reference which never happens. It is unclear who holds the transaction
> reference. But with stacktraces corrupted like this it is difficult to be
> certain.
>
> So probably first try to find out why stacktraces are not working right on
> your kernel and fix them. And then, if the hang happens, please trigger
> sysrq-w (or do echo w >/proc/sysrq-trigger if you can still get to the
> machine) and send here the output. It will dump all blocked tasks and from
> that we should be able to better understand what is happening.
Working on it!
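
For when the hang reproduces next, we will capture sysrq-w as you suggest.
In case the shell on an affected node is already unusable, we plan to
pre-stage a tiny helper that does the same thing as the echo (a sketch,
assuming CONFIG_MAGIC_SYSRQ is enabled and it runs as root):

/* Same effect as `echo w > /proc/sysrq-trigger`: ask the kernel to dump
 * all blocked (uninterruptible) tasks to the kernel log.
 */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	int fd = open("/proc/sysrq-trigger", O_WRONLY);

	if (fd < 0) {
		perror("open /proc/sysrq-trigger");
		return 1;
	}
	if (write(fd, "w", 1) != 1) {
		perror("write");
		close(fd);
		return 1;
	}
	close(fd);
	/* The blocked-task dump ends up in dmesg / on the serial console. */
	return 0;
}

We'll send the resulting output here once we have it.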
Best regards,
Thilo