Message-Id: <381863DE-17A7-4D4E-8F28-0F18A4CEFC31@flyingcircus.io>
Date: Fri, 11 Oct 2024 09:27:08 +0200
From: Christian Theune <ct@...ingcircus.io>
To: Chris Mason <clm@...a.com>
Cc: Linus Torvalds <torvalds@...ux-foundation.org>,
 Dave Chinner <david@...morbit.com>,
 Matthew Wilcox <willy@...radead.org>,
 Jens Axboe <axboe@...nel.dk>,
 linux-mm@...ck.org,
 "linux-xfs@...r.kernel.org" <linux-xfs@...r.kernel.org>,
 linux-fsdevel@...r.kernel.org,
 linux-kernel@...r.kernel.org,
 Daniel Dao <dqminh@...udflare.com>,
 regressions@...ts.linux.dev,
 regressions@...mhuis.info
Subject: Re: Known and unfixed active data loss bug in MM + XFS with large
 folios since Dec 2021 (any kernel from 6.1 upwards)

Hi,

> On 10. Oct 2024, at 08:29, Christian Theune <ct@...ingcircus.io> wrote:
> 
> 
>> On 1. Oct 2024, at 02:56, Chris Mason <clm@...a.com> wrote:
>> 
>> Not disagreeing with Linus at all, but given that you've got IO
>> throttling too, we might really just be waiting.  It's hard to tell
>> because the hung task timeouts only give you information about one process.
>> 
>> I've attached a minimal version of a script we use here to show all the
>> D state processes, it might help explain things.  The only problem is
>> you have to actually ssh to the box and run it when you're stuck.
>> 
>> The idea is to print the stack trace of every D state process, and then
>> also print out how often each unique stack trace shows up.  When we're
>> deadlocked on something, there are normally a bunch of the same stack
>> (say waiting on writeback) and then one jerk sitting around in a
>> different stack who is causing all the trouble.
> 
> I think I should be able to trigger this. I’ve seen around 100 of those issues over the last week, and the chance of it happening correlates with a certain workload that should be easy to trigger. Also, the condition remains for around 5 minutes, so I should be able to trace it when I see the alert in an interactive session.
> 
> I’ve verified I can run your script and I’ll get back to you in the next few days.
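
(For reference, this is roughly how I read the description of the script: walk /proc, print the kernel stack of every task in D state, then count how often each unique stack shows up. The following is my own minimal Python reconstruction, not the actual walker.py that was attached; it needs root to read /proc/<pid>/stack.)

#!/usr/bin/env python3
import collections
import os

stacks = collections.Counter()

for pid in filter(str.isdigit, os.listdir('/proc')):
    try:
        with open(f'/proc/{pid}/stat') as f:
            # the state is the first field after the parenthesized comm
            state = f.read().rsplit(')', 1)[1].split()[0]
        if state != 'D':
            continue
        with open(f'/proc/{pid}/comm') as f:
            comm = f.read().strip()
        with open(f'/proc/{pid}/stack') as f:
            stack = f.read()
    except OSError:
        continue  # task exited in the meantime or stack not readable
    print(f'{pid} {comm} D')
    print(stack)
    stacks[stack] += 1

print('-----')
print('stack summary')
for stack, count in stacks.most_common():
    print(f'\n{count} hit{"s" if count > 1 else ""}:')
    print(stack, end='')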

I wasn’t able to create a reproducer after all, so I’ve set up alerting.
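
(Roughly what such an alert can look like, reduced to an illustration: a check that scans recent kernel messages for the hung task reports. The journalctl window and exit codes below are just examples, not our exact monitoring setup.)

#!/usr/bin/env python3
import subprocess
import sys

# Kernel messages from the last 10 minutes (example window).
log = subprocess.run(
    ['journalctl', '-k', '--since', '10 minutes ago', '--no-pager'],
    capture_output=True, text=True, check=True,
).stdout

hung = [line for line in log.splitlines() if 'blocked for more than' in line]
if hung:
    print('\n'.join(hung))
    sys.exit(2)  # alert: at least one hung task report in the window
sys.exit(0)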

I just caught one right away, but it unblocked quickly after I logged in:

The original message that triggered the alert was:

[Oct11 09:18] INFO: task nix-build:157920 blocked for more than 122 seconds.
[  +0.000937]       Not tainted 6.11.0 #1-NixOS
[  +0.000540] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[  +0.000902] task:nix-build       state:D stack:0     pid:157920 tgid:157920 ppid:157919 flags:0x00000002
[  +0.001098] Call Trace:
[  +0.000306]  <TASK>
[  +0.000279]  __schedule+0x3a3/0x1300
[  +0.000478]  schedule+0x27/0xf0
[  +0.000392]  io_schedule+0x46/0x70
[  +0.000436]  folio_wait_bit_common+0x13f/0x340
[  +0.000572]  ? __pfx_wake_page_function+0x10/0x10
[  +0.000592]  folio_wait_writeback+0x2b/0x80
[  +0.000466]  truncate_inode_partial_folio+0x5e/0x1b0
[  +0.000586]  truncate_inode_pages_range+0x1de/0x400
[  +0.000595]  evict+0x29f/0x2c0
[  +0.000396]  ? iput+0x6e/0x230
[  +0.000408]  ? _atomic_dec_and_lock+0x39/0x50
[  +0.000542]  do_unlinkat+0x2de/0x330
[  +0.000402]  __x64_sys_unlink+0x3f/0x70
[  +0.000419]  do_syscall_64+0xb7/0x200
[  +0.000407]  entry_SYSCALL_64_after_hwframe+0x77/0x7f
[  +0.000556] RIP: 0033:0x7f2bb5d1056b
[  +0.000473] RSP: 002b:00007ffc013c8588 EFLAGS: 00000206 ORIG_RAX: 0000000000000057
[  +0.000942] RAX: ffffffffffffffda RBX: 000055963c267500 RCX: 00007f2bb5d1056b
[  +0.000859] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 000055963c268c80
[  +0.000800] RBP: 000055963c267690 R08: 0000000000016020 R09: 0000000000000000
[  +0.000977] R10: 00000000000000f0 R11: 0000000000000206 R12: 00007ffc013c85c8
[  +0.000826] R13: 00007ffc013c85ac R14: 00007ffc013c8ed0 R15: 00005596441e42b0
[  +0.000833]  </TASK>

Then, after logging in, I caught it once with walker.py; this was about a minute after the alert triggered, I think. I’ll add timestamps to walker.py for the next instances (rough sketch further down):

157920 nix-build D
[<0>] folio_wait_bit_common+0x13f/0x340
[<0>] folio_wait_writeback+0x2b/0x80
[<0>] truncate_inode_partial_folio+0x5e/0x1b0
[<0>] truncate_inode_pages_range+0x1de/0x400
[<0>] evict+0x29f/0x2c0
[<0>] do_unlinkat+0x2de/0x330
[<0>] __x64_sys_unlink+0x3f/0x70
[<0>] do_syscall_64+0xb7/0x200
[<0>] entry_SYSCALL_64_after_hwframe+0x77/0x7f

-----
stack summary

1 hit:
[<0>] folio_wait_bit_common+0x13f/0x340
[<0>] folio_wait_writeback+0x2b/0x80
[<0>] truncate_inode_partial_folio+0x5e/0x1b0
[<0>] truncate_inode_pages_range+0x1de/0x400
[<0>] evict+0x29f/0x2c0
[<0>] do_unlinkat+0x2de/0x330
[<0>] __x64_sys_unlink+0x3f/0x70
[<0>] do_syscall_64+0xb7/0x200
[<0>] entry_SYSCALL_64_after_hwframe+0x77/0x7f

I tried once again after 1-2 seconds and got this:

157920 nix-build D
[<0>] xlog_wait_on_iclog+0x167/0x180 [xfs]
[<0>] xfs_log_force_seq+0x8d/0x150 [xfs]
[<0>] xfs_file_fsync+0x195/0x2a0 [xfs]
[<0>] __x64_sys_fdatasync+0x52/0x90
[<0>] do_syscall_64+0xb7/0x200
[<0>] entry_SYSCALL_64_after_hwframe+0x77/0x7f

-----
stack summary

1 hit:
[<0>] xlog_wait_on_iclog+0x167/0x180 [xfs]
[<0>] xfs_log_force_seq+0x8d/0x150 [xfs]
[<0>] xfs_file_fsync+0x195/0x2a0 [xfs]
[<0>] __x64_sys_fdatasync+0x52/0x90
[<0>] do_syscall_64+0xb7/0x200
[<0>] entry_SYSCALL_64_after_hwframe+0x77/0x7f

and after that the process finished and exited. The last trace already looks like it is no longer blocked.

I’m going to gather a few more instances during the day and will post them as a batch later.
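
(For the timestamps mentioned above, I am thinking of something as simple as printing a UTC ISO timestamp before each walker pass, e.g.:)

from datetime import datetime, timezone

def stamp():
    # UTC ISO timestamp, printed before each pass of the walker
    return datetime.now(timezone.utc).isoformat(timespec='seconds')

print(f'=== {stamp()} ===')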

Christian

-- 
Christian Theune · ct@...ingcircus.io · +49 345 219401 0
Flying Circus Internet Operations GmbH · https://flyingcircus.io
Leipziger Str. 70/71 · 06108 Halle (Saale) · Deutschland
HR Stendal HRB 21169 · Geschäftsführer: Christian Theune, Christian Zagrodnick

