Message-ID: <bug-198187-13602-uK3IaBSDpw@https.bugzilla.kernel.org/>
Date:   Thu, 18 Jan 2018 10:33:47 +0000
From:   bugzilla-daemon@...zilla.kernel.org
To:     linux-ext4@...nel.org
Subject: [Bug 198187] jbd2_log_wait_commit hangs

https://bugzilla.kernel.org/show_bug.cgi?id=198187

--- Comment #9 from lavv17@...il.com ---
You are correct that we use plain SATA drives, with LVM on top of them in
raid1 mode.
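
For reference, the configuration details requested below could be gathered
with something like the following. This is only a rough sketch; the exact
device and volume group names on our system will of course differ:

  # Block device topology: the SATA disks, the LVM layer and the raid1 legs
  lsblk -o NAME,TYPE,SIZE,MOUNTPOINT

  # LVM view: the segment type column should show raid1 for the mirrored LVs
  lvs -a -o lv_name,vg_name,segtype,devices

  # Device-mapper tables, as asked for in comment #8 below
  dmsetup table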

On 18 Jan 2018 at 13:06, <bugzilla-daemon@...zilla.kernel.org>
wrote:

> https://bugzilla.kernel.org/show_bug.cgi?id=198187
>
> --- Comment #8 from Jan Kara (jack@...e.cz) ---
> Thanks for the output. So every process there waits for the jbd2 thread to
> commit the running transaction. JBD2 does:
>
> [144526.447408] jbd2/dm-19-8    D    0  1580      2 0x00000080
> [144526.448176] Call Trace:
> [144526.448937]  __schedule+0x3be/0x830
> [144526.449632]  ? scsi_request_fn+0x3f/0x6b0
> [144526.450310]  ? bit_wait+0x60/0x60
> [144526.450993]  schedule+0x36/0x80
> [144526.451673]  io_schedule+0x16/0x40
> [144526.452343]  bit_wait_io+0x11/0x60
> [144526.453017]  __wait_on_bit+0x58/0x90
> [144526.453692]  out_of_line_wait_on_bit+0x8e/0xb0
> [144526.454371]  ? bit_waitqueue+0x40/0x40
> [144526.455051]  __wait_on_buffer+0x32/0x40
> [144526.455731]  jbd2_journal_commit_transaction+0xfa4/0x1800
> [144526.456414]  kjournald2+0xd2/0x270
> [144526.457098]  ? kjournald2+0xd2/0x270
> [144526.457782]  ? remove_wait_queue+0x70/0x70
> [144526.458470]  kthread+0x109/0x140
> [144526.459139]  ? commit_timeout+0x10/0x10
> [144526.459821]  ? kthread_park+0x60/0x60
> [144526.460497]  ? do_syscall_64+0x67/0x150
> [144526.461166]  ret_from_fork+0x25/0x30
>
> So we have submitted buffers for IO and they have not completed. In cases
> like this, the problem is in 99% of cases either in the storage driver or in
> the storage firmware. Since you mentioned this started happening after a
> kernel update and you seem to be using only plain SATA drives (am I right?),
> storage firmware is probably out of the question.
>
> You seem to be using some kind of RAID on top of these SATA drives, which
> would be the most probable culprit at this point. Can you describe your
> storage configuration? Also, the output of 'dmsetup table' would be useful.
> After having that we would probably need to pull in DM developers to have a
> look. Thanks.
>
> --
> You are receiving this mail because:
> You reported the bug.
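
As a side note, here is a rough sketch of how the stuck IO could be confirmed
from userspace (assuming sysfs and sysrq are available; PID 1580 is the jbd2
thread shown in the trace above):

  # In-flight requests per block device (reads writes); counts that never
  # drain point at the driver/RAID layer rather than at jbd2 itself
  grep -H . /sys/block/*/inflight

  # Current kernel stack of the stuck journal thread (PID from the trace)
  cat /proc/1580/stack

  # Dump all blocked (D-state) tasks to the kernel log (run as root)
  echo w > /proc/sysrq-trigger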

-- 
You are receiving this mail because:
You are watching the assignee of the bug.
