Date:   Tue, 22 Mar 2022 10:11:21 +0530
From:   Ritesh Harjani <riteshh@...ux.ibm.com>
To:     Ye Bin <yebin10@...wei.com>
Cc:     tytso@....edu, adilger.kernel@...ger.ca,
        linux-ext4@...r.kernel.org, linux-kernel@...r.kernel.org,
        jack@...e.cz, lczerner@...hat.com,
        Ojaswin Mujoo <ojaswin@...ux.ibm.com>
Subject: Re: [PATCH -next] ext4: fix bug_on in start_this_handle during
 umount filesystem

On 22/03/22 09:24AM, Ye Bin wrote:
> We got the following issue:
> ------------[ cut here ]------------
> kernel BUG at fs/jbd2/transaction.c:389!
> invalid opcode: 0000 [#1] PREEMPT SMP KASAN PTI
> CPU: 9 PID: 131 Comm: kworker/9:1 Not tainted 5.17.0-862.14.0.6.x86_64-00001-g23f87daf7d74-dirty #197
> Workqueue: events flush_stashed_error_work
> RIP: 0010:start_this_handle+0x41c/0x1160
> RSP: 0018:ffff888106b47c20 EFLAGS: 00010202
> RAX: ffffed10251b8400 RBX: ffff888128dc204c RCX: ffffffffb52972ac
> RDX: 0000000000000200 RSI: 0000000000000004 RDI: ffff888128dc2050
> RBP: 0000000000000039 R08: 0000000000000001 R09: ffffed10251b840a
> R10: ffff888128dc204f R11: ffffed10251b8409 R12: ffff888116d78000
> R13: 0000000000000000 R14: dffffc0000000000 R15: ffff888128dc2000
> FS:  0000000000000000(0000) GS:ffff88839d680000(0000) knlGS:0000000000000000
> CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
> CR2: 0000000001620068 CR3: 0000000376c0e000 CR4: 00000000000006e0
> DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
> DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
> Call Trace:
>  <TASK>
>  jbd2__journal_start+0x38a/0x790
>  jbd2_journal_start+0x19/0x20
>  flush_stashed_error_work+0x110/0x2b3
>  process_one_work+0x688/0x1080
>  worker_thread+0x8b/0xc50
>  kthread+0x26f/0x310
>  ret_from_fork+0x22/0x30
>  </TASK>
> Modules linked in:
> ---[ end trace 0000000000000000 ]---
>
> The above issue may happen as follows:
>       umount            read procfs            error_work
> ext4_put_super
>   flush_work(&sbi->s_error_work);
>
>                       ext4_mb_seq_groups_show
> 	                ext4_mb_load_buddy_gfp
> 			  ext4_mb_init_group
> 			    ext4_mb_init_cache
> 	                      ext4_read_block_bitmap_nowait
> 			        ext4_validate_block_bitmap
> 				  ext4_error
					^^^^^^^ I am guessing this occurred due to some
error-injection framework? Or was it a bad disk?

> 			            ext4_handle_error
> 			              schedule_work(&EXT4_SB(sb)->s_error_work);
>
>   ext4_unregister_sysfs(sb);
>   jbd2_journal_destroy(sbi->s_journal);
>     journal_kill_thread
>       journal->j_flags |= JBD2_UNMOUNT;
>
>                                           flush_stashed_error_work
> 				            jbd2_journal_start
> 					      start_this_handle
> 					        BUG_ON(journal->j_flags & JBD2_UNMOUNT);
>
> To solve this issue, move the call to 'ext4_unregister_sysfs' to the
> beginning of 'ext4_put_super', as is done in the 'ext4_fill_super' error
> path.

I don't see a reason why not. In fact, to simulate this more reliably and to
add an fstest around it, we could do the following.
(If we are adding an fstest, we might also explore checking other exported
sysfs attributes racing with umount/mount or module load/unload; in the past
it was journal_task at [1].)

Thread-1:
    while true; do
        echo 1 > /sys/fs/ext4/<dev>/trigger_fs_error
        sleep <random>
    done

Thread-2:
    while true; do
        umount /dev/<dev>
        sleep <random>
        mount /dev/<dev> /mnt
    done
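The two loops above could be packaged as one script. This is only a sketch
under stated assumptions: trigger_fs_error is the ext4 sysfs error-injection
attribute mentioned in this thread (check it exists on your kernel), and the
dry-run mode, the run() helper, and the device argument are scaffolding added
here for illustration, not from the thread.

```shell
#!/bin/sh
# Hedged reproducer sketch for the umount vs. error-work race; not a
# tested fstest. With no argument it only prints the commands it would
# run (one iteration per loop); pass a device name (e.g. "sdb1") to run
# both loops concurrently for real.

DEV="$1"
DRY_LOG=""

# Execute the command for real, or print/record it in dry-run mode.
run() {
    if [ -n "$DEV" ]; then
        "$@"
    else
        echo "would run: $*"
        DRY_LOG="$DRY_LOG $*"
    fi
}

# Sleep a random sub-2-second interval to vary the interleaving.
rand_sleep() {
    sleep "$(awk 'BEGIN { srand(); printf "%.1f", rand() * 2 }')"
}

# Thread-1: keep triggering fs errors via sysfs.
error_loop() {
    while :; do
        run sh -c "echo 1 > /sys/fs/ext4/${DEV:-<dev>}/trigger_fs_error"
        [ -z "$DEV" ] && break          # dry run: one iteration only
        rand_sleep
    done
}

# Thread-2: keep unmounting and remounting the filesystem.
umount_loop() {
    while :; do
        run umount "/dev/${DEV:-<dev>}"
        [ -n "$DEV" ] && rand_sleep
        run mount "/dev/${DEV:-<dev>}" /mnt
        [ -z "$DEV" ] && break          # dry run: one iteration only
    done
}

if [ -n "$DEV" ]; then
    error_loop &
    umount_loop &
    wait
else
    error_loop
    umount_loop
fi
```

Run it with no argument first to see the command sequence; only run it with a
device name against a scratch filesystem, since it deliberately injects fs
errors.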

Currently we call flush_work(&sbi->s_error_work) and then
ext4_unregister_sysfs(). So if someone triggers an fs error after the flush
but before we unregister from sysfs, it will schedule_work() again, which can
then race with jbd2_journal_destroy() exactly as you showed above.

So calling ext4_unregister_sysfs() as the first thing in ext4_put_super()
looks good to me.

Reviewed-by: Ritesh Harjani <riteshh@...ux.ibm.com>


[1]: https://lore.kernel.org/all/20200318061301.4320-1-riteshh@linux.ibm.com/


>
> Signed-off-by: Ye Bin <yebin10@...wei.com>
> ---
>  fs/ext4/super.c | 19 ++++++++++++-------
>  1 file changed, 12 insertions(+), 7 deletions(-)
>
> diff --git a/fs/ext4/super.c b/fs/ext4/super.c
> index 81749eaddf4c..a673012e35c8 100644
> --- a/fs/ext4/super.c
> +++ b/fs/ext4/super.c
> @@ -1199,20 +1199,25 @@ static void ext4_put_super(struct super_block *sb)
>  	int aborted = 0;
>  	int i, err;
>
> -	ext4_unregister_li_request(sb);
> -	ext4_quota_off_umount(sb);
> -
> -	flush_work(&sbi->s_error_work);
> -	destroy_workqueue(sbi->rsv_conversion_wq);
> -	ext4_release_orphan_info(sb);
> -
>  	/*
>  	 * Unregister sysfs before destroying jbd2 journal.
>  	 * Since we could still access attr_journal_task attribute via sysfs
>  	 * path which could have sbi->s_journal->j_task as NULL
> +	 * Also unregister sysfs before flushing sbi->s_error_work.
> +	 * A user may read /proc/fs/ext4/xx/mb_groups during umount, and
> +	 * if metadata read verification fails, error work is queued;
> +	 * flush_stashed_error_work then calls start_this_handle, which
> +	 * may trigger the BUG_ON.
>  	 */
>  	ext4_unregister_sysfs(sb);
>
> +	ext4_unregister_li_request(sb);
> +	ext4_quota_off_umount(sb);
> +
> +	flush_work(&sbi->s_error_work);
> +	destroy_workqueue(sbi->rsv_conversion_wq);
> +	ext4_release_orphan_info(sb);
> +
>  	if (sbi->s_journal) {
>  		aborted = is_journal_aborted(sbi->s_journal);
>  		err = jbd2_journal_destroy(sbi->s_journal);
> --
> 2.31.1
>
