Message-ID: <ZrdzbFYJly8EaVzC@xpf.sh.intel.com>
Date: Sat, 10 Aug 2024 22:04:28 +0800
From: Pengfei Xu <pengfei.xu@...el.com>
To: <hch@....de>
CC: <linux-kernel@...r.kernel.org>, <linux-pm@...r.kernel.org>,
<axboe@...nel.dk>, <syzkaller-bugs@...glegroups.com>
Subject: [Syzkaller & bisect] There is general protection fault in path_init
in v6.11-rc2
Hi Christoph Hellwig,
Greetings!
There is a general protection fault in path_init in v6.11-rc2.
It was bisected and found to be related to commit:
1e8c813b083c PM: hibernate: don't use early_lookup_bdev in resume_store
All detailed info: https://github.com/xupengfe/syzkaller_logs/tree/main/240809_171408_path_init
Syzkaller repro code: https://github.com/xupengfe/syzkaller_logs/blob/main/240809_171408_path_init/repro.c
Syzkaller repro syscall steps: https://github.com/xupengfe/syzkaller_logs/blob/main/240809_171408_path_init/repro.prog
Syzkaller report: https://github.com/xupengfe/syzkaller_logs/blob/main/240809_171408_path_init/repro.report
Kconfig(make olddefconfig): https://github.com/xupengfe/syzkaller_logs/blob/main/240809_171408_path_init/kconfig_origin
Bisect info: https://github.com/xupengfe/syzkaller_logs/blob/main/240809_171408_path_init/bisect_info.log
Issue dmesg: https://github.com/xupengfe/syzkaller_logs/blob/main/240809_171408_path_init/de9c2c66ad8e787abec7c9d7eff4f8c3cdd28aed_dmesg.log
v6.11-rc2 bzImage: https://github.com/xupengfe/syzkaller_logs/raw/main/240809_171408_path_init/bzImage_de9c2c66ad8e787abec7c9d7eff4f8c3cdd28aed.tar.gz
"
[ 23.436545] cgroup: Unknown subsys name 'net'
[ 23.567369] cgroup: Unknown subsys name 'rlimit'
[ 23.737915] Process accounting resumed
[ 23.747674] cgroup: fork rejected by pids controller in /syz0
[ 23.749730] general protection fault, probably for non-canonical address 0xdffffc000000000a: 0000 [#1] PREEMPT SMP KASAN NOPTI
[ 23.750465] KASAN: null-ptr-deref in range [0x0000000000000050-0x0000000000000057]
[ 23.750937] CPU: 0 PID: 719 Comm: repro Not tainted 6.4.0-rc2-1e8c813b083c+ #1
[ 23.751395] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.16.3-0-ga6ed6b701f0a-prebuilt.qemu.org 04/01/2014
[ 23.752100] RIP: 0010:__lock_acquire+0xe83/0x5e10
[ 23.752422] Code: 00 00 3b 05 df 65 fc 08 0f 87 c8 08 00 00 41 bf 01 00 00 00 e9 84 00 00 00 48 b8 00 00 00 00 00 fc ff df 4c 89 da 48 c1 ea 0f
[ 23.753566] RSP: 0018:ff1100001446f060 EFLAGS: 00010006
[ 23.753896] RAX: dffffc0000000000 RBX: 1fe220000288de1f RCX: 0000000000000002
[ 23.754347] RDX: 000000000000000a RSI: 0000000000000000 RDI: 0000000000000001
[ 23.754794] RBP: ff1100001446f180 R08: 0000000000000001 R09: 0000000000000001
[ 23.755242] R10: fffffbfff0e70d4c R11: 0000000000000050 R12: 0000000000000001
[ 23.755694] R13: ff1100001a9f0000 R14: 0000000000000000 R15: 0000000000000002
[ 23.756134] FS: 0000000000000000(0000) GS:ff1100006c200000(0000) knlGS:0000000000000000
[ 23.756636] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 23.756989] CR2: 00007fec33ffca50 CR3: 000000000667e003 CR4: 0000000000771ef0
[ 23.757439] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[ 23.757886] DR3: 0000000000000000 DR6: 00000000fffe07f0 DR7: 0000000000000400
[ 23.758338] PKRU: 55555554
[ 23.758525] Call Trace:
[ 23.758690] <TASK>
[ 23.758829] ? __kasan_check_read+0x15/0x20
[ 23.759101] ? __lock_acquire+0xc77/0x5e10
[ 23.759376] ? __pfx_mark_lock.part.0+0x10/0x10
[ 23.759688] ? __pfx___lock_acquire+0x10/0x10
[ 23.759977] ? __pfx___lock_acquire+0x10/0x10
[ 23.760268] ? lock_release+0x417/0x7e0
[ 23.760535] lock_acquire+0x1c9/0x530
[ 23.760782] ? path_init+0x8cd/0x16e0
[ 23.761034] ? __pfx_lock_acquire+0x10/0x10
[ 23.761308] ? __pfx_lock_acquire+0x10/0x10
[ 23.761591] ? seqcount_lockdep_reader_access+0x82/0xd0
[ 23.761933] ? seqcount_lockdep_reader_access+0x82/0xd0
[ 23.762272] ? path_init+0x8cd/0x16e0
[ 23.762524] ? debug_smp_processor_id+0x20/0x30
[ 23.762828] ? rcu_is_watching+0x19/0xc0
[ 23.763097] seqcount_lockdep_reader_access+0x9f/0xd0
[ 23.763423] ? path_init+0x8cd/0x16e0
[ 23.763675] path_init+0x8cd/0x16e0
[ 23.763913] ? getname_kernel+0x5c/0x380
[ 23.764174] path_lookupat+0x35/0x770
[ 23.764423] ? kasan_save_stack+0x2a/0x50
[ 23.764693] ? kasan_set_track+0x29/0x40
[ 23.764948] filename_lookup+0x1db/0x5a0
[ 23.765212] ? __pfx_filename_lookup+0x10/0x10
[ 23.765512] ? __this_cpu_preempt_check+0x21/0x30
[ 23.765821] ? lock_is_held_type+0xf0/0x150
[ 23.766104] ? kmem_cache_alloc+0x32d/0x370
[ 23.766382] ? __sanitizer_cov_trace_const_cmp4+0x1a/0x20
[ 23.766744] kern_path+0x42/0x60
[ 23.766964] lookup_bdev+0xda/0x2a0
[ 23.767203] ? __pfx_lookup_bdev+0x10/0x10
[ 23.767485] ? __kmalloc_node_track_caller+0xfb/0x180
[ 23.767812] resume_store+0x233/0x540
[ 23.768050] ? __pfx_resume_store+0x10/0x10
[ 23.768326] ? __this_cpu_preempt_check+0x21/0x30
[ 23.768641] ? lock_acquire+0x1d9/0x530
[ 23.768905] ? __this_cpu_preempt_check+0x21/0x30
[ 23.769217] ? __pfx_resume_store+0x10/0x10
[ 23.769488] kobj_attr_store+0x5b/0x90
[ 23.769741] ? __pfx_kobj_attr_store+0x10/0x10
[ 23.770031] sysfs_kf_write+0x11f/0x180
[ 23.770290] kernfs_fop_write_iter+0x411/0x630
[ 23.770584] ? __pfx_sysfs_kf_write+0x10/0x10
[ 23.770879] __kernel_write_iter+0x28c/0x7f0
[ 23.771164] ? __pfx___kernel_write_iter+0x10/0x10
[ 23.771485] ? __pfx___lock_acquire+0x10/0x10
[ 23.771785] ? __sanitizer_cov_trace_const_cmp4+0x1a/0x20
[ 23.772130] ? iov_iter_kvec+0x55/0x1f0
[ 23.772382] __kernel_write+0xe4/0x130
[ 23.772638] ? __pfx___kernel_write+0x10/0x10
[ 23.772922] ? __pfx_lock_acquire+0x10/0x10
[ 23.773209] ? __this_cpu_preempt_check+0x21/0x30
[ 23.773522] ? lock_is_held_type+0xf0/0x150
[ 23.773806] do_acct_process+0xd84/0x1580
[ 23.774075] ? __pfx_do_acct_process+0x10/0x10
[ 23.774374] ? __this_cpu_preempt_check+0x21/0x30
[ 23.774688] ? __pfx_lock_release+0x10/0x10
[ 23.774966] ? pin_kill+0x11e/0x980
[ 23.775201] acct_pin_kill+0x38/0x110
[ 23.775452] pin_kill+0x182/0x980
[ 23.775676] ? lock_acquire+0x1d9/0x530
[ 23.775935] ? __pfx_pin_kill+0x10/0x10
[ 23.776187] ? call_rcu+0x12/0x20
[ 23.776420] ? __pfx_autoremove_wake_function+0x10/0x10
[ 23.776761] ? __sanitizer_cov_trace_cmp8+0x1c/0x30
[ 23.777079] ? _find_next_bit+0x120/0x160
[ 23.777343] ? mnt_pin_kill+0x72/0x210
[ 23.777603] ? mnt_pin_kill+0x72/0x210
[ 23.777851] mnt_pin_kill+0x72/0x210
[ 23.778095] cleanup_mnt+0x343/0x400
[ 23.778335] __cleanup_mnt+0x1f/0x30
[ 23.778572] task_work_run+0x19d/0x2b0
[ 23.778823] ? __pfx_task_work_run+0x10/0x10
[ 23.779096] ? free_nsproxy+0x3b2/0x4e0
[ 23.779349] ? switch_task_namespaces+0xc8/0xe0
[ 23.779656] do_exit+0xaf5/0x2730
[ 23.779880] ? lock_release+0x417/0x7e0
[ 23.780139] ? __pfx_lock_release+0x10/0x10
[ 23.780427] ? __pfx_do_exit+0x10/0x10
[ 23.780673] ? __this_cpu_preempt_check+0x21/0x30
[ 23.780982] ? _raw_spin_unlock_irq+0x2c/0x60
[ 23.781272] ? lockdep_hardirqs_on+0x8a/0x110
[ 23.781564] ? _raw_spin_unlock_irq+0x2c/0x60
[ 23.781841] ? trace_hardirqs_on+0x26/0x120
[ 23.782120] do_group_exit+0xe5/0x2c0
[ 23.782369] __x64_sys_exit_group+0x4d/0x60
[ 23.782655] do_syscall_64+0x3c/0x90
[ 23.782899] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[ 23.783225] RIP: 0033:0x7fec33f18a4d
[ 23.783460] Code: Unable to access opcode bytes at 0x7fec33f18a23.
[ 23.783838] RSP: 002b:00007fffdd22ee98 EFLAGS: 00000246 ORIG_RAX: 00000000000000e7
[ 23.784312] RAX: ffffffffffffffda RBX: 00007fec33ff69e0 RCX: 00007fec33f18a4d
[ 23.784763] RDX: 00000000000000e7 RSI: fffffffffffffeb0 RDI: 0000000000000001
[ 23.785209] RBP: 0000000000000001 R08: 0000000000000000 R09: 0000000000000020
[ 23.785663] R10: 00007fffdd22ed40 R11: 0000000000000246 R12: 00007fec33ff69e0
[ 23.786108] R13: 00007fec33ffbf00 R14: 0000000000000001 R15: 00007fec33ffbee8
[ 23.786560] </TASK>
[ 23.786707] Modules linked in:
[ 23.786910] ---[ end trace 0000000000000000 ]---
[ 23.787204] RIP: 0010:__lock_acquire+0xe83/0x5e10
[ 23.787517] Code: 00 00 3b 05 df 65 fc 08 0f 87 c8 08 00 00 41 bf 01 00 00 00 e9 84 00 00 00 48 b8 00 00 00 00 00 fc ff df 4c 89 da 48 c1 ea 0f
[ 23.788664] RSP: 0018:ff1100001446f060 EFLAGS: 00010006
[ 23.788992] RAX: dffffc0000000000 RBX: 1fe220000288de1f RCX: 0000000000000002
[ 23.789444] RDX: 000000000000000a RSI: 0000000000000000 RDI: 0000000000000001
[ 23.789890] RBP: ff1100001446f180 R08: 0000000000000001 R09: 0000000000000001
[ 23.790336] R10: fffffbfff0e70d4c R11: 0000000000000050 R12: 0000000000000001
[ 23.790788] R13: ff1100001a9f0000 R14: 0000000000000000 R15: 0000000000000002
[ 23.791233] FS: 0000000000000000(0000) GS:ff1100006c200000(0000) knlGS:0000000000000000
[ 23.791728] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 23.792093] CR2: 00007fec33ffca50 CR3: 000000000667e003 CR4: 0000000000771ef0
[ 23.792538] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[ 23.792970] DR3: 0000000000000000 DR6: 00000000fffe07f0 DR7: 0000000000000400
[ 23.793423] PKRU: 55555554
[ 23.793608] note: repro[719] exited with irqs disabled
[ 23.793983] Fixing recursive fault but reboot is needed!
[ 23.794322] BUG: using smp_processor_id() in preemptible [00000000] code: repro/719
[ 23.794823] caller is debug_smp_processor_id+0x20/0x30
[ 23.795151] CPU: 0 PID: 719 Comm: repro Tainted: G D 6.4.0-rc2-1e8c813b083c+ #1
[ 23.795692] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.16.3-0-ga6ed6b701f0a-prebuilt.qemu.org 04/01/2014
[ 23.796391] Call Trace:
[ 23.796557] <TASK>
[ 23.796695] dump_stack_lvl+0xe1/0x110
[ 23.796943] dump_stack+0x19/0x20
[ 23.797164] check_preemption_disabled+0x16a/0x180
[ 23.797484] debug_smp_processor_id+0x20/0x30
[ 23.797771] __schedule+0x9a/0x3010
[ 23.797998] ? debug_smp_processor_id+0x20/0x30
[ 23.798293] ? rcu_is_watching+0x19/0xc0
[ 23.798558] ? __pfx___schedule+0x10/0x10
[ 23.798820] ? __pfx_lock_release+0x10/0x10
[ 23.799092] ? _raw_spin_unlock_irqrestore+0x35/0x70
[ 23.799404] ? do_task_dead+0xa6/0x110
[ 23.799655] ? debug_smp_processor_id+0x20/0x30
[ 23.799954] ? rcu_is_watching+0x19/0xc0
[ 23.800215] ? _raw_spin_unlock_irqrestore+0x35/0x70
[ 23.800537] ? trace_hardirqs_on+0x26/0x120
[ 23.800810] do_task_dead+0xde/0x110
[ 23.801046] make_task_dead+0x37f/0x3c0
[ 23.801304] ? __x64_sys_exit_group+0x4d/0x60
[ 23.801595] rewind_stack_and_make_dead+0x17/0x20
[ 23.801903] RIP: 0033:0x7fec33f18a4d
[ 23.802140] Code: Unable to access opcode bytes at 0x7fec33f18a23.
[ 23.802535] RSP: 002b:00007fffdd22ee98 EFLAGS: 00000246 ORIG_RAX: 00000000000000e7
[ 23.803013] RAX: ffffffffffffffda RBX: 00007fec33ff69e0 RCX: 00007fec33f18a4d
[ 23.803458] RDX: 00000000000000e7 RSI: fffffffffffffeb0 RDI: 0000000000000001
[ 23.803908] RBP: 0000000000000001 R08: 0000000000000000 R09: 0000000000000020
[ 23.804354] R10: 00007fffdd22ed40 R11: 0000000000000246 R12: 00007fec33ff69e0
[ 23.804803] R13: 00007fec33ffbf00 R14: 0000000000000001 R15: 00007fec33ffbee8
[ 23.805258] </TASK>
[ 23.805421] BUG: scheduling while atomic: repro/719/0x00000000
[ 23.805801] INFO: lockdep is turned off.
[ 23.806050] Modules linked in:
[ 23.806249] Preemption disabled at:
[ 23.806252] [<ffffffff813123e7>] do_task_dead+0x27/0x110
[ 23.806829] CPU: 0 PID: 719 Comm: repro Tainted: G D 6.4.0-rc2-1e8c813b083c+ #1
[ 23.807382] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.16.3-0-ga6ed6b701f0a-prebuilt.qemu.org 04/01/2014
[ 23.808097] Call Trace:
[ 23.808260] <TASK>
[ 23.808403] dump_stack_lvl+0xe1/0x110
[ 23.808656] ? do_task_dead+0x27/0x110
[ 23.808898] dump_stack+0x19/0x20
[ 23.809120] __schedule_bug+0x13f/0x190
[ 23.809379] __schedule+0x221f/0x3010
[ 23.809630] ? rcu_is_watching+0x19/0xc0
[ 23.809887] ? __pfx___schedule+0x10/0x10
[ 23.810142] ? __pfx_lock_release+0x10/0x10
[ 23.810419] ? _raw_spin_unlock_irqrestore+0x35/0x70
[ 23.810751] ? do_task_dead+0xa6/0x110
[ 23.810994] ? debug_smp_processor_id+0x20/0x30
[ 23.811289] ? rcu_is_watching+0x19/0xc0
[ 23.811553] ? _raw_spin_unlock_irqrestore+0x35/0x70
[ 23.811873] ? trace_hardirqs_on+0x26/0x120
[ 23.812148] do_task_dead+0xde/0x110
[ 23.812389] make_task_dead+0x37f/0x3c0
[ 23.812651] ? __x64_sys_exit_group+0x4d/0x60
[ 23.812938] rewind_stack_and_make_dead+0x17/0x20
[ 23.813246] RIP: 0033:0x7fec33f18a4d
[ 23.813486] Code: Unable to access opcode bytes at 0x7fec33f18a23.
[ 23.813872] RSP: 002b:00007fffdd22ee98 EFLAGS: 00000246 ORIG_RAX: 00000000000000e7
[ 23.814347] RAX: ffffffffffffffda RBX: 00007fec33ff69e0 RCX: 00007fec33f18a4d
[ 23.814794] RDX: 00000000000000e7 RSI: fffffffffffffeb0 RDI: 0000000000000001
[ 23.815231] RBP: 0000000000000001 R08: 0000000000000000 R09: 0000000000000020
[ 23.815680] R10: 00007fffdd22ed40 R11: 0000000000000246 R12: 00007fec33ff69e0
[ 23.816126] R13: 00007fec33ffbf00 R14: 0000000000000001 R15: 00007fec33ffbee8
[ 23.816580] </TASK>
[ 23.816734] ------------[ cut here ]------------
"
I hope it's helpful.
---
If you don't need the following environment to reproduce the problem, or if you
already have a reproduction environment set up, please ignore the following information.
How to reproduce:
git clone https://gitlab.com/xupengfe/repro_vm_env.git
cd repro_vm_env
tar -xvf repro_vm_env.tar.gz
cd repro_vm_env; ./start3.sh // it needs qemu-system-x86_64; I used v7.1.0
// start3.sh will load the bzImage_2241ab53cbb5cdb08a6b2d4688feb13971058f65 v6.2-rc5 kernel
// You can change bzImage_xxx as needed
// You may need to remove the line "-drive if=pflash,format=raw,readonly=on,file=./OVMF_CODE.fd \" for a different qemu version
You can use the command below to log in; there is no password for root.
ssh -p 10023 root@...alhost
After logging in to the VM (virtual machine) successfully, you can transfer the
reproducer binary to the VM as shown below and reproduce the problem in the VM:
gcc -pthread -o repro repro.c
scp -P 10023 repro root@...alhost:/root/
To get the bzImage for the target kernel:
Please use the target kconfig and copy it to kernel_src/.config
make olddefconfig
make -jx bzImage // x should be equal to or less than the number of CPUs your machine has
Fill the resulting bzImage file into start3.sh above to load the target kernel in the VM.
Tips:
If you already have qemu-system-x86_64, please ignore the info below.
If you want to install qemu v7.1.0:
git clone https://github.com/qemu/qemu.git
cd qemu
git checkout -f v7.1.0
mkdir build
cd build
yum install -y ninja-build.x86_64
yum -y install libslirp-devel.x86_64
../configure --target-list=x86_64-softmmu --enable-kvm --enable-vnc --enable-gtk --enable-sdl --enable-usb-redir --enable-slirp
make
make install
Best Regards,
Thanks!