Message-ID: <202511172111.32f89804-lkp@intel.com>
Date: Mon, 17 Nov 2025 21:48:30 +0800
From: kernel test robot <oliver.sang@...el.com>
To: <peng8420.li@...il.com>
CC: <oe-lkp@...ts.linux.dev>, <lkp@...el.com>, <linux-mm@...ck.org>,
<akpm@...ux-foundation.org>, <david@...hat.com>, <osalvador@...e.de>,
<jgg@...pe.ca>, <jhubbard@...dia.com>, <peterx@...hat.com>,
<linux-kernel@...r.kernel.org>, <dan.j.williams@...el.com>, peng8420.li
<peng8420.li@...il.com>, <oliver.sang@...el.com>
Subject: Re: [PATCH] mm/gup: fix handling of zero page in follow_page_pte()

Hello,

kernel test robot noticed "BUG:Bad_page_state_in_process" on:

commit: 4e691413bed009d7bd6198eb8fcebd4559a9e017 ("[PATCH] mm/gup: fix handling of zero page in follow_page_pte()")
url: https://github.com/intel-lab-lkp/linux/commits/peng8420-li-gmail-com/mm-gup-fix-handling-of-zero-page-in-follow_page_pte/20251112-152851
base: https://git.kernel.org/cgit/linux/kernel/git/akpm/mm.git mm-everything
patch link: https://lore.kernel.org/all/20251112072424.125514-1-peng8420.li@gmail.com/
patch subject: [PATCH] mm/gup: fix handling of zero page in follow_page_pte()

in testcase: trinity
version:
with the following parameters:

	runtime: 300s
	group: group-04
	nr_groups: 5

config: x86_64-randconfig-001-20251114
compiler: clang-20
test machine: qemu-system-x86_64 -enable-kvm -cpu SandyBridge -smp 2 -m 32G

(please refer to the attached dmesg/kmsg for the entire log/backtrace)
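
For context on what the patch touches: in mainline mm/gup.c, follow_page_pte()
has to special-case the shared zero page, because vm_normal_page() deliberately
returns NULL for it, and because any reference handed to the caller must later
be dropped symmetrically. A condensed sketch of that shape (not the literal
mm/gup.c source, and not the patch under test, which I am not reproducing
here):

	page = vm_normal_page(vma, address, pte);
	if (!page) {
		/*
		 * vm_normal_page() returns NULL for special PFN mappings
		 * and for the shared zero page; recover the latter by pfn.
		 */
		if (!is_zero_pfn(pte_pfn(pte)))
			return ERR_PTR(-EFAULT);	/* simplified error path */
		page = pte_page(pte);
	}
	/*
	 * FOLL_GET/FOLL_PIN callers drop this reference later (e.g. the
	 * pipe buffer release in the trace below), so the grab must
	 * happen exactly once, even for the zero page.
	 */
	ret = try_grab_folio(page_folio(page), 1, flags);
	if (ret)
		return ERR_PTR(ret);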

If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add the following tags:

| Reported-by: kernel test robot <oliver.sang@...el.com>
| Closes: https://lore.kernel.org/oe-lkp/202511172111.32f89804-lkp@intel.com

[ 271.087528][ T5904] BUG: Bad page state in process trinity-c1 pfn:05d00
[ 271.088206][ T5904] page: refcount:0 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x5d00
[ 271.088949][ T5904] flags: 0x2000000000002000(reserved|zone=1)
[ 271.089455][ T5904] raw: 2000000000002000 ffffea0000174008 ffffea0000174008 0000000000000000
[ 271.090260][ T5904] raw: 0000000000000000 0000000000000000 00000000ffffffff 0000000000000000
[ 271.090973][ T5904] page dumped because: PAGE_FLAGS_CHECK_AT_FREE flag(s) set
[ 271.091567][ T5904] Modules linked in: uvesafb input_leds pcspkr
[ 271.092115][ T5904] CPU: 0 UID: 16384 PID: 5904 Comm: trinity-c1 Tainted: G T 6.18.0-rc5-00409-g4e691413bed0 #1 PREEMPT(none)
[ 271.093592][ T5904] Tainted: [T]=RANDSTRUCT
[ 271.094176][ T5904] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[ 271.095412][ T5904] Call Trace:
[ 271.095916][ T5904] <TASK>
[ 271.096360][ T5904] __dump_stack (lib/dump_stack.c:95)
[ 271.096928][ T5904] dump_stack_lvl (lib/dump_stack.c:123)
[ 271.097534][ T5904] dump_stack (lib/dump_stack.c:130)
[ 271.098105][ T5904] bad_page (mm/page_alloc.c:?)
[ 271.098668][ T5904] __free_frozen_pages (mm/page_alloc.c:?)
[ 271.099331][ T5904] free_frozen_pages (mm/page_alloc.c:2987)
[ 271.099969][ T5904] __folio_put (mm/swap.c:?)
[ 271.100495][ T5904] page_cache_pipe_buf_release (fs/splice.c:112)
[ 271.100975][ T5904] __se_sys_vmsplice (include/linux/pipe_fs_i.h:? fs/splice.c:261 fs/splice.c:1475 fs/splice.c:1555 fs/splice.c:1610 fs/splice.c:1580)
[ 271.101425][ T5904] ? _raw_spin_unlock_irq (arch/x86/include/asm/preempt.h:104 include/linux/spinlock_api_smp.h:160 kernel/locking/spinlock.c:202)
[ 271.101869][ T5904] ? do_setitimer (include/linux/spinlock.h:?)
[ 271.102283][ T5904] ? trace_preempt_on (kernel/trace/trace_preemptirq.c:122)
[ 271.102730][ T5904] __x64_sys_vmsplice (fs/splice.c:1580)
[ 271.103151][ T5904] ? entry_SYSCALL_64_after_hwframe (arch/x86/entry/entry_64.S:130)
[ 271.103653][ T5904] x64_sys_call (kbuild/obj/consumer/x86_64-randconfig-001-20251114/./arch/x86/include/generated/asm/syscalls_64.h:470)
[ 271.104064][ T5904] do_syscall_64 (arch/x86/entry/syscall_64.c:?)
[ 271.104467][ T5904] ? irqentry_exit (kernel/entry/common.c:224)
[ 271.104869][ T5904] ? exc_page_fault (arch/x86/mm/fault.c:?)
[ 271.105286][ T5904] entry_SYSCALL_64_after_hwframe (arch/x86/entry/entry_64.S:130)
[ 271.105781][ T5904] RIP: 0033:0x463519
[ 271.106141][ T5904] Code: 00 f3 c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 0f 83 db 59 00 00 c3 66 2e 0f 1f 84 00 00 00 00
All code
========
0: 00 f3 add %dh,%bl
2: c3 ret
3: 66 2e 0f 1f 84 00 00 cs nopw 0x0(%rax,%rax,1)
a: 00 00 00
d: 0f 1f 40 00 nopl 0x0(%rax)
11: 48 89 f8 mov %rdi,%rax
14: 48 89 f7 mov %rsi,%rdi
17: 48 89 d6 mov %rdx,%rsi
1a: 48 89 ca mov %rcx,%rdx
1d: 4d 89 c2 mov %r8,%r10
20: 4d 89 c8 mov %r9,%r8
23: 4c 8b 4c 24 08 mov 0x8(%rsp),%r9
28: 0f 05 syscall
2a:* 48 3d 01 f0 ff ff cmp $0xfffffffffffff001,%rax <-- trapping instruction
30: 0f 83 db 59 00 00 jae 0x5a11
36: c3 ret
37: 66 data16
38: 2e cs
39: 0f .byte 0xf
3a: 1f (bad)
3b: 84 00 test %al,(%rax)
3d: 00 00 add %al,(%rax)
...
Code starting with the faulting instruction
===========================================
0: 48 3d 01 f0 ff ff cmp $0xfffffffffffff001,%rax
6: 0f 83 db 59 00 00 jae 0x59e7
c: c3 ret
d: 66 data16
e: 2e cs
f: 0f .byte 0xf
10: 1f (bad)
11: 84 00 test %al,(%rax)
13: 00 00 add %al,(%rax)
...
[ 271.107601][ T5904] RSP: 002b:00007ffffa2365e8 EFLAGS: 00000246 ORIG_RAX: 0000000000000116
[ 271.108301][ T5904] RAX: ffffffffffffffda RBX: 0000000000000116 RCX: 0000000000463519
[ 271.108967][ T5904] RDX: 00000000000000d4 RSI: 000000002d4190c0 RDI: 0000000000000126
[ 271.109624][ T5904] RBP: 00007f2836c7b000 R08: 0000000000000000 R09: fffffffffffffffb
[ 271.110286][ T5904] R10: 0000000000000001 R11: 0000000000000246 R12: 0000000000000002
[ 271.110942][ T5904] R13: 00007f2836c7b058 R14: 000000002d15b850 R15: 00007f2836c7b000
[ 271.111621][ T5904] </TASK>
[ 271.111949][ T5904] Disabling lock debugging due to kernel taint
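
A few notes on decoding the splat: ORIG_RAX is 0x116 = 278, which is
__NR_vmsplice on x86_64, matching the call trace, and the "Code:" decode is
just the userspace syscall stub's errno check (cmp $0xfffffffffffff001,%rax),
not the bug site. The interesting part is the page dump: refcount:0 on a page
whose flags include "reserved". The shared zero page is marked PG_reserved at
boot, and PG_reserved is one of the PAGE_FLAGS_CHECK_AT_FREE bits, so the
report is consistent with an unbalanced get/put on the zero page: if the
patched follow_page_pte() hands out the zero page without the reference that
callers later drop (here page_cache_pipe_buf_release() putting the pages that
vmsplice() parked in the pipe), the refcount falls to zero and the reserved
page reaches the allocator, which rejects it along these lines (condensed, not
the literal mm/page_alloc.c source):

	if (unlikely(page->flags & PAGE_FLAGS_CHECK_AT_FREE))
		bad_page(page, "PAGE_FLAGS_CHECK_AT_FREE flag(s) set");
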
The kernel config and materials to reproduce are available at:
https://download.01.org/0day-ci/archive/20251117/202511172111.32f89804-lkp@intel.com
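
The trinity job in the archive above is the actual reproducer; as an
illustration only, the trace suggests the same path can be exercised from a
few lines of userspace, assuming the dumped page is the shared zero page that
vmsplice() GUP'd out of a read-only anonymous mapping (untested against the
patched tree):

	/*
	 * Sketch of the suspected path: fault in the shared zero page via a
	 * read-only anonymous mapping, hand it to a pipe with vmsplice()
	 * (which takes a GUP reference on the user page), then drain the
	 * pipe so page_cache_pipe_buf_release() drops that reference.
	 */
	#define _GNU_SOURCE
	#include <fcntl.h>
	#include <stdio.h>
	#include <sys/mman.h>
	#include <sys/uio.h>
	#include <unistd.h>

	int main(void)
	{
		int pfd[2];
		char drain[4096];

		/* A read fault on PROT_READ anonymous memory maps the zero page. */
		char *buf = mmap(NULL, 4096, PROT_READ,
				 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
		if (buf == MAP_FAILED || pipe(pfd))
			return 1;
		(void)buf[0];	/* touch: the PTE now points at the zero page */

		struct iovec iov = { .iov_base = buf, .iov_len = 4096 };
		if (vmsplice(pfd[1], &iov, 1, 0) < 0) {
			perror("vmsplice");
			return 1;
		}
		/* Draining releases the pipe buffer -> put on the zero page. */
		if (read(pfd[0], drain, sizeof(drain)) < 0)
			perror("read");
		return 0;
	}
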
--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki