Message-ID: <aINKAQt3qcj2s38N@xsang-OptiPlex-9020>
Date: Fri, 25 Jul 2025 17:10:25 +0800
From: Oliver Sang <oliver.sang@...el.com>
To: Thomas Gleixner <tglx@...utronix.de>
CC: Peter Zijlstra <peterz@...radead.org>, <oe-lkp@...ts.linux.dev>,
<lkp@...el.com>, <linux-kernel@...r.kernel.org>, <x86@...nel.org>, "Sebastian
Andrzej Siewior" <bigeasy@...utronix.de>, <linux-mm@...ck.org>,
<ltp@...ts.linux.it>, <oliver.sang@...el.com>
Subject: Re: [tip:locking/futex] [futex] 56180dd20c:
BUG:sleeping_function_called_from_invalid_context_at_kernel/nsproxy.c
Hi Thomas,
On Wed, Jul 23, 2025 at 07:22:43PM +0200, Thomas Gleixner wrote:
> On Wed, Jul 23 2025 at 16:46, kernel test robot wrote:
> > kernel test robot noticed "BUG:sleeping_function_called_from_invalid_context_at_kernel/nsproxy.c" on:
> >
> > commit: 56180dd20c19e5b0fa34822997a9ac66b517e7b3 ("futex: Use RCU-based per-CPU reference counting instead of rcuref_t")
> > https://git.kernel.org/cgit/linux/kernel/git/tip/tip.git locking/futex
> >
> > issues (1) and (2) happen randomly on 56180dd20c.
>
> Hmm.
>
> > a255b78d14324f8a 56180dd20c19e5b0fa34822997a
> > ---------------- ---------------------------
> > fail:runs %reproduction fail:runs
> > | | |
> > :50 48% 24:50 dmesg.BUG:scheduling_while_atomic <---- (2)
> > :50 48% 24:50 dmesg.BUG:sleeping_function_called_from_invalid_context_at_kernel/nsproxy.c <---- (1)
> > 50:50 0% 50:50 dmesg.Mem-Info
> > 50:50 0% 50:50 dmesg.invoked_oom-killer:gfp_mask=0x
> >
> >
> >
> > If you fix the issue in a separate patch/commit (i.e. not just a new version of
> > the same patch/commit), kindly add following tags
> > | Reported-by: kernel test robot <oliver.sang@...el.com>
> > | Closes: https://lore.kernel.org/oe-lkp/202507231021.dcf24373-lkp@intel.com
> >
> >
> > [ 286.673775][ C97] BUG: sleeping function called from invalid context at kernel/nsproxy.c:233 <---- (1)
> > [ 286.673784][ C97] in_atomic(): 1, irqs_disabled(): 0, non_block: 0, pid: 6748, name: oom03
> > [ 286.673787][ C97] preempt_count: 7ffffffe, expected: 0
>
> Oops. That's a corrupted preempt counter, which has underflowed twice.
>
> Can you please enable CONFIG_DEBUG_PREEMPT, so we can see where this
> happens?
After enabling CONFIG_DEBUG_PREEMPT, the config is attached as
config-6.16.0-rc5-00002-g56180dd20c19.

The issue now shows up as a random dmesg.WARNING:at_kernel/sched/core.c:#preempt_count_sub:
=========================================================================================
compiler/kconfig/rootfs/tbox_group/test/testcase:
gcc-12/x86_64-rhel-9.4-ltp_+CONFIG_DEBUG_PREEMPT/debian-12-x86_64-20240206.cgz/lkp-skl-fpga01/mm-oom/ltp
a255b78d14324f8a 56180dd20c19e5b0fa34822997a
---------------- ---------------------------
fail:runs %reproduction fail:runs
| | |
:20 65% 13:20 dmesg.RIP:preempt_count_sub
:20 65% 13:20 dmesg.WARNING:at_kernel/sched/core.c:#preempt_count_sub
One full dmesg is attached; the relevant trace:
[ 351.452869][ T7232] ------------[ cut here ]------------
[ 351.452888][ T7232] DEBUG_LOCKS_WARN_ON(val > preempt_count())
[ 351.452904][ T7232] WARNING: CPU: 23 PID: 7232 at kernel/sched/core.c:5903 preempt_count_sub+0xca/0x170
[ 351.452933][ T7232] Modules linked in: intel_rapl_msr intel_rapl_common intel_uncore_frequency intel_uncore_frequency_common skx_edac skx_edac_common nfit libnvdimm x86_pkg_temp_thermal intel_powerclamp btrfs coretemp blake2b_generic xor zstd_compress raid6_pq sd_mod sg kvm_intel kvm irdma irqbypass ghash_clmulni_intel snd_pcm ice rapl ast snd_timer ahci intel_cstate gnss ib_uverbs drm_client_lib snd nvme mei_me libahci drm_shmem_helper ipmi_ssif soundcore i2c_i801 intel_uncore ioatdma ib_core libata nvme_core acpi_power_meter pcspkr mei drm_kms_helper lpc_ich i2c_smbus intel_pch_thermal dca wmi ipmi_si acpi_ipmi ipmi_devintf ipmi_msghandler acpi_pad joydev binfmt_misc drm loop fuse dm_mod ip_tables
[ 351.453245][ T7232] CPU: 23 UID: 0 PID: 7232 Comm: oom03 Not tainted 6.16.0-rc5-00002-g56180dd20c19 #1 PREEMPT(voluntary)
[ 351.453263][ T7232] RIP: 0010:preempt_count_sub+0xca/0x170
[ 351.453279][ T7232] Code: 11 38 d0 7c 08 84 d2 0f 85 91 00 00 00 8b 15 1d 3c e3 04 85 d2 75 b5 48 c7 c6 60 12 2d 84 48 c7 c7 a0 12 2d 84 e8 76 91 f3 ff <0f> 0b eb 9e 84 c0 75 91 e8 a9 89 1a 01 85 c0 74 91 48 c7 c0 a0 76
[ 351.453290][ T7232] RSP: 0018:ffffc9003564f8c0 EFLAGS: 00010286
[ 351.453301][ T7232] RAX: 0000000000000000 RBX: 0000000000000001 RCX: 0000000000000000
[ 351.453309][ T7232] RDX: 0000000000000002 RSI: 0000000000000004 RDI: 0000000000000001
[ 351.453316][ T7232] RBP: ffffc9003564f8c8 R08: 0000000000000001 R09: ffffed12f5375839
[ 351.453325][ T7232] R10: ffff8897a9bac1cb R11: ffffffff874e19b0 R12: ffff88990174c380
[ 351.453333][ T7232] R13: ffffc9003564fb50 R14: 00000000003d0f00 R15: 0000000000001c40
[ 351.453341][ T7232] FS: 00007fbcae930740(0000) GS:ffff8898227f2000(0000) knlGS:0000000000000000
[ 351.453352][ T7232] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 351.453360][ T7232] CR2: 00007fa61b7ffd58 CR3: 00000030158d2004 CR4: 00000000007726f0
[ 351.453368][ T7232] PKRU: 55555554
[ 351.453375][ T7232] Call Trace:
[ 351.453381][ T7232] <TASK>
[ 351.453388][ T7232] _raw_spin_unlock+0x19/0x70
[ 351.453402][ T7232] copy_process+0x4244/0x4ab0
[ 351.453416][ T7232] ? mod_memcg_lruvec_state+0x362/0x5b0
[ 351.453439][ T7232] ? __pfx_copy_process+0x10/0x10
[ 351.453451][ T7232] ? _inline_copy_from_user+0x4f/0xb0
[ 351.453470][ T7232] ? copy_clone_args_from_user+0xff/0x670
[ 351.453487][ T7232] ? __pfx_folios_put_refs+0x10/0x10
[ 351.453501][ T7232] kernel_clone+0xb6/0x7b0
[ 351.453514][ T7232] ? __pfx_kernel_clone+0x10/0x10
[ 351.453525][ T7232] ? folio_batch_move_lru+0x231/0x370
[ 351.453536][ T7232] ? __pfx_lru_add+0x10/0x10
[ 351.453556][ T7232] __do_sys_clone3+0x150/0x1b0
[ 351.453569][ T7232] ? __pfx___do_sys_clone3+0x10/0x10
[ 351.453590][ T7232] ? __smp_call_single_queue+0x268/0x3f0
[ 351.453609][ T7232] ? __pfx___smp_call_single_queue+0x10/0x10
[ 351.453623][ T7232] ? select_task_rq_fair+0x395/0xcb0
[ 351.453637][ T7232] ? preempt_count_add+0xca/0x170
[ 351.453651][ T7232] ? _raw_spin_lock_irq+0x8b/0xf0
[ 351.453661][ T7232] ? __pfx__raw_spin_lock_irq+0x10/0x10
[ 351.453671][ T7232] ? ttwu_queue_wakelist+0x2be/0x4b0
[ 351.453688][ T7232] do_syscall_64+0x7f/0x2f0
[ 351.453701][ T7232] ? _raw_spin_unlock_irq+0x1a/0x70
[ 351.453711][ T7232] ? sigprocmask+0x1ea/0x330
[ 351.453737][ T7232] ? __pfx_sigprocmask+0x10/0x10
[ 351.453742][ T7232] ? _copy_to_user+0x5c/0x70
[ 351.453746][ T7232] ? __x64_sys_rt_sigprocmask+0x183/0x230
[ 351.453751][ T7232] ? __pfx___x64_sys_rt_sigprocmask+0x10/0x10
[ 351.453756][ T7232] ? __asan_memset+0x23/0x70
[ 351.453763][ T7232] ? rwsem_wake+0xca/0x130
[ 351.453773][ T7232] ? do_syscall_64+0x7f/0x2f0
[ 351.453776][ T7232] ? handle_mm_fault+0x3ff/0x6f0
[ 351.453784][ T7232] ? __pfx___rseq_handle_notify_resume+0x10/0x10
[ 351.453795][ T7232] ? fpregs_restore_userregs+0xed/0x1f0
[ 351.453803][ T7232] entry_SYSCALL_64_after_hwframe+0x76/0x7e
[ 351.453808][ T7232] RIP: 0033:0x7fbcaea3c889
[ 351.453813][ T7232] Code: 31 ed e9 44 ff ff ff e8 25 e9 00 00 0f 1f 44 00 00 b8 ea ff ff ff 48 85 ff 74 2c 48 85 d2 74 27 49 89 c8 b8 b3 01 00 00 0f 05 <48> 85 c0 7c 18 74 01 c3 31 ed 48 83 e4 f0 4c 89 c7 ff d2 48 89 c7
[ 351.453816][ T7232] RSP: 002b:00007fffcc117a48 EFLAGS: 00000206 ORIG_RAX: 00000000000001b3
[ 351.453821][ T7232] RAX: ffffffffffffffda RBX: 00007fbcae9bbef0 RCX: 00007fbcaea3c889
[ 351.453824][ T7232] RDX: 00007fbcae9bbef0 RSI: 0000000000000058 RDI: 00007fffcc117a90
[ 351.453827][ T7232] RBP: 00007fa61b7ff6c0 R08: 00007fa61b7ff6c0 R09: 00007fffcc117b87
[ 351.453830][ T7232] R10: 0000000000000008 R11: 0000000000000206 R12: ffffffffffffff78
[ 351.453832][ T7232] R13: 0000000000000000 R14: 00007fffcc117a90 R15: 00007fa61afff000
[ 351.453838][ T7232] </TASK>
[ 351.453840][ T7232] ---[ end trace 0000000000000000 ]---
>
> Thanks,
>
> tglx
View attachment "config-6.16.0-rc5-00002-g56180dd20c19" of type "text/plain" (247245 bytes)
Download attachment "dmesg.xz" of type "application/x-xz" (104344 bytes)