Message-ID: <fdaaee62-b67e-556c-2675-aacd3ec994f6@roeck-us.net>
Date: Mon, 31 Jul 2023 07:35:13 -0700
From: Guenter Roeck <linux@...ck-us.net>
To: Peter Zijlstra <peterz@...radead.org>
Cc: Joel Fernandes <joel@...lfernandes.org>, paulmck@...nel.org,
Pavel Machek <pavel@...x.de>,
Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
stable@...r.kernel.org, patches@...ts.linux.dev,
linux-kernel@...r.kernel.org, torvalds@...ux-foundation.org,
akpm@...ux-foundation.org, shuah@...nel.org, patches@...nelci.org,
lkft-triage@...ts.linaro.org, jonathanh@...dia.com,
f.fainelli@...il.com, sudipm.mukherjee@...il.com,
srw@...dewatkins.net, rwarsow@....de, conor@...nel.org,
rcu@...r.kernel.org, Ingo Molnar <mingo@...nel.org>
Subject: Re: scheduler problems in -next (was: Re: [PATCH 6.4 000/227]
6.4.7-rc1 review)
On 7/31/23 07:19, Peter Zijlstra wrote:
> On Sat, Jul 29, 2023 at 09:00:02PM -0700, Guenter Roeck wrote:
>> On 7/27/23 16:18, Joel Fernandes wrote:
>>
>> [ ... ]
>>
>>>> I freely confess that I am having a hard time imagining what would
>>>> be CPU dependent in that code. Timing, maybe? Whatever the reason,
>>>> I am not seeing these failures in my testing.
>>>>
>>>> So which of the following Kconfig options is defined in your .config?
>>>> CONFIG_TASKS_RCU, CONFIG_TASKS_RUDE_RCU, and CONFIG_TASKS_TRACE_RCU.
>>>>
>>>> If you have more than one of them, could you please apply this patch
>>>> and show me the corresponding console output from the resulting hang?
>>>
>>> FWIW, I am not able to repro this issue either. If a .config can be shared of the problem system, I can try it out to see if it can be reproduced on my side.
>>>
>>
>> I managed to bisect the problem. See the bisect log below. The bisect was
>> repeated twice, so it should be reliable. I don't really understand it, but the
>> following reverts fix the problem. This is on top of next-20230721, because
>> next-20230728 crashes immediately in my tests.
>>
>> 0caafe9b94ab (HEAD) Revert "sched/fair: Remove sched_feat(START_DEBIT)"
>> 518bdbd39fdb Revert "sched/fair: Add lag based placement"
>> a011162c3e32 Revert "sched/fair: Implement an EEVDF-like scheduling policy"
>> df579720bf98 Revert "sched/fair: Commit to lag based placement"
>> aac459a7e738 Revert "sched/smp: Use lag to simplify cross-runqueue placement"
>> 8d686eb173e1 Revert "sched/fair: Commit to EEVDF"
>> 486474c50f95 Revert "sched/debug: Rename sysctl_sched_min_granularity to sysctl_sched_base_slice"
>> 79e94d67d08a Revert "sched/fair: Propagate enqueue flags into place_entity()"
>> ae867bc97b71 (tag: next-20230721) Add linux-next specific files for 20230721
>>
>> For context: x86 images (32 and 64 bit) in -next tend to hang at
>>
>> [ 2.309323] RCU Tasks: Setting shift to 0 and lim to 1 rcu_task_cb_adjust=1.
>> [ 2.311634] Running RCU-tasks wait API self tests
>>
>> The hang is not seen with every boot; it happens roughly once every
>> 10 boot attempts. It is not CPU dependent as I initially thought.
>>
>> Configuration file is at http://server.roeck-us.net/qemu/x86-next/config.
>> Example qemu command line:
>
> Hurmph, let me see if I can reproduce on next-20230731 (not having the
> older next thingies around).
That crashes hard with my configuration.
[ 6.353191] kernel tried to execute NX-protected page - exploit attempt? (uid: 0)
[ 6.353392] BUG: unable to handle page fault for address: ffff9b10c0013cd0
[ 6.353531] #PF: supervisor instruction fetch in kernel mode
[ 6.353624] #PF: error_code(0x0011) - permissions violation
[ 6.353751] PGD 1000067 P4D 1000067 PUD 1205067 PMD 1206067 PTE 800000000124e063
[ 6.354011] Oops: 0011 [#1] PREEMPT SMP PTI
[ 6.354164] CPU: 0 PID: 182 Comm: kunit_try_catch Tainted: G N 6.5.0-rc4-next-20230731 #1
[ 6.354315] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS rel-1.16.2-0-gea1b7a073390-prebuilt.qemu.org 04/01/2014
[ 6.354525] RIP: 0010:0xffff9b10c0013cd0
[ 6.354793] Code: ff ff 60 64 ce a9 ff ff ff ff 00 00 00 00 00 00 00 00 d1 3a bc a8 ff ff ff ff 00 02 00 00 00 00 00 00 00 00 00 00 00 00 00 00 <f0> 00 01 44 10 8a ff ff b8 01 01 44 10 8a ff ff 00 00 00 00 00 00
[ 6.355059] RSP: 0000:ffff9b10c027fd60 EFLAGS: 00000246
[ 6.355157] RAX: ffff9b10c0013cd0 RBX: ffff8a1043bdb400 RCX: 0000000000000000
[ 6.355259] RDX: 0000000000000000 RSI: 0000000000000000 RDI: ffff8a1043bdb400
[ 6.355358] RBP: ffff9b10c027fdc8 R08: 0000000000000001 R09: 0000000000000001
[ 6.355456] R10: 0000000000000001 R11: 0000000000000001 R12: ffff9b10c027fe74
[ 6.355556] R13: ffff8a10440100f0 R14: ffff8a10440101b8 R15: ffff9b10c027fe74
[ 6.355679] FS: 0000000000000000(0000) GS:ffff8a104fc00000(0000) knlGS:0000000000000000
[ 6.355798] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 6.355886] CR2: ffff9b10c0013cd0 CR3: 000000000e048000 CR4: 00000000003506f0
[ 6.356029] Call Trace:
[ 6.356158] <TASK>
[ 6.356334] ? __die+0x1f/0x70
[ 6.356472] ? page_fault_oops+0x14a/0x460
[ 6.356547] ? exc_page_fault+0xee/0x1c0
[ 6.356612] ? asm_exc_page_fault+0x26/0x30
[ 6.356703] ? kunit_filter_attr_tests+0xc4/0x2e0
[ 6.356796] kunit_filter_suites+0x2e2/0x460
[ 6.356889] ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
[ 6.356979] filter_suites_test+0xea/0x2c0
[ 6.357051] ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
[ 6.357148] kunit_generic_run_threadfn_adapter+0x15/0x20
[ 6.357228] kthread+0xef/0x120
[ 6.357282] ? __pfx_kthread+0x10/0x10
[ 6.357343] ret_from_fork+0x2f/0x50
[ 6.357399] ? __pfx_kthread+0x10/0x10
[ 6.357458] ret_from_fork_asm+0x1b/0x30
[ 6.357560] </TASK>
[ 6.357632] Modules linked in:
[ 6.357786] CR2: ffff9b10c0013cd0
[ 6.358010] ---[ end trace 0000000000000000 ]---
Enabling CONFIG_ZERO_CALL_USED_REGS might fix (hide) this, but I have not tried it.
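For reference, the kind of boot loop I use to catch the intermittent hang looks
roughly like the sketch below. The image path, qemu flags, timeout, and the
boot-completion marker are all illustrative placeholders, not the exact command
line from my test setup:

```shell
#!/bin/sh
# Illustrative repro loop for an intermittent boot hang (~1 in 10 boots).
# KERNEL, TRIES, timeout, and DONE_MARKER are assumptions to be adjusted.
KERNEL=${KERNEL:-arch/x86/boot/bzImage}
TRIES=${TRIES:-10}
# A line the console log should contain once boot got past the self tests;
# adjust to whatever your image prints late in boot.
DONE_MARKER=${DONE_MARKER:-reboot:}

# A run is considered hung if the log shows the RCU-tasks self-test
# banner but never reaches the completion marker.
hang_detected() {
    grep -q 'Running RCU-tasks wait API self tests' "$1" &&
        ! grep -q "$DONE_MARKER" "$1"
}

if command -v qemu-system-x86_64 >/dev/null 2>&1; then
    i=1
    while [ "$i" -le "$TRIES" ]; do
        timeout 60 qemu-system-x86_64 -kernel "$KERNEL" -nographic \
            -append 'console=ttyS0 panic=-1' >"console.$i.log" 2>&1
        if hang_detected "console.$i.log"; then
            echo "possible hang on boot attempt $i"
        fi
        i=$((i + 1))
    done
fi
```

The logs of any flagged attempt can then be compared against a clean boot to
see where the console output stops.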
Guenter