Message-ID: <c92bc8a6-225d-4fd2-88b5-8994090fb2de@samsung.com>
Date: Wed, 4 Oct 2023 22:39:39 +0200
From: Marek Szyprowski <m.szyprowski@...sung.com>
To: Benjamin Segall <bsegall@...gle.com>,
Peter Zijlstra <peterz@...radead.org>
Cc: mingo@...nel.org, vincent.guittot@...aro.org,
linux-kernel@...r.kernel.org, juri.lelli@...hat.com,
dietmar.eggemann@....com, rostedt@...dmis.org, mgorman@...e.de,
bristot@...hat.com, corbet@....net, qyousef@...alina.io,
chris.hyser@...cle.com, patrick.bellasi@...bug.net, pjt@...gle.com,
pavel@....cz, qperret@...gle.com, tim.c.chen@...ux.intel.com,
joshdon@...gle.com, timj@....org, kprateek.nayak@....com,
yu.c.chen@...el.com, youssefesmat@...omium.org,
joel@...lfernandes.org, efault@....de, tglx@...utronix.de,
Greg Kroah-Hartman <gregkh@...uxfoundation.org>
Subject: Re: [PATCH] sched/fair: fix pick_eevdf to always find the correct
se
Hi,
On 30.09.2023 02:09, Benjamin Segall wrote:
> The old pick_eevdf could fail to find the actual earliest eligible
> deadline when it descended to the right looking for min_deadline, but it
> turned out that that min_deadline wasn't actually eligible. In that case
> we need to go back and search through any left branches we skipped
> looking for the actual best _eligible_ min_deadline.
>
> This is more expensive, but still O(log n), and at worst should only
> involve descending two branches of the rbtree.
>
> I've run this through a userspace stress test (thank you
> tools/lib/rbtree.c), so hopefully this implementation doesn't miss any
> corner cases.
>
> Fixes: 147f3efaa241 ("sched/fair: Implement an EEVDF-like scheduling policy")
> Signed-off-by: Ben Segall <bsegall@...gle.com>
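(For readers following along: the search described above can be modelled
roughly like this in user space. The tree layout and the helper names
pick_eligible()/pick_min_deadline() are made up for illustration; the real
code operates on the augmented cfs_rq rbtree and is careful to keep the
O(log n) bound mentioned in the changelog.)

/*
 * toy_pick_eevdf.c - simplified user-space model of the "eligible entity
 * with the earliest deadline" search.  NOT the kernel implementation.
 */
#include <stdio.h>
#include <stddef.h>

struct node {
    struct node *left, *right;
    unsigned long long vruntime;      /* tree is ordered by vruntime    */
    unsigned long long deadline;      /* virtual deadline of this entry */
    unsigned long long min_deadline;  /* min deadline over this subtree */
};

/* Descend the augmented tree to the entry owning the subtree's min_deadline. */
static struct node *pick_min_deadline(struct node *n)
{
    while (n->deadline != n->min_deadline) {
        if (n->left && n->left->min_deadline == n->min_deadline)
            n = n->left;
        else
            n = n->right;  /* consistent augmentation: right must hold it */
    }
    return n;
}

/*
 * Return the eligible node (vruntime <= avg) with the earliest deadline,
 * or NULL if nothing is eligible.  The point of the fix: the entry with
 * the globally smallest min_deadline may itself be ineligible, so the
 * left (always-eligible) subtrees that were skipped on the way down must
 * still be considered.
 */
static struct node *pick_eligible(struct node *n, unsigned long long avg)
{
    struct node *best = NULL;

    while (n) {
        if (n->vruntime > avg) {
            /* n and its whole right subtree are ineligible */
            n = n->left;
            continue;
        }

        /* n is eligible, and so is its entire left subtree */
        if (!best || n->deadline < best->deadline)
            best = n;

        if (n->left && n->left->min_deadline < best->deadline) {
            struct node *cand = pick_min_deadline(n->left);

            if (cand->deadline < best->deadline)
                best = cand;
        }

        /* the right subtree is only partially eligible: keep descending */
        n = n->right;
    }
    return best;
}

int main(void)
{
    /*            B(vr=4,dl=9)
     *           /            \
     *    A(vr=2,dl=5)    C(vr=7,dl=1)   <- earliest deadline, but ineligible
     */
    struct node a = { NULL, NULL, 2, 5, 5 };
    struct node c = { NULL, NULL, 7, 1, 1 };
    struct node b = { &a,   &c,   4, 9, 1 };
    struct node *best = pick_eligible(&b, 5);  /* avg_vruntime = 5 */

    if (best)
        printf("picked vruntime=%llu deadline=%llu\n",
               best->vruntime, best->deadline);
    return 0;
}

In the three-node example in main(), the globally smallest min_deadline
belongs to an ineligible entry (C), which is exactly the case the old
pick_eevdf() could get wrong; the expected pick is A (vruntime=2, deadline=5).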
This patch landed in today's linux-next as commit 561c58efd239
("sched/fair: Fix pick_eevdf()"). Surprisingly, it introduced a warning
about a circular locking dependency. It can be easily observed from time
to time during boot on a qemu/arm64 'virt' machine:
======================================================
WARNING: possible circular locking dependency detected
6.6.0-rc4+ #7222 Not tainted
------------------------------------------------------
systemd-udevd/1187 is trying to acquire lock:
ffffbcc2be0c4de0 (console_owner){..-.}-{0:0}, at:
console_flush_all+0x1b0/0x500
but task is already holding lock:
ffff5535ffdd2b18 (&rq->__lock){-.-.}-{2:2}, at: __schedule+0xe0/0xc40
which lock already depends on the new lock.
the existing dependency chain (in reverse order) is:
-> #4 (&rq->__lock){-.-.}-{2:2}:
_raw_spin_lock_nested+0x44/0x5c
raw_spin_rq_lock_nested+0x24/0x40
task_fork_fair+0x3c/0xac
sched_cgroup_fork+0xe8/0x14c
copy_process+0x11c4/0x1a14
kernel_clone+0x8c/0x400
user_mode_thread+0x70/0x98
rest_init+0x28/0x190
arch_post_acpi_subsys_init+0x0/0x8
start_kernel+0x594/0x684
__primary_switched+0xbc/0xc4
-> #3 (&p->pi_lock){-.-.}-{2:2}:
_raw_spin_lock_irqsave+0x60/0x88
try_to_wake_up+0x58/0x468
default_wake_function+0x14/0x20
woken_wake_function+0x20/0x2c
__wake_up_common+0x94/0x170
__wake_up_common_lock+0x7c/0xcc
__wake_up+0x18/0x24
tty_wakeup+0x34/0x70
tty_port_default_wakeup+0x20/0x38
tty_port_tty_wakeup+0x18/0x24
uart_write_wakeup+0x18/0x28
pl011_tx_chars+0x140/0x2b4
pl011_start_tx+0xe8/0x190
serial_port_runtime_resume+0x90/0xc0
__rpm_callback+0x48/0x1a8
rpm_callback+0x6c/0x78
rpm_resume+0x438/0x6d8
pm_runtime_work+0x84/0xc8
process_one_work+0x1ec/0x53c
worker_thread+0x298/0x408
kthread+0x124/0x128
ret_from_fork+0x10/0x20
-> #2 (&tty->write_wait){....}-{2:2}:
_raw_spin_lock_irqsave+0x60/0x88
__wake_up_common_lock+0x5c/0xcc
__wake_up+0x18/0x24
tty_wakeup+0x34/0x70
tty_port_default_wakeup+0x20/0x38
tty_port_tty_wakeup+0x18/0x24
uart_write_wakeup+0x18/0x28
pl011_tx_chars+0x140/0x2b4
pl011_start_tx+0xe8/0x190
serial_port_runtime_resume+0x90/0xc0
__rpm_callback+0x48/0x1a8
rpm_callback+0x6c/0x78
rpm_resume+0x438/0x6d8
pm_runtime_work+0x84/0xc8
process_one_work+0x1ec/0x53c
worker_thread+0x298/0x408
kthread+0x124/0x128
ret_from_fork+0x10/0x20
-> #1 (&port_lock_key){..-.}-{2:2}:
_raw_spin_lock+0x48/0x60
pl011_console_write+0x13c/0x1b0
console_flush_all+0x20c/0x500
console_unlock+0x6c/0x130
vprintk_emit+0x228/0x3a0
vprintk_default+0x38/0x44
vprintk+0xa4/0xc0
_printk+0x5c/0x84
register_console+0x1f4/0x420
serial_core_register_port+0x5a4/0x5d8
serial_ctrl_register_port+0x10/0x1c
uart_add_one_port+0x10/0x1c
pl011_register_port+0x70/0x12c
pl011_probe+0x1bc/0x1fc
amba_probe+0x110/0x1c8
really_probe+0x148/0x2b4
__driver_probe_device+0x78/0x12c
driver_probe_device+0xd8/0x160
__device_attach_driver+0xb8/0x138
bus_for_each_drv+0x84/0xe0
__device_attach+0xa8/0x1b0
device_initial_probe+0x14/0x20
bus_probe_device+0xb0/0xb4
device_add+0x574/0x738
amba_device_add+0x40/0xac
of_platform_bus_create+0x2b4/0x378
of_platform_populate+0x50/0xfc
of_platform_default_populate_init+0xd0/0xf0
do_one_initcall+0x74/0x2f0
kernel_init_freeable+0x28c/0x4dc
kernel_init+0x24/0x1dc
ret_from_fork+0x10/0x20
-> #0 (console_owner){..-.}-{0:0}:
__lock_acquire+0x1318/0x20c4
lock_acquire+0x1e8/0x318
console_flush_all+0x1f8/0x500
console_unlock+0x6c/0x130
vprintk_emit+0x228/0x3a0
vprintk_default+0x38/0x44
vprintk+0xa4/0xc0
_printk+0x5c/0x84
pick_next_task_fair+0x28c/0x498
__schedule+0x164/0xc40
do_task_dead+0x54/0x58
do_exit+0x61c/0x9e8
do_group_exit+0x34/0x90
__wake_up_parent+0x0/0x30
invoke_syscall+0x48/0x114
el0_svc_common.constprop.0+0x40/0xe0
do_el0_svc_compat+0x1c/0x38
el0_svc_compat+0x48/0xb4
el0t_32_sync_handler+0x90/0x140
el0t_32_sync+0x194/0x198
other info that might help us debug this:
Chain exists of:
console_owner --> &p->pi_lock --> &rq->__lock
Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(&rq->__lock);
                               lock(&p->pi_lock);
                               lock(&rq->__lock);
  lock(console_owner);

 *** DEADLOCK ***
3 locks held by systemd-udevd/1187:
#0: ffff5535ffdd2b18 (&rq->__lock){-.-.}-{2:2}, at: __schedule+0xe0/0xc40
#1: ffffbcc2be0c4c30 (console_lock){+.+.}-{0:0}, at:
vprintk_emit+0x11c/0x3a0
#2: ffffbcc2be0c4c88 (console_srcu){....}-{0:0}, at:
console_flush_all+0x7c/0x500
stack backtrace:
CPU: 1 PID: 1187 Comm: systemd-udevd Not tainted 6.6.0-rc4+ #7222
Hardware name: linux,dummy-virt (DT)
Call trace:
dump_backtrace+0x98/0xf0
show_stack+0x18/0x24
dump_stack_lvl+0x60/0xac
dump_stack+0x18/0x24
print_circular_bug+0x290/0x370
check_noncircular+0x15c/0x170
__lock_acquire+0x1318/0x20c4
lock_acquire+0x1e8/0x318
console_flush_all+0x1f8/0x500
console_unlock+0x6c/0x130
vprintk_emit+0x228/0x3a0
vprintk_default+0x38/0x44
vprintk+0xa4/0xc0
_printk+0x5c/0x84
pick_next_task_fair+0x28c/0x498
__schedule+0x164/0xc40
do_task_dead+0x54/0x58
do_exit+0x61c/0x9e8
do_group_exit+0x34/0x90
__wake_up_parent+0x0/0x30
invoke_syscall+0x48/0x114
el0_svc_common.constprop.0+0x40/0xe0
do_el0_svc_compat+0x1c/0x38
el0_svc_compat+0x48/0xb4
el0t_32_sync_handler+0x90/0x140
el0t_32_sync+0x194/0x198
The problem is probably elsewhere, but this scheduler change only
revealed it in a fully reproducible way. Reverting $subject on top of
linux-next hides the problem deep enough that I was not able to
reproduce it. Let me know if there is anything I can do to help fix
this issue.
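To make the reported cycle easier to see, the chain above can be collapsed
into a two-lock user-space analogue: two pthread mutexes stand in for
rq->__lock and console_owner, and the intermediate pi_lock/tty/port locks
are folded away. This is only an illustration of the inversion lockdep is
complaining about, not kernel code:

/*
 * toy_inversion.c - stripped-down analogue of the reported dependency.
 * Running it usually hangs in a classic AB-BA deadlock.
 */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t rq_lock       = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t console_owner = PTHREAD_MUTEX_INITIALIZER;

/* "CPU0" in the lockdep scenario: __schedule() holds rq->__lock, then a
 * printk() from pick_next_task_fair() wants to own the console. */
static void *scheduler_path(void *unused)
{
    pthread_mutex_lock(&rq_lock);
    usleep(100000);                       /* widen the race window */
    pthread_mutex_lock(&console_owner);   /* printk -> console_flush_all() */
    puts("scheduler_path: got both locks");
    pthread_mutex_unlock(&console_owner);
    pthread_mutex_unlock(&rq_lock);
    return NULL;
}

/* "CPU1": the serial console TX path wakes a writer, which ends up in
 * try_to_wake_up() and needs the runqueue lock. */
static void *console_path(void *unused)
{
    pthread_mutex_lock(&console_owner);
    usleep(100000);
    pthread_mutex_lock(&rq_lock);         /* tty_wakeup -> try_to_wake_up */
    puts("console_path: got both locks");
    pthread_mutex_unlock(&rq_lock);
    pthread_mutex_unlock(&console_owner);
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;

    pthread_create(&t1, NULL, scheduler_path, NULL);
    pthread_create(&t2, NULL, console_path, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}

Built with "cc -pthread toy_inversion.c", the two threads usually end up
each holding one of the two mutexes while waiting for the other, which is
the same shape as the scenario in the splat above.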
> ---
> kernel/sched/fair.c | 72 ++++++++++++++++++++++++++++++++++++---------
> 1 file changed, 58 insertions(+), 14 deletions(-)
>
> ...
Best regards
--
Marek Szyprowski, PhD
Samsung R&D Institute Poland