Message-Id: <20200411231413.26911-3-sashal@kernel.org>
Date: Sat, 11 Apr 2020 19:13:50 -0400
From: Sasha Levin <sashal@...nel.org>
To: linux-kernel@...r.kernel.org, stable@...r.kernel.org
Cc: Peter Ujfalusi <peter.ujfalusi@...com>,
Tomi Valkeinen <tomi.valkeinen@...com>,
Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
Sasha Levin <sashal@...nel.org>, linux-serial@...r.kernel.org
Subject: [PATCH AUTOSEL 4.9 03/26] serial: 8250_omap: Fix sleeping function called from invalid context during probe

From: Peter Ujfalusi <peter.ujfalusi@...com>

[ Upstream commit 4ce35a3617c0ac758c61122b2218b6c8c9ac9398 ]

When booting j721e the following bug is printed:

[ 1.154821] BUG: sleeping function called from invalid context at kernel/sched/completion.c:99
[ 1.154827] in_atomic(): 0, irqs_disabled(): 128, non_block: 0, pid: 12, name: kworker/0:1
[ 1.154832] 3 locks held by kworker/0:1/12:
[ 1.154836] #0: ffff000840030728 ((wq_completion)events){+.+.}, at: process_one_work+0x1d4/0x6e8
[ 1.154852] #1: ffff80001214fdd8 (deferred_probe_work){+.+.}, at: process_one_work+0x1d4/0x6e8
[ 1.154860] #2: ffff00084060b170 (&dev->mutex){....}, at: __device_attach+0x38/0x138
[ 1.154872] irq event stamp: 63096
[ 1.154881] hardirqs last enabled at (63095): [<ffff800010b74318>] _raw_spin_unlock_irqrestore+0x70/0x78
[ 1.154887] hardirqs last disabled at (63096): [<ffff800010b740d8>] _raw_spin_lock_irqsave+0x28/0x80
[ 1.154893] softirqs last enabled at (62254): [<ffff800010080c88>] _stext+0x488/0x564
[ 1.154899] softirqs last disabled at (62247): [<ffff8000100fdb3c>] irq_exit+0x114/0x140
[ 1.154906] CPU: 0 PID: 12 Comm: kworker/0:1 Not tainted 5.6.0-rc6-next-20200318-00094-g45e4089b0bd3 #221
[ 1.154911] Hardware name: Texas Instruments K3 J721E SoC (DT)
[ 1.154917] Workqueue: events deferred_probe_work_func
[ 1.154923] Call trace:
[ 1.154928] dump_backtrace+0x0/0x190
[ 1.154933] show_stack+0x14/0x20
[ 1.154940] dump_stack+0xe0/0x148
[ 1.154946] ___might_sleep+0x150/0x1f0
[ 1.154952] __might_sleep+0x4c/0x80
[ 1.154957] wait_for_completion_timeout+0x40/0x140
[ 1.154964] ti_sci_set_device_state+0xa0/0x158
[ 1.154969] ti_sci_cmd_get_device_exclusive+0x14/0x20
[ 1.154977] ti_sci_dev_start+0x34/0x50
[ 1.154984] genpd_runtime_resume+0x78/0x1f8
[ 1.154991] __rpm_callback+0x3c/0x140
[ 1.154996] rpm_callback+0x20/0x80
[ 1.155001] rpm_resume+0x568/0x758
[ 1.155007] __pm_runtime_resume+0x44/0xb0
[ 1.155013] omap8250_probe+0x2b4/0x508
[ 1.155019] platform_drv_probe+0x50/0xa0
[ 1.155023] really_probe+0xd4/0x318
[ 1.155028] driver_probe_device+0x54/0xe8
[ 1.155033] __device_attach_driver+0x80/0xb8
[ 1.155039] bus_for_each_drv+0x74/0xc0
[ 1.155044] __device_attach+0xdc/0x138
[ 1.155049] device_initial_probe+0x10/0x18
[ 1.155053] bus_probe_device+0x98/0xa0
[ 1.155058] deferred_probe_work_func+0x74/0xb0
[ 1.155063] process_one_work+0x280/0x6e8
[ 1.155068] worker_thread+0x48/0x430
[ 1.155073] kthread+0x108/0x138
[ 1.155079] ret_from_fork+0x10/0x18

To fix the bug, pm_runtime_enable() must be called first, before any
other pm_runtime calls: the TI SCI power-domain resume in the trace
above sleeps in wait_for_completion_timeout(), while pm_runtime_irq_safe()
permits the PM core to invoke the device's runtime PM callbacks with
interrupts disabled.

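For context, the corrected ordering looks like the sketch below when
pulled out of the driver. This is a minimal illustration rather than
the actual omap8250_probe() code: foo_probe() and the trailing
mark-busy/put pair are hypothetical, and only the relative order of
the pm_runtime_* calls reflects this fix.

#include <linux/platform_device.h>
#include <linux/pm_runtime.h>

static int foo_probe(struct platform_device *pdev)
{
	struct device *dev = &pdev->dev;

	/*
	 * Enable runtime PM before any other pm_runtime call. Per the
	 * fix above, this keeps the sleeping TI SCI power-domain
	 * resume out of atomic context.
	 */
	pm_runtime_enable(dev);
	pm_runtime_use_autosuspend(dev);
	pm_runtime_set_autosuspend_delay(dev, -1);

	/*
	 * Only now mark the device IRQ-safe: from this point on the PM
	 * core may run this device's runtime PM callbacks with
	 * interrupts disabled.
	 */
	pm_runtime_irq_safe(dev);

	pm_runtime_get_sync(dev);

	/* ... hardware setup would go here ... */

	pm_runtime_mark_last_busy(dev);
	pm_runtime_put_autosuspend(dev);
	return 0;
}

In omap8250_probe() the same calls are interleaved with the rest of
the setup; the hunk below only moves pm_runtime_enable() to the front
of the sequence.
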
Reported-by: Tomi Valkeinen <tomi.valkeinen@...com>
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@...com>
Link: https://lore.kernel.org/r/20200320125200.6772-1-peter.ujfalusi@ti.com
Signed-off-by: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
Signed-off-by: Sasha Levin <sashal@...nel.org>
---
 drivers/tty/serial/8250/8250_omap.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/tty/serial/8250/8250_omap.c b/drivers/tty/serial/8250/8250_omap.c
index a3adf21f9dcec..7d4680ef5307d 100644
--- a/drivers/tty/serial/8250/8250_omap.c
+++ b/drivers/tty/serial/8250/8250_omap.c
@@ -1194,11 +1194,11 @@ static int omap8250_probe(struct platform_device *pdev)
 	spin_lock_init(&priv->rx_dma_lock);
 
 	device_init_wakeup(&pdev->dev, true);
+	pm_runtime_enable(&pdev->dev);
 	pm_runtime_use_autosuspend(&pdev->dev);
 
 	pm_runtime_set_autosuspend_delay(&pdev->dev, -1);
 
 	pm_runtime_irq_safe(&pdev->dev);
-	pm_runtime_enable(&pdev->dev);
 
 	pm_runtime_get_sync(&pdev->dev);
--
2.20.1