Message-ID: <8299946.T7Z3S40VBb@steina-w>
Date: Mon, 15 May 2023 14:32:50 +0200
From: Alexander Stein <alexander.stein@...tq-group.com>
To: "Sverdlin, Alexander" <alexander.sverdlin@...mens.com>
Cc: "linux-imx@....com" <linux-imx@....com>,
"kernel@...gutronix.de" <kernel@...gutronix.de>,
"s.hauer@...gutronix.de" <s.hauer@...gutronix.de>,
"festevam@...il.com" <festevam@...il.com>,
"shawnguo@...nel.org" <shawnguo@...nel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"linux-arm-kernel@...ts.infradead.org"
<linux-arm-kernel@...ts.infradead.org>,
"linux-i2c@...r.kernel.org" <linux-i2c@...r.kernel.org>,
"aisheng.dong@....com" <aisheng.dong@....com>
Subject: Re: [PATCH v3] i2c: lpi2c: cache peripheral clock rate
Hi Alexander,
Am Montag, 15. Mai 2023, 14:04:10 CEST schrieb Sverdlin, Alexander:
> Hello Alexander!
>
> On Mon, 2023-05-15 at 11:11 +0200, Alexander Stein wrote:
>
> > > [   22.503179] other info that might help us debug this:
> > > [   22.503179]
> > > [   22.503183] Chain exists of:
> > > [   22.503183]   prepare_lock --> rtc_pcf85063:594:(&config->regmap)->lock --> i2c_register_adapter
> > > [   22.503183]
> > > [   22.503207]  Possible unsafe locking scenario:
> > > [   22.503207]
> > > [   22.503210]        CPU0                    CPU1
> > > [   22.503213]        ----                    ----
> > > [   22.503216]   lock(i2c_register_adapter);
> > > [   22.503225]                               lock(rtc_pcf85063:594:(&config->regmap)->lock);
> > > [   22.503235]                               lock(i2c_register_adapter);
> > > [   22.503244]   lock(prepare_lock);
> > > [   22.503253]
> > > [   22.503253]  *** DEADLOCK ***
> > >
> > > Now lpi2c_runtime_resume() will call into clk_prepare(), which also
> > > takes clk_prepare_lock() (just as clk_get_rate() does).
> > >
> > > Best regards,
> > > Alexander
> > >
> > >
> > > > pm_runtime_set_autosuspend_delay(&pdev->dev,
> > > > I2C_PM_TIMEOUT);
> > > > pm_runtime_use_autosuspend(&pdev->dev);
> > > > pm_runtime_get_noresume(&pdev->dev);
> >
> >
> > With commit fa39065833db ("i2c: imx-lpi2c: avoid taking clk_prepare
> > mutex in PM callbacks") in place, the additional deadlock warning
> > regarding runtime resume is gone.
> > So only the ordering issue needs to be addressed.
>
>
> Unfortunately with both your (backported) and my patches applied I've
> got a 3rd warning:
>
> ======================================================
> WARNING: possible circular locking dependency detected
> 5.15.71+git8e43aee #1 Tainted: G O
> ------------------------------------------------------
> xxxxxxx/2238 is trying to acquire lock:
> ffff0000040bda40 (&desc->request_mutex){+.+.}-{3:3}, at:
> __setup_irq+0xbc/0x8bc
>
> but task is already holding lock:
> ffff000010ef1100 (i2c_register_adapter){+.+.}-{3:3}, at:
> i2c_adapter_lock_bus+0x2c/0x3c
>
> which lock already depends on the new lock.
>
>
> the existing dependency chain (in reverse order) is:
>
> -> #5 (i2c_register_adapter){+.+.}-{3:3}:
> lock_acquire+0x68/0x84
> rt_mutex_lock_nested+0x88/0xe0
> i2c_adapter_lock_bus+0x2c/0x3c
> i2c_transfer+0x58/0x130
> regmap_i2c_read+0x64/0xb0
> _regmap_raw_read+0x114/0x440
> _regmap_bus_read+0x4c/0x84
> _regmap_read+0x6c/0x270
> regmap_read+0x54/0x80
> pcf85063_probe+0xec/0x4cc
> i2c_device_probe+0x10c/0x350
> really_probe+0xc4/0x470
> __driver_probe_device+0x11c/0x190
> driver_probe_device+0x48/0x110
> __device_attach_driver+0xc4/0x160
> bus_for_each_drv+0x80/0xe0
> __device_attach+0xb0/0x1f0
> device_initial_probe+0x1c/0x2c
> bus_probe_device+0xa4/0xb0
> device_add+0x398/0x8ac
> device_register+0x28/0x40
> i2c_new_client_device+0x144/0x290
> of_i2c_register_devices+0x18c/0x230
> i2c_register_adapter+0x1dc/0x6b0
> __i2c_add_numbered_adapter+0x68/0xbc
> i2c_add_adapter+0xb0/0xe0
> lpi2c_imx_probe+0x3a4/0x670
> platform_probe+0x70/0xec
> really_probe+0xc4/0x470
> __driver_probe_device+0x11c/0x190
> driver_probe_device+0x48/0x110
> __device_attach_driver+0xc4/0x160
> bus_for_each_drv+0x80/0xe0
> __device_attach+0xb0/0x1f0
> device_initial_probe+0x1c/0x2c
> bus_probe_device+0xa4/0xb0
> deferred_probe_work_func+0xb8/0x110
> process_one_work+0x27c/0x6c4
> worker_thread+0x7c/0x450
> kthread+0x168/0x17c
> ret_from_fork+0x10/0x20
>
> -> #4 (rtc_pcf85063:560:(&config->regmap)->lock){+.+.}-{3:3}:
> lock_acquire+0x68/0x84
> __mutex_lock+0xa8/0x4d0
> mutex_lock_nested+0x48/0x54
> regmap_lock_mutex+0x1c/0x30
> regmap_read+0x44/0x80
> pcf85063_clkout_recalc_rate+0x34/0x80
> __clk_register+0x520/0x880
> devm_clk_register+0x64/0xc4
> pcf85063_probe+0x24c/0x4cc
> i2c_device_probe+0x10c/0x350
> really_probe+0xc4/0x470
> __driver_probe_device+0x11c/0x190
> driver_probe_device+0x48/0x110
> __device_attach_driver+0xc4/0x160
> bus_for_each_drv+0x80/0xe0
> __device_attach+0xb0/0x1f0
> device_initial_probe+0x1c/0x2c
> bus_probe_device+0xa4/0xb0
> device_add+0x398/0x8ac
> device_register+0x28/0x40
> i2c_new_client_device+0x144/0x290
> of_i2c_register_devices+0x18c/0x230
> i2c_register_adapter+0x1dc/0x6b0
> __i2c_add_numbered_adapter+0x68/0xbc
> i2c_add_adapter+0xb0/0xe0
> lpi2c_imx_probe+0x3a4/0x670
> platform_probe+0x70/0xec
> really_probe+0xc4/0x470
> __driver_probe_device+0x11c/0x190
> driver_probe_device+0x48/0x110
> __device_attach_driver+0xc4/0x160
> bus_for_each_drv+0x80/0xe0
> __device_attach+0xb0/0x1f0
> device_initial_probe+0x1c/0x2c
> bus_probe_device+0xa4/0xb0
> deferred_probe_work_func+0xb8/0x110
> process_one_work+0x27c/0x6c4
> worker_thread+0x7c/0x450
> kthread+0x168/0x17c
> ret_from_fork+0x10/0x20
>
> -> #3 (prepare_lock){+.+.}-{3:3}:
> lock_acquire+0x68/0x84
> __mutex_lock+0xa8/0x4d0
> mutex_lock_nested+0x48/0x54
> clk_prepare_lock+0x50/0xb0
> clk_prepare+0x28/0x50
> fec_runtime_resume+0x3c/0xe0
> pm_generic_runtime_resume+0x34/0x50
> __genpd_runtime_resume+0x38/0x90
> genpd_runtime_resume+0xa0/0x304
> __rpm_callback+0x50/0x1b0
> rpm_callback+0x40/0x80
> rpm_resume+0x448/0x6e0
> __pm_runtime_resume+0x50/0xc0
> fec_enet_mdio_read+0x48/0x190
> __mdiobus_read+0x48/0xb0
> mdiobus_read_nested+0x48/0x70
> mv88e6xxx_smi_dual_direct_read+0x2c/0x50
> mv88e6xxx_read+0x64/0xf0
> mv88e6xxx_g1_read+0x28/0x34
> mv88e6xxx_g1_irq_thread_work+0x50/0x17c
> mv88e6xxx_irq_poll+0x28/0x50
> kthread_worker_fn+0x100/0x480
> kthread+0x168/0x17c
> ret_from_fork+0x10/0x20
>
> -> #2 (&bus->mdio_lock/2){+.+.}-{3:3}:
> lock_acquire+0x68/0x84
> __mutex_lock+0xa8/0x4d0
> mutex_lock_nested+0x48/0x54
> mdiobus_read_nested+0x38/0x70
> mv88e6xxx_smi_dual_direct_read+0x2c/0x50
> mv88e6xxx_read+0x64/0xf0
> mv88e6xxx_port_read+0x24/0x30
> mv88e6xxx_probe+0x250/0x7f0
> mdio_probe+0x3c/0x80
> really_probe+0xc4/0x470
> __driver_probe_device+0x11c/0x190
> driver_probe_device+0x48/0x110
> __device_attach_driver+0xc4/0x160
> bus_for_each_drv+0x80/0xe0
> __device_attach+0xb0/0x1f0
> device_initial_probe+0x1c/0x2c
> bus_probe_device+0xa4/0xb0
> deferred_probe_work_func+0xb8/0x110
> process_one_work+0x27c/0x6c4
> worker_thread+0x7c/0x450
> kthread+0x168/0x17c
> ret_from_fork+0x10/0x20
>
> -> #1 (&chip->reg_lock){+.+.}-{3:3}:
> lock_acquire+0x68/0x84
> __mutex_lock+0xa8/0x4d0
> mutex_lock_nested+0x48/0x54
> mv88e6xxx_g1_irq_bus_lock+0x24/0x30
> __setup_irq+0x6e0/0x8bc
> request_threaded_irq+0xf4/0x1b4
> mv88e6xxx_g2_irq_setup+0x1f0/0x2b4
> mv88e6xxx_probe+0x480/0x7f0
> mdio_probe+0x3c/0x80
> really_probe+0xc4/0x470
> __driver_probe_device+0x11c/0x190
> driver_probe_device+0x48/0x110
> __device_attach_driver+0xc4/0x160
> bus_for_each_drv+0x80/0xe0
> __device_attach+0xb0/0x1f0
> device_initial_probe+0x1c/0x2c
> bus_probe_device+0xa4/0xb0
> deferred_probe_work_func+0xb8/0x110
> process_one_work+0x27c/0x6c4
> worker_thread+0x7c/0x450
> kthread+0x168/0x17c
> ret_from_fork+0x10/0x20
>
> -> #0 (&desc->request_mutex){+.+.}-{3:3}:
> __lock_acquire+0x12a4/0x20a0
> lock_acquire.part.0+0xe0/0x230
> lock_acquire+0x68/0x84
> __mutex_lock+0xa8/0x4d0
> mutex_lock_nested+0x48/0x54
> __setup_irq+0xbc/0x8bc
> request_threaded_irq+0xf4/0x1b4
> devm_request_threaded_irq+0x88/0x104
> lpi2c_runtime_resume+0x70/0xe4
This looks like the NXP downstream kernel, requesting the IRQ from within
runtime resume.
> pm_generic_runtime_resume+0x34/0x50
> __genpd_runtime_resume+0x38/0x90
> genpd_runtime_resume+0xa0/0x304
> __rpm_callback+0x50/0x1b0
> rpm_callback+0x74/0x80
> rpm_resume+0x448/0x6e0
> __pm_runtime_resume+0x50/0xc0
> lpi2c_imx_xfer+0x60/0xa5c
> __i2c_transfer+0x174/0xa80
> i2c_transfer+0x68/0x130
> regmap_i2c_read+0x64/0xb0
> _regmap_raw_read+0x114/0x440
> regmap_raw_read+0x19c/0x28c
> regmap_bulk_read+0x1b8/0x244
> at24_read+0x14c/0x2c4
> nvmem_reg_read+0x2c/0x54
> bin_attr_nvmem_read+0x8c/0xbc
> sysfs_kf_bin_read+0x74/0x94
> kernfs_fop_read_iter+0xb0/0x1d0
> new_sync_read+0xf0/0x184
> vfs_read+0x154/0x1f0
> ksys_read+0x70/0x100
> __arm64_sys_read+0x24/0x30
> invoke_syscall+0x50/0x120
> el0_svc_common.constprop.0+0x68/0x124
> do_el0_svc+0x30/0x9c
> el0_svc+0x54/0x110
> el0t_64_sync_handler+0xa4/0x130
> el0t_64_sync+0x1a0/0x1a4
>
> other info that might help us debug this:
>
> Chain exists of:
>   &desc->request_mutex --> rtc_pcf85063:560:(&config->regmap)->lock --> i2c_register_adapter
>
>  Possible unsafe locking scenario:
>
>        CPU0                    CPU1
>        ----                    ----
>   lock(i2c_register_adapter);
>                                lock(rtc_pcf85063:560:(&config->regmap)->lock);
>                                lock(i2c_register_adapter);
>   lock(&desc->request_mutex);
>
>  *** DEADLOCK ***
>
> 4 locks held by xxxxxxx/2238:
> #0: ffff000015a64488 (&of->mutex){+.+.}-{3:3}, at:
> kernfs_fop_read_iter+0x74/0x1d0
> #1: ffff0000119e2400 (kn->active#58){.+.+}-{0:0}, at:
> kernfs_fop_read_iter+0x7c/0x1d0
> #2: ffff0000119a26e8 (&at24->lock){+.+.}-{3:3}, at:
> at24_read+0x8c/0x2c4
> #3: ffff000010ef1100 (i2c_register_adapter){+.+.}-{3:3}, at:
> i2c_adapter_lock_bus+0x2c/0x3c
>
> stack backtrace:
> CPU: 1 PID: 2238 Comm: xxxxxxx Tainted: G O
> 5.15.71+git8e43aee #1
> Hardware name: Siemens PXC5.E24 (DT)
> Call trace:
> dump_backtrace+0x0/0x1d4
> show_stack+0x20/0x2c
> dump_stack_lvl+0x8c/0xb8
> dump_stack+0x18/0x34
> print_circular_bug+0x1f8/0x200
> check_noncircular+0x130/0x144
> __lock_acquire+0x12a4/0x20a0
> lock_acquire.part.0+0xe0/0x230
> lock_acquire+0x68/0x84
> __mutex_lock+0xa8/0x4d0
> mutex_lock_nested+0x48/0x54
> __setup_irq+0xbc/0x8bc
> request_threaded_irq+0xf4/0x1b4
> devm_request_threaded_irq+0x88/0x104
> lpi2c_runtime_resume+0x70/0xe4
It seems your (possible) deadlock is triggered by the devm_request_irq() call
during runtime resume.
As mainline just enables the clocks there, there is nothing we can do about
that here. Your patch is still sensible; could you send a new version with the
review comments addressed?
Thanks.
Best regards,
Alexander
> pm_generic_runtime_resume+0x34/0x50
> __genpd_runtime_resume+0x38/0x90
> genpd_runtime_resume+0xa0/0x304
> __rpm_callback+0x50/0x1b0
> rpm_callback+0x74/0x80
> rpm_resume+0x448/0x6e0
> __pm_runtime_resume+0x50/0xc0
> lpi2c_imx_xfer+0x60/0xa5c
> __i2c_transfer+0x174/0xa80
> i2c_transfer+0x68/0x130
> regmap_i2c_read+0x64/0xb0
> _regmap_raw_read+0x114/0x440
> regmap_raw_read+0x19c/0x28c
> regmap_bulk_read+0x1b8/0x244
> at24_read+0x14c/0x2c4
> nvmem_reg_read+0x2c/0x54
> bin_attr_nvmem_read+0x8c/0xbc
> sysfs_kf_bin_read+0x74/0x94
> kernfs_fop_read_iter+0xb0/0x1d0
> new_sync_read+0xf0/0x184
> vfs_read+0x154/0x1f0
> ksys_read+0x70/0x100
> __arm64_sys_read+0x24/0x30
> invoke_syscall+0x50/0x120
> el0_svc_common.constprop.0+0x68/0x124
> do_el0_svc+0x30/0x9c
> el0_svc+0x54/0x110
> el0t_64_sync_handler+0xa4/0x130
> el0t_64_sync+0x1a0/0x1a4
>
> So this doesn't look like the end of the story yet, unfortunately.
>
> --
> Alexander Sverdlin
> Siemens AG
> www.siemens.com
--
TQ-Systems GmbH | Mühlstraße 2, Gut Delling | 82229 Seefeld, Germany
Amtsgericht München, HRB 105018
Geschäftsführer: Detlef Schneider, Rüdiger Stahl, Stefan Schneider
http://www.tq-group.com/