Message-ID: <Y7xRjDAgI3UO8Xuv@google.com>
Date: Mon, 9 Jan 2023 17:40:28 +0000
From: Matthias Kaehlcke <mka@...omium.org>
To: Stefan Wahren <stefan.wahren@...e.com>
Cc: Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
Johan Hovold <johan@...nel.org>, linux-usb@...r.kernel.org,
Alexander Stein <alexander.stein@...tq-group.com>,
Icenowy Zheng <uwu@...nowy.me>,
Douglas Anderson <dianders@...omium.org>,
stable@...r.kernel.org, linux-kernel@...r.kernel.org,
Ravi Chandra Sadineni <ravisadineni@...omium.org>
Subject: Re: [PATCH 2/2] usb: misc: onboard_hub: Move 'attach' work to the
driver
On Sun, Jan 08, 2023 at 11:47:13AM +0100, Stefan Wahren wrote:
> On 07.01.23 at 18:23, Stefan Wahren wrote:
> > Hi Matthias,
> >
> > On 06.01.23 at 00:03, Matthias Kaehlcke wrote:
> > > Currently each onboard_hub platform device owns an 'attach' work,
> > > which is scheduled when the device probes. With this deadlocks
> > > have been reported on a Raspberry Pi 3 B+ [1], which has nested
> > > onboard hubs.
> > >
> > > The flow of the deadlock is something like this (with the onboard_hub
> > > driver built as a module) [2]:
> > >
> > > - USB root hub is instantiated
> > > - core hub driver calls onboard_hub_create_pdevs(), which creates the
> > > 'raw' platform device for the 1st level hub
> > > - 1st level hub is probed by the core hub driver
> > > - core hub driver calls onboard_hub_create_pdevs(), which creates
> > > the 'raw' platform device for the 2nd level hub
> > >
> > > - onboard_hub platform driver is registered
> > > - platform device for 1st level hub is probed
> > > - schedules 'attach' work
> > > - platform device for 2nd level hub is probed
> > > - schedules 'attach' work
> > >
> > > - onboard_hub USB driver is registered
> > > - device (and parent) lock of hub is held while the device is
> > > re-probed with the onboard_hub driver
> > >
> > > - 'attach' work (running in another thread) calls driver_attach(), which
> > > blocks on one of the hub device locks
> > >
> > > - onboard_hub_destroy_pdevs() is called by the core hub driver when one
> > > of the hubs is detached
> > > - destroying the pdevs invokes onboard_hub_remove(), which waits for the
> > > 'attach' work to complete
> > > - waits forever, since the 'attach' work can't acquire the device
> > > lock
> > >
> > > Use a single work struct for the driver instead of having a work struct
> > > per onboard hub platform driver instance. With that it isn't necessary
> > > to cancel the work in onboard_hub_remove(), which fixes the deadlock.
> > > The work is only cancelled when the driver is unloaded.
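
(For reference, the change described above boils down to roughly the
following; this is only an illustrative sketch, the names are those of
the v6.1 onboard_usb_hub driver and the details differ from the actual
patch:)

/* one 'attach' work owned by the driver, shared by all instances */
static void onboard_hub_attach_usb_driver(struct work_struct *work)
{
        int err;

        /* re-bind the onboard_hub USB device driver where needed */
        err = driver_attach(&onboard_hub_usbdev_driver.drvwrap.driver);
        if (err)
                pr_err("Failed to attach USB driver: %d\n", err);
}

static DECLARE_WORK(attach_usb_driver_work, onboard_hub_attach_usb_driver);

static void __exit onboard_hub_driver_exit(void)
{
        /* driver unregistration elided */

        /* the work is only cancelled when the driver is unloaded */
        cancel_work_sync(&attach_usb_driver_work);
}
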
> >
> > I applied both patches of this series on top of v6.1
> > (multi_v7_defconfig), but USB is still broken on the Raspberry Pi 3 B+.
Thanks for testing.
> Here is the hung task output:
>
> [ 243.682193] INFO: task kworker/1:0:18 blocked for more than 122 seconds.
> [ 243.682222] Not tainted 6.1.0-00002-gaa61d98d165b #2
> [ 243.682233] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> [ 243.682242] task:kworker/1:0 state:D stack:0 pid:18 ppid:2 flags:0x00000000
> [ 243.682267] Workqueue: events onboard_hub_attach_usb_driver [onboard_usb_hub]
> [ 243.682317] __schedule from schedule+0x4c/0xe0
> [ 243.682345] schedule from schedule_preempt_disabled+0xc/0x10
> [ 243.682367] schedule_preempt_disabled from __mutex_lock.constprop.0+0x244/0x804
> [ 243.682394] __mutex_lock.constprop.0 from __driver_attach+0x7c/0x188
> [ 243.682421] __driver_attach from bus_for_each_dev+0x70/0xb0
> [ 243.682449] bus_for_each_dev from onboard_hub_attach_usb_driver+0xc/0x28 [onboard_usb_hub]
> [ 243.682494] onboard_hub_attach_usb_driver [onboard_usb_hub] from process_one_work+0x1ec/0x4d0
> [ 243.682534] process_one_work from worker_thread+0x50/0x540
> [ 243.682559] worker_thread from kthread+0xd0/0xec
> [ 243.682582] kthread from ret_from_fork+0x14/0x2c
> [ 243.682600] Exception stack(0xf086dfb0 to 0xf086dff8)
> [ 243.682615] dfa0: 00000000 00000000 00000000 00000000
> [ 243.682631] dfc0: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
> [ 243.682646] dfe0: 00000000 00000000 00000000 00000000 00000013 00000000
> [ 243.682692] INFO: task kworker/1:2:82 blocked for more than 122 seconds.
> [ 243.682703] Not tainted 6.1.0-00002-gaa61d98d165b #2
> [ 243.682713] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> [ 243.682721] task:kworker/1:2 state:D stack:0 pid:82 ppid:2 flags:0x00000000
> [ 243.682741] Workqueue: events_power_efficient hub_init_func2
> [ 243.682764] __schedule from schedule+0x4c/0xe0
> [ 243.682785] schedule from schedule_preempt_disabled+0xc/0x10
> [ 243.682808] schedule_preempt_disabled from __mutex_lock.constprop.0+0x244/0x804
> [ 243.682833] __mutex_lock.constprop.0 from hub_activate+0x584/0x8b0
> [ 243.682859] hub_activate from process_one_work+0x1ec/0x4d0
> [ 243.682883] process_one_work from worker_thread+0x50/0x540
> [ 243.682907] worker_thread from kthread+0xd0/0xec
> [ 243.682927] kthread from ret_from_fork+0x14/0x2c
> [ 243.682944] Exception stack(0xf1509fb0 to 0xf1509ff8)
> [ 243.682958] 9fa0: 00000000 00000000 00000000 00000000
> [ 243.682974] 9fc0: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
> [ 243.682988] 9fe0: 00000000 00000000 00000000 00000000 00000013 00000000
> [ 243.683023] INFO: task kworker/1:4:257 blocked for more than 122 seconds.
> [ 243.683034] Not tainted 6.1.0-00002-gaa61d98d165b #2
> [ 243.683043] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> [ 243.683051] task:kworker/1:4 state:D stack:0 pid:257 ppid:2 flags:0x00000000
> [ 243.683071] Workqueue: events_power_efficient hub_init_func2
> [ 243.683092] __schedule from schedule+0x4c/0xe0
> [ 243.683113] schedule from schedule_preempt_disabled+0xc/0x10
> [ 243.683135] schedule_preempt_disabled from __mutex_lock.constprop.0+0x244/0x804
> [ 243.683160] __mutex_lock.constprop.0 from hub_activate+0x584/0x8b0
> [ 243.683184] hub_activate from process_one_work+0x1ec/0x4d0
> [ 243.683209] process_one_work from worker_thread+0x50/0x540
> [ 243.683233] worker_thread from kthread+0xd0/0xec
> [ 243.683253] kthread from ret_from_fork+0x14/0x2c
> [ 243.683270] Exception stack(0xf09d9fb0 to 0xf09d9ff8)
> [ 243.683283] 9fa0: 00000000 00000000 00000000 00000000
> [ 243.683299] 9fc0: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
> [ 243.683313] 9fe0: 00000000 00000000 00000000 00000000 00000013 00000000
Does commenting out the following loop help?

        while (work_busy(&attach_usb_driver_work) & WORK_BUSY_RUNNING)
                msleep(10);

I'm wondering if the loop is actually needed. The idea behind it was:
the currently running work might not take into account the USB devices
of the hub that is currently being probed, which should be probed
shortly after the hub is powered on.

The 'attach' work is only needed for USB devices that were previously
detached through device_release_driver() in onboard_hub_remove(). These
USB device objects only persist in the kernel if the hub is not powered
off by onboard_hub_remove() (powering the hub off should be the usual
case).
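
(For context, the relevant part of onboard_hub_remove() looks roughly
like the following; simplified sketch with locking and refcounting
omitted, names as in the v6.1 driver:)

static int onboard_hub_remove(struct platform_device *pdev)
{
        struct onboard_hub *hub = dev_get_drvdata(&pdev->dev);
        struct usbdev_node *node;

        /* detach the onboard_hub USB driver from the hub's USB devices */
        while (!list_empty(&hub->udev_list)) {
                node = list_first_entry(&hub->udev_list, struct usbdev_node, list);
                /* unbinding the driver removes the node from the list */
                device_release_driver(&node->udev->dev);
        }

        /* in the usual case the hub is powered off here */
        return onboard_hub_power_off(hub);
}
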
If onboard_hub_probe() is invoked and the USB device objects persisted,
then an already running 'attach' work should take them into account. If
they didn't persist, the running work might miss them, but that wouldn't
be a problem, since the newly created USB devices don't need to be
explicitly attached (they weren't detached in the first place).
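
In other words onboard_hub_probe() would then just schedule the shared
work unconditionally, roughly (untested sketch):

static int onboard_hub_probe(struct platform_device *pdev)
{
        /* regulator/GPIO setup and hub power-on elided */

        /*
         * No need to wait for a running 'attach' work: persisting USB
         * device objects are handled by that work, newly created ones
         * don't need an explicit attach.
         */
        schedule_work(&attach_usb_driver_work);

        return 0;
}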