Message-ID: <iexpu4kyyq4hwqwtgerihxumdxn5wx2mytqleagmhiogp7g2lp@cismgnobnik6>
Date: Fri, 3 Oct 2025 14:44:22 +0800
From: "Chia-Lin Kao (AceLan)" <acelan.kao@...onical.com>
To: Heikki Krogerus <heikki.krogerus@...ux.intel.com>,
Greg Kroah-Hartman <gregkh@...uxfoundation.org>, Dmitry Baryshkov <dmitry.baryshkov@....qualcomm.com>,
Andrei Kuchynski <akuchynski@...omium.org>, Łukasz Bartosik <ukaszb@...omium.org>,
Venkat Jayaraman <venkat.jayaraman@...el.com>, linux-usb@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] usb: typec: ucsi: Fix workqueue destruction race during
connector cleanup
On Thu, Oct 02, 2025 at 09:30:26AM +0800, Chia-Lin Kao (AceLan) wrote:
> During UCSI initialization and operation, there is a race condition where
> delayed work items can be scheduled but attempt to queue work after the
> workqueue has been destroyed. This occurs in multiple code paths.
>
> The race occurs when:
> 1. ucsi_partner_task() or ucsi_poll_worker() schedule delayed work
> 2. Connector cleanup paths call destroy_workqueue()
> 3. Previously scheduled delayed work timers fire after destruction
> 4. This triggers warnings and crashes in __queue_work()
>
> The issue is timing-sensitive and typically manifests when:
> - Port registration fails due to PPM timing issues
> - System shutdown/cleanup occurs with pending delayed work
> - Module removal races with active delayed work
>
> Fix this by:
> 1. Creating ucsi_destroy_connector_wq() helper function that safely
> cancels all pending delayed work before destroying workqueues
> 2. Applying the safe cleanup to all three workqueue destruction paths:
> - ucsi_register_port() error path
> - ucsi_init() error path
> - ucsi_unregister() cleanup path
>
> This prevents both the initial queueing on destroyed workqueues and
> retry attempts from running workers, eliminating the timer races.
>
> Fixes: b9aa02ca39a4 ("usb: typec: ucsi: Add polling mechanism for partner tasks like alt mode checking")
> Cc: stable@...r.kernel.org
> Signed-off-by: Chia-Lin Kao (AceLan) <acelan.kao@...onical.com>
> ---
> drivers/usb/typec/ucsi/ucsi.c | 50 ++++++++++++++++++++++-------------
> 1 file changed, 31 insertions(+), 19 deletions(-)
>
> diff --git a/drivers/usb/typec/ucsi/ucsi.c b/drivers/usb/typec/ucsi/ucsi.c
> index 5ba3a6c81964..1f71c9983163 100644
> --- a/drivers/usb/typec/ucsi/ucsi.c
> +++ b/drivers/usb/typec/ucsi/ucsi.c
> @@ -283,6 +283,33 @@ static void ucsi_poll_worker(struct work_struct *work)
> mutex_unlock(&con->lock);
> }
>
> +/**
> + * ucsi_destroy_connector_wq - Safely destroy connector workqueue
> + * @con: UCSI connector
> + *
> + * Cancel all pending delayed work and destroy the workqueue to prevent
> + * timer races where delayed work tries to queue on destroyed workqueue.
> + */
> +static void ucsi_destroy_connector_wq(struct ucsi_connector *con)
> +{
> + struct ucsi_work *uwork, *tmp;
> +
> + if (!con->wq)
> + return;
> +
> + /* Cancel any pending delayed work before destroying workqueue */
> + mutex_lock(&con->lock);
> + list_for_each_entry_safe(uwork, tmp, &con->partner_tasks, node) {
> + cancel_delayed_work_sync(&uwork->work);
> + list_del(&uwork->node);
> + kfree(uwork);
> + }
> + mutex_unlock(&con->lock);
This introduces a deadlock here:
- ucsi_destroy_connector_wq() calls cancel_delayed_work_sync() while holding con->lock
- ucsi_poll_worker() (the work being cancelled) also tries to acquire con->lock, so the work can never finish and the cancel waits forever
I'll submit a v2 to fix this issue.
[ 246.874552] INFO: task kworker/17:0:125 blocked for more than 122 seconds.
[ 246.874565] Not tainted 6.14.0-2014-oem #14
[ 246.874569] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[ 246.874571] task:kworker/17:0 state:D stack:0 pid:125 tgid:125 ppid:2 task_flags:0x4208060 flags:0x00004000
[ 246.874577] Workqueue: events_long ucsi_init_work [typec_ucsi]
[ 246.874592] Call Trace:
[ 246.874594] <TASK>
[ 246.874598] __schedule+0x2cf/0x640
[ 246.874605] schedule+0x29/0xd0
[ 246.874608] schedule_timeout+0xfb/0x110
[ 246.874611] __wait_for_common+0x91/0x190
[ 246.874614] ? __pfx_schedule_timeout+0x10/0x10
[ 246.874617] wait_for_completion+0x24/0x40
[ 246.874620] __flush_work+0x86/0xe0
[ 246.874624] ? __pfx_wq_barrier_func+0x10/0x10
[ 246.874629] cancel_delayed_work_sync+0x76/0x80
[ 246.874633] ucsi_destroy_connector_wq.part.0+0x61/0xd0 [typec_ucsi]
[ 246.874638] ucsi_init+0x27a/0x330 [typec_ucsi]
[ 246.874643] ucsi_init_work+0x18/0x90 [typec_ucsi]
[ 246.874647] process_one_work+0x178/0x3d0
[ 246.874650] worker_thread+0x2de/0x410
[ 246.874653] ? __pfx_worker_thread+0x10/0x10
[ 246.874657] kthread+0xfb/0x230
[ 246.874659] ? __pfx_kthread+0x10/0x10
[ 246.874662] ret_from_fork+0x44/0x70
[ 246.874665] ? __pfx_kthread+0x10/0x10
[ 246.874667] ret_from_fork_asm+0x1a/0x30
[ 246.874672] </TASK>
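The usual way out of this lock ordering problem is to detach the pending items
from con->partner_tasks while holding con->lock, then do the blocking cancel
only after dropping the lock, so a worker that needs con->lock can still run
to completion. A minimal userspace pthread sketch of that pattern (illustrative
only, not the kernel fix itself; all names here are made up):

```c
#include <pthread.h>
#include <stdlib.h>

struct task {
	pthread_t thread;
	struct task *next;
};

static pthread_mutex_t con_lock = PTHREAD_MUTEX_INITIALIZER;
static struct task *pending;	/* stand-in for con->partner_tasks */

/* Like ucsi_poll_worker(): each work item needs con->lock to run. */
static void *worker(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&con_lock);
	pthread_mutex_unlock(&con_lock);
	return NULL;
}

static void spawn_workers(int n)
{
	for (int i = 0; i < n; i++) {
		struct task *t = malloc(sizeof(*t));

		t->next = pending;
		pending = t;
		pthread_create(&t->thread, NULL, worker, NULL);
	}
}

/* Deadlock-free teardown: unlink under the lock, wait outside it. */
static int destroy_tasks(void)
{
	struct task *list, *t;
	int reaped = 0;

	/* Step 1: detach the whole list under the lock (non-blocking). */
	pthread_mutex_lock(&con_lock);
	list = pending;
	pending = NULL;
	pthread_mutex_unlock(&con_lock);

	/*
	 * Step 2: wait for each worker only after dropping the lock, so a
	 * worker blocked on con_lock can make progress and exit; joining
	 * here cannot deadlock against it.
	 */
	while (list) {
		t = list;
		list = list->next;
		pthread_join(t->thread, NULL);
		free(t);
		reaped++;
	}
	return reaped;
}
```

In the driver the same shape would mean list_del()-ing the ucsi_work entries
onto a private list under con->lock, then calling cancel_delayed_work_sync()
on each entry after mutex_unlock(), which restores the lock ordering the
worker expects.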
> +
> + destroy_workqueue(con->wq);
> + con->wq = NULL;
> +}
> +
> static int ucsi_partner_task(struct ucsi_connector *con,
> int (*cb)(struct ucsi_connector *),
> int retries, unsigned long delay)
> @@ -1798,10 +1825,8 @@ static int ucsi_register_port(struct ucsi *ucsi, struct ucsi_connector *con)
> out_unlock:
> mutex_unlock(&con->lock);
>
> - if (ret && con->wq) {
> - destroy_workqueue(con->wq);
> - con->wq = NULL;
> - }
> + if (ret)
> + ucsi_destroy_connector_wq(con);
>
> return ret;
> }
> @@ -1921,8 +1946,7 @@ static int ucsi_init(struct ucsi *ucsi)
>
> err_unregister:
> for (con = connector; con->port; con++) {
> - if (con->wq)
> - destroy_workqueue(con->wq);
> + ucsi_destroy_connector_wq(con);
> ucsi_unregister_partner(con);
> ucsi_unregister_altmodes(con, UCSI_RECIPIENT_CON);
> ucsi_unregister_port_psy(con);
> @@ -2144,19 +2168,7 @@ void ucsi_unregister(struct ucsi *ucsi)
> for (i = 0; i < ucsi->cap.num_connectors; i++) {
> cancel_work_sync(&ucsi->connector[i].work);
>
> - if (ucsi->connector[i].wq) {
> - struct ucsi_work *uwork;
> -
> - mutex_lock(&ucsi->connector[i].lock);
> - /*
> - * queue delayed items immediately so they can execute
> - * and free themselves before the wq is destroyed
> - */
> - list_for_each_entry(uwork, &ucsi->connector[i].partner_tasks, node)
> - mod_delayed_work(ucsi->connector[i].wq, &uwork->work, 0);
> - mutex_unlock(&ucsi->connector[i].lock);
> - destroy_workqueue(ucsi->connector[i].wq);
> - }
> + ucsi_destroy_connector_wq(&ucsi->connector[i]);
>
> ucsi_unregister_partner(&ucsi->connector[i]);
> ucsi_unregister_altmodes(&ucsi->connector[i],
> --
> 2.43.0
>