Message-ID: <20251104105446.110884-1-marco.crivellari@suse.com>
Date: Tue, 4 Nov 2025 11:54:46 +0100
From: Marco Crivellari <marco.crivellari@...e.com>
To: linux-kernel@...r.kernel.org,
linux-serial@...r.kernel.org
Cc: Tejun Heo <tj@...nel.org>,
Lai Jiangshan <jiangshanlai@...il.com>,
Frederic Weisbecker <frederic@...nel.org>,
Sebastian Andrzej Siewior <bigeasy@...utronix.de>,
Marco Crivellari <marco.crivellari@...e.com>,
Michal Hocko <mhocko@...e.com>,
Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
Jiri Slaby <jirislaby@...nel.org>
Subject: [PATCH] tty: replace use of system_unbound_wq with system_dfl_wq
Currently, when a user enqueues a work item with schedule_delayed_work(),
the workqueue used is "system_wq" (a per-cpu wq), while queue_delayed_work()
uses WORK_CPU_UNBOUND (used when no CPU is specified). The same applies to
schedule_work(), which uses system_wq, and queue_work(), which again makes
use of WORK_CPU_UNBOUND.
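For illustration only, a minimal sketch (not part of this patch) of the
enqueue forms described above; example_fn and example_work are made-up
names:

    #include <linux/workqueue.h>

    static void example_fn(struct work_struct *work)
    {
            /* work handler body */
    }
    static DECLARE_WORK(example_work, example_fn);

    static void example_enqueue(void)
    {
            /* The enqueue forms side by side (only one would be used): */

            /* Shorthand form: always targets system_wq (per-cpu). */
            schedule_work(&example_work);

            /* Explicit form: no CPU given, i.e. WORK_CPU_UNBOUND. */
            queue_work(system_wq, &example_work);

            /* Unbound work without locality constraints: */
            queue_work(system_dfl_wq, &example_work);
    }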
This lack of consistency cannot be addressed without refactoring the API.
This patch continues the effort to refactor the workqueue APIs, which began
with the changes introducing new workqueues and a new alloc_workqueue() flag:
commit 128ea9f6ccfb ("workqueue: Add system_percpu_wq and system_dfl_wq")
commit 930c2ea566af ("workqueue: Add new WQ_PERCPU flag")
system_dfl_wq should become the default workqueue so that locality
constraints are not enforced on random work items when they are not required.
The old system_unbound_wq will be kept for a few release cycles.
Suggested-by: Tejun Heo <tj@...nel.org>
Signed-off-by: Marco Crivellari <marco.crivellari@...e.com>
---
drivers/tty/serial/8250/8250_dw.c |  4 ++--
drivers/tty/tty_buffer.c          |  8 ++++----
2 files changed, 6 insertions(+), 6 deletions(-)
diff --git a/drivers/tty/serial/8250/8250_dw.c b/drivers/tty/serial/8250/8250_dw.c
index 710ae4d40aec..27af83f0ff46 100644
--- a/drivers/tty/serial/8250/8250_dw.c
+++ b/drivers/tty/serial/8250/8250_dw.c
@@ -361,7 +361,7 @@ static int dw8250_clk_notifier_cb(struct notifier_block *nb,
* deferred event handling complication.
*/
if (event == POST_RATE_CHANGE) {
- queue_work(system_unbound_wq, &d->clk_work);
+ queue_work(system_dfl_wq, &d->clk_work);
return NOTIFY_OK;
}
@@ -680,7 +680,7 @@ static int dw8250_probe(struct platform_device *pdev)
err = clk_notifier_register(data->clk, &data->clk_notifier);
if (err)
return dev_err_probe(dev, err, "Failed to set the clock notifier\n");
- queue_work(system_unbound_wq, &data->clk_work);
+ queue_work(system_dfl_wq, &data->clk_work);
}
platform_set_drvdata(pdev, data);
diff --git a/drivers/tty/tty_buffer.c b/drivers/tty/tty_buffer.c
index 67271fc0b223..1a5673acd9b1 100644
--- a/drivers/tty/tty_buffer.c
+++ b/drivers/tty/tty_buffer.c
@@ -76,7 +76,7 @@ void tty_buffer_unlock_exclusive(struct tty_port *port)
mutex_unlock(&buf->lock);
if (restart)
- queue_work(system_unbound_wq, &buf->work);
+ queue_work(system_dfl_wq, &buf->work);
}
EXPORT_SYMBOL_GPL(tty_buffer_unlock_exclusive);
@@ -530,7 +530,7 @@ void tty_flip_buffer_push(struct tty_port *port)
struct tty_bufhead *buf = &port->buf;
tty_flip_buffer_commit(buf->tail);
- queue_work(system_unbound_wq, &buf->work);
+ queue_work(system_dfl_wq, &buf->work);
}
EXPORT_SYMBOL(tty_flip_buffer_push);
@@ -560,7 +560,7 @@ int tty_insert_flip_string_and_push_buffer(struct tty_port *port,
tty_flip_buffer_commit(buf->tail);
spin_unlock_irqrestore(&port->lock, flags);
- queue_work(system_unbound_wq, &buf->work);
+ queue_work(system_dfl_wq, &buf->work);
return size;
}
@@ -613,7 +613,7 @@ void tty_buffer_set_lock_subclass(struct tty_port *port)
bool tty_buffer_restart_work(struct tty_port *port)
{
- return queue_work(system_unbound_wq, &port->buf.work);
+ return queue_work(system_dfl_wq, &port->buf.work);
}
bool tty_buffer_cancel_work(struct tty_port *port)
--
2.51.1