Message-ID: <20251104102048.79374-1-marco.crivellari@suse.com>
Date: Tue, 4 Nov 2025 11:20:48 +0100
From: Marco Crivellari <marco.crivellari@...e.com>
To: linux-kernel@...r.kernel.org,
linux-media@...r.kernel.org,
kernel@...labora.com
Cc: Tejun Heo <tj@...nel.org>,
Lai Jiangshan <jiangshanlai@...il.com>,
Frederic Weisbecker <frederic@...nel.org>,
Sebastian Andrzej Siewior <bigeasy@...utronix.de>,
Marco Crivellari <marco.crivellari@...e.com>,
Michal Hocko <mhocko@...e.com>,
Shreeya Patel <shreeya.patel@...labora.com>,
Mauro Carvalho Chehab <mchehab@...nel.org>
Subject: [PATCH] media: synopsys: hdmirx: replace use of system_unbound_wq with system_dfl_wq

Currently, when a user enqueues a work item with schedule_delayed_work(), the
workqueue used is "system_wq" (a per-cpu wq), while queue_delayed_work() uses
WORK_CPU_UNBOUND (used when no CPU is specified). The same applies to
schedule_work(), which uses system_wq, and to queue_work(), which again uses
WORK_CPU_UNBOUND.

This lack of consistency cannot be addressed without refactoring the API.
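
For reference, a simplified sketch of how those helpers currently relate
(paraphrased from include/linux/workqueue.h, not the verbatim kernel
definitions; w, dw, wq and delay are placeholder variables):

	/* schedule_*() helpers hard-code the per-cpu system_wq: */
	schedule_work(&w);                  /* ~ queue_work(system_wq, &w) */
	schedule_delayed_work(&dw, delay);  /* ~ queue_delayed_work(system_wq, &dw, delay) */

	/* queue_*() helpers only pin the CPU choice, not the workqueue: */
	queue_work(wq, &w);                 /* ~ queue_work_on(WORK_CPU_UNBOUND, wq, &w) */
	queue_delayed_work(wq, &dw, delay); /* ~ queue_delayed_work_on(WORK_CPU_UNBOUND, wq, &dw, delay) */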

This patch continues the effort to refactor the workqueue APIs, which began
with the change that introduced the new workqueues:

commit 128ea9f6ccfb ("workqueue: Add system_percpu_wq and system_dfl_wq")

system_dfl_wq should be the default workqueue, so as not to enforce locality
constraints on random work whenever that is not required.

The old system_unbound_wq will be kept for a few release cycles.
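
To illustrate the intended conversion pattern (a hypothetical call site, not
code taken verbatim from this driver):

	/* before: explicitly targeting the old unbound workqueue */
	queue_delayed_work(system_unbound_wq, &priv->dwork, msecs_to_jiffies(100));

	/* after: the new locality-agnostic default workqueue */
	queue_delayed_work(system_dfl_wq, &priv->dwork, msecs_to_jiffies(100));
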
Suggested-by: Tejun Heo <tj@...nel.org>
Signed-off-by: Marco Crivellari <marco.crivellari@...e.com>
---
drivers/media/platform/synopsys/hdmirx/snps_hdmirx.c | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/drivers/media/platform/synopsys/hdmirx/snps_hdmirx.c b/drivers/media/platform/synopsys/hdmirx/snps_hdmirx.c
index b7d278b3889f..da6a725e4fbe 100644
--- a/drivers/media/platform/synopsys/hdmirx/snps_hdmirx.c
+++ b/drivers/media/platform/synopsys/hdmirx/snps_hdmirx.c
@@ -1735,7 +1735,7 @@ static void process_signal_change(struct snps_hdmirx_dev *hdmirx_dev)
FIFO_UNDERFLOW_INT_EN |
HDMIRX_AXI_ERROR_INT_EN, 0);
hdmirx_reset_dma(hdmirx_dev);
- queue_delayed_work(system_unbound_wq,
+ queue_delayed_work(system_dfl_wq,
&hdmirx_dev->delayed_work_res_change,
msecs_to_jiffies(50));
}
@@ -2190,7 +2190,7 @@ static void hdmirx_delayed_work_res_change(struct work_struct *work)
if (hdmirx_wait_signal_lock(hdmirx_dev)) {
hdmirx_plugout(hdmirx_dev);
- queue_delayed_work(system_unbound_wq,
+ queue_delayed_work(system_dfl_wq,
&hdmirx_dev->delayed_work_hotplug,
msecs_to_jiffies(200));
} else {
@@ -2209,7 +2209,7 @@ static irqreturn_t hdmirx_5v_det_irq_handler(int irq, void *dev_id)
val = gpiod_get_value(hdmirx_dev->detect_5v_gpio);
v4l2_dbg(3, debug, &hdmirx_dev->v4l2_dev, "%s: 5v:%d\n", __func__, val);
- queue_delayed_work(system_unbound_wq,
+ queue_delayed_work(system_dfl_wq,
&hdmirx_dev->delayed_work_hotplug,
msecs_to_jiffies(10));
@@ -2441,7 +2441,7 @@ static void hdmirx_enable_irq(struct device *dev)
enable_irq(hdmirx_dev->dma_irq);
enable_irq(hdmirx_dev->det_irq);
- queue_delayed_work(system_unbound_wq,
+ queue_delayed_work(system_dfl_wq,
&hdmirx_dev->delayed_work_hotplug,
msecs_to_jiffies(110));
}
--
2.51.1