Message-ID: <CAAofZF7uLAd-tDnQq9joZm5vZunVr64QfF+ZFH-OoYfqG2OrCg@mail.gmail.com>
Date: Tue, 10 Feb 2026 15:36:08 +0100
From: Marco Crivellari <marco.crivellari@...e.com>
To: Dmitry Osipenko <dmitry.osipenko@...labora.com>
Cc: linux-kernel@...r.kernel.org, linux-media@...r.kernel.org,
kernel@...labora.com, Tejun Heo <tj@...nel.org>,
Lai Jiangshan <jiangshanlai@...il.com>, Frederic Weisbecker <frederic@...nel.org>,
Sebastian Andrzej Siewior <bigeasy@...utronix.de>, Michal Hocko <mhocko@...e.com>,
Shreeya Patel <shreeya.patel@...labora.com>, Mauro Carvalho Chehab <mchehab@...nel.org>
Subject: Re: [PATCH] media: synopsys: hdmirx: replace use of system_unbound_wq
with system_dfl_wq
On Mon, Feb 9, 2026 at 10:18 PM Dmitry Osipenko
<dmitry.osipenko@...labora.com> wrote:
> Alright, looking further at the code, apparently there is nothing
> special regarding the two unbound work queues. I see some parts of the
> kernel have already moved to system_dfl. Would be great if all of this
> was clarified in the commit message.
>
> Acked-by: Dmitry Osipenko <dmitry.osipenko@...labora.com>
Hi,
If you want, I can send a new version with an improved commit log:
---
This patch continues the effort to refactor the workqueue APIs, which began
with the changes introducing new workqueues and a new alloc_workqueue flag:

commit 128ea9f6ccfb ("workqueue: Add system_percpu_wq and system_dfl_wq")
commit 930c2ea566af ("workqueue: Add new WQ_PERCPU flag")

The point of the refactoring is to eventually change the default behavior of
workqueues to be unbound, so that their workload placement is optimized by
the scheduler.

Before that can happen, workqueue users must be converted to the
better-named new workqueues, with no intended behavior change:

system_wq -> system_percpu_wq
system_unbound_wq -> system_dfl_wq

This way the old, obsolete workqueues (system_wq, system_unbound_wq) can be
removed in the future.
Link: https://lore.kernel.org/all/20250221112003.1dSuoGyc@linutronix.de/
---
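For context, the conversion itself is purely mechanical. A minimal sketch of
what such a call site looks like before and after (the function and work-item
names here are hypothetical, not quoted from the hdmirx driver):

```c
#include <linux/workqueue.h>

/* Hypothetical work item; the real driver declares its own. */
static struct work_struct example_work;

static void example_trigger(void)
{
	/* Before: queue_work(system_unbound_wq, &example_work); */

	/* After: system_dfl_wq is the same unbound workqueue under its
	 * new, clearer name, so behavior is unchanged. */
	queue_work(system_dfl_wq, &example_work);
}
```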
Let me know what's best.
Thanks!
--
Marco Crivellari
L3 Support Engineer