Message-Id: <176253117139.2510929.8121904506912995233.b4-ty@kernel.org>
Date: Fri, 07 Nov 2025 15:59:31 +0000
From: Mark Brown <broonie@...nel.org>
To: linux-kernel@...r.kernel.org,
Marco Crivellari <marco.crivellari@...e.com>
Cc: Tejun Heo <tj@...nel.org>, Lai Jiangshan <jiangshanlai@...il.com>,
Frederic Weisbecker <frederic@...nel.org>,
Sebastian Andrzej Siewior <bigeasy@...utronix.de>,
Michal Hocko <mhocko@...e.com>, Matti Vaittinen <mazziesaccount@...il.com>,
Liam Girdwood <lgirdwood@...il.com>
Subject: Re: [PATCH v2] regulator: irq_helper: replace use of system_wq
with system_dfl_wq
On Thu, 06 Nov 2025 15:29:14 +0100, Marco Crivellari wrote:
> Currently, if a user enqueues a work item using schedule_delayed_work(),
> the workqueue used is system_wq (a per-CPU wq), while queue_delayed_work()
> uses WORK_CPU_UNBOUND (used when a CPU is not specified). The same applies
> to schedule_work(), which uses system_wq, and queue_work(), which again
> makes use of WORK_CPU_UNBOUND.
>
> This lack of consistency cannot be addressed without refactoring the API.
>
> [...]
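
For anyone following along, the change described above boils down to
naming the target workqueue explicitly instead of going through the
schedule_*() wrappers. A minimal sketch of the pattern (the my_helper
struct, its isr_work member, the requeue() function and delay_ms are
hypothetical illustrations, not taken from the actual patch;
queue_delayed_work(), schedule_delayed_work(), msecs_to_jiffies() and
system_dfl_wq are the real kernel APIs involved):

    #include <linux/workqueue.h>

    struct my_helper {
            struct delayed_work isr_work;   /* hypothetical work item */
    };

    static void requeue(struct my_helper *h, unsigned int delay_ms)
    {
            /* Before the patch this would have been:
             *
             *   schedule_delayed_work(&h->isr_work,
             *                         msecs_to_jiffies(delay_ms));
             *
             * which implicitly queues on the per-CPU system_wq.
             * Naming system_dfl_wq (the unbound default workqueue)
             * makes the target explicit and avoids pinning the work
             * item to the submitting CPU.
             */
            queue_delayed_work(system_dfl_wq, &h->isr_work,
                               msecs_to_jiffies(delay_ms));
    }
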
Applied to

    https://git.kernel.org/pub/scm/linux/kernel/git/broonie/regulator.git for-next

Thanks!

[1/1] regulator: irq_helper: replace use of system_wq with system_dfl_wq
commit: b6f4bd64f453183954184ffbc2b89d73ed8fb135

All being well, this means that it will be integrated into the linux-next
tree (usually sometime in the next 24 hours) and sent to Linus during
the next merge window (or sooner if it is a bug fix); however, if
problems are discovered then the patch may be dropped or reverted.

You may get further e-mails resulting from automated or manual testing
and review of the tree; please engage with people reporting problems and
send followup patches addressing any issues that are reported, if needed.

If any updates are required or you are submitting further changes, they
should be sent as incremental updates against current git; existing
patches will not be replaced.

Please add any relevant lists and maintainers to the CCs when replying
to this mail.

Thanks,
Mark